# Google Cloud Platform Setup
If required, sign up for the Google Cloud Platform trial.
The following Google Cloud Platform products will be used:
- Google Kubernetes Engine
- Google Container Registry
- Google Stackdriver Logging
- Google Stackdriver Monitoring
While almost everything described in this demo can be achieved through the Google Cloud Console UI, I will provide the command line code instead, for consistency.
Please refer to the Tooling section for detailed instructions on command line installation.
Start by setting up the environment and authenticating to GCP. Make sure to replace the PROJECT_ID value
in the script below:

```shell
export PROJECT_ID=silver-ribbon-717
export CLUSTER_NAME=camel-demo
export CLUSTER_ZONE=australia-southeast1-b

gcloud config set project $PROJECT_ID
gcloud config set compute/zone $CLUSTER_ZONE
gcloud config set container/cluster $CLUSTER_NAME
```
The next step can be skipped if running in the Google Cloud Shell environment.

```shell
# Generic gcloud access
gcloud auth login
# Default application access, e.g. for Python or the JDK
gcloud auth application-default login
```
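To confirm which account and project the CLI will use, the current configuration can be listed. These commands are read-only and change nothing:

```shell
# Show the active account, project and compute zone
gcloud config list
# Show all credentialed accounts and which one is active
gcloud auth list
```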
Enable the required APIs. This, as everything else in GCP, can be done either via the API management console or through the command line:

```shell
gcloud services enable --project $PROJECT_ID bigquery-json.googleapis.com
gcloud services enable --project $PROJECT_ID monitoring.googleapis.com
gcloud services enable --project $PROJECT_ID containerregistry.googleapis.com
gcloud services enable --project $PROJECT_ID pubsub.googleapis.com
```
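A quick way to verify the APIs took effect is to list the enabled services and filter for the ones just requested (the grep pattern below is merely illustrative):

```shell
# List enabled services and keep only the ones relevant to this demo
gcloud services list --enabled --project $PROJECT_ID \
  | grep -E 'bigquery|monitoring|containerregistry|pubsub'
```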
Create the GKE cluster:

```shell
gcloud container clusters create "$CLUSTER_NAME" \
  --cluster-version "1.8.2-gke.0" \
  --machine-type "n1-standard-2" \
  --disk-size 10 \
  --scopes https://www.googleapis.com/auth/compute,\
https://www.googleapis.com/auth/devstorage.read_only,\
https://www.googleapis.com/auth/logging.write,\
https://www.googleapis.com/auth/monitoring \
  --zone $CLUSTER_ZONE \
  --num-nodes 1 \
  --max-nodes 3 \
  --disable-addons KubernetesDashboard \
  --enable-autoupgrade \
  --enable-autoscaling \
  --network "default" \
  --enable-cloud-logging \
  --enable-cloud-monitoring
```
Fetch the cluster credentials so that kubectl talks to the new cluster:

```shell
gcloud container clusters get-credentials $CLUSTER_NAME
```
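At this point kubectl should be pointing at the new cluster; a quick sanity check:

```shell
# Confirm the credentials work and the node pool is up
kubectl cluster-info
kubectl get nodes
```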
Grant the service account that owns and runs the cluster access to Google Cloud Storage - that is where Container Registry stores the uploaded Docker images.

```shell
export PROJECT_NUMBER=$(gcloud projects list --filter='projectId='$PROJECT_ID --format='value(projectNumber)')

gcloud projects add-iam-policy-binding $PROJECT_ID \
  --member serviceAccount:$PROJECT_NUMBER-compute@developer.gserviceaccount.com \
  --role roles/storage.objectViewer
```
Create the Pub/Sub topic and the subscription:

```shell
gcloud beta pubsub topics create demo.event
gcloud beta pubsub subscriptions create demo.event.bigquery --topic demo.event --ack-deadline 60
```
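The topic and subscription can be smoke-tested straight away. The sample message below is just a placeholder; note that older gcloud releases accepted the message as a positional argument rather than via `--message`:

```shell
# Publish a test message to the topic
gcloud beta pubsub topics publish demo.event --message 'hello'
# Pull it back from the subscription and acknowledge it
gcloud beta pubsub subscriptions pull demo.event.bigquery --auto-ack
```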
Create the BigQuery dataset and the day-partitioned event table. The schema is also available as `schemas/bigquery/demo_event.json`.

```shell
echo 'project_id = '$PROJECT_ID >> ~/.bigqueryrc

bq mk -d --data_location=US demo

cat << EOF > ./bigquery_demo_event.json
[
  {"name":"id", "type":"STRING", "mode": "REQUIRED", "description": "Unique Correlation ID"},
  {"name":"updated", "type":"INTEGER", "mode": "REQUIRED", "description": "Timestamp in milliseconds"},
  {"name":"text", "type":"STRING"}
]
EOF

bq mk --time_partitioning_type=DAY --schema=bigquery_demo_event.json demo.demo_event
```
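To confirm the table and its partitioning were created as expected, the table metadata can be inspected:

```shell
# Show the schema and time-partitioning settings of the new table
bq show --format=prettyjson demo.demo_event
```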
This is the service account the Camel application will use to connect to Pub/Sub and BigQuery. It is different from the service account used by the cluster: the blueprint allows for multiple service accounts with different permissions, when required.
Create the service account, generate a key for it, and grant it the BigQuery and Pub/Sub roles:

```shell
gcloud iam service-accounts create demo-cluster

gcloud iam service-accounts keys create \
  ./demo-cluster-key.json \
  --iam-account demo-cluster@$PROJECT_ID.iam.gserviceaccount.com

# BigQuery
gcloud projects add-iam-policy-binding $PROJECT_ID \
  --member serviceAccount:demo-cluster@$PROJECT_ID.iam.gserviceaccount.com \
  --role "roles/bigquery.dataEditor"
gcloud projects add-iam-policy-binding $PROJECT_ID \
  --member serviceAccount:demo-cluster@$PROJECT_ID.iam.gserviceaccount.com \
  --role "roles/bigquery.user"
gcloud projects add-iam-policy-binding $PROJECT_ID \
  --member serviceAccount:demo-cluster@$PROJECT_ID.iam.gserviceaccount.com \
  --role "roles/bigquery.jobUser"

# PubSub
gcloud beta pubsub subscriptions add-iam-policy-binding \
  demo.event.bigquery \
  --member=serviceAccount:demo-cluster@$PROJECT_ID.iam.gserviceaccount.com \
  --role "roles/pubsub.subscriber"
```
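The resulting project-level bindings for the new service account can be reviewed with a filtered IAM policy listing (the `--flatten`/`--filter` combination below is a common gcloud idiom):

```shell
# List the roles granted to the demo-cluster service account at the project level
gcloud projects get-iam-policy $PROJECT_ID \
  --flatten="bindings[].members" \
  --format="table(bindings.role)" \
  --filter="bindings.members:demo-cluster@$PROJECT_ID.iam.gserviceaccount.com"
```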
The newly created service account key needs to be accessible to the applications from within the Kubernetes cluster.
Add it as a cluster secret:

```shell
kubectl create secret generic demo-cluster-key --from-file=./demo-cluster-key.json
```
Add the project ID as a configuration map entry:

```shell
kubectl create configmap project --from-literal=id=$PROJECT_ID
```
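Both objects can be inspected to confirm they were stored correctly (the key content itself is held base64-encoded inside the secret, so `describe` only shows its size):

```shell
# Confirm the secret and the config map exist and hold the expected data
kubectl describe secret demo-cluster-key
kubectl get configmap project -o jsonpath='{.data.id}'
```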
The Google Cloud project is now set up for the Camel/Kubernetes/PubSub/BigQuery action.