Adds multi-region instructions (#438)
(using CloudSQL + Multi-cluster Ingress)
askmeegs authored Feb 1, 2021
1 parent 6de4073 commit 5b52870
Showing 20 changed files with 404 additions and 54 deletions.
7 changes: 5 additions & 2 deletions .gitignore
@@ -153,12 +153,15 @@ dmypy.json
cython_debug/


# workload identity generated manifests
wi-kubernetes-manifests/

# ledgermonolith Istio config
src/ledgermonolith/istio/send-to-vm/
src/ledgermonolith/istio/istio-1.*

# Cypress
.github/workflows/ui-tests/cypress/screenshots

# Ingress for Anthos service account key
register-key.json
1 change: 1 addition & 0 deletions README.md
@@ -113,6 +113,7 @@ EXTERNAL-IP

- **Workload Identity**: [See these instructions.](docs/workload-identity.md)
- **Cloud SQL**: [See these instructions](extras/cloudsql) to replace the in-cluster databases with hosted Google Cloud SQL.
- **Multicluster with Cloud SQL**: [See these instructions](extras/cloudsql-multicluster) to replicate the app across two regions using GKE, Multi-cluster Ingress, and Google Cloud SQL.
- **Istio**: Apply `istio-manifests/` to your cluster to access the frontend through the IngressGateway.
- **Anthos Service Mesh**: ASM requires Workload Identity to be enabled in your GKE cluster. [See the workload identity instructions](docs/workload-identity.md) to configure and deploy the app. Then, apply `istio-manifests/` to your cluster to configure frontend ingress.
- **Java Monolith (VM)**: We provide a version of this app where the three Java microservices are coupled together into one monolithic service, which you can deploy inside a VM (eg. Google Compute Engine). See the [ledgermonolith](src/ledgermonolith) directory.
178 changes: 178 additions & 0 deletions extras/cloudsql-multicluster/README.md
@@ -0,0 +1,178 @@
# Multi-cluster Bank of Anthos with Cloud SQL

This doc contains instructions for deploying the Cloud SQL version of Bank of Anthos in a multi-region, high-availability global configuration.

The use case for this setup is to demo a globally scaled app where, even if one cluster goes down, users are routed to the next available cluster. These instructions also show how to use [Multi-cluster Ingress](https://cloud.google.com/kubernetes-engine/docs/concepts/multi-cluster-ingress) to route users to the closest GKE cluster, demonstrating a low-latency use case.

![multi-region](architecture.png)

Note that in this setup, there is no service communication between the two clusters/regions. Each cluster has a dedicated frontend and set of backends. Both regions, however, share the same Cloud SQL instance, which houses the two databases (Accounts and Ledger).

## Prerequisites

- The `kubectx` command-line tool
- An Anthos license

## Steps

1. **Create a [Google Cloud project](https://cloud.google.com/resource-manager/docs/creating-managing-projects)** if you don't already have one.

2. **Set environment variables**, where `DB_REGION` is the region where the Cloud SQL instance will be deployed:


```
export PROJECT_ID="my-project"
export DB_REGION="us-central1"
export CLUSTER_1_NAME="boa-1"
export CLUSTER_1_ZONE="us-central1-b"
export CLUSTER_2_NAME="boa-2"
export CLUSTER_2_ZONE="europe-west3-a"
export NAMESPACE="default"
```
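The steps that follow assume all of these variables are exported in your shell. A small sanity-check function (not part of the original instructions; purely illustrative) can catch a missing one early:

```shell
# Report any of the required variables that are empty or unset.
check_env() {
  local missing=""
  local var
  for var in PROJECT_ID DB_REGION CLUSTER_1_NAME CLUSTER_1_ZONE \
             CLUSTER_2_NAME CLUSTER_2_ZONE NAMESPACE; do
    # ${!var} is bash indirection: the value of the variable named by $var.
    if [ -z "${!var}" ]; then
      missing="${missing} ${var}"
    fi
  done
  if [ -n "${missing}" ]; then
    echo "Missing:${missing}"
    return 1
  fi
  echo "All required variables are set."
}
```

Run `check_env` after the exports above; it prints `All required variables are set.` on success.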

3. **Create two GKE clusters, one per region.**

```
gcloud container clusters create ${CLUSTER_1_NAME} \
--project=${PROJECT_ID} --zone=${CLUSTER_1_ZONE} \
--machine-type=e2-standard-4 --num-nodes=4 \
--workload-pool="${PROJECT_ID}.svc.id.goog" --enable-ip-alias
gcloud container clusters create ${CLUSTER_2_NAME} \
--project=${PROJECT_ID} --zone=${CLUSTER_2_ZONE} \
--machine-type=e2-standard-4 --num-nodes=4 \
--workload-pool="${PROJECT_ID}.svc.id.goog" --enable-ip-alias
```

4. **Configure kubectx for the clusters.**

```
gcloud container clusters get-credentials ${CLUSTER_1_NAME} --zone ${CLUSTER_1_ZONE} --project ${PROJECT_ID}
kubectx cluster1="gke_${PROJECT_ID}_${CLUSTER_1_ZONE}_${CLUSTER_1_NAME}"
gcloud container clusters get-credentials ${CLUSTER_2_NAME} --zone ${CLUSTER_2_ZONE} --project ${PROJECT_ID}
kubectx cluster2="gke_${PROJECT_ID}_${CLUSTER_2_ZONE}_${CLUSTER_2_NAME}"
```
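The context names passed to `kubectx` aren't arbitrary: `gcloud container clusters get-credentials` registers each cluster in your kubeconfig under GKE's `gke_<project>_<zone>_<cluster>` naming scheme, which is what the strings above reconstruct. A quick illustration using the example values from step 2 (no cluster access needed):

```shell
# Reconstruct the kubeconfig context name GKE generates for a cluster.
PROJECT_ID="my-project"
CLUSTER_1_NAME="boa-1"
CLUSTER_1_ZONE="us-central1-b"

CONTEXT_1="gke_${PROJECT_ID}_${CLUSTER_1_ZONE}_${CLUSTER_1_NAME}"
echo "${CONTEXT_1}"   # gke_my-project_us-central1-b_boa-1
```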

5. **Set up Workload Identity** for both clusters. When the script runs the second time, you'll see some errors (the GCP service account already exists); this is expected.

```
kubectx cluster1
../cloudsql/setup_workload_identity.sh
kubectx cluster2
../cloudsql/setup_workload_identity.sh
```

6. **Run the Cloud SQL instance create script** on both clusters. You'll see errors when running it on the second cluster; this is expected.

```
../cloudsql/create_cloudsql_instance.sh
```

7. **Create Cloud SQL admin secrets** in your GKE clusters. This gives your in-cluster Cloud SQL clients a username and password to access Cloud SQL. (Note that admin/admin credentials are for demo use only and should never be used in a production environment.)

```
INSTANCE_NAME='bank-of-anthos-db-multi'
INSTANCE_CONNECTION_NAME=$(gcloud sql instances describe $INSTANCE_NAME --format='value(connectionName)')
kubectx cluster1
kubectl create secret -n ${NAMESPACE} generic cloud-sql-admin \
--from-literal=username=admin --from-literal=password=admin \
--from-literal=connectionName=${INSTANCE_CONNECTION_NAME}
kubectx cluster2
kubectl create secret -n ${NAMESPACE} generic cloud-sql-admin \
--from-literal=username=admin --from-literal=password=admin \
--from-literal=connectionName=${INSTANCE_CONNECTION_NAME}
```
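Kubernetes stores Secret values base64-encoded; `--from-literal` handles the encoding for you. The round trip below shows what actually lands in the Secret (purely illustrative):

```shell
# What `--from-literal=username=admin` stores, and how to read it back.
encoded="$(printf '%s' 'admin' | base64)"
echo "${encoded}"                              # YWRtaW4=
decoded="$(printf '%s' "${encoded}" | base64 --decode)"
echo "${decoded}"                              # admin
```

To verify the Secret on a live cluster, the same decode applies: `kubectl get secret cloud-sql-admin -n ${NAMESPACE} -o jsonpath='{.data.username}' | base64 --decode` should print `admin` in both contexts.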


8. **Deploy the DB population jobs.** These one-off Jobs run bash scripts that initialize the Accounts and Ledger databases with data. They only need to run once, so we deploy them only to cluster1.

```
kubectx cluster1
kubectl apply -n ${NAMESPACE} -f ../cloudsql/kubernetes-manifests/config.yaml
kubectl apply -n ${NAMESPACE} -f ../cloudsql/populate-jobs
```

9. **Wait a few minutes for the Jobs to complete.** The Pods will be marked `0/3 Completed` when they finish successfully.

```
NAME READY STATUS RESTARTS AGE
populate-accounts-db-js8lw 0/3 Completed 0 71s
populate-ledger-db-z9p2g 0/3 Completed 0 70s
```
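Instead of eyeballing the Pod list, you can block until the Jobs complete. Below is a small polling helper; the Job names in the commented example are assumptions inferred from the Pod names above, so confirm them with `kubectl get jobs` first.

```shell
# Retry a command up to N times, one second apart, until it succeeds.
wait_until() {
  local attempts="$1"
  shift
  local i=0
  while [ "${i}" -lt "${attempts}" ]; do
    if "$@"; then
      return 0
    fi
    i=$((i + 1))
    sleep 1
  done
  return 1
}

# Assumed usage against the populate Jobs (names are a guess; verify them):
# wait_until 60 kubectl -n "${NAMESPACE}" wait --for=condition=complete \
#     --timeout=0 job/populate-accounts-db job/populate-ledger-db
```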

10. **Deploy Bank of Anthos services to both clusters.**

```
kubectx cluster1
kubectl apply -n ${NAMESPACE} -f ../cloudsql/kubernetes-manifests
kubectx cluster2
kubectl apply -n ${NAMESPACE} -f ../cloudsql/kubernetes-manifests
```

11. **Run the Multi-cluster Ingress setup script.** This registers both GKE clusters with Anthos as "memberships" and sets cluster 1 as the "config cluster" that administers the Multi-cluster Ingress resources.

```
./register_clusters.sh
```


12. **Create Multi-cluster Ingress resources for global routing.** This YAML file contains two resources: a headless Multi-cluster Kubernetes Service ("MCS"), `frontend-mcs`, mapped to the `frontend` Pods, and a Multi-cluster Ingress resource, `frontend-global-ingress`, with `frontend-mcs` as its backend. Note that we deploy this only to cluster 1, which we designated as the Multi-cluster Ingress "config cluster."

```
kubectx cluster1
kubectl apply -n ${NAMESPACE} -f multicluster-ingress.yaml
```


13. **Verify that the Multi-cluster Ingress resource was created.** Look for the `Status` field to be populated with two Network Endpoint Groups (NEGs) corresponding to the regions where your two GKE clusters are running. This may take a few minutes.

```
watch kubectl describe mci frontend-global-ingress -n ${NAMESPACE}
```

Expected output:

```
Status:
...
Network Endpoint Groups:
zones/europe-west3-a/networkEndpointGroups/k8s1-dd9eb2b0-defaul-mci-frontend-mcs-svc-0xt1kovs-808-7e472f17
zones/us-west1-b/networkEndpointGroups/k8s1-6d3d6f1b-defaul-mci-frontend-mcs-svc-0xt1kovs-808-79d9ace0
Target Proxies:
mci-ddwsrr-default-frontend-global-ingress
URL Map: mci-ddwsrr-default-frontend-global-ingress
VIP: 34.120.172.105
```
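The `VIP` value can also be extracted mechanically rather than copied by hand. A sketch, demonstrated against a saved copy of the expected output above (the `awk` pattern simply grabs the second field of the `VIP:` line):

```shell
# Pull the VIP out of `kubectl describe mci` output.
# Demonstrated here against a trimmed sample of the expected output.
sample='Status:
  Target Proxies:
    mci-ddwsrr-default-frontend-global-ingress
  URL Map:  mci-ddwsrr-default-frontend-global-ingress
  VIP:      34.120.172.105'

VIP="$(printf '%s\n' "${sample}" | awk '/VIP:/ {print $2}')"
echo "${VIP}"   # 34.120.172.105
```

On a live cluster, the same pattern would be `export VIP=$(kubectl describe mci frontend-global-ingress -n ${NAMESPACE} | awk '/VIP:/ {print $2}')`.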


14. **Copy the `VIP` field** and set it as an environment variable:

```
export VIP=<your-VIP>
```

15. **Test the geo-aware routing** by curling the `/whereami` frontend endpoint using the global VIP you copied. You could also create a Google Compute Engine instance in a specific region to test further. **Note that you may see a `404` or `502` error** for several minutes while the forwarding rules propagate.

```
watch curl http://${VIP}:80/whereami
```

Example output from a US-based client, where the two GKE regions are `us-west1` and `europe-west3`:

```
Cluster: boa-1, Pod: frontend-74675b56f-w4rdf, Zone: us-west1-b
```

Example output, from an EU-based GCE instance:

```
Cluster: boa-2, Pod: frontend-74675b56f-2ln5w, Zone: europe-west3-a
```

🎉 **Congrats!** You just deployed a globally available version of Bank of Anthos!
Binary file added extras/cloudsql-multicluster/architecture.png
38 changes: 38 additions & 0 deletions extras/cloudsql-multicluster/multicluster-ingress.yaml
@@ -0,0 +1,38 @@
# Copyright 2021 Google LLC
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
apiVersion: networking.gke.io/v1
kind: MultiClusterService
metadata:
name: frontend-mcs
spec:
template:
spec:
selector:
app: frontend
ports:
- name: http
protocol: TCP
port: 8080
targetPort: 8080
---
apiVersion: networking.gke.io/v1
kind: MultiClusterIngress
metadata:
name: frontend-global-ingress
spec:
template:
spec:
backend:
serviceName: frontend-mcs
servicePort: 8080
74 changes: 74 additions & 0 deletions extras/cloudsql-multicluster/register_clusters.sh
@@ -0,0 +1,74 @@
# Copyright 2020 Google LLC
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#!/bin/bash

gcloud config set project ${PROJECT_ID}

export MEMBERSHIP_NAME="boa-membership"
export HUB_PROJECT_ID=${PROJECT_ID}
export SERVICE_ACCOUNT_NAME="register-sa"


# Do this only once
echo "🌏 Enabling APIs..."
gcloud services enable \
--project=${PROJECT_ID} \
container.googleapis.com \
gkeconnect.googleapis.com \
gkehub.googleapis.com \
cloudresourcemanager.googleapis.com

gcloud services enable anthos.googleapis.com
gcloud services enable multiclusteringress.googleapis.com


echo "🌏 Creating cluster registration service account..."
gcloud iam service-accounts create ${SERVICE_ACCOUNT_NAME} --project=${HUB_PROJECT_ID}

gcloud projects add-iam-policy-binding ${HUB_PROJECT_ID} \
--member="serviceAccount:${SERVICE_ACCOUNT_NAME}@${HUB_PROJECT_ID}.iam.gserviceaccount.com" \
--role="roles/gkehub.connect"

echo "🌏 Downloading service account key..."
gcloud iam service-accounts keys create register-key.json \
--iam-account=${SERVICE_ACCOUNT_NAME}@${HUB_PROJECT_ID}.iam.gserviceaccount.com \
--project=${HUB_PROJECT_ID}


echo "🌏 Registering cluster 1..."
GKE_URI_1="https://container.googleapis.com/v1/projects/${PROJECT_ID}/zones/${CLUSTER_1_ZONE}/clusters/${CLUSTER_1_NAME}"
gcloud container hub memberships register ${CLUSTER_1_NAME} \
--project=${PROJECT_ID} \
--gke-uri=${GKE_URI_1} \
--service-account-key-file=register-key.json


echo "🌏 Registering cluster 2..."
GKE_URI_2="https://container.googleapis.com/v1/projects/${PROJECT_ID}/zones/${CLUSTER_2_ZONE}/clusters/${CLUSTER_2_NAME}"
gcloud container hub memberships register ${CLUSTER_2_NAME} \
--project=${PROJECT_ID} \
--gke-uri=${GKE_URI_2} \
--service-account-key-file=register-key.json

echo "🌏 Listing your Anthos cluster memberships:"
gcloud container hub memberships list


echo "🌏 Adding cluster 1 as the Multi-cluster ingress config cluster..."
gcloud alpha container hub ingress enable \
--config-membership=projects/${PROJECT_ID}/locations/global/memberships/${CLUSTER_1_NAME}

gcloud alpha container hub ingress describe

echo "⭐️ Done."