This repository contains all the base infrastructure for XebiKart, including but not limited to:

- The Kubernetes (GKE) cluster(s)
- The infrastructure components running on these K8s clusters:
  - DNS records controllers
  - TLS certificates controllers
  - Common message broker for applications
  - Monitoring
  - ...
The entire infrastructure is created under the XebiKart folder, located as follows:

xebia.fr (Org) > Conferences > XebiCon > Xebikart
We use Terraform to manage the entire infrastructure as code. You can take a look at ADR #003 for more details about this choice.

You obviously need Terraform installed to use it.
Terraform needs to authenticate to GCP in order to manage resources through the GCP APIs. Terraform knows how to use your gcloud CLI credentials, so you can simply run the following to be able to run Terraform locally from your machine:

```shell
gcloud auth application-default login
gcloud auth login
```
For more advanced Terraform/GCP authentication methods, please see the [GCP documentation about this topic](https://cloud.google.com/docs/authentication/production).
You may also need your own GitHub API token in order to run the Terraform stuff found in `repositories`. Read the README in this directory to know more about it.
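As a sketch of how that token is usually provided: the Terraform GitHub provider can pick up its credentials from the `GITHUB_TOKEN` environment variable, so exporting it before running Terraform is typically enough. The token value below is a placeholder, not a real token.

```shell
# The Terraform GitHub provider can read its credentials from the
# GITHUB_TOKEN environment variable; the value here is a placeholder.
export GITHUB_TOKEN="<your-github-api-token>"

# Sanity check that the variable is set before running Terraform.
echo "GITHUB_TOKEN is set: ${GITHUB_TOKEN:+yes}"
```
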
The Terraform concepts of locals and variables are used to "configure" the infrastructure being created. All of these settings can be found under `terraform/inputs.tf`.
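For reference, variables declared there can also be overridden on the command line at plan/apply time. This is a generic Terraform sketch; `cluster_name` is a hypothetical variable name for illustration, not necessarily one defined in `terraform/inputs.tf`.

```shell
# Override a Terraform input variable for a single run;
# "cluster_name" is a hypothetical variable name used for illustration.
terraform plan -var 'cluster_name=xebikart-dev-2'
```
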
Since we may use multiple Kubernetes clusters at some point, you might find the official Kubernetes documentation on configuring access to multiple clusters with kubectl useful.
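In practice, juggling multiple clusters boils down to switching kubectl contexts. A minimal sketch, where `<context-name>` is a placeholder for one of your configured contexts:

```shell
# List the cluster contexts kubectl knows about.
kubectl config get-contexts

# Switch to another cluster, then confirm which context is active.
kubectl config use-context <context-name>
kubectl config current-context
```
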
Monitoring is done with Stackdriver. In short, Stackdriver consists of two parts: Stackdriver Logging (the `logging.googleapis.com` API) and Stackdriver Monitoring (the `monitoring.googleapis.com` API). Warning: there are actually two integrations, the Legacy Stackdriver one and a beta one dedicated to Kubernetes. You can learn more about this in the Overview of Stackdriver support for GKE.
For this XebiKart project, we chose to use the dedicated Stackdriver Kubernetes Monitoring (beta), as you can see in the cluster description in Terraform with `monitoring.googleapis.com/kubernetes` and `logging.googleapis.com/kubernetes`.
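You can double-check which monitoring/logging services a cluster actually ended up with from the gcloud CLI. The cluster name and zone below are placeholders:

```shell
# Print the monitoring and logging services configured on a GKE cluster;
# <cluster-name> and <zone> are placeholders for your own values.
gcloud container clusters describe <cluster-name> --zone <zone> \
  --format='value(monitoringService, loggingService)'
```

For the dedicated (beta) integration, both values should end in `/kubernetes`, matching the Terraform configuration above.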
Everything on GKE is configured out of the box to ship monitoring data to Stackdriver, mainly through services running in the `kube-system` namespace:

- `daemonset/prometheus-to-sd`
- `daemonset/fluentd-gcp`
- `deployment/event-exporter`
- `deployment/metrics-server`
- `deployment/stackdriver-metadata-agent-cluster-level`
- `deployment/heapster`
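To see these workloads on a live cluster, a quick sketch:

```shell
# List the Stackdriver-related workloads GKE runs in kube-system.
kubectl -n kube-system get daemonsets,deployments
```
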
An image from a post on Medium summarizes this pipeline pretty well.
Unfortunately, there are a few things to configure in order to enable Stackdriver for Kubernetes:

- Enabling APIs on the project - done in `project.tf`
- Configuring GKE to use the dedicated Stackdriver Kubernetes Monitoring (beta) - done in `gke.tf` as explained above
- Creating a Stackdriver workspace
- Associating the project with the Stackdriver workspace
The problem is, the last two steps cannot be done with Terraform, as you can see in the corresponding GitHub issue. They have consequently been done manually while waiting for the API primitives in Stackdriver to automate them :(
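For reference, the first step (enabling the APIs) is handled by Terraform in `project.tf`, but doing it by hand would look roughly like this; `<project-id>` is a placeholder:

```shell
# Enable the Stackdriver Logging and Monitoring APIs on a project by hand;
# in this repo this is done by Terraform in project.tf instead.
gcloud services enable logging.googleapis.com monitoring.googleapis.com \
  --project <project-id>
```
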
The Stackdriver workspace containing the `xebikart-dev-1` project is the one created from the `xebikart-deployment-infra` project, in order to avoid repeating these manual steps too often for future projects/clusters. You can access it on the xebikart-deployment-infra Stackdriver workspace.
- Deploy Terraform base infra with Google Cloud Deployment Manager
- Deploy infrastructure using Terraform
- Deploy Kubernetes services
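For the Terraform steps above, the usual workflow is a sketch like the following, assuming it is run from the `terraform/` directory mentioned earlier:

```shell
# Typical Terraform workflow for the "Deploy infrastructure" step;
# running from terraform/ is an assumption based on the file layout above.
cd terraform
terraform init   # download providers and initialize the backend
terraform plan   # preview the changes before applying
terraform apply  # apply the changes
```
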
Note: the current RabbitMQ release on the GKE cluster is named `rabbitmq-ha-release-4` for some iteration reasons. This might be changed later.
Step #1 - Set up the Helm chart dependency:

```shell
helm dependency build rabbitmq
```

Step #2 - Install/Deploy RabbitMQ with Helm:

```shell
helm install rabbitmq -n <release-name> --set rabbitmq-ha.rabbitmqUsername=<admin_user>,rabbitmq-ha.rabbitmqPassword=<adminPassword>
```

Step #3 - Upgrade the RabbitMQ deployment:

```shell
helm upgrade <release_name> rabbitmq
```
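After installing or upgrading, you can check the release with Helm (Helm 2 syntax, matching the commands above). The `release` pod label is an assumption based on common chart conventions:

```shell
# Show the state of the release; <release_name> is the name chosen at install time.
helm status <release_name>

# List the RabbitMQ pods; filtering on the "release" label is an assumption
# about the chart's labeling convention.
kubectl get pods -l release=<release_name>
```
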