
# CarboNata 🍝+🗡️

Measuring the energy and carbon usage of Camunda Workers.

## 🏗 Develop

All test resources are under `src/` and `tf/`. While the former contains the test workers and processes, the latter contains the Terraform files that set up a Camunda 8 cluster on AWS EKS and deploy the necessary resources.

### Running it on a local K8s cluster using C8 SaaS

Create a local `.env` file based on our `env_template`. Fill in the variables using the values obtained by creating a new client in your C8 SaaS cluster.
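A minimal sketch of what the `.env` could look like. The authoritative variable names are defined in `env_template`; the names below are only illustrative placeholders for the credentials shown by the C8 SaaS console when you create a new client:

```sh
# Illustrative placeholders -- copy env_template and keep its variable names.
# Values come from the "Create new client" dialog of your C8 SaaS cluster.
ZEEBE_CLIENT_ID=<your-client-id>
ZEEBE_CLIENT_SECRET=<your-client-secret>
CAMUNDA_CLUSTER_ID=<your-cluster-id>
CAMUNDA_CLUSTER_REGION=<your-cluster-region>
```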

To get going quickly, execute the `start_locally.sh` script. If you want to clean up your setup afterwards, use `destroy_locally.sh`.
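For reference, a typical run (assuming the scripts are executable and your local kube context points at the target cluster):

```sh
# Bring up the local setup.
./start_locally.sh

# Tear everything down again when you are done.
./destroy_locally.sh
```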

The Grafana login is `admin` with the password `prom-operator`.

If you want to use C8 Self-Managed, set `TF_VAR_CAMUNDA_CLIENT_MODE='self-managed'` (see https://docs.camunda.io/docs/apis-tools/spring-zeebe-sdk/getting-started/#self-managed).
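A minimal sketch of switching to Self-Managed mode, assuming you export the variable in the same shell that runs the scripts (it can equally be placed in your `.env`):

```sh
export TF_VAR_CAMUNDA_CLIENT_MODE='self-managed'
./start_locally.sh
```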

### C8 on AWS EKS via Terraform

> [!IMPORTANT]
> There is a bug in the `eks` module provided by Camunda which may break destroying the resources. If that happens, execute `terraform apply -destroy` twice.
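A sketch of the workaround, assuming the affected resources were created from `tf/eks`:

```sh
cd tf/eks
terraform apply -destroy
# If the run fails because of the eks module bug, simply repeat it:
terraform apply -destroy
```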

- Make sure you have the AWS CLI (installed and configured), Terraform, and the Zeebe CLI (`zbctl`) installed
- Clone the repository
- Set up the environments
  - Running C8 on the locally configured Kubernetes cluster, i.e. the cluster that is active via your kubeconfig
    - Use `terraform init` and `terraform apply` in the `tf/c8` directory of the project after setting `TF_VAR_CAMUNDA_CLIENT_MODE='self-managed'` (see the Terraform sketch after this list)
  - (Optional) If you want to use an AWS EKS cluster configured according to Camunda's guidelines, set up your kubeconfig accordingly
    - Use `terraform init` and `terraform apply` in the `tf/eks` directory of the project and afterwards set up C8 (see the Terraform sketch after this list)
- (Camunda 8) When everything is finished you can access the following:
  - (🗡️ Convenience 🗡️) Execute the script `tf/scripts/forwards.sh` to open all needed port-forwards in the background
  - Port-forwards needed for local dev (a consolidated sketch follows this list):
  - (🗡️ Important 🗡️) If you have Camunda Identity and Keycloak enabled, keep their ports open via:
    - `kubectl port-forward -n camunda svc/camunda-platform-keycloak 18080:80` and `kubectl port-forward -n camunda svc/camunda-platform-identity 8080:80`, afterwards reachable via `localhost:8080` with credentials `demo:demo`
  - Camunda Operate via:
    - `kubectl port-forward -n camunda svc/camunda-platform-operate 8081:80`, afterwards reachable via `localhost:8081` with credentials `demo:demo`
  - Camunda Tasklist via:
    - `kubectl port-forward -n camunda svc/camunda-platform-tasklist 8082:80`, afterwards reachable via `localhost:8082` with credentials `demo:demo`
  - Camunda Zeebe Gateway via:
    - `kubectl port-forward -n camunda svc/camunda-platform-zeebe-gateway 26500:26500`, afterwards reachable via `localhost:26500` with credentials `demo:demo`
    - The `Sample_Process` and its worker should already be deployed, so you can kick off an instance via `zbctl create instance Fibonacciprocess --insecure`
    - Alternatively, look at the job resource in the `camunda` namespace
  - Grafana via:
    - `kubectl port-forward -n monitoring svc/kube-prometheus-stack-grafana 9102:80`, afterwards reachable via `localhost:9102` with credentials `admin:prom-operator`
  - Camunda Optimize via:
    - `kubectl port-forward -n camunda svc/camunda-platform-optimize 8083:80`, afterwards reachable via `localhost:8083` with credentials `demo:demo`
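The Terraform steps referenced in the list can be summarized as follows; a sketch assuming default variable values apart from the client mode:

```sh
# Option A: install C8 on the currently active kube context (Self-Managed mode).
export TF_VAR_CAMUNDA_CLIENT_MODE='self-managed'
terraform -chdir=tf/c8 init
terraform -chdir=tf/c8 apply

# Option B (optional): create the Camunda-flavoured EKS cluster first,
# point your kubeconfig at it, then run the tf/c8 steps from option A.
terraform -chdir=tf/eks init
terraform -chdir=tf/eks apply
```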
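If you prefer opening the port-forwards by hand instead of using `tf/scripts/forwards.sh` (whose exact contents may differ from this sketch), the commands from the list can be run in the background like this, followed by starting a process instance:

```sh
# Open the port-forwards listed above in the background.
kubectl port-forward -n camunda svc/camunda-platform-keycloak 18080:80 &
kubectl port-forward -n camunda svc/camunda-platform-identity 8080:80 &
kubectl port-forward -n camunda svc/camunda-platform-operate 8081:80 &
kubectl port-forward -n camunda svc/camunda-platform-tasklist 8082:80 &
kubectl port-forward -n camunda svc/camunda-platform-optimize 8083:80 &
kubectl port-forward -n camunda svc/camunda-platform-zeebe-gateway 26500:26500 &
kubectl port-forward -n monitoring svc/kube-prometheus-stack-grafana 9102:80 &

# Kick off an instance of the pre-deployed sample process via the gateway.
zbctl create instance Fibonacciprocess --insecure
```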

## 📨 Contact

If you have any questions or ideas, feel free to start a discussion.

This open source project is being developed by envite consulting GmbH.
