This repository has been archived by the owner on Jan 28, 2022. It is now read-only.

Deploy load test to kind
EliiseS committed Mar 4, 2020
1 parent 79e5dbb commit ff20675
Showing 5 changed files with 102 additions and 117 deletions.
20 changes: 15 additions & 5 deletions Makefile
@@ -157,18 +157,20 @@ else
kind delete cluster --name ${KIND_CLUSTER_NAME}
endif
@echo "creating kind cluster"
kind create cluster --name ${KIND_CLUSTER_NAME}
kind create cluster --name ${KIND_CLUSTER_NAME} --config ./kind-cluster.yaml

set-kindcluster: install-kind
make create-kindcluster
kubectl cluster-info
@echo "deploying controller to cluster"
make deploy-kindcluster
make install
make install-prometheus
make deploy-mock-api
make deploy-locust

# Deploy controller
deploy-kindcluster:
@echo "deploying controller to cluster"
#create image and load it into cluster
$(eval newimage := "docker.io/controllertest:$(timestamp)")
IMG=$(newimage) make docker-build
@@ -188,6 +190,7 @@ ifeq (,$(shell which kind))
else
@echo "kind has been installed"
endif

install-kubebuilder:
ifeq (,$(shell which kubebuilder))
@echo "installing kubebuilder"
@@ -247,19 +250,26 @@ apply-manifests-mock-api:
cat ./mockapi/manifests/deployment.yaml | sed "s|mockapi:latest|${MOCKAPI_IMG}|" | kubectl apply -f -
kubectl apply -f ./mockapi/manifests/service.yaml

kind-load-image-mock-api: create-kindcluster docker-build-mock-api install-prometheus
kind-load-image-mock-api: docker-build-mock-api
kind load docker-image ${MOCKAPI_IMG} --name ${KIND_CLUSTER_NAME} -v 1

kind-deploy-mock-api: kind-load-image-mock-api apply-manifests-mock-api
deploy-mock-api: kind-load-image-mock-api apply-manifests-mock-api

kind-deploy-locust: create-kindcluster install-prometheus
kind-deploy-mock-api: create-kindcluster install-prometheus deploy-mock-api

deploy-locust:
docker build -t ${LOCUST_IMG} -f locust/Dockerfile .
kind load docker-image ${LOCUST_IMG} --name ${KIND_CLUSTER_NAME} -v 1
cat ./locust/manifests/deployment.yaml | sed "s|locust:latest|${LOCUST_IMG}|" | sed "s|behaviours/scenario1_run_submit_delete.py|${LOCUST_FILE}|" | kubectl apply -f -

kind-deploy-locust: create-kindcluster install-prometheus deploy-locust

format-locust:
black .

test-locust:
pip install -e ./locust -q
pytest

port-forward:
./portforwards.sh
165 changes: 58 additions & 107 deletions docs/locust.md
@@ -5,26 +5,18 @@ The load testing project for the [azure-databricks-operator](https://github.com/
## Table of contents <!-- omit in toc -->

- [Load testing with locust](#load-testing-with-locust)
- [Deploying dependencies](#deploying-dependencies)
- [Build and Test](#build-and-test)
- [Deploy to kind](#deploy-to-kind)
- [Run tests](#run-tests)
- [Adding tests](#adding-tests)
- [Build and run](#build-and-run)
- [Run unit tests](#run-unit-tests)
- [Run load tests in kind](#run-load-tests-in-kind)
- [Set error conditions](#set-error-conditions)
- [Contribute](#contribute)
- [Extending the supported Databricks functionality](#extending-the-supported-databricks-functionality)
- [How do I update a dashboard](#how-do-i-update-a-dashboard)
- [Adding unit tests](#adding-unit-tests)
- [Prometheus Endpoint](#prometheus-endpoint)
- [Running test under docker](#running-test-under-docker)
- [Test locally against cluster](#test-locally-against-cluster)
- [Deploy into the cluster and run](#deploy-into-the-cluster-and-run)
- [How do I update a dashboard](#how-do-i-update-a-dashboard)
- [How do I set error conditions](#how-do-i-set-error-conditions)
- [Known issues](#known-issues)

## Deploying dependencies

For documentation on deploying the `azure-databricks-operator` and `databricks-mock-api` for testing see [deploy/README.md](deploy/README.md)

## Build and Test
## Build and run

Everything needed to build and test the project is set up in the dev container.

@@ -41,23 +33,7 @@ To run the project without the dev container you need:
```bash
pip install -r requirements.txt
```

### Deploy to kind

> Before proceeding make sure your container or environment is up and running

1. Deploy locust to the local KIND instance. Set `LOCUST_FILE` to the locust scenario you'd like to run from `locust/behaviours`.
```bash
make kind-deploy-locust LOCUST_FILE="behaviours/scenario1_run_submit_delete.py"
```
2. Start the test server
```bash
locust -f behaviours/<my_locust_file>.py
```
### Run tests
### Run unit tests

Tests are written using `pytest`. More information [is available here](https://docs.pytest.org/en/latest/).

@@ -82,112 +58,87 @@ Tests are written using `pytest`. More information [is available here](https://d
```text
test/unit/db_run_client_test.py ........
```

### Adding tests

The project is set up to automatically discover any tests under the `locust/test` folder, provided the following criteria are met:

- your test `.py` file follows the naming convention `<something>_test.py`
- within your test file your methods follow the naming convention `def test_<what you want to test>()`

## Contribute

- Test files are added to the `/behaviours` directory
- These files take the recommended format described by the Locust documentation, representing the behaviour of a single (or set of) users

### Extending the supported Databricks functionality

- `/locust_files/db_locust` contains all files related to how Locust can interact with Databricks using K8s via the [azure-databricks-operator](https://github.com/microsoft/azure-databricks-operator/)
- `db_locust`: The brain of the behaviour-driven tests. Inherits from the default `Locust`, read more [here](https://docs.locust.io/en/stable/testing-other-systems.html)
- `db_client.py`: Core client used by the `db_locust`. It is a wrapper of "sub" clients that interface to specific databricks operator Kinds
- `db_run_client.py`: all actions relating to `run` api interfaces
- More clients to be added - ***this is where the majority of contributions will be made***
- `db_decorator.py`: A simple decorator for Databricks operations that gives basic metric logging and error handling

## Prometheus Endpoint

This suite of Locust tests exposes stats to Prometheus via a web endpoint.

The endpoint is exposed at `/export/prometheus`. When running the tests with the web endpoints enabled, you can visit <http://localhost:8089/export/prometheus> to see the exported stats.

## Running test under docker

This guide assumes you have used `./deploy/README.md` to deploy an AKS Engine cluster and have the `KUBECONFIG` set correctly, and also used `./deploy/prometheus-grafana` to set up the `prometheus` operator.

### Test locally against cluster

To build and test the locust image locally against the cluster you can run:

```bash
make docker-run-local
```

This will build a docker image **which contains the kubeconfig** file.

> Why put the file in the docker image? As we're using a devcontainer, the normal approach of mounting a file doesn't work: the path on the host to the file (which is what the daemon uses) isn't the same as the path in the devcontainer, so the file is never mounted.

### Deploy into the cluster and run

1. To deploy into the cluster and port-forward, run:

```bash
CONTAINER_REGISTRY=$ACR_LOGIN_SERVER make deploy-loadtest
k port-forward service/locust-loadtest 8089:8089 9090:9090 -n locust
```

2. Visit `http://localhost:8089` to start the load test from the locust web UI.

3. View stats on the test:

```bash
kubectl port-forward service/prom-azure-databricks-operator-grafana 8080:80
```

```text
Username: admin
Password: prom-operator
http://localhost:8080
```

Then navigate to the locust dashboard to view the results.

If you want to set up port-forwards for all the things then do the following:

```bash
k port-forward service/prom-azure-databricks-oper-prometheus 9091:9090 &
k port-forward service/locust-loadtest 8089:8089 9090:9090 &
k port-forward service/prom-azure-databricks-operator-grafana 8080:80 &

Browse to locust webui -> http://localhost:8089/
Browse to locust metrics -> http://localhost:9090/
Browse to Prometheus -> http://localhost:9091/targets
Browse to Grafana -> http://localhost:8080/
```

> Note: If one of these port-forwards stops working, use `ps aux | grep kubectl` and look for the process id of the one that's broken, then use `kill 21283` (your id in there) to stop it. Then rerun the port-forward command.

#### How do I update a dashboard

The best way I've found is to import the JSON for the board into the grafana instance, edit it using the UI, then export it back to JSON and update the file in the repo.

#### How do I set error conditions

For some of the load test scenarios we want to trigger error behaviour in the mock-api during a test run.

First step for this is to port-forward the mock-api service:

```bash
# port-forward to localhost:8085
kubectl port-forward -n databricks-mock-api svc/databricks-mock-api 8085:8080
```

Next we can issue a `PATCH` request to update the error rate, e.g. to set 20% probability for status code 500 responses:

```bash
curl --request PATCH \
  --url http://localhost:8085/config \
  --header 'content-type: application/json' \
  --data '{"DATABRICKS_MOCK_API_ERROR_500_PROBABILITY":20}'
```
### Run load tests in kind

> Before proceeding make sure your container or environment is up and running

1. Deploy locust to the local KIND instance. Set `LOCUST_FILE` to the locust scenario you'd like to run from `locust/behaviours`.

```bash
make set-kindcluster LOCUST_FILE="behaviours/scenario1_run_submit_delete.py"
```

2. Once all services are up, port-forward them for access:

```bash
make port-forward
```

3. Visit `http://localhost:8089` to start the load test from the locust web UI.

4. View the dashboards for the test on http://localhost:8080

```text
Username: admin
Password: prom-operator
```

> Note: If one of these port-forwards stops working, use `ps aux | grep kubectl` and look for the process id of the one that's broken, then use `kill 21283` (your id in there) to stop it. Then rerun the port-forward command.

#### Set error conditions

For some of the load test scenarios we want to trigger error behaviour in the mockAPI during a test run.

1. Port-forward the mockAPI service:

```bash
# port-forward to localhost:8085
kubectl port-forward -n databricks-mock-api svc/databricks-mock-api 8085:8080
```

2. Issue a `PATCH` request to update the error rate, e.g. to set 20% probability for status code 500 responses:

```bash
curl --request PATCH \
  --url http://localhost:8085/config \
  --header 'content-type: application/json' \
  --data '{"DATABRICKS_MOCK_API_ERROR_500_PROBABILITY":20}'
```

> For more information see [mockAPI features](mockapi.md#Features)

## Contribute

- Test files are added to the `/behaviours` directory
- These files take the recommended format described by the Locust documentation, representing the behaviour of a single (or set of) users; a hedged sketch of such a file follows this list
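The sketch below is illustrative only: the `DbLocust` base class, its import path, and the `runs.submit`/`runs.delete` calls are assumptions modelled on the component descriptions in the next section, not code taken from the repo.

```python
# behaviours/example_run_submit.py -- illustrative only; the real scenarios live in /behaviours
from locust import TaskSet, task

# Assumed import path and class name for the project's custom Locust base class (db_locust).
from locust_files.db_locust import DbLocust


class RunSubmitBehaviour(TaskSet):
    """Behaviour of a single simulated user: submit a Databricks run, then delete it."""

    @task
    def submit_and_delete_run(self):
        # self.client is whatever client DbLocust exposes; runs.submit/runs.delete are
        # placeholders for the real db_run_client operations.
        run = self.client.runs.submit(notebook_path="/LoadTest/ExampleNotebook")
        self.client.runs.delete(run)


class RunSubmitUser(DbLocust):
    task_set = RunSubmitBehaviour
    min_wait = 1000  # wait 1-5 seconds between tasks (locust 0.x style)
    max_wait = 5000
```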
### Extending the supported Databricks functionality

- `/locust_files/db_locust` contains all files related to how Locust can interact with Databricks using K8s via the [azure-databricks-operator](https://github.com/microsoft/azure-databricks-operator/)
- `db_locust`: The brain of the behaviour-driven tests. Inherits from the default `Locust`, read more [here](https://docs.locust.io/en/stable/testing-other-systems.html)
- `db_client.py`: Core client used by the `db_locust`. It is a wrapper of "sub" clients that interface to specific databricks operator Kinds
- `db_run_client.py`: all actions relating to `run` api interfaces
- More clients to be added - ***this is where the majority of contributions will be made***
- `db_decorator.py`: A simple decorator for Databricks operations that gives basic metric logging and error handling (see the sketch after this list)
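To make the decorator idea concrete, here is a hedged sketch in the spirit of `db_decorator.py`, using the Locust 0.x request events from the documentation linked above; the `db_operation` name and the `request_type` label are illustrative assumptions rather than the repo's actual implementation.

```python
# Sketch of a db_decorator.py-style decorator using Locust 0.x request events.
import time
from functools import wraps

from locust import events


def db_operation(name):
    """Wrap a Databricks operation so Locust records its latency and failures."""

    def decorator(func):
        @wraps(func)
        def wrapper(*args, **kwargs):
            start = time.time()
            try:
                result = func(*args, **kwargs)
            except Exception as exc:
                # Report the failed operation to Locust's stats.
                events.request_failure.fire(
                    request_type="databricks", name=name,
                    response_time=(time.time() - start) * 1000, exception=exc,
                )
                raise
            # Report the successful operation to Locust's stats.
            events.request_success.fire(
                request_type="databricks", name=name,
                response_time=(time.time() - start) * 1000, response_length=0,
            )
            return result

        return wrapper

    return decorator
```

A sub-client method in `db_run_client.py` could then be annotated with `@db_operation("run_submit")` so every call shows up in the Locust stats and, through the endpoint described below, in Prometheus.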
### How do I update a dashboard

The best way I've found is to import the JSON for the board into the grafana instance, edit it using the UI, then export it back to JSON and update the file in the repo.

### Adding unit tests

The project is set up to automatically discover any tests under the `locust/test` folder, provided the following criteria are met (a minimal example follows the list):

- your test `.py` file follows the naming convention `<something>_test.py`
- within your test file your methods follow the naming convention `def test_<what you want to test>()`
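For example, a minimal test file that pytest would discover automatically (the file name and helper below are purely illustrative):

```python
# locust/test/unit/example_helpers_test.py -- hypothetical file following the <something>_test.py convention


def add_run_prefix(name: str) -> str:
    """Tiny stand-in for real project code under test."""
    return f"run-{name}"


def test_add_run_prefix_prepends_run():
    # The function name follows the required test_<what you want to test>() convention.
    assert add_run_prefix("loadtest") == "run-loadtest"
```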
## Prometheus Endpoint

This suite of Locust tests exposes stats to Prometheus via a web endpoint.

The endpoint is exposed at `/export/prometheus`. When running the tests with the web endpoints enabled, you can visit <http://localhost:8089/export/prometheus> to see the exported stats.
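As a quick sanity check - assuming the Locust web UI has been port-forwarded to localhost:8089, e.g. via `make port-forward` - you can fetch the endpoint and print just the metric samples; plain `curl http://localhost:8089/export/prometheus` works just as well.

```python
# Fetch the Prometheus export endpoint and print only the metric samples.
import urllib.request

with urllib.request.urlopen("http://localhost:8089/export/prometheus") as resp:
    body = resp.read().decode("utf-8")

for line in body.splitlines():
    # Skip the "# HELP" / "# TYPE" metadata lines of the exposition format.
    if line and not line.startswith("#"):
        print(line)
```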
## Known issues

- When the port you're forwarding your Locust server to is not exposed from the container, you cannot hit it from your localhost machine. Use the [VSCode temporary port forwarding](https://code.visualstudio.com/docs/remote/containers#_temporarily-forwarding-a-port) to resolve this.
- When the port you're forwarding your Locust server to is not exposed from the container, you cannot hit it from your localhost machine. Use the [VSCode temporary port-forwarding](https://code.visualstudio.com/docs/remote/containers#_temporarily-forwarding-a-port) to resolve this.
14 changes: 9 additions & 5 deletions docs/mockapi.md
@@ -1,6 +1,6 @@
# Mock Databricks API

The API found under `/mockapi` is a Databricks mock API for the following success scenarios:
The API found under `/mockapi` is a mock Databricks API for the following success scenarios:

- [Jobs/](https://docs.databricks.com/dev-tools/api/latest/jobs.html):
- Create
@@ -55,13 +55,13 @@ To allow rate-limiting requests to match Databricks API behaviour, a rate limit

### Configurable Errors

To configure a percentage of responses that return a status code 500 response in the mock-api you can set `DATABRICKS_MOCK_API_ERROR_500_PROBABILITY`.
To configure a percentage of responses that return a status code 500 response in the mockAPI you can set `DATABRICKS_MOCK_API_ERROR_500_PROBABILITY`.

E.g. setting `DATABRICKS_MOCK_API_ERROR_500_PROBABILITY` to `20` will return a status code 500 response for roughly 20% of responses.

To configure a percentage of calls that should sink-hole, i.e. return no response and keep the connection open for 10 minutes, you can set `DATABRICKS_MOCK_API_ERROR_SINKHOLE_PROBABILITY`. Probabilities are as for `DATABRICKS_MOCK_API_ERROR_500_PROBABILITY`.

To configure a percentage of calls that should return an XML response with status code 200 in the mock-api, you can set `DATABRICKS_MOCK_API_ERROR_XML_RESPONSE_PROBABILITY`. Probabilities are as for `DATABRICKS_MOCK_API_ERROR_500_PROBABILITY`.
To configure a percentage of calls that should return an XML response with status code 200 in the mockAPI, you can set `DATABRICKS_MOCK_API_ERROR_XML_RESPONSE_PROBABILITY`. Probabilities are as for `DATABRICKS_MOCK_API_ERROR_500_PROBABILITY`.

> NB: The combined probabilities must be <=100
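If the mock API has been port-forwarded (for example to localhost:8085, as in the load-test docs), the probabilities can also be changed at runtime by PATCHing its `/config` endpoint. The sketch below assumes `/config` accepts the sink-hole key in the same way it accepts the 500 key.

```python
# Hedged example: update the mockAPI error probabilities via its /config endpoint.
import json
import urllib.request

payload = json.dumps({
    "DATABRICKS_MOCK_API_ERROR_500_PROBABILITY": 20,       # ~20% of calls return HTTP 500
    "DATABRICKS_MOCK_API_ERROR_SINKHOLE_PROBABILITY": 10,  # ~10% of calls hang (assumed /config key)
}).encode("utf-8")

request = urllib.request.Request(
    "http://localhost:8085/config",
    data=payload,
    method="PATCH",
    headers={"content-type": "application/json"},
)
with urllib.request.urlopen(request) as response:
    print(response.status)  # expect a 2xx status if the config was accepted
```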
@@ -85,9 +85,13 @@ Once the devcontainer has built and started, use `make run-mock-api` to run the

## Running in Kind

To run the mock api in Kind run `make kind-deploy-mock-api`. This will ensure a Kind cluster is created, deploy Prometheus with helm, build and load a docker image for the mock api into the Kind cluster and then create a Deployment and Service.
1. Create a kind cluster, deploy Prometheus with helm, build and load a docker image for the mockAPI into the cluster and then create a deployment and service.

To test, run `kubectl port-forward svc/databricks-mock-api 8085:8080 -n databricks-mock-api` and make a request to <http://localhost:8085> to verify that the API is running
```bash
make kind-deploy-mock-api
```

2. Run `kubectl port-forward svc/databricks-mock-api 8085:8080 -n databricks-mock-api` and make a request to <http://localhost:8085> to verify that the API is running

## Running in a separate cluster

7 changes: 7 additions & 0 deletions kind-cluster.yaml
@@ -0,0 +1,7 @@
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
- role: worker
- role: worker
- role: worker
13 changes: 13 additions & 0 deletions portforwards.sh
@@ -0,0 +1,13 @@
#!/bin/bash
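# Kill any kubectl processes left over from a previous run (e.g. stale port-forwards), then reopen them.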

ps aux | grep [k]ubectl | awk '{print $2}' | xargs kill

echo "-------> Open port-forwards"
kubectl port-forward service/prom-azure-databricks-operator-grafana -n default 8080:80 &
kubectl port-forward service/prom-azure-databricks-oper-prometheus -n default 9091:9090 &
kubectl port-forward service/locust-loadtest 8089:8089 9090:9090 -n locust &

echo "Browse to locust webui -> http://localhost:8089/"
echo "Browse to locust metrics -> http://localhost:9090/"
echo "Browse to Prometheus -> http://localhost:9091/targets"
echo "Browse to Grafana -> http://localhost:8080/"
