Add note about system_auth RF configuration to scaling docs
rzetelskik committed Sep 25, 2023
1 parent 038a0aa commit 59efaa7
Showing 1 changed file with 19 additions and 19 deletions.
docs/source/generic.md
To change it, simply remove the secret. The operator will create a new one. To pick u

To set up monitoring using Prometheus and Grafana follow [this guide](monitoring.md).

## Scale out / scale down

The operator supports adding new nodes to existing racks, adding new racks to the cluster, and removing both individual nodes and entire racks. To introduce the changes, edit the cluster with:
```console
kubectl -n scylla edit scyllaclusters.scylla.scylladb.com simple-cluster
```
* To modify the number of nodes in a rack, update the `members` field of the selected rack to the desired value.
* To add a new rack, append it to the `.spec.datacenter.racks` list. Remember to choose a rack name different from the existing ones.
* To remove a rack, first scale it down to zero nodes, and then remove it from the `.spec.datacenter.racks` list.
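For illustration, a rack entry edited this way might look like the hypothetical fragment below (the datacenter and rack names and the member count are illustrative; a real `ScyllaCluster` rack also carries storage and resource settings):

```yaml
spec:
  datacenter:
    name: us-east-1
    racks:
      - name: us-east-1a
        # Increase to scale out, decrease to scale down;
        # set to 0 before removing the rack entry entirely.
        members: 3
```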

Having edited and saved the yaml, you can check your cluster's Status and Events to retrieve information about what's happening:
```console
kubectl -n scylla describe scyllaclusters.scylla.scylladb.com simple-cluster
```

:::{note}
If you configured Scylla with `authenticator` set to `PasswordAuthenticator`, you need to manually configure the replication factor of the `system_auth` keyspace after every scaling operation.

```console
kubectl -n scylla exec -it pods/simple-cluster-us-east-1-us-east-1a-0 -c scylla -- cqlsh -e "ALTER KEYSPACE system_auth WITH REPLICATION = {'class' : 'NetworkTopologyStrategy', 'us-east-1' : <new_replication_factor>};"
```

It is recommended to set the `system_auth` replication factor to the number of nodes in the datacenter.
:::
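As a concrete illustration (the datacenter name `us-east-1` and replication factor `3` are hypothetical; substitute the values for your cluster), the statement can be assembled in a shell variable and then passed to `cqlsh`:

```shell
# Hypothetical values; replace with your datacenter name and node count.
DC="us-east-1"
RF=3
CQL="ALTER KEYSPACE system_auth WITH REPLICATION = {'class' : 'NetworkTopologyStrategy', '${DC}' : ${RF}};"
echo "$CQL"
# To run it against the cluster, execute it via cqlsh in a Scylla pod, e.g.:
#   kubectl -n scylla exec -it pods/simple-cluster-us-east-1-us-east-1a-0 -c scylla -- cqlsh -e "$CQL"
```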

## Benchmark with cassandra-stress

After deploying our cluster along with the monitoring, we can benchmark it using cassandra-stress and see its performance in Grafana. We provide a mini CLI that generates Kubernetes Jobs that run cassandra-stress against a cluster.
After the Jobs finish, clean them up with:
kubectl delete -f scripts/cassandra-stress.yaml
```

## Clean Up

To clean up all resources associated with this walk-through, you can run the commands below.
