Commit df6fd67

Merge pull request #1 from arangodb/custom-resource-spec

Added specification of custom resource

ewoutp authored Feb 8, 2018
2 parents bb9b731 + 2eb8118 commit df6fd67
Showing 5 changed files with 316 additions and 35 deletions.
1 change: 1 addition & 0 deletions docs/README.md
@@ -7,6 +7,7 @@
- [ArangoDB configuration & secrets](./config_and_secrets.md)
- [Metrics](./metrics.md)
- [Scaling](./scaling.md)
- [Services & Load balancer](./services_and_loadbalancer.md)
- [Storage](./storage.md)
- [Upgrading](./upgrading.md)

17 changes: 6 additions & 11 deletions docs/config_and_secrets.md
@@ -11,17 +11,11 @@ arguments configured in the Pod-spec.

## Other configuration options

### Option 1
All commandline options of `arangod` (and `arangosync`) are available
by adding options to the `spec.<group>.args` list of a group
of servers.

Use a `ConfigMap` per type of ArangoDB server.
The operator passes the options listed in the configmap
as commandline options to the ArangoDB servers.

TODO Discuss format of ConfigMap content. Is it `arangod.conf` like?

### Option 2

Add ArangoDB option sections to the custom resource.
These arguments are added to the commandline created for these servers.

## Secrets

@@ -41,5 +35,6 @@ metadata:
name: "example-arangodb-cluster"
spec:
mode: cluster
jwtTokenSecretName: <name-of-JWT-token-secret>
auth:
jwtSecretName: <name-of-JWT-token-secret>
```
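
The `Secret` referenced by `jwtSecretName` can also be created up front as a plain Kubernetes `Secret`. A minimal sketch, assuming the token is stored under a key named `token` (the key name and the resource name are assumptions, not stated in this document):

```yaml
# Hypothetical JWT token Secret; the key name `token` is an assumption.
apiVersion: v1
kind: Secret
metadata:
  name: example-jwt-token
type: Opaque
stringData:
  token: "my-jwt-signing-key"
```

If the `Secret` does not exist, the operator creates one with a random token, so creating it manually is only needed when you want to control the token value.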
218 changes: 215 additions & 3 deletions docs/custom_resource.md
@@ -26,20 +26,232 @@ metadata:
spec:
mode: cluster
agents:
servers: 3
count: 3
args:
- --log.level=debug
resources:
requests:
storage: 8Gi
storageClassName: ssd
dbservers:
servers: 5
count: 5
resources:
requests:
storage: 80Gi
storageClassName: ssd
coordinators:
servers: 3
count: 3
image: "arangodb/arangodb:3.3.3"
```
## Specification reference
Below you'll find all settings of the `Cluster` custom resource.
Several settings are for various groups of servers. These are indicated
with `<group>` where `<group>` can be any of:

- `agents` for all agents of a `cluster` or `resilientsingle` pair.
- `dbservers` for all dbservers of a `cluster`.
- `coordinators` for all coordinators of a `cluster`.
- `single` for all single servers of a `single` instance or `resilientsingle` pair.
- `syncmasters` for all syncmasters of a `cluster`.
- `syncworkers` for all syncworkers of a `cluster`.

### `spec.mode: string`

This setting specifies the type of cluster you want to create.
Possible values are:

- `cluster` (default) Full cluster. Defaults to 3 agents, 3 dbservers & 3 coordinators.
- `resilientsingle` Resilient single pair. Defaults to 3 agents and 2 single servers.
- `single` Single server only (note this does not provide high availability or reliability).

This setting cannot be changed after the cluster has been created.
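
For example, a resilient single pair can be requested with just the mode. A sketch following the example at the top of this document (the resource name is illustrative):

```yaml
metadata:
  name: "example-resilient-pair"
spec:
  mode: resilientsingle   # defaults to 3 agents and 2 single servers
```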

### `spec.environment: string`

This setting specifies the type of environment in which the cluster is created.
Possible values are:

- `development` (default) This value optimizes the cluster for development
use. It is possible to run a cluster on a small number of nodes (e.g. minikube).
- `production` This value optimizes the cluster for production use.
It puts required affinity constraints on all pods to prevent agents & dbservers
from running on the same machine.
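
A production deployment must also pin an explicit image. A sketch (the resource name is illustrative):

```yaml
metadata:
  name: "example-production-cluster"
spec:
  mode: cluster
  environment: production            # adds anti-affinity constraints
  image: "arangodb/arangodb:3.3.3"   # required in production, no default
```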

### `spec.image: string`

This setting specifies the docker image to use for all ArangoDB servers.
In a `development` environment this setting defaults to `arangodb/arangodb:latest`.
For `production` environments this is a required setting without a default value.
It is highly recommended to use an explicit version (not `latest`) for production
environments.

### `spec.imagePullPolicy: string`

This setting specifies the pull policy for the docker image to use for all ArangoDB servers.
Possible values are:

- `IfNotPresent` (default) to pull only when the image is not found on the node.
- `Always` to always pull the image before using it.

### `spec.storageEngine: string`

This setting specifies the type of storage engine used for all servers
in the cluster.
Possible values are:

- `mmfiles` (default) To use the MMFiles storage engine.
- `rocksdb` To use the RocksDB storage engine.

This setting cannot be changed after the cluster has been created.

### `spec.rocksdb.encryption.keySecretName`

This setting specifies the name of a kubernetes `Secret` that contains
an encryption key used for encrypting all data stored by ArangoDB servers.
When an encryption key is used, encryption of the data in the cluster is enabled;
without it, encryption is disabled.
The default value is empty.

This requires the Enterprise version.

The encryption key cannot be changed after the cluster has been created.
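
A sketch of how this fits together; the `Secret` layout (key name `key`) and all names are illustrative assumptions, not confirmed by this document:

```yaml
# Hypothetical Secret holding the encryption key; the key name `key`
# and the base64 payload are illustrative assumptions.
apiVersion: v1
kind: Secret
metadata:
  name: example-encryption-key
type: Opaque
data:
  key: <base64-encoded-encryption-key>
---
# Referencing it from the custom resource (requires the Enterprise version):
metadata:
  name: "example-encrypted-cluster"
spec:
  mode: cluster
  storageEngine: rocksdb
  rocksdb:
    encryption:
      keySecretName: example-encryption-key
```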

### `spec.auth.jwtSecretName: string`

This setting specifies the name of a kubernetes `Secret` that contains
the JWT token used for accessing all ArangoDB servers.
When a JWT token is used, authentication of the cluster is enabled; without it,
authentication is disabled.
The default value is empty.

If you specify the name of a `Secret` that does not exist, a random token is created
and stored in a `Secret` with the given name.

Changing a JWT token results in stopping the entire cluster
and restarting it.

### `spec.ssl.keySecretName: string`

This setting specifies the name of a kubernetes `Secret` that contains
a PEM encoded server certificate + private key used for all TLS connections
of the ArangoDB servers.
The default value is empty.

If you specify the name of a `Secret` that does not exist, a certificate + key is created
using the values of `spec.ssl.serverName` & `spec.ssl.organizationName`
and stored in a `Secret` with the given name.

### `spec.ssl.organizationName: string`

This setting specifies the name of an organization that is put in an automatically
generated SSL certificate (see `spec.ssl.keySecretName`).
The default value is empty.

### `spec.ssl.serverName: string`

This setting specifies the name of a server that is put in an automatically
generated SSL certificate (see `spec.ssl.keySecretName`).
Besides this name, the internal DNS names of all ArangoDB servers are added
to the list of valid hostnames of the certificate. It is therefore not possible
to use this feature when scaling the cluster to more servers, since the newly
added servers will not be listed in the certificate.
The default value is empty.
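
Putting the three `spec.ssl` settings together, a sketch (all names and values are illustrative):

```yaml
metadata:
  name: "example-tls-cluster"
spec:
  mode: cluster
  ssl:
    keySecretName: example-tls-keyfile   # auto-created if the Secret does not exist
    organizationName: "Example Org"
    serverName: "arangodb.example.com"
```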

**TODO Really think this through. Restriction does not sound right.**

### `spec.sync.enabled: bool`

This setting enables/disables support for datacenter-to-datacenter
replication in the cluster. When enabled, the cluster will contain
a number of `syncmaster` & `syncworker` servers.
The default value is `false`.

### `spec.sync.image: string`

This setting specifies the docker image to use for all ArangoSync servers.
When not specified, the `spec.image` value is used.

### `spec.sync.imagePullPolicy: string`

This setting specifies the pull policy for the docker image to use for all ArangoSync servers.
For possible values, see `spec.imagePullPolicy`.
When not specified, the `spec.imagePullPolicy` value is used.

### `spec.sync.auth.jwtSecretName: string`

This setting specifies the name of a kubernetes `Secret` that contains
the JWT token used for accessing all ArangoSync master servers.
When not specified, the `spec.auth.jwtSecretName` value is used.

If you specify the name of a `Secret` that does not exist, a random token is created
and stored in a `Secret` with the given name.

### `spec.sync.auth.clientCASecretName: string`

This setting specifies the name of a kubernetes `Secret` that contains
a PEM encoded CA certificate used for client certificate verification
in all ArangoSync master servers.
This is a required setting when `spec.sync.enabled` is `true`.
The default value is empty.

### `spec.sync.mq.type: string`

This setting sets the type of message queue used by ArangoSync.
Possible values are:

- `direct` (default) for direct HTTP connections between the 2 data centers.
- `kafka` for using Kafka queues.

### `spec.sync.ssl.keySecretName: string`

This setting specifies the name of a kubernetes `Secret` that contains
a PEM encoded server certificate + private key used for the TLS connections
of all ArangoSync master servers.
This is a required setting when `spec.sync.enabled` is `true`.
The default value is empty.
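
A sketch combining the sync settings described above; the secrets that are required when `spec.sync.enabled` is `true` are called out in comments (all names are illustrative):

```yaml
metadata:
  name: "example-dc2dc-cluster"
spec:
  mode: cluster
  sync:
    enabled: true
    auth:
      clientCASecretName: example-sync-client-ca   # required when sync is enabled
    ssl:
      keySecretName: example-sync-tls-keyfile      # required when sync is enabled
    mq:
      type: direct   # or `kafka`
```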

### `spec.sync.monitoring.tokenSecretName: string`

This setting specifies the name of a kubernetes `Secret` that contains
the bearer token used for accessing all monitoring endpoints of all ArangoSync
servers.
When not specified, no monitoring token is used.
The default value is empty.

### `spec.ipv6.disabled: bool`

This setting prevents the use of IPv6 addresses by ArangoDB servers.
The default is `false`.

### `spec.<group>.count: number`

This setting specifies the number of servers to start for the given group.
For the agent group, this value must be a positive, odd number.
The default value is `3` for all groups except `single` (there the default is `1`
for `spec.mode: single` and `2` for `spec.mode: resilientsingle`).

For the `syncworkers` group, it is highly recommended to use the same number
as for the `dbservers` group.
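
For example, scaling the dbservers while keeping the syncworkers in step. A sketch, assuming sync is enabled elsewhere in the spec (the resource name is illustrative):

```yaml
metadata:
  name: "example-scaled-cluster"
spec:
  mode: cluster
  dbservers:
    count: 5
  syncworkers:
    count: 5   # recommended to match the dbservers count
```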

### `spec.<group>.args: [string]`

This setting specifies additional commandline arguments passed to all servers of this group.
The default value is an empty array.

### `spec.<group>.resources.requests.storage: storageUnit`

This setting specifies the amount of storage required for each server of this group.
The default value is `8Gi`.

This setting is not available for the groups `coordinators`, `syncmasters` & `syncworkers`
because servers in these groups do not need persistent storage.

### `spec.<group>.storageClassName: string`

This setting specifies the `storageClass` for the `PersistentVolume`s created
for each server of this group.

This setting is not available for the groups `coordinators`, `syncmasters` & `syncworkers`
because servers in these groups do not need persistent storage.
41 changes: 20 additions & 21 deletions docs/resource_and_labels.md
@@ -7,61 +7,60 @@ cluster deployment models.

For a single server deployment, the following k8s resources are created:

- Pod running ArangoDB single server named `<cluster-name>_arangodb`.
- `Pod` running ArangoDB single server named `<cluster-name>`.
- Labels:
- `app=arangodb`
- `arangodb_cluster_name: <cluster-name>`
- `role: single`
- PersistentVolumeClaim for data stored in the single server, named `<cluster-name>_arangodb_pvc`.
- `PersistentVolumeClaim` for data stored in the single server, named `<cluster-name>_pvc`.
- Labels:
- `app=arangodb`
- `arangodb_cluster_name: <cluster-name>`
- `role: single`
- Service for accessing the single server, named `<cluster-name>_arangodb`.
The service will provide access to the single server from within the k8s cluster.
- `Service` for accessing the single server, named `<cluster-name>`.
The service will provide access to the single server from within the Kubernetes cluster.
- Labels:
- `app=arangodb`
- `arangodb_cluster_name: <cluster-name>`
- `role: single`

## Full cluster

For a full cluster deployment, the following k8s resources are created:
For a full cluster deployment, the following Kubernetes resources are created:

- Pods running ArangoDB agent named `<cluster-name>_agent_<x>`.
- `Pods` running ArangoDB agent named `<cluster-name>_agent_<x>`.
- Labels:
- `app=arangodb`
- `arangodb_cluster_name: <cluster-name>`
- `role: agent`
- PersistentVolumeClaims for data stored in the agents, named `<cluster-name>_agent_pvc_<x>`.

- `PersistentVolumeClaims` for data stored in the agents, named `<cluster-name>_agent_pvc_<x>`.
- Labels:
- `app=arangodb`
- `arangodb_cluster_name: <cluster-name>`
- `role: agent`

- Pods running ArangoDB coordinators named `<cluster-name>_coordinator_<x>`.
- `Pods` running ArangoDB coordinators named `<cluster-name>_coordinator_<x>`.
- Labels:
- `app=arangodb`
- `arangodb_cluster_name: <cluster-name>`
- `role: coordinator`
- PersistentVolumeClaims for data stored in the agents, named `<cluster-name>_agent_pvc_<x>`.
- Labels:
- `app=arangodb`
- `arangodb_cluster_name: <cluster-name>`
- `role: agent`
- Note: Coordinators are configured to use an `emptyDir` volume since
they do not need persistent storage.

- Pods running ArangoDB dbservers named `<cluster-name>_dbserver_<x>`.
- `Pods` running ArangoDB dbservers named `<cluster-name>_dbserver_<x>`.
- Labels:
- `app=arangodb`
- `arangodb_cluster_name: <cluster-name>`
- `role: dbserver`
- PersistentVolumeClaims for data stored in the dbservers, named `<cluster-name>_dbserver_pvc_<x>`.

- `PersistentVolumeClaims` for data stored in the dbservers, named `<cluster-name>_dbserver_pvc_<x>`.
- Labels:
- `app=arangodb`
- `arangodb_cluster_name: <cluster-name>`
- `role: dbserver`

- Service (no cluster IP) for accessing all servers, named `<cluster-name>_arangodb_internal`.
- Headless `Service` for accessing all servers, named `<cluster-name>_servers`.
The service will provide access to all servers from within the k8s cluster.
- Labels:
- `app=arangodb`
@@ -70,7 +69,7 @@ For a full cluster deployment, the following k8s resources are created:
- `app=arangodb`
- `arangodb_cluster_name: <cluster-name>`

- Service (normal cluster IP) for accessing all coordinators, named `<cluster-name>`.
- `Service` for accessing all coordinators, named `<cluster-name>`.
The service will provide access to all coordinators from within the k8s cluster.
- Labels:
- `app=arangodb`
@@ -87,17 +86,17 @@ For a full cluster with datacenter replication deployment,
the same resources are created as for a Full cluster, with the following
additions:

- Pods running ArangoSync workers named `<cluster-name>_syncworker_<x>`.
- `Pods` running ArangoSync workers named `<cluster-name>_syncworker_<x>`.
- Labels:
- `app=arangodb`
- `arangodb_cluster_name: <cluster-name>`
- `role: syncworker`

- Pods running ArangoSync master named `<cluster-name>_coordinator_<x>`.
- `Pods` running ArangoSync master named `<cluster-name>_coordinator_<x>`.
- Labels:
- `app=arangodb`
- `arangodb_cluster_name: <cluster-name>`
- `role: syncmaster`

- Service for accessing the sync masters & workers, named `<cluster-name>-sync`.
The service will provide access to all syncmasters & workers from within the k8s cluster.
- `Service` for accessing the sync masters, named `<cluster-name>_sync`.
The service will provide access to all syncmasters from within the Kubernetes cluster.