Configuration docs technical writer updates (k0sproject#883)
* Tech writer edits to k0s Configuration content.

Signed-off-by: KoryKessel-Docker <[email protected]>

* Edit in response to first wave of comments.

Signed-off-by: KoryKessel-Mirantis <[email protected]>

* Small fix to merge conflict (moved Configuration Validation to a separate topic).

Signed-off-by: KoryKessel-Mirantis <[email protected]>
KoryKessel-Mirantis authored May 20, 2021
1 parent b9c534c commit d58cbef
Showing 11 changed files with 286 additions and 268 deletions.
18 changes: 7 additions & 11 deletions docs/cloud-providers.md
@@ -1,20 +1,16 @@
# Using cloud providers

k0s builds Kubernetes components in *providerless* mode, meaning that cloud providers are not built into k0s-managed Kubernetes components. As such, you must configure the cloud providers externally to enable their support in your k0s cluster. For more information on running Kubernetes with cloud providers, refer to the [Kubernetes documentation](https://kubernetes.io/docs/tasks/administer-cluster/running-cloud-controller/).

## 1. Enable cloud provider support in kubelet

Even though all components are built in providerless mode, you must enable cloud provider mode for kubelet. To do this, run the workers with `--enable-cloud-provider=true`, which enables `--cloud-provider=external` on the kubelet process.
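
For example, to start a worker with cloud provider support enabled in kubelet (a sketch; `<token>` is a placeholder for your worker join token):

```sh
# Enables --cloud-provider=external on the kubelet process.
k0s worker --enable-cloud-provider=true <token>
```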

## 2. Deploy the cloud provider

The easiest way to deploy the cloud provider controllers is on the k0s cluster itself.

To deploy your cloud provider as a k0s-managed stack, use the [manifest deployer](manifests.md) built into k0s: drop all of the required manifests into a directory such as `/var/lib/k0s/manifests/aws/`, and k0s will handle the deployment.
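
For example, for an AWS cloud provider (a sketch; the manifest file name is hypothetical):

```sh
# k0s watches this directory and applies any manifests dropped into it.
sudo mkdir -p /var/lib/k0s/manifests/aws
sudo cp aws-cloud-controller-manager.yaml /var/lib/k0s/manifests/aws/
```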

**Note**: The prerequisites for the various cloud providers can vary (for example, several require that configuration files be present on all of the nodes). Refer to your chosen cloud provider's documentation as necessary.

13 changes: 13 additions & 0 deletions docs/configuration-validation.md
@@ -0,0 +1,13 @@

# Configuration validation

The k0s command-line interface can validate config syntax:

```
$ k0s validate config --config path/to/config/file
```

The `validate config` sub-command can validate the following:

1. YAML formatting
2. [SANs addresses](#specapi-1)
3. [Network providers](#specnetwork-1)
4. [Worker profiles](#specworkerprofiles)
200 changes: 108 additions & 92 deletions docs/configuration.md

Large diffs are not rendered by default.

157 changes: 82 additions & 75 deletions docs/containerd_config.md
@@ -2,13 +2,13 @@

[containerd](https://github.com/containerd/containerd) is an industry-standard container runtime.

**NOTE:** In most use cases, changes to the containerd configuration are not required.

To make changes to the containerd configuration, first generate a default containerd configuration, with the default values set in `/etc/k0s/containerd.toml`:

```
containerd config default > /etc/k0s/containerd.toml
```

`k0s` runs containerd with the following default values:
```
--config=/etc/k0s/containerd.toml
```

Next, add the following default values to the configuration file:

```
version = 2
root = "/var/lib/k0s/containerd"
state = "/var/lib/k0s/run/containerd"
address = "/var/lib/k0s/run/containerd.sock"
```

Finally, if you want to change the CRI, look into this section:

```
[plugins."io.containerd.runtime.v1.linux"]
```

## Using gVisor

[gVisor](https://gvisor.dev/docs/) is an application kernel, written in Go, that implements a substantial portion of the Linux system call interface. It provides an additional layer of isolation between running applications and the host operating system.

1. Install the needed gVisor binaries on the host:

    ```sh
    (
      set -e
      URL=https://storage.googleapis.com/gvisor/releases/release/latest
      wget ${URL}/runsc ${URL}/runsc.sha512 \
        ${URL}/gvisor-containerd-shim ${URL}/gvisor-containerd-shim.sha512 \
        ${URL}/containerd-shim-runsc-v1 ${URL}/containerd-shim-runsc-v1.sha512
      sha512sum -c runsc.sha512 \
        -c gvisor-containerd-shim.sha512 \
        -c containerd-shim-runsc-v1.sha512
      rm -f *.sha512
      chmod a+rx runsc gvisor-containerd-shim containerd-shim-runsc-v1
      sudo mv runsc gvisor-containerd-shim containerd-shim-runsc-v1 /usr/local/bin
    )
    ```

    Refer to the [gVisor install docs](https://gvisor.dev/docs/user_guide/install/) for more information.

2. Prepare the configuration for `k0s`-managed containerd, to utilize gVisor as an additional runtime:

    ```sh
    cat <<EOF | sudo tee /etc/k0s/containerd.toml
    disabled_plugins = ["restart"]
    [plugins.linux]
    shim_debug = true
    [plugins.cri.containerd.runtimes.runsc]
    runtime_type = "io.containerd.runsc.v1"
    EOF
    ```

3. Start the worker and join it to the cluster, as normal:

    ```sh
    k0s worker $token
    ```

4. Register the gVisor runtime to Kubernetes, to make it usable for workloads (by default, containerd uses the standard runc as its runtime):

    ```sh
    cat <<EOF | kubectl apply -f -
    apiVersion: node.k8s.io/v1beta1
    kind: RuntimeClass
    metadata:
      name: gvisor
    handler: runsc
    EOF
    ```

    At this point, you can use the gVisor runtime for your workloads:

    ```yaml
    apiVersion: v1
    kind: Pod
    metadata:
      name: nginx-gvisor
    spec:
      runtimeClassName: gvisor
      containers:
      - name: nginx
        image: nginx
    ```

5. (Optional) Verify that the created nginx pod is running under the gVisor runtime:

    ```
    # kubectl exec nginx-gvisor -- dmesg | grep -i gvisor
    [ 0.000000] Starting gVisor...
    ```

## Using custom `nvidia-container-runtime`

By default, the CRI is set to runc. To configure Nvidia GPU support, replace `runc` with `nvidia-container-runtime`:
```
[plugins."io.containerd.runtime.v1.linux"]
shim = "containerd-shim"
runtime = "nvidia-container-runtime"
```
**Note**: Detailed instructions on how to run `nvidia-container-runtime` on your node are available [here](https://josephb.org/blog/containerd-nvidia/).

After editing the configuration, restart `k0s` for containerd to start using the newly configured runtime.
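
For example, on a worker installed as a service (a sketch, assuming the `k0sworker` service name that `k0s install worker` creates):

```sh
# Restart the k0s worker so that containerd picks up the new runtime.
sudo systemctl restart k0sworker
```
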
12 changes: 4 additions & 8 deletions docs/custom-cri-runtime.md
@@ -1,13 +1,9 @@
# Custom CRI runtime

**Warning**: You can use your own CRI runtime with k0s (for example, `docker`); however, k0s will not start or manage the runtime, and its configuration is solely your responsibility.

Use the `--cri-socket` option to run a k0s worker with a custom CRI runtime. The option takes input in the form of `<type>:<socket_path>`, where `type` is either `docker` (for a pure Docker setup) or `remote` (for anything else), and `socket_path` is the path to the socket (for example, `unix:///var/run/docker.sock`).

To run k0s with a pre-existing Docker setup, run the worker with `k0s worker --cri-socket docker:unix:///var/run/docker.sock <token>`.
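
For example (a sketch; `<token>` is a placeholder for your worker join token):

```sh
# Run a k0s worker against a pre-existing Docker daemon.
k0s worker --cri-socket docker:unix:///var/run/docker.sock <token>
```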

When `docker` is used as the runtime, k0s configures kubelet to create the dockershim socket at `/var/run/dockershim.sock`.
33 changes: 15 additions & 18 deletions docs/dual-stack.md
@@ -1,9 +1,9 @@
# Dual-stack Networking

**Note:** Dual-stack networking setup requires that you configure Calico or a custom CNI as the CNI provider.

Use the following `k0s.yaml` as a template to enable dual-stack networking. This configuration will set up the bundled Calico CNI, enable feature gates for the Kubernetes components, and set up `kubernetes-controller-manager`:

```
spec:
  network:
    ...
    IPv6podCIDR: "fd00::/108"
    IPv6serviceCIDR: "fd01::/108"
```

## CNI Settings: Calico

For cross-pod connectivity, use BIRD as the backend. Calico does not support tunneling for IPv6, so the VXLAN and IPIP backends do not work.

**Note**: In any Calico backend mode other than BIRD, pods can only reach pods on the same node.
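
To select the BIRD backend, a minimal `k0s.yaml` sketch (assuming the bundled Calico CNI and the `spec.network.calico.mode` option from the [configuration reference](configuration.md)):

```
spec:
  network:
    calico:
      mode: bird
```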

## CNI Settings: External CNI

Although the `k0s.yaml` dualStack section enables all of the necessary feature gates for the Kubernetes components, an external CNI must itself be set up to support IPv6.

## Additional Resources

* https://kubernetes.io/docs/concepts/services-networking/dual-stack/
* https://kubernetes.io/docs/tasks/network/validate-dual-stack/
* https://www.projectcalico.org/dual-stack-operation-with-calico-on-kubernetes/
* https://docs.projectcalico.org/networking/ipv6
23 changes: 11 additions & 12 deletions docs/high-availability.md
@@ -1,23 +1,22 @@
# Control Plane High Availability

Configuring a highly available control plane for k0s requires both a load balancer and a cluster configuration file.

## Load Balancer

Configure a load balancer with a single external address as the IP gateway for the controllers. Set the load balancer to allow traffic to each controller through the following ports:

- 6443
- 8132
- 8133
- 9443
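
Once the load balancer is up, a quick connectivity check (a sketch; `LB_ADDR` is a placeholder for your load balancer's address):

```sh
# Verify that the load balancer accepts connections on each control plane port.
LB_ADDR=192.168.0.100   # placeholder address
for port in 6443 8132 8133 9443; do
  nc -zv "$LB_ADDR" "$port"
done
```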

## Cluster configuration

Configure a `k0s.yaml` configuration file for each controller node. The following options must match on each node, as otherwise the control plane components will end up in unknown states:

- `network`
- `storage`
- `externalAddress`
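
A minimal sketch of the matching options in `k0s.yaml` (assuming the v1beta1 schema, in which `externalAddress` lives under `spec.api`; the address is a hypothetical load balancer address and the CIDRs are the k0s defaults):

```
spec:
  api:
    externalAddress: 192.168.0.100   # hypothetical load balancer address
  network:
    podCIDR: 10.244.0.0/16
    serviceCIDR: 10.96.0.0/12
  storage:
    type: etcd
```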

For greater detail, refer to the [full configuration file reference](configuration.md).