[WIP] kubeadm and add-on docs #1265

Merged 9 commits on Sep 26, 2016.
8 changes: 6 additions & 2 deletions _data/guides.yml
@@ -8,10 +8,12 @@ toc:
  section:
  - title: What is Kubernetes?
    path: /docs/whatisk8s/
  - title: Installing Kubernetes on Linux with kubeadm
    path: /docs/getting-started-guides/kubeadm/
  - title: Hello World on Google Container Engine
    path: /docs/hellonode/
  - title: Downloading or Building Kubernetes
    path: /docs/getting-started-guides/binary_release/
  - title: Hello World Walkthrough
    path: /docs/hellonode/
  - title: Online Training Course
    path: https://www.udacity.com/course/scalable-microservices-with-kubernetes--ud615

@@ -250,6 +252,8 @@ toc:
    path: /docs/admin/
  - title: Cluster Management Guide
    path: /docs/admin/cluster-management/
  - title: Installing Addons
    path: /docs/admin/addons/
  - title: Sharing a Cluster with Namespaces
    path: /docs/admin/namespaces/
  - title: Namespaces Walkthrough
25 changes: 25 additions & 0 deletions docs/admin/addons.md
@@ -0,0 +1,25 @@
---
> **Member (@caseydavenport), Sep 19, 2016:** I think it's a bit confusing that these are called "addons", given there are a bunch of addons that already exist in the cluster/addons directory that have different semantics than what is described here. How does this relate to the ones here? https://github.com/kubernetes/kubernetes/tree/master/cluster/addons
>
> I think we either want to document the existing addons in that directory and their relationship to this document, or rename the things in this document to something else (e.g. "extensions").

> **Contributor Author:** I think what makes the most sense is to split the definition of add-ons into "built-in add-ons" and "third-party add-ons". The former will obey the semantics in the cluster addons README, and the latter will be "just `kubectl apply -f` it" style external add-ons like Weave Net. How does that sound @caseydavenport?

> **Member:** Hmm, I think there is some overlap between this comment and the comments below. The best thing in the short term is probably to:
>
> * Keep calling all of these things addons, but don't mandate an installation method. Each addon should document its own installation procedure. This keeps the docs leaner and easier to maintain, since the documentation for each addon lives with that addon, and some addons will be more or less complex than others.
> * Link to each of the existing /cluster/addons/X addons in this doc.
> * As the old /cluster directory is torn down, those existing addons will either find new homes or go away.
>
> WDPT?

> **Contributor:** I'd love to see all addons move to being `kubectl apply -f`. It'll be much easier to install and manage them. They were done the way they were because we didn't have DaemonSets at the time, etc.

> **Contributor Author (@lukemarsden), Sep 22, 2016:** I'll have a go at wordsmithing this to strike a balance between simplicity and explaining to users that we're transitioning from built-in add-ons to "self-hosted" ones.

> **Contributor Author:** I have attempted to simplify the add-ons page per this (and other) discussion: https://lukemarsden.github.io/docs/admin/addons/

> **Member:** See my comment here: https://github.com/kubernetes/kubernetes.github.io/pull/1265/files#r80270865
>
> To summarize, I think the clarification re: addons vs. the "/cluster/addons" directory is much better! I still think we could improve the rest of the page so that it comes across less like marketing.

---

## Overview

Add-ons extend the functionality of Kubernetes.

This page lists some of the available add-ons and links to their respective installation instructions.

## Networking and Network Policy
> **Member:** Honestly, reading this now, all three of these still feel like marketing speak, and I really don't want this to be a marketing page, especially as more solutions get added to the list. I think this has a lot of potential to become a "my technology is better than your technology" fest. I still feel pretty strongly that the one-sentence descriptions do not belong here.
>
> On a secondary note, I had thought lists like this should be in alphabetical order. I understand that puts Calico at the top, but the intention is just that some level of order is maintained rather than everyone scrambling for the top. An alternative that puts Weave at the top would be to use reverse alphabetical order :)

> **Contributor Author:** I'd be happy to try to make the sentences feel less like marketing speak, but I feel strongly that giving the user a description of what each thing is makes for a much better UX than forcing them to click through to each page, which would be the case if it were a bare list of project names. (Remember, to a new user, the projects' names themselves will likely be meaningless.)

> **Member:** Yeah, my feeling is that there is scope for "cookie-cutter" type statements about each project that give users a 1000ft view of what each does:
>
> * X provides networking and network policy.
> * Y provides networking.
> * Z provides network policy.
>
> Things like "simple, scalable, secure" and "easy, fast and reliable" certainly rub me the wrong way (I understand that I provided some of that verbiage...), and listing the various features of each project is either going to be super verbose or provide only a limited picture of each to the reader.
>
> If I were a user picking a networking option, this is not the page I'd use to make my decision, though it might be where I start looking for names to investigate.
>
> I understand we might just be diametrically opposed here, and I don't want to be the last one holding up these docs. Does anyone else have an opinion here?

> **Contributor Author:** I've toned down the 'marketing speak' in b2458c4 😄


* [Weave Net](https://github.com/weaveworks/weave-kube) provides networking and network policy, keeps working on both sides of a network partition, and does not require an external database.
* [Calico](https://github.com/projectcalico/calico-containers/tree/master/docs/cni/kubernetes/manifests/kubeadm) is a secure L3 networking and network policy provider.
* [Canal](https://github.com/tigera/canal/tree/master/k8s-install/kubeadm) unites Flannel and Calico, providing networking and network policy.
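
Whichever provider you choose, installation is typically a single `kubectl apply` of the manifest that the project publishes. As a sketch, using the Weave Net manifest linked above (the other projects document their own manifest URLs):

    # kubectl apply -f https://git.io/weave-kube
    daemonset "weave-net" created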

## Visualization & Control

* [Weave Scope](https://www.weave.works/documentation/scope-latest-installing/#k8s) is a tool for graphically visualizing your containers, pods, services, etc. Use it in conjunction with a [Weave Cloud account](https://cloud.weave.works/) or host the UI yourself.
> **Member:** Mind adding Dashboard here also? That's kind of helpful for new users.

> **Contributor Author:** done, thanks

* [Dashboard](https://github.com/kubernetes/dashboard#kubernetes-dashboard) is a web-based dashboard UI for Kubernetes.
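
The usual install pattern applies here too. A sketch follows; the manifest path below is an assumption, so check the Dashboard README linked above for the current command:

    # kubectl create -f https://rawgit.com/kubernetes/dashboard/master/src/deploy/kubernetes-dashboard.yaml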

## Legacy Add-ons

There are several other add-ons documented in the deprecated [cluster/addons](https://github.com/kubernetes/kubernetes/tree/master/cluster/addons) directory.

Well-maintained ones should be linked to here. PRs welcome!
254 changes: 254 additions & 0 deletions docs/getting-started-guides/kubeadm.md
@@ -0,0 +1,254 @@
---
---

<style>
li>.highlighter-rouge {position:relative; top:3px;}
</style>

## Overview

This quickstart shows you how to easily install a secure Kubernetes cluster on machines running Ubuntu 16.04 or CentOS 7.
> **Member:** How about Fedora 24 and RHEL?
>
> **Contributor Author:** We haven't tested them, so I'm not going to advertise that we work with them. We can do so once we have tested them.

The installation uses a tool called `kubeadm`, which is part of Kubernetes 1.4.

This process works with local VMs, physical servers and/or cloud servers.
It is simple enough that you can easily integrate it into your own automation (Terraform, Chef, Puppet, etc.).

**The `kubeadm` tool is currently in alpha but please try it out and give us [feedback](/docs/getting-started-guides/kubeadm/#feedback)!**

## Prerequisites

1. One or more machines running Ubuntu 16.04 or CentOS 7
1. 1GB or more of RAM per machine (any less will leave little room for your apps)
1. Full network connectivity between all machines in the cluster (public or private network is fine)

## Objectives

* Install a secure Kubernetes cluster on your machines
* Install a pod network on the cluster so that application components (pods) can talk to each other
* Install a sample microservices application (a socks shop) on the cluster

## Instructions

### (1/4) Installing kubelet and kubeadm on your hosts

You will install the following packages on all the machines:

* `docker`: the container runtime, which Kubernetes depends on.
* `kubelet`: the core component of Kubernetes.
It runs on all of the machines in your cluster and does things like starting pods and containers.
* `kubectl`: the command to control the cluster once it's running.
You will only use this on the master.
* `kubeadm`: the command to bootstrap the cluster.

For each host in turn:

> **Member:** Mention that apt-transport-https is required if it doesn't exist.
>
> **Contributor Author:** added

<!--
# curl https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add -
# cat <<EOF > /etc/apt/sources.list.d/kubernetes.list
deb http://packages.cloud.google.com/apt kubernetes-xenial main
EOF
# apt-get update
# apt-get install -y kubeadm docker.io
-->


* SSH into the machine and become `root` if you are not already (for example, run `sudo su -`).
* If the machine is running Ubuntu 16.04, run:

      # apt-get install -y docker.io socat apt-transport-https
> **Contributor Author:** These instructions need to be updated when @mikedanese's official packages land.

      # curl -s -L \
          https://storage.googleapis.com/kubeadm/kubernetes-xenial-preview-bundle.txz | tar xJv
      # dpkg -i kubernetes-xenial-preview-bundle/*.deb

If the machine is running CentOS 7, run:

      # cat <<EOF > /etc/yum.repos.d/k8s.repo
> **Contributor Author:** These instructions need to be updated when @mikedanese's official packages land.

      [kubelet]
      name=kubelet
      baseurl=http://files.rm-rf.ca/rpms/kubelet/
      enabled=1
      gpgcheck=0
      EOF
      # yum install docker kubelet kubeadm kubectl kubernetes-cni
> **Member:** @dgoodwin When you get the deps right, it should be just kubeadm. Deps: `kubeadm => kubelet, kubectl`; `kubelet => kubernetes-cni`.
>
> **Contributor Author:** I'll wait for @dgoodwin to ask me to change the instructions in the docs.

      # systemctl enable docker && systemctl start docker
      # systemctl enable kubelet && systemctl start kubelet

The kubelet is now restarting every few seconds, as it waits in a crashloop for `kubeadm` to tell it what to do.
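
If you are curious, you can observe the crashloop with systemd's tools (illustrative output; details vary by distribution):

    # systemctl status kubelet | grep Active
       Active: activating (auto-restart) (Result: exit-code) since ...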

### (2/4) Initializing your master

The master is the machine where the "control plane" components run, including `etcd` (the cluster database) and the API server (which the `kubectl` CLI communicates with).
All of these components run in pods started by `kubelet`.

To initialize the master, pick one of the machines you previously installed `kubelet` and `kubeadm` on, and run:

    # kubeadm init --use-kubernetes-version v1.4.0-beta.11
> **Contributor Author:** Need to remove `--use-kubernetes-version v1.4.0-beta.11` when @mikedanese's official packages land.


This will download and install the cluster database and "control plane" components.
This may take several minutes.

The output should look like:

    <master/tokens> generated token: "f0c861.753c505740ecde4c"
    <master/pki> created keys and certificates in "/etc/kubernetes/pki"
    <util/kubeconfig> created "/etc/kubernetes/kubelet.conf"
    <util/kubeconfig> created "/etc/kubernetes/admin.conf"
    <master/apiclient> created API client configuration
    <master/apiclient> created API client, waiting for the control plane to become ready
    <master/apiclient> all control plane components are healthy after 61.346626 seconds
    <master/apiclient> waiting for at least one node to register and become ready
    <master/apiclient> first node is ready after 4.506807 seconds
    <master/discovery> created essential addon: kube-discovery
    <master/addons> created essential addon: kube-proxy
    <master/addons> created essential addon: kube-dns

    Kubernetes master initialised successfully!

    You can connect any number of nodes by running:

    kubeadm join --token <token> <master-ip>

Make a record of the `kubeadm join` command that `kubeadm init` outputs.
You will need this in a moment.
The key included here is secret. Keep it safe: anyone with this key can add authenticated nodes to your cluster.

The key is used for mutual authentication between the master and the joining nodes.

By default, your cluster will not schedule pods on the master for security reasons.
If you want to be able to schedule pods on the master, for example if you want a single-machine Kubernetes cluster for development, run:

    # kubectl taint nodes --all dedicated-
    node "test-01" tainted
    taint key="dedicated" and effect="" not found.
    taint key="dedicated" and effect="" not found.

This will remove the "dedicated" taint from any nodes that have it, including the master node, meaning that the scheduler will then be able to schedule pods everywhere.
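
To check that the master is now schedulable, you can run a throwaway workload and see where it lands (a sketch; the `test-nginx` name and `nginx` image are arbitrary examples):

    # kubectl run test-nginx --image=nginx
    deployment "test-nginx" created
    # kubectl get pods -o wide

The `NODE` column should show your master's name. Clean up afterwards with `kubectl delete deployment test-nginx`.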

### (3/4) Joining your nodes

The nodes are where your workloads (containers and pods, etc) run.
If you want to add any new machines as nodes to your cluster, for each machine: SSH to that machine, become root (e.g. `sudo su -`) and run the command that was output by `kubeadm init`.
For example:

    # kubeadm join --token <token> <master-ip>
    <util/tokens> validating provided token
    <node/discovery> created cluster info discovery client, requesting info from "http://138.68.156.129:9898/cluster-info/v1/?token-id=0f8588"
    <node/discovery> cluster info object received, verifying signature using given token
    <node/discovery> cluster info signature and contents are valid, will use API endpoints [https://138.68.156.129:443]
    <node/csr> created API client to obtain unique certificate for this node, generating keys and certificate signing request
    <node/csr> received signed certificate from the API server, generating kubelet configuration
    <util/kubeconfig> created "/etc/kubernetes/kubelet.conf"

    Node join complete:
    * Certificate signing request sent to master and response
      received.
    * Kubelet informed of new secure connection details.

    Run 'kubectl get nodes' on the master to see this machine join.

A few seconds later, you should notice that running `kubectl get nodes` on the master shows a cluster with as many machines as you created.
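
For example, with one master and one node, the output should look something like this (names and ages will differ):

    # kubectl get nodes
    NAME      STATUS    AGE
    master    Ready     12m
    node-01   Ready     1m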

**YOUR CLUSTER IS NOT READY YET!**

Before you can deploy applications to it, you need to install a pod network.

### (4/4) Installing a pod network

You must install a pod network add-on so that your pods can communicate with each other when they are on different hosts.
**It is necessary to do this before you try to deploy any applications to your cluster.**

Several projects provide Kubernetes pod networks.
You can see a complete list of available network add-ons on the [add-ons page](/docs/admin/addons/).

By way of example, you can install [Weave Net](https://github.com/weaveworks/weave-kube) by logging in to the master and running:

    # kubectl apply -f https://git.io/weave-kube
    daemonset "weave-net" created

If you prefer [Calico](https://github.com/projectcalico/calico-containers/tree/master/docs/cni/kubernetes/manifests/kubeadm) or [Canal](https://github.com/tigera/canal/tree/master/k8s-install/kubeadm), please refer to their respective installation guides.
You should only install one pod network per cluster.

Once a pod network has been installed, you can confirm that it is working by checking that the `kube-dns` pod is `Running` in the output of `kubectl get pods --all-namespaces`.
**This signifies that your cluster is ready.**
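
As a sketch, you can filter for it directly (the pod's exact name, ready count, restarts and age will differ on your cluster):

    # kubectl get pods --all-namespaces | grep kube-dns
    kube-system   kube-dns-654381707-w4mpg   3/3       Running   0          12m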

### (Optional) Installing a sample application

As an example, install a sample microservices application, a socks shop, to put your cluster through its paces.
To learn more about the sample microservices app, see the [GitHub README](https://github.com/microservices-demo/microservices-demo).

    # git clone https://github.com/microservices-demo/microservices-demo
    # kubectl apply -f microservices-demo/deploy/kubernetes/manifests

You can then find out the port that the [NodePort feature of services](/docs/user-guide/services/) allocated for the front-end service by running:

    # kubectl describe svc front-end
    Name:                   front-end
    Namespace:              default
    Labels:                 name=front-end
    Selector:               name=front-end
    Type:                   NodePort
    IP:                     100.66.88.176
    Port:                   <unset> 80/TCP
    NodePort:               <unset> 31869/TCP
    Endpoints:              <none>
    Session Affinity:       None

It takes several minutes to download and start all the containers; watch the output of `kubectl get pods` to see when they're all up and running.

Then go to the IP address of your cluster's master node in your browser, and specify the given port, for example `http://<master_ip>:<port>`.
In the example above the port was `31869`, but yours will differ.

If there is a firewall, make sure it exposes this port to the internet before you try to access it.
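
Before opening a browser, you can check the front-end from any machine that can reach the cluster (a sketch; substitute your master's IP and the NodePort reported above):

    # curl -sI http://<master_ip>:31869 | head -n 1
    HTTP/1.1 200 OK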

### Explore other add-ons

See the [list of add-ons](/docs/admin/addons/) to explore other add-ons, including tools for logging, monitoring, network policy, and visualization & control of your Kubernetes cluster.


## What's next

* Learn more about [Kubernetes concepts and kubectl in Kubernetes 101](/docs/user-guide/walkthrough/).
* Install Kubernetes with [a cloud provider configuration](/docs/getting-started-guides/) to add Load Balancer and Persistent Volume support.
> **Member:** Why is this here?
>
> **Contributor Author:** Because users might reasonably want to do this. Soon we should document using kubeadm for cloud provider integrations, and then we can take this out.



## Cleanup

* To uninstall the socks shop, run `kubectl delete -f microservices-demo/deploy/kubernetes/manifests` on the master.

* To undo what `kubeadm` did, simply delete the machines you created for this tutorial, or run the script below and then uninstall the packages.
<details>
<pre><code># stop the kubelet so it does not restart containers during cleanup
systemctl stop kubelet;
# remove all containers and unmount kubelet-managed volumes
docker rm -f $(docker ps -q); mount | grep "/var/lib/kubelet/*" | awk '{print $3}' | xargs umount 1>/dev/null 2>/dev/null;
# delete cluster state, certificates, etcd data and CNI configuration
rm -rf /var/lib/kubelet /etc/kubernetes /var/lib/etcd /etc/cni;
# tear down the container and CNI network bridges
ip link set cbr0 down; ip link del cbr0;
ip link set cni0 down; ip link del cni0;
systemctl start kubelet</code></pre>
</details> <!-- *syntax-highlighting-hack -->

## Feedback

* Slack Channel: [#sig-cluster-lifecycle](https://kubernetes.slack.com/messages/sig-cluster-lifecycle/)
* Mailing List: [kubernetes-sig-cluster-lifecycle](https://groups.google.com/forum/#!forum/kubernetes-sig-cluster-lifecycle)
* [GitHub Issues](https://github.com/kubernetes/kubernetes/issues): please tag `kubeadm` issues with `@kubernetes/sig-cluster-lifecycle`

## Limitations

Please note: `kubeadm` is a work in progress and these limitations will be addressed in due course.

1. The cluster created here doesn't have cloud-provider integrations, so for example it won't work with [Load Balancers](/docs/user-guide/load-balancer/) (LBs) or [Persistent Volumes](/docs/user-guide/persistent-volumes/walkthrough/) (PVs).
> **Member:** It doesn't by default, but refer to the kubeadm reference doc if you want to enable cloud provider integrations.
>
> **Contributor Author:** We can add a ref to that doc once it exists.

To easily obtain a cluster which works with LBs and PVs, try [the "hello world" GKE tutorial](/docs/hellonode) or [one of the other cloud-specific installation tutorials](/docs/getting-started-guides/).

Workaround: use the [NodePort feature of services](/docs/user-guide/services/#type-nodeport) for exposing applications to the internet.
1. The cluster created here has a single master, with a single `etcd` database running on it.
This means that if the master fails, your cluster loses its configuration data and will need to be recreated from scratch.
Adding HA support (multiple `etcd` servers, multiple API servers, etc) to `kubeadm` is still a work-in-progress.

Workaround: regularly [back up etcd](https://coreos.com/etcd/docs/latest/admin_guide.html); a minimal backup sketch appears after this list.
The `etcd` data directory configured by `kubeadm` is at `/var/lib/etcd` on the master.
1. `kubectl logs` is broken with `kubeadm` clusters due to [#22770](https://github.com/kubernetes/kubernetes/issues/22770).

Workaround: use `docker logs` on the nodes where the containers are running.
1. There is not yet an easy way to generate a `kubeconfig` file which can be used to authenticate to the cluster remotely with `kubectl` from, for example, your workstation.

Workaround: copy the admin `kubeconfig` from the master: run `scp root@<master>:/etc/kubernetes/admin.conf .` and then, for example, `kubectl --kubeconfig ./admin.conf get nodes` from your workstation.
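
For the etcd backup workaround above, a minimal sketch, assuming the etcd v2 `etcdctl` binary is available on the master and the default kubeadm data directory (the backup destination is an arbitrary example):

    # etcdctl backup --data-dir /var/lib/etcd --backup-dir /var/lib/etcd-backup-$(date +%F)

Copy the resulting backup directory off the master, so that losing the master does not also lose the backup.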
13 changes: 9 additions & 4 deletions docs/index.md
@@ -77,13 +77,18 @@ h2, h3, h4 {
<a href="/docs/whatisk8s/" class="button">Read the Overview</a>
</div>
<div class="col3rd">
<h3>Hello Node!</h3>
<p>In this quickstart, we’ll be creating a Kubernetes instance that stands up a simple “Hello World” app using Node.js. In just a few minutes you'll go from zero to deployed Kubernetes app on Google Container Engine.</p>
<a href="/docs/hellonode/" class="button">Get Started</a>
<h3>Hello World on Google Container Engine</h3>
<p>In this quickstart, we’ll be creating a Kubernetes instance that stands up a simple “Hello World” app using Node.js. In just a few minutes you'll go from zero to deployed Kubernetes app on Google Container Engine (GKE), a hosted service from Google.</p>
<a href="/docs/hellonode/" class="button">Get Started on GKE</a>
</div>
<div class="col3rd">
<h3>Installing Kubernetes on Linux with kubeadm</h3>
<p>This quickstart will show you how to install a secure Kubernetes cluster on any computers running Linux, using a tool called <code>kubeadm</code> which is part of Kubernetes. It'll work with local VMs, physical servers and/or cloud servers, either manually or as part of your own automation. It is currently in alpha but please try it out and give us feedback!</p>
<a href="/docs/getting-started-guides/kubeadm/" class="button">Install Kubernetes with kubeadm</a>
</div>
<div class="col3rd">
<h3>Guided Tutorial</h3>
<p>If you’ve completed the quickstart, a great next step is Kubernetes 101. You will follow a path through the various features of Kubernetes, with code examples along the way, learning all of the core concepts. There's also a <a href="/docs/user-guide/walkthrough/k8s201">Kubernetes 201</a>!</p>
<p>If you’ve completed one of the quickstarts, a great next step is Kubernetes 101. You will follow a path through the various features of Kubernetes, with code examples along the way, learning all of the core concepts. There's also a <a href="/docs/user-guide/walkthrough/k8s201">Kubernetes 201</a>!</p>
<a href="/docs/user-guide/walkthrough/" class="button">Kubernetes 101</a>
</div>
</div>