
Commit

Merge remote-tracking branch 'kubernetes/master'
jaredbhatti committed Aug 2, 2016
2 parents 327d1d3 + 18ae565 commit 1f7c163
Showing 55 changed files with 654 additions and 1,366 deletions.
2 changes: 1 addition & 1 deletion README.md
@@ -148,7 +148,7 @@ http://kubernetes-v1-3.github.io/
Editing of these branches will kick off a build using Travis CI that auto-updates these URLs; you can monitor the build progress at [https://travis-ci.org/kubernetes/kubernetes.github.io](https://travis-ci.org/kubernetes/kubernetes.github.io).

## Partners
Partners can get their logos added to the partner section of the [community page](http://k8s.io/community) by following the below steps and meeting the below logo specifications. Partners will also need to have a URL that is specific to integrating with Kubernetes ready; this URL will be the destination when the logo is clicked.
Kubernetes partners are companies that contribute to the Kubernetes core codebase and/or extend their platform to support Kubernetes. Partners can get their logos added to the partner section of the [community page](http://k8s.io/community) by following the steps below and meeting the logo specifications below. Partners will also need a URL specific to their Kubernetes integration; this URL will be the destination when the logo is clicked.

* The partner product logo should be a transparent png image centered in a 215x125 px frame. (look at the existing logos for reference)
* The logo must link to a URL that is specific to integrating with Kubernetes, hosted on the partner's site.
6 changes: 2 additions & 4 deletions _data/samples.yml
@@ -47,17 +47,15 @@ toc:
section:
- title: Meteor Applications
path: https://github.com/kubernetes/kubernetes/tree/release-1.3/examples/meteor/
- title: Elasticsearch
path: https://github.com/kubernetes/kubernetes/tree/release-1.3/examples/elasticsearch/
- title: OpenShift Origin
path: https://github.com/kubernetes/kubernetes/tree/release-1.3/examples/openshift-origin/
- title: Selenium
path: https://github.com/kubernetes/kubernetes/tree/release-1.3/examples/selenium/

- title: Monitoring and Logging
section:
- title: Elasticsearch/Kibana Logging Demonstration
path: https://github.com/kubernetes/kubernetes/tree/release-1.3/examples/logging-demo/
- title: Elasticsearch
path: https://github.com/kubernetes/kubernetes/tree/release-1.3/examples/elasticsearch/
- title: NewRelic
path: https://github.com/kubernetes/kubernetes/tree/release-1.3/examples/newrelic

25 changes: 11 additions & 14 deletions community.html
@@ -51,29 +51,26 @@ <h3>Events</h3>
frameborder="0" scrolling="no"></iframe>
</div>
</div>
<div class="content">
<h3>Companies</h3>
<p>We are working with a broad group of companies to make sure that Kubernetes works well for everyone, from
individual developers to the largest companies in the cloud space.</p>
<div class="company-logos">
<img src="/images/community_logos/red_hat_logo.png">
<img src="/images/community_logos/intel_logo.png">
<img src="/images/community_logos/core_os_logo.png">
<img src="/images/community_logos/puppet_logo.png">
<img src="/images/community_logos/sysdig_logo.png">
<img src="/images/community_logos/deis_logo.png">
</div>
</div>
<div class="content">
<h3>Partners</h3>
<p>We are working with a broad group of partners to help grow the kubernetes ecosystem supporting
<p>We are working with a broad group of partners who contribute to the Kubernetes core codebase, making it stronger and richer, and who help grow the Kubernetes ecosystem, supporting
a spectrum of complementary platforms, from open source solutions to market-leading technologies.</p>
<div class="partner-logos">
<a href="https://coreos.com/kubernetes"><img src="/images/community_logos/core_os_logo.png"></a>
<a href="https://deis.com"><img src="/images/community_logos/deis_logo.png"></a>
<a href="https://sysdig.com/blog/monitoring-kubernetes-with-sysdig-cloud/"><img src="/images/community_logos/sysdig_cloud_logo.png"></a>
<a href="https://puppet.com/blog/managing-kubernetes-configuration-puppet"><img src="/images/community_logos/puppet_logo.png"></a>
<a href="https://www.citrix.com/blogs/2016/07/15/citrix-kubernetes-a-home-run/"><img src="/images/community_logos/citrix_logo.png"></a>
<a href="http://wercker.com/workflows/partners/kubernetes/"><img src="/images/community_logos/wercker_logo.png"></a>
<a href="http://rancher.com/kubernetes/"><img src="/images/community_logos/rancher_logo.png"></a>
<a href="https://www.openshift.com/"><img src="/images/community_logos/red_hat_logo.png"></a>
<a href="https://tectonic.com/press/intel-coreos-collaborate-on-openstack-with-kubernetes.html"><img src="/images/community_logos/intel_logo.png"></a>
<a href="https://elasticbox.com/kubernetes/"><img src="/images/community_logos/elastickube_logo.png"></a>
<a href="https://platform9.com/blog/containers-as-a-service-kubernetes-docker"><img src="/images/community_logos/platform9_logo.png"></a>
<a href="http://www.appformix.com/solutions/appformix-for-kubernetes/"><img src="/images/community_logos/appformix_logo.png"></a>
<a href="http://kubernetes.io/docs/getting-started-guides/dcos/"><img src="/images/community_logos/mesosphere_logo.png"></a>
<a href="http://docs.datadoghq.com/integrations/kubernetes/"><img src="/images/community_logos/datadog_logo.png"></a>
<a href="https://apprenda.com/kubernetes-support/"><img src="/images/community_logos/apprenda_logo.png"></a>
</div>
</div>
</main>
2 changes: 1 addition & 1 deletion docs/admin/admission-controllers.md
@@ -145,5 +145,5 @@ For Kubernetes >= 1.2.0, we strongly recommend running the following set of admi
For Kubernetes >= 1.0.0, we strongly recommend running the following set of admission control plug-ins (order matters):

```shell
--admission-control=NamespaceLifecycle,LimitRanger,SecurityContextDeny,ServiceAccount,ResourceQuota
--admission-control=NamespaceLifecycle,LimitRanger,SecurityContextDeny,ServiceAccount,PersistentVolumeLabel,ResourceQuota
```
25 changes: 9 additions & 16 deletions docs/admin/cluster-management.md
@@ -115,43 +115,36 @@ gcloud container clusters update mytestcluster --enable-autoscaling=true --min-n
## Maintenance on a Node

If you need to reboot a node (such as for a kernel upgrade, libc upgrade, hardware repair, etc.), and the downtime is
brief, then when the Kubelet restarts, it will attempt to restart the pods scheduled to it. If the reboot takes longer,
brief, then when the Kubelet restarts, it will attempt to restart the pods scheduled to it. If the reboot takes longer
(the default time is 5 minutes, controlled by `--pod-eviction-timeout` on the controller-manager),
then the node controller will terminate the pods that are bound to the unavailable node. If there is a corresponding
replication controller, then a new copy of the pod will be started on a different node. So, in the case where all
replica set (or replication controller), then a new copy of the pod will be started on a different node. So, in the case where all
pods are replicated, upgrades can be done without special coordination, assuming that not all nodes will go down at the same time.
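
For illustration only, the timeout mentioned above is a controller-manager flag; the value shown is the default, and all other flags are omitted:

```shell
# Sketch of the relevant flag on the controller manager; 5m0s is the default value.
kube-controller-manager --pod-eviction-timeout=5m0s
```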

If you want more control over the upgrading process, you may use the following workflow:

Mark the node to be rebooted as unschedulable:
Use `kubectl drain` to gracefully terminate all pods on the node while marking the node as unschedulable:

```shell
kubectl replace nodes $NODENAME --patch='{"apiVersion": "v1", "spec": {"unschedulable": true}}'
kubectl drain $NODENAME
```

This keeps new pods from landing on the node while you are trying to get them off.

Get the pods off the machine, via any of the following strategies:
* Wait for finite-duration pods to complete.
* Delete pods with:
For pods with a replica set, the pod will be replaced by a new pod which will be scheduled to a new node. Additionally, if the pod is part of a service, then clients will automatically be redirected to the new pod.

```shell
kubectl delete pods $PODNAME
```

For pods with a replication controller, the pod will eventually be replaced by a new pod which will be scheduled to a new node. Additionally, if the pod is part of a service, then clients will automatically be redirected to the new pod.

For pods with no replication controller, you need to bring up a new copy of the pod, and assuming it is not part of a service, redirect clients to it.
For pods with no replica set, you need to bring up a new copy of the pod, and assuming it is not part of a service, redirect clients to it.
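
A minimal sketch of that manual step, assuming you still have (or first save) a manifest for the pod; the file name `my-pod.yaml` is illustrative:

```shell
# Save the pod definition before draining, if you don't already have a manifest
# (strip cluster-assigned fields such as nodeName before reusing it).
kubectl get pod $PODNAME -o yaml > my-pod.yaml

# After the drain, recreate the pod; the scheduler will place it on another node.
kubectl create -f my-pod.yaml

# Check where the replacement pods, including any managed by replica sets, ended up.
kubectl get pods -o wide
```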

Perform maintenance work on the node.

Make the node schedulable again:

```shell
kubectl replace nodes $NODENAME --patch='{"apiVersion": "v1", "spec": {"unschedulable": false}}'
kubectl uncordon $NODENAME
```
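
To confirm the node is accepting pods again, a quick check; once uncordoned it should report Ready without the SchedulingDisabled marker:

```shell
kubectl get nodes
```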

If you deleted the node's VM instance and created a new one, then a new schedulable node resource will
be created automatically when you create a new VM instance (if you're using a cloud provider that supports
be created automatically (if you're using a cloud provider that supports
node discovery; currently this is only Google Compute Engine, not including CoreOS on Google Compute Engine using kube-register). See [Node](/docs/admin/node) for more details.

## Advanced Topics
6 changes: 4 additions & 2 deletions docs/admin/daemons.md
@@ -71,9 +71,11 @@ a node for testing.

If you specify a `.spec.template.spec.nodeSelector`, then the DaemonSet controller will
create pods on nodes which match that [node
selector](https://github.com/kubernetes/kubernetes.github.io/tree/{{page.docsbranch}}/docs/user-guide/node-selection).
selector](https://github.com/kubernetes/kubernetes.github.io/tree/{{page.docsbranch}}/docs/user-guide/node-selection).
If you specify a `scheduler.alpha.kubernetes.io/affinity` annotation in `.spec.template.metadata.annotations`,
then DaemonSet controller will create pods on nodes which match that [node affinity](../../user-guide/node-selection/#alpha-feature-in-kubernetes-v12-node-affinity).

If you do not specify a `.spec.template.spec.nodeSelector`, then the DaemonSet controller will
If you specify neither a `.spec.template.spec.nodeSelector` nor node affinity, then the DaemonSet controller will
create pods on all nodes.
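
As a hedged illustration of the node selector case (the label `app=logging` and the manifest name are assumptions, not part of any shipped example):

```shell
# Label the node(s) the daemon should run on.
kubectl label node node-1 app=logging

# Create a DaemonSet whose .spec.template.spec.nodeSelector is {app: logging};
# the controller only creates daemon pods on nodes carrying that label.
kubectl create -f daemonset-with-nodeselector.yaml

# Daemon pods should appear only on the labelled node(s).
kubectl get pods -o wide
```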

## How Daemon Pods are Scheduled
2 changes: 1 addition & 1 deletion docs/admin/high-availability/index.md
@@ -111,7 +111,7 @@ and
etcdctl cluster-health
```

You can also validate that this is working with `etcdctl set foo bar` on one node, and `etcd get foo`
You can also validate that this is working with `etcdctl set foo bar` on one node, and `etcdctl get foo`
on a different node.
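
A minimal sketch of that check, run against two different members of the etcd cluster:

```shell
# On the first node:
etcdctl set foo bar

# On a different node; this should print "bar" if replication is working:
etcdctl get foo
```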

### Even more reliable storage
6 changes: 6 additions & 0 deletions docs/admin/network-plugins.md
@@ -43,3 +43,9 @@ The plugin requires a few things:
* Kubelet must be run with the `--network-plugin=kubenet` argument to enable the plugin
* Kubelet must also be run with the `--reconcile-cidr` argument to ensure the IP subnet assigned to the node by configuration or the controller-manager is propagated to the plugin
* The node must be assigned an IP subnet through either the `--pod-cidr` kubelet command-line option or the `--allocate-node-cidrs=true --cluster-cidr=<cidr>` controller-manager command-line options.

## Usage Summary

* `--network-plugin=exec` specifies that we use the `exec` plugin, with executables located in `--network-plugin-dir`.
* `--network-plugin=cni` specifies that we use the `cni` network plugin, with the actual CNI plugin binaries located in `/opt/cni/bin` and the CNI plugin configuration located in `--network-plugin-dir` (defaults to `/etc/cni/net.d`).
* `--network-plugin=kubenet` specifies that we use the `kubenet` network plugin, with the CNI `bridge` and `host-local` plugins placed in `/opt/cni/bin` or `--network-plugin-dir`.
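
For illustration only (paths and CIDR values are assumptions, and other required kubelet flags are omitted), the modes above map to kubelet invocations along these lines:

```shell
# CNI mode: plugin binaries under /opt/cni/bin, configuration under the plugin dir.
kubelet --network-plugin=cni --network-plugin-dir=/etc/cni/net.d

# kubenet mode: the node needs a pod CIDR; here it is set directly on the kubelet.
kubelet --network-plugin=kubenet --reconcile-cidr=true --pod-cidr=10.180.1.0/24
```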
13 changes: 8 additions & 5 deletions docs/admin/networking.md
@@ -156,9 +156,9 @@ Lars Kellogg-Stedman.

### Weave Net from Weaveworks

[Weave Net](https://www.weave.works/documentation/net-1-5-0-introducing-weave/) is a
[Weave Net](https://www.weave.works/documentation/net-1-6-0-introducing-weave/) is a
resilient and simple to use network for Kubernetes and its hosted applications.
Weave Net runs as a [CNI plug-in](https://www.weave.works/documentation/net-1-5-0-cni-plugin/)
Weave Net runs as a [CNI plug-in](https://www.weave.works/docs/net/latest/cni-plugin/)
or stand-alone. In either version, it doesn’t require any configuration or extra code
to run, and in both cases, the network provides one IP address per pod - as is standard for Kubernetes.

@@ -177,10 +177,13 @@ complicated way to build an overlay network. This is endorsed by several of the
"Big Shops" for networking.


### Calico
### Project Calico

[Calico](https://github.com/projectcalico/calico-containers) uses BGP to enable real container
IPs.
[Project Calico](https://github.com/projectcalico/calico-containers/blob/master/docs/cni/kubernetes/README.md) is an open source container networking provider and network policy engine.

Calico provides a highly scalable networking and network policy solution for connecting Kubernetes pods based on the same IP networking principles as the internet. Calico can be deployed without encapsulation or overlays to provide high-performance, high-scale data center networking. Calico also provides fine-grained, intent-based network security policy for Kubernetes pods via its distributed firewall.

Calico can also be run in policy enforcement mode in conjunction with other networking solutions such as Flannel, aka [canal](https://github.com/tigera/canal), or native GCE networking.

### Romana

4 changes: 4 additions & 0 deletions docs/getting-started-guides/aws.md
@@ -130,6 +130,10 @@ The "Guestbook" application is another popular example to get started with Kuber

For more complete applications, please look in the [examples directory](https://github.com/kubernetes/kubernetes/tree/{{page.githubbranch}}/examples/)

## Scaling the cluster

Adding and removing nodes through `kubectl` is not supported. You can still scale the number of nodes manually by adjusting the 'Desired' and 'Max' properties of the [Auto Scaling Group](http://docs.aws.amazon.com/autoscaling/latest/userguide/as-manual-scaling.html) that was created during the installation.
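
For illustration, the same adjustment can be made with the AWS CLI; the group name below is an assumption, so use the name of the Auto Scaling Group that was actually created:

```shell
# Look up the real group name first, e.g. with `aws autoscaling describe-auto-scaling-groups`.
aws autoscaling update-auto-scaling-group \
  --auto-scaling-group-name kubernetes-minion-group \
  --desired-capacity 5 \
  --max-size 5
```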

## Tearing down the cluster

Make sure the environment variables you used to provision your cluster are still exported, then call the following script inside the
34 changes: 24 additions & 10 deletions docs/getting-started-guides/centos/centos_manual_config.md
@@ -20,14 +20,16 @@ The Kubernetes package provides a few services: kube-apiserver, kube-scheduler,

Hosts:

Replace the host IPs below with the addresses from your environment.

```conf
centos-master = 192.168.121.9
centos-minion = 192.168.121.65
```

**Prepare the hosts:**

* Create a virt7-docker-common-release repo on all hosts - centos-{master,minion} with following information.
* Create /etc/yum.repos.d/virt7-docker-common-release.repo on all hosts - centos-{master,minion} - with the following information.

```conf
[virt7-docker-common-release]
@@ -36,10 +38,10 @@ baseurl=http://cbs.centos.org/repos/virt7-docker-common-release/x86_64/os/
gpgcheck=0
```

* Install Kubernetes on all hosts - centos-{master,minion}. This will also pull in etcd, docker, and cadvisor.
* Install Kubernetes and etcd on all hosts - centos-{master,minion}. This will also pull in docker and cadvisor.

```shell
yum -y install --enablerepo=virt7-docker-common-release kubernetes
yum -y install --enablerepo=virt7-docker-common-release kubernetes etcd
```

* Add master and node to /etc/hosts on all machines (not needed if hostnames already in DNS)
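
For example, using the addresses from the Hosts section above (adjust them to your environment):

```shell
# Run on every machine; skip if the hostnames already resolve via DNS.
cat <<EOF >> /etc/hosts
192.168.121.9   centos-master
192.168.121.65  centos-minion
EOF
```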
@@ -63,6 +65,9 @@ KUBE_LOG_LEVEL="--v=0"

# Should this cluster be allowed to run privileged docker containers
KUBE_ALLOW_PRIV="--allow-privileged=false"

# How the replication controller and scheduler find the kube-apiserver
KUBE_MASTER="--master=http://centos-master:8080"
```

* Disable the firewall on both the master and node, as docker does not play well with other firewall rule managers
@@ -74,6 +79,18 @@ systemctl stop iptables-services firewalld

**Configure the Kubernetes services on the master.**

* Edit /etc/etcd/etcd.conf to appear as such:

```shell
# [member]
ETCD_NAME=default
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_CLIENT_URLS="http://0.0.0.0:2379"

#[cluster]
ETCD_ADVERTISE_CLIENT_URLS="http://0.0.0.0:2379"
```

* Edit /etc/kubernetes/apiserver to appear as such:

```shell
@@ -83,9 +100,6 @@ KUBE_API_ADDRESS="--address=0.0.0.0"
# The port on the local server to listen on.
KUBE_API_PORT="--port=8080"

# How the replication controller and scheduler find the kube-apiserver
KUBE_MASTER="--master=http://centos-master:8080"

# Port kubelets listen on
KUBELET_PORT="--kubelet-port=10250"

@@ -99,10 +113,10 @@ KUBE_API_ARGS=""
* Start the appropriate services on master:

```shell
for SERVICES in etcd kube-apiserver kube-controller-manager kube-scheduler; do
for SERVICES in etcd kube-apiserver kube-controller-manager kube-scheduler; do
systemctl restart $SERVICES
systemctl enable $SERVICES
systemctl status $SERVICES
systemctl status $SERVICES
done
```

@@ -132,10 +146,10 @@ KUBELET_ARGS=""
* Start the appropriate services on node (centos-minion).

```shell
for SERVICES in kube-proxy kubelet docker; do
for SERVICES in kube-proxy kubelet docker; do
systemctl restart $SERVICES
systemctl enable $SERVICES
systemctl status $SERVICES
systemctl status $SERVICES
done
```

99 changes: 0 additions & 99 deletions docs/getting-started-guides/coreos/azure/addons/skydns-rc.yaml

This file was deleted.
