diff --git a/content/en/docs/tasks/administer-cluster/kubeadm/kubeadm-upgrade-1-13.md b/content/en/docs/tasks/administer-cluster/kubeadm/kubeadm-upgrade-1-13.md
deleted file mode 100644
index 5b83d416b8293..0000000000000
--- a/content/en/docs/tasks/administer-cluster/kubeadm/kubeadm-upgrade-1-13.md
+++ /dev/null
@@ -1,348 +0,0 @@
----
-reviewers:
-- sig-cluster-lifecycle
-title: Upgrading kubeadm clusters from v1.12 to v1.13
-content_template: templates/task
----
-
-{{% capture overview %}}
-
-This page explains how to upgrade a Kubernetes cluster created with `kubeadm` from version 1.12.x to version 1.13.x, and from version 1.13.x to 1.13.y, where `y > x`.
-
-{{% /capture %}}
-
-{{% capture prerequisites %}}
-
-- You need to have a `kubeadm` Kubernetes cluster running version 1.12.0 or later.
- [Swap must be disabled][swap].
- The cluster should use a static control plane and etcd pods.
-- Make sure you read the [release notes](https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG-1.13.md) carefully.
-- Make sure to back up any important components, such as app-level state stored in a database.
- `kubeadm upgrade` does not touch your workloads, only components internal to Kubernetes, but backups are always a best practice.
-
-[swap]: https://serverfault.com/questions/684771/best-way-to-disable-swap-in-linux
-
-### Additional information
-
-- All containers are restarted after upgrade, because the container spec hash value is changed.
-- You can upgrade only from one minor version to the next minor version.
- That is, you cannot skip versions when you upgrade.
- For example, you can upgrade only from 1.12 to 1.13, not from 1.11 to 1.13.
-
-{{< warning >}}
-The command `join --experimental-control-plane` is known to fail on single node clusters created with kubeadm v1.12 and then upgraded to v1.13.x.
-This will be fixed when the `join --control-plane` workflow graduates from alpha to beta.
-A possible workaround is described [here](https://github.com/kubernetes/kubeadm/issues/1269#issuecomment-441116249).
-{{< /warning >}}
-
-{{% /capture %}}
-
-{{% capture steps %}}
-
-## Determine which version to upgrade to
-
-1. Find the latest stable 1.13 version:
-
- {{< tabs name="k8s_install_versions" >}}
- {{% tab name="Ubuntu, Debian or HypriotOS" %}}
- apt update
- apt-cache policy kubeadm
- # find the latest 1.13 version in the list
- # it should look like 1.13.x-00, where x is the latest patch
- {{% /tab %}}
- {{% tab name="CentOS, RHEL or Fedora" %}}
- yum list --showduplicates kubeadm --disableexcludes=kubernetes
- # find the latest 1.13 version in the list
- # it should look like 1.13.x-0, where x is the latest patch
- {{% /tab %}}
- {{< /tabs >}}
-
-## Upgrade the control plane node
-
-1. On your control plane node, upgrade kubeadm:
-
- {{< tabs name="k8s_install_kubeadm" >}}
- {{% tab name="Ubuntu, Debian or HypriotOS" %}}
- # replace x in 1.13.x-00 with the latest patch version
- apt-mark unhold kubeadm && \
- apt-get update && apt-get install -y kubeadm=1.13.x-00 && \
- apt-mark hold kubeadm
- {{% /tab %}}
- {{% tab name="CentOS, RHEL or Fedora" %}}
- # replace x in 1.13.x-0 with the latest patch version
- yum install -y kubeadm-1.13.x-0 --disableexcludes=kubernetes
- {{% /tab %}}
- {{< /tabs >}}
-
-1. Verify that the download works and has the expected version:
-
- ```shell
- kubeadm version
- ```
-
-1. On the control plane node, run:
-
- ```shell
- kubeadm upgrade plan
- ```
-
- You should see output similar to this:
-
- ```shell
- [preflight] Running pre-flight checks.
- [upgrade] Making sure the cluster is healthy: - [upgrade/config] Making sure the configuration is correct: - [upgrade/config] Reading configuration from the cluster... - [upgrade/config] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml' - [upgrade] Fetching available versions to upgrade to - [upgrade/versions] Cluster version: v1.12.2 - [upgrade/versions] kubeadm version: v1.13.0 - - Components that must be upgraded manually after you have upgraded the control plane with 'kubeadm upgrade apply': - COMPONENT CURRENT AVAILABLE - Kubelet 2 x v1.12.2 v1.13.0 - - Upgrade to the latest version in the v1.12 series: - - COMPONENT CURRENT AVAILABLE - API Server v1.12.2 v1.13.0 - Controller Manager v1.12.2 v1.13.0 - Scheduler v1.12.2 v1.13.0 - Kube Proxy v1.12.2 v1.13.0 - CoreDNS 1.2.2 1.2.6 - Etcd 3.2.24 3.2.24 - - You can now apply the upgrade by executing the following command: - - kubeadm upgrade apply v1.13.0 - - _____________________________________________________________________ - ``` - - This command checks that your cluster can be upgraded, and fetches the versions you can upgrade to. - -1. Choose a version to upgrade to, and run the appropriate command. For example: - - ```shell - kubeadm upgrade apply v1.13.0 - ``` - - You should see output similar to this: - - - - ```shell - [preflight] Running pre-flight checks. - [upgrade] Making sure the cluster is healthy: - [upgrade/config] Making sure the configuration is correct: - [upgrade/config] Reading configuration from the cluster... - [upgrade/config] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml' - [upgrade/apply] Respecting the --cri-socket flag that is set with higher priority than the config file. - [upgrade/version] You have chosen to change the cluster version to "v1.13.0" - [upgrade/versions] Cluster version: v1.12.2 - [upgrade/versions] kubeadm version: v1.13.0 - [upgrade/confirm] Are you sure you want to proceed with the upgrade? [y/N]: y - [upgrade/prepull] Will prepull images for components [kube-apiserver kube-controller-manager kube-scheduler etcd] - [upgrade/prepull] Prepulling image for component etcd. - [upgrade/prepull] Prepulling image for component kube-controller-manager. - [upgrade/prepull] Prepulling image for component kube-scheduler. - [upgrade/prepull] Prepulling image for component kube-apiserver. - [apiclient] Found 0 Pods for label selector k8s-app=upgrade-prepull-kube-controller-manager - [apiclient] Found 0 Pods for label selector k8s-app=upgrade-prepull-etcd - [apiclient] Found 0 Pods for label selector k8s-app=upgrade-prepull-kube-scheduler - [apiclient] Found 1 Pods for label selector k8s-app=upgrade-prepull-kube-apiserver - [apiclient] Found 1 Pods for label selector k8s-app=upgrade-prepull-kube-controller-manager - [apiclient] Found 1 Pods for label selector k8s-app=upgrade-prepull-etcd - [apiclient] Found 1 Pods for label selector k8s-app=upgrade-prepull-kube-scheduler - [upgrade/prepull] Prepulled image for component etcd. - [upgrade/prepull] Prepulled image for component kube-apiserver. - [upgrade/prepull] Prepulled image for component kube-scheduler. - [upgrade/prepull] Prepulled image for component kube-controller-manager. - [upgrade/prepull] Successfully prepulled the images for all the control plane components - [upgrade/apply] Upgrading your Static Pod-hosted control plane to version "v1.13.0"... 
- Static pod: kube-apiserver-ip-10-0-0-7 hash: 4af3463d6ace12615f1795e40811c1a1 - Static pod: kube-controller-manager-ip-10-0-0-7 hash: a640b0098f5bddc701786e007c96e220 - Static pod: kube-scheduler-ip-10-0-0-7 hash: ee7b1077c61516320f4273309e9b4690 - map[localhost:2379:3.2.24] - [upgrade/staticpods] Writing new Static Pod manifests to "/etc/kubernetes/tmp/kubeadm-upgraded-manifests969681047" - [upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/kube-apiserver.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2018-11-20-18-30-42/kube-apiserver.yaml" - [upgrade/staticpods] Waiting for the kubelet to restart the component - [upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s) - Static pod: kube-apiserver-ip-10-0-0-7 hash: 4af3463d6ace12615f1795e40811c1a1 - Static pod: kube-apiserver-ip-10-0-0-7 hash: bf5b045d2be93e73654f3eb7027a4ef8 - [apiclient] Found 1 Pods for label selector component=kube-apiserver - [upgrade/staticpods] Component "kube-apiserver" upgraded successfully! - [upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/kube-controller-manager.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2018-11-20-18-30-42/kube-controller-manager.yaml" - [upgrade/staticpods] Waiting for the kubelet to restart the component - [upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s) - Static pod: kube-controller-manager-ip-10-0-0-7 hash: a640b0098f5bddc701786e007c96e220 - Static pod: kube-controller-manager-ip-10-0-0-7 hash: 1e0eea23b3d971460ac032c18ab7daac - [apiclient] Found 1 Pods for label selector component=kube-controller-manager - [upgrade/staticpods] Component "kube-controller-manager" upgraded successfully! - [upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/kube-scheduler.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2018-11-20-18-30-42/kube-scheduler.yaml" - [upgrade/staticpods] Waiting for the kubelet to restart the component - [upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s) - Static pod: kube-scheduler-ip-10-0-0-7 hash: ee7b1077c61516320f4273309e9b4690 - Static pod: kube-scheduler-ip-10-0-0-7 hash: 7f7d929b61a2cc5bcdf36609f75927ec - [apiclient] Found 1 Pods for label selector component=kube-scheduler - [upgrade/staticpods] Component "kube-scheduler" upgraded successfully! 
- [uploadconfig] storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
- [kubelet] Creating a ConfigMap "kubelet-config-1.13" in namespace kube-system with the configuration for the kubelets in the cluster
- [kubelet] Downloading configuration for the kubelet from the "kubelet-config-1.13" ConfigMap in the kube-system namespace
- [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
- [patchnode] Uploading the CRI Socket information "/var/run/dockershim.sock" to the Node API object "ip-10-0-0-7" as an annotation
- [bootstraptoken] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
- [bootstraptoken] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
- [bootstraptoken] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
- [addons] Applied essential addon: CoreDNS
- [addons] Applied essential addon: kube-proxy
-
- [upgrade/successful] SUCCESS! Your cluster was upgraded to "v1.13.0". Enjoy!
-
- [upgrade/kubelet] Now that your control plane is upgraded, please proceed with upgrading your kubelets if you haven't already done so.
- ```
-
-1. Manually upgrade your Software Defined Network (SDN).
-
- Your Container Network Interface (CNI) provider may have its own upgrade instructions to follow.
- Check the [addons](/docs/concepts/cluster-administration/addons/) page to
- find your CNI provider and see whether additional upgrade steps are required.
-
-1. Upgrade the kubelet on the control plane node:
-
- {{< tabs name="k8s_install_kubelet" >}}
- {{% tab name="Ubuntu, Debian or HypriotOS" %}}
- # replace x in 1.13.x-00 with the latest patch version
- apt-mark unhold kubelet && \
- apt-get update && apt-get install -y kubelet=1.13.x-00 && \
- apt-mark hold kubelet
- {{% /tab %}}
- {{% tab name="CentOS, RHEL or Fedora" %}}
- # replace x in 1.13.x-0 with the latest patch version
- yum install -y kubelet-1.13.x-0 --disableexcludes=kubernetes
- {{% /tab %}}
- {{< /tabs >}}
-
-## Upgrade kubectl on all nodes
-
-1. Upgrade kubectl on all nodes:
-
- {{< tabs name="k8s_install_kubectl" >}}
- {{% tab name="Ubuntu, Debian or HypriotOS" %}}
- # replace x in 1.13.x-00 with the latest patch version
- apt-mark unhold kubectl && \
- apt-get update && apt-get install -y kubectl=1.13.x-00 && \
- apt-mark hold kubectl
- {{% /tab %}}
- {{% tab name="CentOS, RHEL or Fedora" %}}
- # replace x in 1.13.x-0 with the latest patch version
- yum install -y kubectl-1.13.x-0 --disableexcludes=kubernetes
- {{% /tab %}}
- {{< /tabs >}}
-
-## Drain control plane and worker nodes
-
-1. Prepare each node for maintenance by marking it unschedulable and evicting the workloads. Run:
-
- ```shell
- kubectl drain $NODE --ignore-daemonsets
- ```
-
- If you omit `--ignore-daemonsets` on a node that runs DaemonSet-managed Pods, such as the control plane node, draining fails:
-
- ```shell
- kubectl drain ip-172-31-85-18
- node "ip-172-31-85-18" cordoned
- error: unable to drain node "ip-172-31-85-18", aborting command...
-
- There are pending nodes to be drained:
- ip-172-31-85-18
- error: DaemonSet-managed pods (use --ignore-daemonsets to ignore): calico-node-5798d, kube-proxy-thjp9
- ```
-
- ```
- kubectl drain ip-172-31-85-18 --ignore-daemonsets
- node "ip-172-31-85-18" already cordoned
- WARNING: Ignoring DaemonSet-managed pods: calico-node-5798d, kube-proxy-thjp9
- node "ip-172-31-85-18" drained
- ```
-
-## Upgrade the kubelet config on worker nodes
-
-1. On each node except the control plane node, upgrade the kubelet config:
-
- ```shell
- kubeadm upgrade node config --kubelet-version v1.13.x
- ```
-
- Replace `x` with the patch version you picked for this upgrade.
-
-## Upgrade kubeadm and the kubelet on worker nodes
-
-1. Upgrade the Kubernetes package version on each `$NODE` by running the Linux package manager for your distribution:
-
- {{< tabs name="k8s_kubelet_and_kubeadm" >}}
- {{% tab name="Ubuntu, Debian or HypriotOS" %}}
- # replace x in 1.13.x-00 with the latest patch version
- apt-mark unhold kubelet kubeadm
- apt-get update
- apt-get install -y kubelet=1.13.x-00 kubeadm=1.13.x-00
- apt-mark hold kubelet kubeadm
- {{% /tab %}}
- {{% tab name="CentOS, RHEL or Fedora" %}}
- # replace x in 1.13.x-0 with the latest patch version
- yum install -y kubelet-1.13.x-0 kubeadm-1.13.x-0 --disableexcludes=kubernetes
- {{% /tab %}}
- {{< /tabs >}}
-
-## Restart the kubelet for all nodes
-
-1. Restart the kubelet process for all nodes:
-
- ```shell
- systemctl restart kubelet
- ```
-
-1. Verify that the new version of the `kubelet` is running on the node:
-
- ```shell
- systemctl status kubelet
- ```
-
-1. Bring the node back online by marking it schedulable:
-
- ```shell
- kubectl uncordon $NODE
- ```
-
-1. After the kubelet is upgraded on all nodes, verify that all nodes are available again by running the following command from anywhere kubectl can access the cluster:
-
- ```shell
- kubectl get nodes
- ```
-
- The `STATUS` column should show `Ready` for all your nodes, and the version number should be updated.
-
-{{% /capture %}}
-
-## Recovering from a failure state
-
-If `kubeadm upgrade` fails and does not roll back, for example because of an unexpected shutdown during execution, you can run `kubeadm upgrade` again.
-This command is idempotent and eventually makes sure that the actual state is the desired state you declare.
-
-To recover from a bad state, you can also run `kubeadm upgrade apply --force` without changing the version that your cluster is running. A short sketch of this flow appears after the next section.
-
-## How it works
-
-`kubeadm upgrade apply` does the following:
-
-- Checks that your cluster is in an upgradeable state:
- - The API server is reachable
- - All nodes are in the `Ready` state
- - The control plane is healthy
-- Enforces the version skew policies.
-- Makes sure the control plane images are present on the machine or can be pulled to it.
-- Upgrades the control plane components, and rolls back if any of them fails to come up.
-- Applies the new `kube-dns` and `kube-proxy` manifests and makes sure that all necessary RBAC rules are created.
-- Creates new certificate and key files for the API server, and backs up the old files, if they are due to expire within 180 days.
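-
-Here is the promised sketch of the recovery flow; the target version `v1.13.x` is a placeholder for whichever version the interrupted upgrade was aiming at:
-
-```shell
-# Re-running the same upgrade is safe: kubeadm upgrade apply is idempotent
-# and converges the cluster to the declared state.
-kubeadm upgrade apply v1.13.x
-
-# If the cluster is stuck in a bad state, re-apply the version the cluster
-# is already running, forcing past the failed requirement checks.
-kubeadm upgrade apply v1.13.x --force
-```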
diff --git a/content/en/docs/tasks/administer-cluster/kubeadm/kubeadm-upgrade-1-14.md b/content/en/docs/tasks/administer-cluster/kubeadm/kubeadm-upgrade-1-14.md
deleted file mode 100644
index b60071c3a2891..0000000000000
--- a/content/en/docs/tasks/administer-cluster/kubeadm/kubeadm-upgrade-1-14.md
+++ /dev/null
@@ -1,379 +0,0 @@
----
-reviewers:
-- sig-cluster-lifecycle
-title: Upgrading kubeadm clusters from v1.13 to v1.14
-content_template: templates/task
----
-
-{{% capture overview %}}
-
-This page explains how to upgrade a Kubernetes cluster created with kubeadm from version 1.13.x to version 1.14.x,
-and from version 1.14.x to 1.14.y (where `y > x`).
-
-The upgrade workflow at high level is the following:
-
-1. Upgrade the primary control plane node.
-1. Upgrade additional control plane nodes.
-1. Upgrade worker nodes.
-
-{{< note >}}
-With the release of Kubernetes v1.14, the kubeadm instructions for upgrading both HA and single control plane clusters
-are merged into a single document.
-{{< /note >}}
-
-{{% /capture %}}
-
-{{% capture prerequisites %}}
-
-- You need to have a kubeadm Kubernetes cluster running version 1.13.0 or later.
-- [Swap must be disabled](https://serverfault.com/questions/684771/best-way-to-disable-swap-in-linux).
-- The cluster should use a static control plane and etcd pods.
-- Make sure you read the [release notes](https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG-1.14.md) carefully.
-- Make sure to back up any important components, such as app-level state stored in a database.
- `kubeadm upgrade` does not touch your workloads, only components internal to Kubernetes, but backups are always a best practice.
-
-### Additional information
-
-- All containers are restarted after upgrade, because the container spec hash value is changed.
-- You can only upgrade from one MINOR version to the next MINOR version,
- or between PATCH versions of the same MINOR. That is, you cannot skip MINOR versions when you upgrade.
- For example, you can upgrade from 1.y to 1.y+1, but not from 1.y to 1.y+2.
-
-{{% /capture %}}
-
-{{% capture steps %}}
-
-## Determine which version to upgrade to
-
-1. Find the latest stable 1.14 version:
-
- {{< tabs name="k8s_install_versions" >}}
- {{% tab name="Ubuntu, Debian or HypriotOS" %}}
- apt update
- apt-cache policy kubeadm
- # find the latest 1.14 version in the list
- # it should look like 1.14.x-00, where x is the latest patch
- {{% /tab %}}
- {{% tab name="CentOS, RHEL or Fedora" %}}
- yum list --showduplicates kubeadm --disableexcludes=kubernetes
- # find the latest 1.14 version in the list
- # it should look like 1.14.x-0, where x is the latest patch
- {{% /tab %}}
- {{< /tabs >}}
-
-## Upgrade the first control plane node
-
-1. On your first control plane node, upgrade kubeadm:
-
- {{< tabs name="k8s_install_kubeadm_first_cp" >}}
- {{% tab name="Ubuntu, Debian or HypriotOS" %}}
- # replace x in 1.14.x-00 with the latest patch version
- apt-mark unhold kubeadm && \
- apt-get update && apt-get install -y kubeadm=1.14.x-00 && \
- apt-mark hold kubeadm
- {{% /tab %}}
- {{% tab name="CentOS, RHEL or Fedora" %}}
- # replace x in 1.14.x-0 with the latest patch version
- yum install -y kubeadm-1.14.x-0 --disableexcludes=kubernetes
- {{% /tab %}}
- {{< /tabs >}}
-
-1. Verify that the download works and has the expected version:
-
- ```shell
- kubeadm version
- ```
-
-1. 
On the control plane node, run:
-
- ```shell
- sudo kubeadm upgrade plan
- ```
-
- You should see output similar to this:
-
- ```shell
- [preflight] Running pre-flight checks.
- [upgrade] Making sure the cluster is healthy:
- [upgrade/config] Making sure the configuration is correct:
- [upgrade/config] Reading configuration from the cluster...
- [upgrade/config] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
- [upgrade] Fetching available versions to upgrade to
- [upgrade/versions] Cluster version: v1.13.3
- [upgrade/versions] kubeadm version: v1.14.0
-
- Components that must be upgraded manually after you have upgraded the control plane with 'kubeadm upgrade apply':
- COMPONENT   CURRENT       AVAILABLE
- Kubelet     2 x v1.13.3   v1.14.0
-
- Upgrade to the latest version in the v1.13 series:
-
- COMPONENT            CURRENT   AVAILABLE
- API Server           v1.13.3   v1.14.0
- Controller Manager   v1.13.3   v1.14.0
- Scheduler            v1.13.3   v1.14.0
- Kube Proxy           v1.13.3   v1.14.0
- CoreDNS              1.2.6     1.3.1
- Etcd                 3.2.24    3.3.10
-
- You can now apply the upgrade by executing the following command:
-
- kubeadm upgrade apply v1.14.0
-
- _____________________________________________________________________
- ```
-
- This command checks that your cluster can be upgraded, and fetches the versions you can upgrade to.
-
-1. Choose a version to upgrade to, and run the appropriate command. For example:
-
- ```shell
- sudo kubeadm upgrade apply v1.14.x
- ```
-
- Replace `x` with the patch version you picked for this upgrade.
-
- You should see output similar to this:
-
- ```shell
- [preflight] Running pre-flight checks.
- [upgrade] Making sure the cluster is healthy:
- [upgrade/config] Making sure the configuration is correct:
- [upgrade/config] Reading configuration from the cluster...
- [upgrade/config] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
- [upgrade/version] You have chosen to change the cluster version to "v1.14.0"
- [upgrade/versions] Cluster version: v1.13.3
- [upgrade/versions] kubeadm version: v1.14.0
- [upgrade/confirm] Are you sure you want to proceed with the upgrade? [y/N]: y
- [upgrade/prepull] Will prepull images for components [kube-apiserver kube-controller-manager kube-scheduler etcd]
- [upgrade/prepull] Prepulling image for component etcd.
- [upgrade/prepull] Prepulling image for component kube-scheduler.
- [upgrade/prepull] Prepulling image for component kube-apiserver.
- [upgrade/prepull] Prepulling image for component kube-controller-manager.
- [apiclient] Found 0 Pods for label selector k8s-app=upgrade-prepull-etcd
- [apiclient] Found 0 Pods for label selector k8s-app=upgrade-prepull-kube-scheduler
- [apiclient] Found 0 Pods for label selector k8s-app=upgrade-prepull-kube-controller-manager
- [apiclient] Found 0 Pods for label selector k8s-app=upgrade-prepull-kube-apiserver
- [apiclient] Found 1 Pods for label selector k8s-app=upgrade-prepull-etcd
- [apiclient] Found 1 Pods for label selector k8s-app=upgrade-prepull-kube-controller-manager
- [apiclient] Found 1 Pods for label selector k8s-app=upgrade-prepull-kube-scheduler
- [apiclient] Found 1 Pods for label selector k8s-app=upgrade-prepull-kube-apiserver
- [upgrade/prepull] Prepulled image for component etcd.
- [upgrade/prepull] Prepulled image for component kube-apiserver.
- [upgrade/prepull] Prepulled image for component kube-scheduler.
- [upgrade/prepull] Prepulled image for component kube-controller-manager.
- [upgrade/prepull] Successfully prepulled the images for all the control plane components - [upgrade/apply] Upgrading your Static Pod-hosted control plane to version "v1.14.0"... - Static pod: kube-apiserver-myhost hash: 6436b0d8ee0136c9d9752971dda40400 - Static pod: kube-controller-manager-myhost hash: 8ee730c1a5607a87f35abb2183bf03f2 - Static pod: kube-scheduler-myhost hash: 4b52d75cab61380f07c0c5a69fb371d4 - [upgrade/etcd] Upgrading to TLS for etcd - Static pod: etcd-myhost hash: 877025e7dd7adae8a04ee20ca4ecb239 - [upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/etcd.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2019-03-14-20-52-44/etcd.yaml" - [upgrade/staticpods] Waiting for the kubelet to restart the component - [upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s) - Static pod: etcd-myhost hash: 877025e7dd7adae8a04ee20ca4ecb239 - Static pod: etcd-myhost hash: 877025e7dd7adae8a04ee20ca4ecb239 - Static pod: etcd-myhost hash: 64a28f011070816f4beb07a9c96d73b6 - [apiclient] Found 1 Pods for label selector component=etcd - [upgrade/staticpods] Component "etcd" upgraded successfully! - [upgrade/etcd] Waiting for etcd to become available - [upgrade/staticpods] Writing new Static Pod manifests to "/etc/kubernetes/tmp/kubeadm-upgraded-manifests043818770" - [upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/kube-apiserver.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2019-03-14-20-52-44/kube-apiserver.yaml" - [upgrade/staticpods] Waiting for the kubelet to restart the component - [upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s) - Static pod: kube-apiserver-myhost hash: 6436b0d8ee0136c9d9752971dda40400 - Static pod: kube-apiserver-myhost hash: 6436b0d8ee0136c9d9752971dda40400 - Static pod: kube-apiserver-myhost hash: 6436b0d8ee0136c9d9752971dda40400 - Static pod: kube-apiserver-myhost hash: b8a6533e241a8c6dab84d32bb708b8a1 - [apiclient] Found 1 Pods for label selector component=kube-apiserver - [upgrade/staticpods] Component "kube-apiserver" upgraded successfully! - [upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/kube-controller-manager.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2019-03-14-20-52-44/kube-controller-manager.yaml" - [upgrade/staticpods] Waiting for the kubelet to restart the component - [upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s) - Static pod: kube-controller-manager-myhost hash: 8ee730c1a5607a87f35abb2183bf03f2 - Static pod: kube-controller-manager-myhost hash: 6f77d441d2488efd9fc2d9a9987ad30b - [apiclient] Found 1 Pods for label selector component=kube-controller-manager - [upgrade/staticpods] Component "kube-controller-manager" upgraded successfully! 
- [upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/kube-scheduler.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2019-03-14-20-52-44/kube-scheduler.yaml"
- [upgrade/staticpods] Waiting for the kubelet to restart the component
- [upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s)
- Static pod: kube-scheduler-myhost hash: 4b52d75cab61380f07c0c5a69fb371d4
- Static pod: kube-scheduler-myhost hash: a24773c92bb69c3748fcce5e540b7574
- [apiclient] Found 1 Pods for label selector component=kube-scheduler
- [upgrade/staticpods] Component "kube-scheduler" upgraded successfully!
- [upload-config] storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
- [kubelet] Creating a ConfigMap "kubelet-config-1.14" in namespace kube-system with the configuration for the kubelets in the cluster
- [kubelet-start] Downloading configuration for the kubelet from the "kubelet-config-1.14" ConfigMap in the kube-system namespace
- [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
- [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
- [bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
- [bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
- [addons] Applied essential addon: CoreDNS
- [addons] Applied essential addon: kube-proxy
-
- [upgrade/successful] SUCCESS! Your cluster was upgraded to "v1.14.0". Enjoy!
-
- [upgrade/kubelet] Now that your control plane is upgraded, please proceed with upgrading your kubelets if you haven't already done so.
- ```
-
-1. Manually upgrade your CNI provider plugin.
-
- Your Container Network Interface (CNI) provider may have its own upgrade instructions to follow.
- Check the [addons](/docs/concepts/cluster-administration/addons/) page to
- find your CNI provider and see whether additional upgrade steps are required.
-
-1. Upgrade the kubelet and kubectl on the control plane node:
-
- {{< tabs name="k8s_install_kubelet" >}}
- {{% tab name="Ubuntu, Debian or HypriotOS" %}}
- # replace x in 1.14.x-00 with the latest patch version
- apt-mark unhold kubelet kubectl && \
- apt-get update && apt-get install -y kubelet=1.14.x-00 kubectl=1.14.x-00 && \
- apt-mark hold kubelet kubectl
- {{% /tab %}}
- {{% tab name="CentOS, RHEL or Fedora" %}}
- # replace x in 1.14.x-0 with the latest patch version
- yum install -y kubelet-1.14.x-0 kubectl-1.14.x-0 --disableexcludes=kubernetes
- {{% /tab %}}
- {{< /tabs >}}
-
-1. Restart the kubelet:
-
- ```shell
- sudo systemctl restart kubelet
- ```
-
-## Upgrade additional control plane nodes
-
-1. Do the same as on the first control plane node, but use:
-
-```
-sudo kubeadm upgrade node experimental-control-plane
-```
-
-instead of:
-
-```
-sudo kubeadm upgrade apply
-```
-
-Running `sudo kubeadm upgrade plan` is not needed on these nodes.
-
-## Upgrade worker nodes
-
-The upgrade procedure on worker nodes should be executed one node at a time, or a few nodes at a time,
-without compromising the minimum required capacity for running your workloads.
-
-### Upgrade kubeadm
-
-1. 
Upgrade kubeadm on all worker nodes:
-
- {{< tabs name="k8s_install_kubeadm_worker_nodes" >}}
- {{% tab name="Ubuntu, Debian or HypriotOS" %}}
- # replace x in 1.14.x-00 with the latest patch version
- apt-mark unhold kubeadm && \
- apt-get update && apt-get install -y kubeadm=1.14.x-00 && \
- apt-mark hold kubeadm
- {{% /tab %}}
- {{% tab name="CentOS, RHEL or Fedora" %}}
- # replace x in 1.14.x-0 with the latest patch version
- yum install -y kubeadm-1.14.x-0 --disableexcludes=kubernetes
- {{% /tab %}}
- {{< /tabs >}}
-
-### Cordon the node
-
-1. Prepare the node for maintenance by marking it unschedulable and evicting the workloads. Run:
-
- ```shell
- kubectl drain $NODE --ignore-daemonsets
- ```
-
- You should see output similar to this:
-
- ```shell
- node/ip-172-31-85-18 cordoned
- WARNING: ignoring DaemonSet-managed Pods: kube-system/kube-proxy-dj7d7, kube-system/weave-net-z65qx
- node/ip-172-31-85-18 drained
- ```
-
-### Upgrade the kubelet config
-
-1. Upgrade the kubelet config:
-
- ```shell
- sudo kubeadm upgrade node config --kubelet-version v1.14.x
- ```
-
- Replace `x` with the patch version you picked for this upgrade.
-
-### Upgrade kubelet and kubectl
-
-1. Upgrade the Kubernetes package version by running the Linux package manager for your distribution:
-
- {{< tabs name="k8s_kubelet_and_kubectl" >}}
- {{% tab name="Ubuntu, Debian or HypriotOS" %}}
- # replace x in 1.14.x-00 with the latest patch version
- apt-mark unhold kubelet kubectl && \
- apt-get update && apt-get install -y kubelet=1.14.x-00 kubectl=1.14.x-00 && \
- apt-mark hold kubelet kubectl
- {{% /tab %}}
- {{% tab name="CentOS, RHEL or Fedora" %}}
- # replace x in 1.14.x-0 with the latest patch version
- yum install -y kubelet-1.14.x-0 kubectl-1.14.x-0 --disableexcludes=kubernetes
- {{% /tab %}}
- {{< /tabs >}}
-
-1. Restart the kubelet:
-
- ```shell
- sudo systemctl restart kubelet
- ```
-
-### Uncordon the node
-
-1. Bring the node back online by marking it schedulable:
-
- ```shell
- kubectl uncordon $NODE
- ```
-
-## Verify the status of the cluster
-
-After the kubelet is upgraded on all nodes, verify that all nodes are available again by running the following command from anywhere kubectl can access the cluster:
-
-```shell
-kubectl get nodes
-```
-
-The `STATUS` column should show `Ready` for all your nodes, and the version number should be updated.
-
-{{% /capture %}}
-
-## Recovering from a failure state
-
-If `kubeadm upgrade` fails and does not roll back, for example because of an unexpected shutdown during execution, you can run `kubeadm upgrade` again.
-This command is idempotent and eventually makes sure that the actual state is the desired state you declare.
-
-To recover from a bad state, you can also run `kubeadm upgrade apply --force` without changing the version that your cluster is running.
-
-## How it works
-
-`kubeadm upgrade apply` does the following:
-
-- Checks that your cluster is in an upgradeable state:
- - The API server is reachable
- - All nodes are in the `Ready` state
- - The control plane is healthy
-- Enforces the version skew policies.
-- Makes sure the control plane images are present on the machine or can be pulled to it.
-- Upgrades the control plane components, and rolls back if any of them fails to come up.
-- Applies the new `kube-dns` and `kube-proxy` manifests and makes sure that all necessary RBAC rules are created.
-- Creates new certificate and key files for the API server, and backs up the old files, if they are due to expire within 180 days.
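-
-On that last point, you can check how close the current API server certificate is to the 180-day window. This is an illustrative sketch rather than a step of the official procedure, and it assumes the default kubeadm certificate location:
-
-```shell
-# Print the expiry date of the kubeadm-managed API server certificate.
-openssl x509 -in /etc/kubernetes/pki/apiserver.crt -noout -enddate
-```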
-
-`kubeadm upgrade node experimental-control-plane` does the following on additional control plane nodes:
-- Fetches the kubeadm `ClusterConfiguration` from the cluster.
-- Optionally backs up the kube-apiserver certificate.
-- Upgrades the static Pod manifests for the control plane components.
diff --git a/content/en/docs/tasks/administer-cluster/kubeadm/kubeadm-upgrade-ha-1-13.md b/content/en/docs/tasks/administer-cluster/kubeadm/kubeadm-upgrade-ha-1-13.md
deleted file mode 100644
index beb3292b1f195..0000000000000
--- a/content/en/docs/tasks/administer-cluster/kubeadm/kubeadm-upgrade-ha-1-13.md
+++ /dev/null
@@ -1,168 +0,0 @@
----
-reviewers:
-- luxas
-- timothysc
-- jbeda
-title: Upgrading kubeadm HA clusters from v1.12 to v1.13
-content_template: templates/task
----
-
-{{% capture overview %}}
-
-This page explains how to upgrade a highly available (HA) Kubernetes cluster created with `kubeadm` from version 1.12.x to version 1.13.y. In addition to upgrading, you must also follow the instructions in [Creating HA clusters with kubeadm](/docs/setup/production-environment/tools/kubeadm/high-availability/).
-
-{{% /capture %}}
-
-{{% capture prerequisites %}}
-
-Before proceeding:
-
-- You need to have a `kubeadm` HA cluster running version 1.12 or higher.
-- Make sure you read the [release notes](https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG-1.13.md) carefully.
-- Make sure to back up any important components, such as app-level state stored in a database. `kubeadm upgrade` does not touch your workloads, only components internal to Kubernetes, but backups are always a best practice.
-- Check the prerequisites for [Upgrading/downgrading kubeadm clusters between v1.12 to v1.13](/docs/tasks/administer-cluster/kubeadm/kubeadm-upgrade-1-13/).
-
-{{< note >}}
-All commands on any control plane or etcd node should be run as root.
-{{< /note >}}
-
-{{% /capture %}}
-
-{{% capture steps %}}
-
-## Prepare for both methods
-
-Upgrade `kubeadm` to the version that matches the version of Kubernetes that you are upgrading to:
-
-```shell
-apt-mark unhold kubeadm && \
-apt-get update && apt-get upgrade -y kubeadm && \
-apt-mark hold kubeadm
-```
-
-Check prerequisites and determine the upgrade versions:
-
-```shell
-kubeadm upgrade plan
-```
-
-You should see something like the following:
-
-```
-Upgrade to the latest version in the v1.13 series:
-
-COMPONENT            CURRENT   AVAILABLE
-API Server           v1.12.2   v1.13.0
-Controller Manager   v1.12.2   v1.13.0
-Scheduler            v1.12.2   v1.13.0
-Kube Proxy           v1.12.2   v1.13.0
-CoreDNS              1.2.2     1.2.6
-```
-
-## Stacked control plane nodes
-
-### Upgrade the first control plane node
-
-Modify `configmap/kubeadm-config` for this control plane node:
-
-```shell
-kubectl edit configmap -n kube-system kubeadm-config
-```
-
-Make the following modifications to the ClusterConfiguration key:
-
-- `etcd`
-
- Remove the etcd section completely
-
-Make the following modifications to the ClusterStatus key:
-
-- `apiEndpoints`
-
- Add an entry for each of the additional control plane hosts
-
-Start the upgrade:
-
-```shell
-kubeadm upgrade apply v1.13.0
-```
-
-You should see something like the following:
-
- [upgrade/successful] SUCCESS! Your cluster was upgraded to "v1.13.0". Enjoy!
-
-The `kubeadm-config` ConfigMap is now updated from version `v1alpha3` to `v1beta1`.
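-
-To confirm the migration, you can inspect the ConfigMap you edited above; a small sketch:
-
-```shell
-# After the upgrade, the ClusterConfiguration embedded in the kubeadm-config
-# ConfigMap should report apiVersion kubeadm.k8s.io/v1beta1.
-kubectl -n kube-system get cm kubeadm-config -o yaml | grep apiVersion
-```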
-
-### Upgrading additional control plane nodes
-
-Start the upgrade:
-
-```shell
-kubeadm upgrade node experimental-control-plane
-```
-
-## External etcd
-
-### Upgrade the first control plane
-
-Run the upgrade:
-
-```
-kubeadm upgrade apply v1.13.0
-```
-
-### Upgrade the other control plane nodes
-
-For other control plane nodes in the cluster, run the following command:
-
-```
-kubeadm upgrade node experimental-control-plane
-```
-
-## Next steps
-
-### Manually upgrade your CNI provider
-
-Your Container Network Interface (CNI) provider might have its own upgrade instructions to follow. Check the [addons](/docs/concepts/cluster-administration/addons/) page to find your CNI provider and see whether you need to take additional upgrade steps.
-
-### Update kubelet and kubectl packages
-
-Upgrade the kubelet and kubectl by running the following on each node:
-
-```shell
-# use your distro's package manager, e.g. 'apt-get' on Debian-based systems
-# for the versions stick to kubeadm's output (see above)
-apt-mark unhold kubelet kubectl && \
-apt-get update && \
-apt-get install kubelet=<version> kubectl=<version> && \
-apt-mark hold kubelet kubectl && \
-systemctl restart kubelet
-```
-
-In this example a _deb_-based system is assumed and `apt-get` is used for installing the upgraded software. On rpm-based systems the command is `yum install <package>=<version>` for all packages.
-
-Verify that the new version of the kubelet is running:
-
-```shell
-systemctl status kubelet
-```
-
-Verify that the upgraded node is available again by running the following command from wherever you run `kubectl`:
-
-```shell
-kubectl get nodes
-```
-
-If the `STATUS` column shows `Ready` for the upgraded host, you can continue. You might need to repeat the command until the node shows `Ready`.
-
-## If something goes wrong
-
-If the upgrade fails, see whether one of the following scenarios applies:
-
-- If `kubeadm upgrade apply` failed to upgrade the cluster, it will try to perform a rollback. If this is the case on the first control plane node, the cluster is probably still intact.
-
- You can run `kubeadm upgrade apply` again, because it is idempotent and should eventually make sure the actual state is the desired state you are declaring. To recover from a bad state, you can also run `kubeadm upgrade apply` with `--force`, keeping the version the cluster is already running (an `x.x.x --> x.x.x` "upgrade").
-
-- If `kubeadm upgrade apply` on one of the secondary control plane nodes failed, the cluster is upgraded and working, but the secondary control plane nodes are in an undefined state. You need to investigate further and join the secondaries manually.
-
-{{% /capture %}}
diff --git a/content/en/docs/tasks/administer-cluster/kubeadm/kubeadm-upgrade-1-15.md b/content/en/docs/tasks/administer-cluster/kubeadm/kubeadm-upgrade.md
similarity index 85%
rename from content/en/docs/tasks/administer-cluster/kubeadm/kubeadm-upgrade-1-15.md
rename to content/en/docs/tasks/administer-cluster/kubeadm/kubeadm-upgrade.md
index 20896a678f8d1..f674dfc8a2dcb 100644
--- a/content/en/docs/tasks/administer-cluster/kubeadm/kubeadm-upgrade-1-15.md
+++ b/content/en/docs/tasks/administer-cluster/kubeadm/kubeadm-upgrade.md
@@ -1,14 +1,17 @@
---
reviewers:
- sig-cluster-lifecycle
-title: Upgrading kubeadm clusters from v1.14 to v1.15
+title: Upgrading kubeadm clusters
content_template: templates/task
---

{{% capture overview %}}

This page explains how to upgrade a Kubernetes cluster created with kubeadm from version
-1.14.x to version 1.15.x, and from version 1.15.x to 1.15.y (where `y > x`).
+1.15.x to version 1.16.x, and from version 1.16.x to 1.16.y (where `y > x`).
+
+To see information about upgrading clusters created using older versions of kubeadm,
+please pick a Kubernetes version from the drop-down menu of this web page.

The upgrade workflow at high level is the following:

@@ -20,10 +23,10 @@ The upgrade workflow at high level is the following:

{{% capture prerequisites %}}

-- You need to have a kubeadm Kubernetes cluster running version 1.14.0 or later.
+- You need to have a kubeadm Kubernetes cluster running version 1.15.0 or later.
- [Swap must be disabled](https://serverfault.com/questions/684771/best-way-to-disable-swap-in-linux).
- The cluster should use a static control plane and etcd pods or external etcd.
-- Make sure you read the [release notes](https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG-1.15.md) carefully.
+- Make sure you read the [release notes](https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG-1.16.md) carefully.
- Make sure to back up any important components, such as app-level state stored in a database.
 `kubeadm upgrade` does not touch your workloads, only components internal to Kubernetes, but backups are always a best practice.
@@ -40,19 +43,19 @@

## Determine which version to upgrade to

-1. Find the latest stable 1.15 version:
+1. Find the latest stable 1.16 version:

 {{< tabs name="k8s_install_versions" >}}
 {{% tab name="Ubuntu, Debian or HypriotOS" %}}
 apt update
 apt-cache policy kubeadm
- # find the latest 1.15 version in the list
- # it should look like 1.15.x-00, where x is the latest patch
+ # find the latest 1.16 version in the list
+ # it should look like 1.16.x-00, where x is the latest patch
 {{% /tab %}}
 {{% tab name="CentOS, RHEL or Fedora" %}}
 yum list --showduplicates kubeadm --disableexcludes=kubernetes
- # find the latest 1.15 version in the list
- # it should look like 1.15.x-0, where x is the latest patch
+ # find the latest 1.16 version in the list
+ # it should look like 1.16.x-0, where x is the latest patch
 {{% /tab %}}
 {{< /tabs >}}
@@ -64,14 +67,14 @@

 {{< tabs name="k8s_install_kubeadm_first_cp" >}}
 {{% tab name="Ubuntu, Debian or HypriotOS" %}}
- # replace x in 1.15.x-00 with the latest patch version
+ # replace x in 1.16.x-00 with the latest patch version
 apt-mark unhold kubeadm && \
- apt-get update && apt-get install -y kubeadm=1.15.x-00 && \
+ apt-get update && apt-get install -y kubeadm=1.16.x-00 && \
 apt-mark hold kubeadm
 {{% /tab %}}
 {{% tab name="CentOS, RHEL or Fedora" %}}
- # replace x in 1.15.x-0 with the latest patch version
- yum install -y kubeadm-1.15.x-0 --disableexcludes=kubernetes
+ # replace x in 1.16.x-0 with the latest patch version
+ yum install -y kubeadm-1.16.x-0 --disableexcludes=kubernetes
 {{% /tab %}}
 {{< /tabs >}}
@@ -96,26 +99,26 @@ [preflight] Running pre-flight checks.
[upgrade] Making sure the cluster is healthy:
 [upgrade] Fetching available versions to upgrade to
- [upgrade/versions] Cluster version: v1.14.2
- [upgrade/versions] kubeadm version: v1.15.0
+ [upgrade/versions] Cluster version: v1.15.2
+ [upgrade/versions] kubeadm version: v1.16.0

 Components that must be upgraded manually after you have upgraded the control plane with 'kubeadm upgrade apply':
 COMPONENT   CURRENT       AVAILABLE
- Kubelet     1 x v1.14.2   v1.15.0
+ Kubelet     1 x v1.15.2   v1.16.0

- Upgrade to the latest version in the v1.15 series:
+ Upgrade to the latest version in the v1.16 series:

 COMPONENT            CURRENT   AVAILABLE
- API Server           v1.14.2   v1.15.0
- Controller Manager   v1.14.2   v1.15.0
- Scheduler            v1.14.2   v1.15.0
- Kube Proxy           v1.14.2   v1.15.0
- CoreDNS              1.3.1     1.3.1
- Etcd                 3.3.10    3.3.10
+ API Server           v1.15.2   v1.16.0
+ Controller Manager   v1.15.2   v1.16.0
+ Scheduler            v1.15.2   v1.16.0
+ Kube Proxy           v1.15.2   v1.16.0
+ CoreDNS              1.3.1     1.6.2
+ Etcd                 3.3.10    3.3.15

 You can now apply the upgrade by executing the following command:

- kubeadm upgrade apply v1.15.0
+ kubeadm upgrade apply v1.16.0

 _____________________________________________________________________
 ```
@@ -123,7 +126,7 @@
 This command checks that your cluster can be upgraded, and fetches the versions you can upgrade to.

 {{< note >}}
- With the release of Kubernetes v1.15, `kubeadm upgrade` also automatically renews
+ With the release of Kubernetes v1.16, `kubeadm upgrade` also automatically renews
 the certificates that it manages on this node. To opt out of certificate renewal, the flag `--certificate-renewal=false` can be used. For more information see the [certificate management guide](/docs/tasks/administer-cluster/kubeadm/kubeadm-certs).
 {{< /note >}}
@@ -131,7 +134,7 @@

1. Choose a version to upgrade to, and run the appropriate command. For example:

 ```shell
- sudo kubeadm upgrade apply v1.15.x
+ sudo kubeadm upgrade apply v1.16.x
 ```

 - Replace `x` with the patch version you picked for this upgrade.
@@ -144,9 +147,9 @@
 [upgrade/config] Making sure the configuration is correct:
 [upgrade/config] Reading configuration from the cluster...
 [upgrade/config] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
- [upgrade/version] You have chosen to change the cluster version to "v1.15.0"
- [upgrade/versions] Cluster version: v1.14.2
- [upgrade/versions] kubeadm version: v1.15.0
+ [upgrade/version] You have chosen to change the cluster version to "v1.16.0"
+ [upgrade/versions] Cluster version: v1.15.2
+ [upgrade/versions] kubeadm version: v1.16.0
 [upgrade/confirm] Are you sure you want to proceed with the upgrade? [y/N]: y
 [upgrade/prepull] Will prepull images for components [kube-apiserver kube-controller-manager kube-scheduler etcd]
 [upgrade/prepull] Prepulling image for component etcd.
@@ -164,7 +167,7 @@
 [upgrade/prepull] Prepulled image for component kube-apiserver.
 [upgrade/prepull] Prepulled image for component kube-scheduler.
 [upgrade/prepull] Successfully prepulled the images for all the control plane components
- [upgrade/apply] Upgrading your Static Pod-hosted control plane to version "v1.15.0"...
+ [upgrade/apply] Upgrading your Static Pod-hosted control plane to version "v1.16.0"...
Static pod: kube-apiserver-luboitvbox hash: 8d931c2296a38951e95684cbcbe3b923 Static pod: kube-controller-manager-luboitvbox hash: 2480bf6982ad2103c05f6764e20f2787 Static pod: kube-scheduler-luboitvbox hash: 9b290132363a92652555896288ca3f88 @@ -202,8 +205,8 @@ The upgrade workflow at high level is the following: [upgrade/staticpods] Component "kube-scheduler" upgraded successfully! [upgrade/staticpods] Renewing certificate embedded in "admin.conf" [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace - [kubelet] Creating a ConfigMap "kubelet-config-1.15" in namespace kube-system with the configuration for the kubelets in the cluster - [kubelet-start] Downloading configuration for the kubelet from the "kubelet-config-1.15" ConfigMap in the kube-system namespace + [kubelet] Creating a ConfigMap "kubelet-config-1.16" in namespace kube-system with the configuration for the kubelets in the cluster + [kubelet-start] Downloading configuration for the kubelet from the "kubelet-config-1.16" ConfigMap in the kube-system namespace [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml" [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials [bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token @@ -211,7 +214,7 @@ The upgrade workflow at high level is the following: [addons] Applied essential addon: CoreDNS [addons] Applied essential addon: kube-proxy - [upgrade/successful] SUCCESS! Your cluster was upgraded to "v1.15.0". Enjoy! + [upgrade/successful] SUCCESS! Your cluster was upgraded to "v1.16.0". Enjoy! [upgrade/kubelet] Now that your control plane is upgraded, please proceed with upgrading your kubelets if you haven't already done so. ``` @@ -246,14 +249,14 @@ Also `sudo kubeadm upgrade plan` is not needed. {{< tabs name="k8s_install_kubelet" >}} {{% tab name="Ubuntu, Debian or HypriotOS" %}} - # replace x in 1.15.x-00 with the latest patch version + # replace x in 1.16.x-00 with the latest patch version apt-mark unhold kubelet kubectl && \ - apt-get update && apt-get install -y kubelet=1.15.x-00 kubectl=1.15.x-00 && \ + apt-get update && apt-get install -y kubelet=1.16.x-00 kubectl=1.16.x-00 && \ apt-mark hold kubelet kubectl {{% /tab %}} {{% tab name="CentOS, RHEL or Fedora" %}} - # replace x in 1.15.x-0 with the latest patch version - yum install -y kubelet-1.15.x-0 kubectl-1.15.x-0 --disableexcludes=kubernetes + # replace x in 1.16.x-0 with the latest patch version + yum install -y kubelet-1.16.x-0 kubectl-1.16.x-0 --disableexcludes=kubernetes {{% /tab %}} {{< /tabs >}} @@ -274,14 +277,14 @@ without compromising the minimum required capacity for running your workloads. 
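
One way to structure that node-at-a-time flow is a simple per-node loop. The following is a hedged sketch rather than an official step: the worker names are hypothetical, and the loop body stands in for the per-node steps described in the sections below:

```shell
# Rolling worker upgrade skeleton: drain one node, run the upgrade steps
# described below on it, then make it schedulable again.
for NODE in worker-1 worker-2 worker-3; do
  kubectl drain "$NODE" --ignore-daemonsets
  # ...upgrade kubeadm, the kubelet config, and the kubelet and kubectl
  # packages on $NODE, then restart the kubelet (see the steps below)...
  kubectl uncordon "$NODE"
done
```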
{{< tabs name="k8s_install_kubeadm_worker_nodes" >}} {{% tab name="Ubuntu, Debian or HypriotOS" %}} - # replace x in 1.15.x-00 with the latest patch version + # replace x in 1.16.x-00 with the latest patch version apt-mark unhold kubeadm && \ - apt-get update && apt-get install -y kubeadm=1.15.x-00 && \ + apt-get update && apt-get install -y kubeadm=1.16.x-00 && \ apt-mark hold kubeadm {{% /tab %}} {{% tab name="CentOS, RHEL or Fedora" %}} - # replace x in 1.15.x-0 with the latest patch version - yum install -y kubeadm-1.15.x-0 --disableexcludes=kubernetes + # replace x in 1.16.x-0 with the latest patch version + yum install -y kubeadm-1.16.x-0 --disableexcludes=kubernetes {{% /tab %}} {{< /tabs >}} @@ -315,14 +318,14 @@ without compromising the minimum required capacity for running your workloads. {{< tabs name="k8s_kubelet_and_kubectl" >}} {{% tab name="Ubuntu, Debian or HypriotOS" %}} - # replace x in 1.15.x-00 with the latest patch version + # replace x in 1.16.x-00 with the latest patch version apt-mark unhold kubelet kubectl && \ - apt-get update && apt-get install -y kubelet=1.15.x-00 kubectl=1.15.x-00 && \ + apt-get update && apt-get install -y kubelet=1.16.x-00 kubectl=1.16.x-00 && \ apt-mark hold kubelet kubectl {{% /tab %}} {{% tab name="CentOS, RHEL or Fedora" %}} - # replace x in 1.15.x-0 with the latest patch version - yum install -y kubelet-1.15.x-0 kubectl-1.15.x-0 --disableexcludes=kubernetes + # replace x in 1.16.x-0 with the latest patch version + yum install -y kubelet-1.16.x-0 kubectl-1.16.x-0 --disableexcludes=kubernetes {{% /tab %}} {{< /tabs >}}