
Install fails due to mismatch in docker client/server versions #898

Closed
ja573 opened this issue May 19, 2020 · 5 comments · Fixed by #899
Labels: kind/bug, priority/urgent
Milestone: v1.0

ja573 commented May 19, 2020

What happened:
I've been testing a combination of kubeone and terraform for the last few weeks, destroying and recreating the cluster almost daily. Today I ran kubeone install config.yaml --tfjson . (without making any changes to either the kubeone or terraform configuration) and the installation failed with the following:

Configuring kubernetes...
INFO[15:48:04 BST] Installing prerequisites…                    
INFO[15:48:05 BST] Determine operating system…                   node=172.31.189.142
INFO[15:48:07 BST] Determine operating system…                   node=172.31.191.202
INFO[15:48:07 BST] Creating environment file…                    node=172.31.189.142
INFO[15:48:07 BST] Installing kubeadm…                           node=172.31.189.142 os=ubuntu
INFO[15:48:08 BST] Determine operating system…                   node=172.31.190.155
INFO[15:48:08 BST] Creating environment file…                    node=172.31.191.202
INFO[15:48:08 BST] Installing kubeadm…                           node=172.31.191.202 os=ubuntu
INFO[15:48:09 BST] Creating environment file…                    node=172.31.190.155
INFO[15:48:09 BST] Installing kubeadm…                           node=172.31.190.155 os=ubuntu
INFO[15:48:48 BST] Deploying configuration files…                node=172.31.191.202 os=ubuntu
INFO[15:48:48 BST] Deploying configuration files…                node=172.31.189.142 os=ubuntu
INFO[15:48:49 BST] Deploying configuration files…                node=172.31.190.155 os=ubuntu
INFO[15:48:49 BST] Generating kubeadm config file…              
INFO[15:48:50 BST] Configuring certs and etcd on first controller… 
INFO[15:48:50 BST] Ensuring Certificates…                        node=172.31.189.142
INFO[15:48:53 BST] Downloading PKI…                             
INFO[15:48:53 BST] Downloading PKI files…                        node=172.31.189.142
INFO[15:48:54 BST] Creating local backup…                        node=172.31.189.142
INFO[15:48:54 BST] Deploying PKI…                               
INFO[15:48:54 BST] Uploading files…                              node=172.31.191.202
INFO[15:48:54 BST] Uploading files…                              node=172.31.190.155
INFO[15:48:56 BST] Configuring certs and etcd on consecutive controller… 
INFO[15:48:56 BST] Ensuring Certificates…                        node=172.31.191.202
INFO[15:48:56 BST] Ensuring Certificates…                        node=172.31.190.155
INFO[15:48:58 BST] Initializing Kubernetes on leader…           
INFO[15:48:58 BST] Running kubeadm…                              node=172.31.189.142
WARN[15:48:58 BST] Task failed…                                 
WARN[15:49:03 BST] Retrying task…                               
INFO[15:49:03 BST] Initializing Kubernetes on leader…           
INFO[15:49:03 BST] Running kubeadm…                              node=172.31.189.142
WARN[15:49:04 BST] Task failed…                                 
WARN[15:49:14 BST] Retrying task…                               
INFO[15:49:14 BST] Initializing Kubernetes on leader…           
INFO[15:49:14 BST] Running kubeadm…                              node=172.31.189.142
WARN[15:49:14 BST] Task failed…                                 
Error: failed to init kubernetes on leader: + export PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/sbin:/usr/local/bin:/opt/bin
+ PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/sbin:/usr/local/bin:/opt/bin
+ [[ -f /etc/kubernetes/admin.conf ]]
+ sudo kubeadm init --config=./kubeone/cfg/master_0.yaml
W0519 14:49:14.430045    5127 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
error execution phase preflight: [preflight] Some fatal errors occurred:
	[ERROR CRI]: container runtime is not running: output: Client:
 Debug Mode: false

Server:
ERROR: Error response from daemon: client version 1.40 is too new. Maximum supported API version is 1.39
errors pretty printing info
, error: exit status 1
[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
To see the stack trace of this error execute with --v=5 or higher: failed to exec command: set -xeu pipefail

export "PATH=$PATH:/sbin:/usr/local/bin:/opt/bin"


if [[ -f /etc/kubernetes/admin.conf ]]; then
	sudo kubeadm  token create uigefr.8jxecq2221gzxe96 --ttl 1h0m0s
	exit 0;
fi
sudo kubeadm  init --config=./kubeone/cfg/master_0.yaml
: Process exited with status 1
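
For reference, the failing CRI preflight can be reproduced by hand on one of the control-plane nodes, assuming SSH access and the docker CLI on the node (these are stock docker commands, nothing kubeone-specific):

docker version
# the Client section prints, then the Server section fails with the same
# "client version 1.40 is too new. Maximum supported API version is 1.39"

DOCKER_API_VERSION=1.39 docker version
# pinning the client's API version makes the call succeed, confirming the
# daemon only speaks API versions up to 1.39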

What is the expected behavior:

Installation should succeed

How to reproduce the issue:
I haven't made any major modifications to the AWS terraform examples, so I assume trying the example as-is will fail as well.

Anything else we need to know?
I don't think so

Information about the environment:
KubeOne version (kubeone version):

{
  "kubeone": {
    "major": "0",
    "minor": "11",
    "gitVersion": "0.11.1",
    "gitCommit": "39c74f68a0da5668b0e805692df89d42cb1e62f0",
    "gitTreeState": "",
    "buildDate": "2020-04-08T09:34:16Z",
    "goVersion": "go1.14.1",
    "compiler": "gc",
    "platform": "linux/amd64"
  },
  "machine_controller": {
    "major": "1",
    "minor": "11",
    "gitVersion": "v1.11.1",
    "gitCommit": "",
    "gitTreeState": "",
    "buildDate": "",
    "goVersion": "",
    "compiler": "",
    "platform": "linux/amd64"
  }
}

Operating system: Ubuntu
Provider you're deploying cluster on: AWS
Operating system you're deploying on: Ubuntu

ja573 added the kind/bug label on May 19, 2020
xmudrii (Member) commented May 19, 2020

@ja573 There is a PR #896 that is supposed to fix this. We have some problems with tests right now, but we hope it'll be merged soon.

xmudrii self-assigned this May 19, 2020
xmudrii added this to the v1.0 milestone May 19, 2020
ja573 (Author) commented May 20, 2020

> @ja573 There is a PR #896 that is supposed to fix this. We have some problems with tests right now, but we hope it'll be merged soon.

@xmudrii sorry to bug you with this, but now that the PR is merged, how can I get the patch? Will there be a release with it soon?

kron4eg (Member) commented May 20, 2020

@ja573 there will be a new alpha release once we have #899 merged. Plus, we plan to backport the fix to the 0.11 release branch.

ja573 (Author) commented May 20, 2020

That's great - thanks, @kron4eg

ja573 (Author) commented May 20, 2020

Just in case anyone needs a quick fix for this, add the following line to /etc/environment on all the control-plane nodes (a sketch for applying it is below):

DOCKER_API_VERSION=1.39
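
A minimal sketch for applying this across the three control-plane nodes from the log above, assuming SSH access as the ubuntu user (substitute your own user and IPs). New SSH sessions read /etc/environment at login, so kubeone's subsequent connections will pick up the variable:

for node in 172.31.189.142 172.31.191.202 172.31.190.155; do
  ssh ubuntu@"$node" "echo 'DOCKER_API_VERSION=1.39' | sudo tee -a /etc/environment"
done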
