Error while joining already joined nodes #1461
@MiroslavRepka a few points would help to triage this issue:
Original manifest:

```yaml
apiVersion: kubeone.io/v1beta1
kind: KubeOneCluster
name: cluster
versions:
  kubernetes: 'v1.19.0'
clusterNetwork:
  cni:
    external: {}
cloudProvider:
  none: {}
  external: false
addons:
  enable: true
  path: "addons"
apiEndpoint:
  host: 'x.x.x.x'
  port: 6443
controlPlane:
  hosts:
  - publicAddress: 'x.x.x.x'
    privateAddress: '192.168.2.1'
    sshPrivateKeyFile: './private.pem'
staticWorkers:
  hosts:
  - publicAddress: 'x.x.x.x'
    privateAddress: '192.168.2.2'
    sshPrivateKeyFile: './private.pem'
machineController:
  deploy: false
```

Updated manifest:

```yaml
apiVersion: kubeone.io/v1beta1
kind: KubeOneCluster
name: cluster
versions:
  kubernetes: 'v1.19.0'
clusterNetwork:
  cni:
    external: {}
cloudProvider:
  none: {}
  external: false
addons:
  enable: true
  path: "addons"
apiEndpoint:
  host: 'x.x.x.x'
  port: 6443
controlPlane:
  hosts:
  - publicAddress: 'x.x.x.x'
    privateAddress: '192.168.2.1'
    sshPrivateKeyFile: './private.pem'
staticWorkers:
  hosts:
  - publicAddress: 'x.x.x.x'
    privateAddress: '192.168.2.2'
    sshPrivateKeyFile: './private.pem'
  - publicAddress: 'x.x.x.x'
    privateAddress: '192.168.2.3'
    sshPrivateKeyFile: './private.pem'
machineController:
  deploy: false
```

I will provide a full log later today since I no longer have that version installed.
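The only functional difference between the two manifests is the additional static worker host, extracted here for readability:

```yaml
staticWorkers:
  hosts:
  - publicAddress: 'x.x.x.x'          # existing worker, already joined
    privateAddress: '192.168.2.2'
    sshPrivateKeyFile: './private.pem'
  - publicAddress: 'x.x.x.x'          # newly added worker
    privateAddress: '192.168.2.3'
    sshPrivateKeyFile: './private.pem'
```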
@xmudrii didn't we recently remove some small short-circuit "check & exit"? Seems like that's the case here. KubeOne should have checked whether the node is already initialized and skipped it.
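A minimal sketch of the kind of guard being described, assuming a plain SSH probe (the file checked is an assumption for illustration: kubeadm writes /etc/kubernetes/kubelet.conf once a node has joined, so its presence is a common idempotency signal, but this is not KubeOne's actual implementation):

```sh
# Hedged sketch: skip `kubeadm join` for nodes that already carry a
# kubelet.conf, which kubeadm creates when a node joins the cluster.
for node in 192.168.2.2 192.168.2.3; do
  if ssh "ubuntu@${node}" test -f /etc/kubernetes/kubelet.conf; then
    echo "${node}: already initialized, skipping join"
  else
    echo "${node}: not initialized, would run kubeadm join here"
  fi
done
```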
I can confirm I have the same issue; it only occurs in the 1.3.0 alpha versions. In the latest stable version everything works.
@MiroslavRepka @randrusiak Thank you for reporting the issue! We have fixed it in #1485 and the fix will be included in the upcoming 1.3.0-rc.0 release.
What happened:
When I tried to apply the updated manifest for the cluster, with the new node added, `kubeone apply` threw an error while trying to join already joined nodes.

What is the expected behavior:
Apply the updated manifest and join only the nodes that still need to be joined.
How to reproduce the issue:
1. `kubeone apply -m <manifest> -y`
2. Add a new static worker host to the manifest
3. `kubeone apply -m <manifest> -y` (the second run fails on the already joined nodes; a quick verification sketch follows below)
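A quick way to see which nodes the first run actually joined, before re-running apply (a sketch; the manifest filename is an assumption):

```sh
# Fetch the cluster kubeconfig through kubeone, then list the nodes.
# `kubeone.yaml` stands in for whatever manifest path you use.
kubeone kubeconfig -m kubeone.yaml > kubeconfig
kubectl --kubeconfig kubeconfig get nodes -o wide
```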
Anything else we need to know?
After installing kubeone version 1.2.3, the bug was resolved.

Information about the environment:
KubeOne version (`kubeone version`): 1.3.0 (not sure which alpha, since kubeone was reinstalled)
Operating system: Pop!_OS 21.04
Provider you're deploying cluster on: Hetzner
Operating system you're deploying on: Ubuntu