
kubeone reset fails when LB is not functioning #474

Closed
kron4eg opened this issue May 31, 2019 · 6 comments · Fixed by #508
Labels
kind/bug Categorizes issue or PR as related to a bug. priority/important-longterm Important over the long term, but may not be staffed and/or may need multiple releases to complete.
Comments

@kron4eg
Member

kron4eg commented May 31, 2019

$ kubeone reset -v config.yaml -t tf.json
INFO[17:47:27 EEST] Resetting cluster…
INFO[17:47:27 EEST] Destroying worker nodes…
INFO[17:47:27 EEST] Building Kubernetes clientset…
Error: unable to build kubernetes clientset: unable to build dynamic client: unable to build dynamic client: Get https://artioms-api-lb-c768cdec4dc7974a.elb.eu-west-3.amazonaws.com:6443/api?timeout=32s: dial tcp 35.180.146.33:6443: connect: connection refused
@kron4eg kron4eg added the kind/bug Categorizes issue or PR as related to a bug. label May 31, 2019
@xmudrii
Member

xmudrii commented Jun 1, 2019

@kron4eg Isn't this working as expected? We use LB as the API endpoint by default. Therefore, it's also written in kubeconfig. As an eventual fix, we could try to connect to the node directly, but that would require modifying kubeconfig (possible with client-go).
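The direct-connection idea mentioned above amounts to rewriting the API server URL taken from kubeconfig before building the clientset. A minimal stdlib-only sketch of that rewrite (the helper name and node address are hypothetical; kubeone's actual fix would go through client-go's rest.Config):

```go
package main

import (
	"fmt"
	"net/url"
)

// overrideAPIEndpoint swaps the load-balancer host in a kubeconfig server
// URL for a direct control-plane node address, preserving scheme and port.
// Hypothetical helper illustrating the fallback idea; not kubeone's code.
func overrideAPIEndpoint(server, nodeAddr string) (string, error) {
	u, err := url.Parse(server)
	if err != nil {
		return "", err
	}
	port := u.Port()
	if port == "" {
		port = "6443" // default kube-apiserver port
	}
	u.Host = fmt.Sprintf("%s:%s", nodeAddr, port)
	return u.String(), nil
}

func main() {
	s, err := overrideAPIEndpoint(
		"https://artioms-api-lb-c768cdec4dc7974a.elb.eu-west-3.amazonaws.com:6443",
		"10.0.0.5")
	if err != nil {
		panic(err)
	}
	fmt.Println(s)
}
```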

@kron4eg
Member Author

kron4eg commented Jun 1, 2019 via email

@xmudrii
Member

xmudrii commented Jun 1, 2019

I agree that we should retry a few times, but I disagree that we should skip the step and proceed to SSH-ing. I prefer showing an error message instructing the user that the step can be skipped by setting --destroy-workers to false.

@kron4eg
Member Author

kron4eg commented Jun 3, 2019

SGTM

@kdomanski kdomanski added this to the v0.9 milestone Jun 4, 2019
@kron4eg kron4eg self-assigned this Jun 5, 2019
@xmudrii
Member

xmudrii commented Jun 11, 2019

kubeone reset also fails on a NotFound error (e.g. if an object has already been deleted). In such cases, we should just skip the object instead of failing. We should cover this in the PR fixing this issue.

@xmudrii xmudrii added the priority/important-longterm Important over the long term, but may not be staffed and/or may need multiple releases to complete. label Jun 18, 2019
@xmudrii
Member

xmudrii commented Jun 18, 2019

A workaround is to log a message when this happens telling the user to rerun with the --destroy-workers=false flag and then destroy the workers manually.
