
MachinePool bootstrap token does not get refreshed automatically when VMSS is manually/externally scaled #2683

Closed
mweibel opened this issue Sep 29, 2022 · 6 comments

mweibel (Contributor) commented Sep 29, 2022

/kind bug

What steps did you take and what happened:
A cluster with MachinePools and an externally managed autoscaler is necessary for this bug to appear.

The reconciliation loop for AzureMachinePool does not automatically refresh bootstrap tokens once they are rotated. New bootstrap tokens are only written into the VMSS custom data when there is a surge change or the VMSS model changes.
When a VMSS is scaled manually, or externally via cluster-autoscaler configured with the azure provider, the token in the VMSS custom data may already be outdated, so the new node cannot join the cluster.

I believe there might be two separate issues:

  1. patchVMSSIfNeeded does not check whether the custom data changed and therefore does not update it. To do this, we'd need to store e.g. a hash of the custom data in AzureMachinePool.Status and compare the hashes (a rough sketch of this check follows this list).
  2. Automatic rotation of kubeadm bootstrap tokens does not trigger a reconciliation of AzureMachinePool. Only a change in MachinePool leads to a reconciliation of AzureMachinePool, which might then update the VMSS custom data.
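
A minimal sketch of the hash comparison from point 1, assuming a hypothetical hash field persisted in AzureMachinePool.Status during the previous reconciliation; this is not the existing CAPZ code:

```go
package example

import (
	"crypto/sha256"
	"encoding/hex"
)

// hashCustomData returns a stable fingerprint of the rendered VMSS custom data.
func hashCustomData(customData []byte) string {
	sum := sha256.Sum256(customData)
	return hex.EncodeToString(sum[:])
}

// customDataChanged reports whether the VMSS needs to be patched because the
// custom data (and with it the bootstrap token) differs from what was last
// applied. storedHash would come from a hypothetical field in
// AzureMachinePool.Status written during the previous reconciliation.
func customDataChanged(storedHash string, customData []byte) bool {
	return storedHash != hashCustomData(customData)
}
```

patchVMSSIfNeeded could then run this check alongside its existing surge/model comparisons.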

What did you expect to happen:
When the kubeadm bootstrap token is refreshed, the VMSS custom data gets updated automatically and new nodes can join without issues.

Anything else you would like to add:
I'm aware that the prerequisites (MachinePools and externally managed cluster autoscaler) are a special case. I'm not sure how many users want to have an externally managed autoscaler and I'm personally testing whether I can switch to the cluster-api cluster-autoscaler provider.
However, I do think custom data changes should be considered in patchVMSSIfNeeded.

To solve point two, I believe the AzureMachinePool controller would need to watch KubeadmConfig and then kick off a reconciliation (rough sketch below). This might even be worth considering in CAPI itself, since I expect this to be an issue for all CAP* providers.
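
A rough sketch of what such a watch could look like with controller-runtime (not the actual CAPZ code; the exact Watches/handler signatures vary between controller-runtime versions, and the trimmed-down reconciler struct and the mapping function are hypothetical):

```go
package example

import (
	bootstrapv1 "sigs.k8s.io/cluster-api/bootstrap/kubeadm/api/v1beta1"
	infrav1exp "sigs.k8s.io/cluster-api-provider-azure/exp/api/v1beta1"
	ctrl "sigs.k8s.io/controller-runtime"
	"sigs.k8s.io/controller-runtime/pkg/client"
	"sigs.k8s.io/controller-runtime/pkg/handler"
	"sigs.k8s.io/controller-runtime/pkg/reconcile"
	"sigs.k8s.io/controller-runtime/pkg/source"
)

// AzureMachinePoolReconciler is trimmed down to what this sketch needs; the
// real reconciler has more fields.
type AzureMachinePoolReconciler struct {
	client.Client
}

// SetupWithManager wires up the controller so that KubeadmConfig changes
// (e.g. a rotated bootstrap token in the referenced data secret) also enqueue
// the owning AzureMachinePool for reconciliation.
func (r *AzureMachinePoolReconciler) SetupWithManager(mgr ctrl.Manager) error {
	return ctrl.NewControllerManagedBy(mgr).
		For(&infrav1exp.AzureMachinePool{}).
		Watches(
			&source.Kind{Type: &bootstrapv1.KubeadmConfig{}},
			handler.EnqueueRequestsFromMapFunc(r.kubeadmConfigToAzureMachinePool),
		).
		Complete(r)
}

// kubeadmConfigToAzureMachinePool is a hypothetical mapping function: it would
// follow the KubeadmConfig's owner references to the MachinePool and from
// there to its infrastructure AzureMachinePool, returning a request for it.
func (r *AzureMachinePoolReconciler) kubeadmConfigToAzureMachinePool(o client.Object) []reconcile.Request {
	// Lookup elided in this sketch.
	return nil
}
```

With a watch like this in place, a rotated token in the KubeadmConfig's bootstrap data would re-enqueue the AzureMachinePool, which could then apply the custom data check sketched above.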

@k8s-ci-robot k8s-ci-robot added the kind/bug Categorizes issue or PR as related to a bug. label Sep 29, 2022
@k8s-triage-robot commented

The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle stale
  • Mark this issue or PR as rotten with /lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Dec 28, 2022
mweibel (Contributor, Author) commented Jan 9, 2023

/remove-lifecycle stale

@dthorsen (Contributor) commented

/assign @BrennenMM7

@k8s-ci-robot (Contributor) commented

@dthorsen: GitHub didn't allow me to assign the following users: BrennenMM7.

Note that only kubernetes-sigs members with read permissions, repo collaborators and people who have commented on this issue/PR can be assigned. Additionally, issues/PRs can only have 10 assignees at the same time.
For more information please see the contributor guide

In response to this:

/assign @BrennenMM7

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@dthorsen (Contributor) commented

/assign @dthorsen

@jackfrancis (Contributor) commented

Fixed in #3134

@mweibel the fix provided via #3134 will be included in v1.8.0 (ETA: next week). If you find that v1.8.0 does not fix this bug, please let us know and re-open this issue.

Thanks!
