[release-1.21] Cluster fails to provision when using AWS cloud provider #1618
Comments
I think we want the 1.21.3+rke2r2 milestone
@cjellick I thought we were not actually planning on doing a 1.21.3+rke2r2 release - the current RCs are just for QA. The next actual release will be 1.21.4+rke2r1.
That's what I thought too
This has been merged to master; I have updated the milestone to 1.22 and created a backport issue for 1.21. CORRECTION: this was merged to release-1.21 first; will convert the other issue into a forwardport.
Validated using v1.21.3-rc6+rke2r2. No longer need to supply node-name as hostname when using the AWS cloud provider. Can still supply it if desired. It will automatically use an equivalent to
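For context, the workaround that was previously required was to supply node-name explicitly alongside the cloud provider flag. A minimal sketch of that configuration, assuming the standard rke2 config file path and a hypothetical EC2 private DNS hostname as the value:

```yaml
# /etc/rancher/rke2/config.yaml
cloud-provider-name: aws
# Hypothetical example value: the node's EC2 private DNS name.
node-name: ip-10-0-0-1.us-east-2.compute.internal
```

With the fix validated above, the node-name entry can be omitted and rke2 resolves an acceptable node name automatically.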
Environmental Info:
RKE2 Version: v1.21.3+rke2r1
Node(s) CPU architecture, OS, and Version:
Ubuntu 20.04
Cluster Configuration:
1 Server
Describe the bug:
When not passing `--node-name` to rke2 server and using `--cloud-provider-name=aws`, the kubelet fails to register itself with the Kubernetes API because of the NodeRestriction admission plugin (see rancher/rancher#34105 (comment)).
Steps To Reproduce:
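A minimal server configuration that reproduces the failure, assuming the standard rke2 config file path (the flag name is taken from the issue; everything else is left at defaults):

```yaml
# /etc/rancher/rke2/config.yaml
# Note: node-name is deliberately NOT set here, which triggers the bug.
cloud-provider-name: aws
```

Starting `rke2 server` with only this configuration should leave the kubelet unable to register, rejected by the NodeRestriction admission plugin.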
Expected behavior:
The cluster should start normally
Actual behavior:
Kubelet fails to register with the following error: