AWS IAM node roles missing permissions for dockerconfig and addons #15287
Comments
+1, this is a pretty significant breaking change, and it seems like it was a conscious decision to remove the feature gate that kept the existing behavior. Is there a documented alternative or some other way to restore the previous behavior?
The Kubernetes project currently lacks enough contributors to adequately respond to all issues. This bot triages un-triaged issues according to its standard inactivity rules. You can mark this issue as fresh with /remove-lifecycle stale, close it with /close, or offer to help out with issue triage. Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale
@pjaak & @colinhoglund please check #15539 if you allow wildcard s3 get access to nodes, you are vulnerable to that. could you please share the cluster spec to get idea what kind of cluster you have? for me it looks like you are hitting https://github.com/kubernetes/kops/blob/master/pkg/model/iam/iam_builder.go#L707 (and possible if you use none dns https://github.com/kubernetes/kops/blob/master/pkg/model/iam/iam_builder.go#L464) this as well. Nowadays AWS uses kops controller for bootstrapping and its removing that access already. but anyway, we need more information to get the idea why you are hitting this bug. You should not need access to |
/remove-lifecycle stale
The dockerconfig on kOps 1.26 worker nodes comes from kops-controller. If some control-plane nodes are still on an older kOps version, then they might not be serving the dockerconfig, causing kOps 1.26 worker nodes to fail to start (until they talk to a 1.26 kops-controller).
Which should happen eventually, right?
It should, unless they have some issue preventing the new control-plane nodes from starting.
@pjaak could you give more information about the control-plane nodes not coming up? It seems to me that this was the root cause, and the node role would not affect the control-plane nodes.
Hi @johngmyers, the node role is what provided permission to download the dockerconfig.json file from S3. If this role is wrong, we can't download dockerconfig.json. We use our own registry for all the control-plane component images, so if a node can't download those images it can't start up the control-plane components such as the API server.
The node role is not used by the control-plane nodes. The control-plane nodes use
I just had a long (and somewhat frustrating) debugging episode that turned out to be caused by this issue. This is on kOps 1.28.1 with AWS, Kubernetes 1.26.11, using Gossip DNS. My problem was that regular nodes did not get a dockerconfig.
As I saw no error following it, I assumed it had succeeded, but I also noticed that there was no File task in the logs for writing the secret to disk and chmod-ing it. Looking through the source, I see that s3fs read errors are actually swallowed by the secret store reader (kops/upup/pkg/fi/secrets/vfs_secretstorereader.go, lines 76 to 83 in 2038e4c).
So the error is not returned unless it is a 404, and the code goes on to try to JSON-unmarshal 0 bytes. This shadows the actual error with an unmarshalling error (kops/nodeup/pkg/model/secrets.go, line 52 in 2038e4c).
I patched this locally to get at the underlying error.
Looking at the node IAM policy, I see:

```json
{
  "Statement": [
    {
      "Action": [
        "s3:Get*"
      ],
      "Effect": "Allow",
      "Resource": [
        "arn:aws:s3:::<redacted>-kops-state/dev.k8s.local/cluster-completed.spec",
        "arn:aws:s3:::<redacted>-kops-state/dev.k8s.local/igconfig/node/*"
      ]
    },
    …
  ]
}
```

Some questions:
The Kubernetes project currently lacks enough contributors to adequately respond to all issues. This bot triages un-triaged issues according to its standard inactivity rules. You can mark this issue as fresh with /remove-lifecycle stale, close it with /close, or offer to help out with issue triage. Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues. This bot triages un-triaged issues according to its standard inactivity rules. You can mark this issue as fresh with /remove-lifecycle rotten, close it with /close, or offer to help out with issue triage. Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs. This bot triages issues according to its standard inactivity rules. You can reopen this issue with /reopen, mark it as fresh with /remove-lifecycle rotten, or offer to help out with issue triage. Please send feedback to sig-contributor-experience at kubernetes/community.

/close not-planned
@k8s-triage-robot: Closing this issue, marking it as "Not Planned".
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.
/kind bug
1. What kops version are you running? The command kops version will display this information.
Client version: 1.26.2
2. What Kubernetes version are you running? kubectl version will print the version if a cluster is running or provide the Kubernetes version specified as a kops flag.
1.26.3
3. What cloud provider are you using?
AWS
4. What commands did you run? What is the simplest way to reproduce this issue?
kOps upgrade to Kubernetes version 1.26.
5. What happened after the commands executed?
I went to rotate the master nodes, but noticed that some of the control-plane containers weren't coming up due to 401 errors. We use our own private container registry. I did some investigation and noticed that the dockerconfig file was not getting populated on all the EC2 instances. I then worked out that the IAM roles were missing the required S3 permissions for these two locations:
As a temporary fix I have added the S3 permissions into the additionalPolicies section.
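For anyone looking at the same workaround, a minimal sketch of that kind of statement list, which would be supplied via spec.additionalPolicies.node in the cluster spec (kOps merges these statements into the generated node role policy). The bucket name, cluster name, and object paths here are placeholders/assumptions, since the exact locations were not given above; verify them against your own state store layout before using anything like this:

```json
[
  {
    "Effect": "Allow",
    "Action": ["s3:GetObject"],
    "Resource": [
      "arn:aws:s3:::<state-store-bucket>/<cluster-name>/secrets/dockerconfig",
      "arn:aws:s3:::<state-store-bucket>/<cluster-name>/addons/*"
    ]
  }
]
```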