
AWS iam node roles missing permissions for dockerconfig and addons #15287

Closed
pjaak opened this issue Apr 3, 2023 · 16 comments
Labels
  • kind/bug: Categorizes issue or PR as related to a bug.
  • lifecycle/rotten: Denotes an issue or PR that has aged beyond stale and will be auto-closed.
  • triage/needs-information: Indicates an issue needs more information in order to work on it.

Comments

@pjaak commented Apr 3, 2023

/kind bug

1. What kops version are you running? The command kops version will display
this information.

Client version: 1.26.2

2. What Kubernetes version are you running? kubectl version will print the
version if a cluster is running or provide the Kubernetes version specified as
a kops flag.

1.26.3

3. What cloud provider are you using?
AWS

4. What commands did you run? What is the simplest way to reproduce this issue?
Kops upgrade to kubernetes version 1.26

5. What happened after the commands executed?
I went to rotate the master nodes, but noticed that some of the control-plane containers weren't coming up due to 401 errors. We use our own private container registry. I did some investigation and noticed that the dockerconfig file was not getting populated on all the EC2 instances. I then worked out that the IAM roles were missing the required S3 permissions for these two locations:

/secrets/dockerconfig
/addons/*

As a temporary fix, I have added the S3 permissions to the additionalPolicies section.
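
(For reference, the workaround in the cluster spec looks roughly like the sketch below, e.g. for the node role; the bucket and cluster names are placeholders and the paths should match your state store layout.)

spec:
  additionalPolicies:
    node: |
      [
        {
          "Effect": "Allow",
          "Action": ["s3:GetObject"],
          "Resource": [
            "arn:aws:s3:::<state-store-bucket>/<cluster-name>/secrets/dockerconfig",
            "arn:aws:s3:::<state-store-bucket>/<cluster-name>/addons/*"
          ]
        }
      ]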

@k8s-ci-robot added the kind/bug label on Apr 3, 2023
@pjaak (Author) commented Apr 11, 2023

Here is the update (screenshot omitted): on the nodes role it removes them. However, on the master IAM role I can see it still keeps this:

            "Action": [
                "s3:Get*"
            ],
            "Effect": "Allow",
            "Resource": "arn:aws:s3:::cluster-k8s-state-store/clustername/*"
        },

@colinhoglund (Contributor) commented Apr 13, 2023

+1, this is a pretty significant breaking change, and it seems like removing the feature gate that preserved the existing behavior was a conscious decision.

Is there a documented alternative or some other way to restore the previous behavior?

@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle stale
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

@k8s-ci-robot added the lifecycle/stale label on Jul 12, 2023
@zetaab (Member) commented Jul 13, 2023

@pjaak & @colinhoglund please check #15539: if you allow wildcard S3 "get" access to nodes, you are vulnerable to that.

Could you please share the cluster spec so we can get an idea of what kind of cluster you have?

To me it looks like you are hitting https://github.com/kubernetes/kops/blob/master/pkg/model/iam/iam_builder.go#L707 (and possibly https://github.com/kubernetes/kops/blob/master/pkg/model/iam/iam_builder.go#L464 as well, if you use "none" DNS). Nowadays AWS uses kops-controller for bootstrapping, and that already removes this access.

But anyway, we need more information to understand why you are hitting this bug. You should not need access to addons on normal nodes, AFAIK. Also, the dockerconfig should be coming from kops-controller: https://github.com/kubernetes/kops/blob/master/cmd/kops-controller/pkg/server/node_config.go#L65-L80. Have you rotated all instances in your cluster?
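
(As a side note, a quick way to see what a node role actually grants is to dump its inline policy with the AWS CLI; the role name below is a placeholder for your cluster's node role.)

aws iam list-role-policies --role-name nodes.<cluster-name>
aws iam get-role-policy --role-name nodes.<cluster-name> --policy-name <policy-name-from-the-previous-command>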

@zetaab (Member) commented Jul 13, 2023

/remove-lifecycle stale

@k8s-ci-robot removed the lifecycle/stale label on Jul 13, 2023
@johngmyers (Member)

The dockerconfig on kOps 1.26 worker nodes comes from kops-controller. If some control-plane nodes are still on an older kOps version, then they might not be serving the dockerconfig, causing kOps 1.26 worker nodes to fail to start (until they talk to a 1.26 kops-controller).

@hakman (Member) commented Jul 13, 2023

... causing kOps 1.26 worker nodes to fail to start (until they talk to a 1.26 kops-controller).

Which should happen eventually, right?

@johngmyers (Member)

It should, unless they have some issue preventing the new control-plane nodes from starting.

@johngmyers (Member)

@pjaak could you give more information about the control-plane nodes not coming up? It seems to me that was the root cause and the node role would not affect the control-plane nodes.

@johngmyers added the triage/needs-information label on Jul 14, 2023
@pjaak (Author) commented Jul 14, 2023

Hi @johngmyers,

The node role is what provided permissions to download the dockerconfig.json file from S3. If this role is wrong, we can't download dockerconfig.json. We use our own registry for all the control-plane component images, so if a node can't download those images, it can't start up the control-plane components such as the API server.

@johngmyers (Member)

The node role is not used by the control-plane nodes. The control-plane nodes use NodeRoleMaster, which has much more extensive permissions on the S3 state store.

@gustav-b commented Dec 9, 2023

I just had a long (and somewhat frustrating) debugging episode that turned out to be caused by this issue.

This is on kOps 1.28.1 with AWS, Kubernetes 1.26.11, using Gossip DNS.

My problem was that regular nodes did not get a /root/.docker/config.json on first boot, despite a dockerconfig secret existing. Control-plane nodes, however, did get it. Looking through the nodeup logs, I saw that it tried to read the secret:

nodeup[7896]: I1208 21:46:17.442071    7896 s3fs.go:371] Reading file "s3://<redacted>-kops-state/dev.k8s.local/secrets/dockerconfig"

As I saw no error following it, I assumed the read had succeeded, but I also noticed that there was no File task in the logs for writing the secret to disk and chmod-ing it.

Looking through the source I see that s3fs read errors are actually swallowed by the VFSSecretStoreReader. First here:

func (c *VFSSecretStoreReader) loadSecret(ctx context.Context, p vfs.Path) (*fi.Secret, error) {
    data, err := p.ReadFile(ctx)
    if err != nil {
        if os.IsNotExist(err) {
            return nil, nil
        }
    }
    s := &fi.Secret{}
So a non-"not found" error is silently dropped, and the code goes on to try to JSON-unmarshal 0 bytes. This shadows the actual error with an unexpected end of JSON input error. But that error is in turn swallowed by the caller:

dockercfg, _ := b.SecretStore.Secret(key)
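
(For illustration only, here is a minimal sketch of how the non-"not found" read error could be propagated instead of dropped. This is a hypothetical patch, not the actual kOps code; it assumes the usual fmt/encoding/json/os imports and that the function ends by JSON-unmarshalling the bytes it read, as the "unexpected end of JSON input" error suggests.)

func (c *VFSSecretStoreReader) loadSecret(ctx context.Context, p vfs.Path) (*fi.Secret, error) {
    data, err := p.ReadFile(ctx)
    if err != nil {
        if os.IsNotExist(err) {
            // A genuinely missing secret is not an error.
            return nil, nil
        }
        // Surface e.g. a 403 AccessDenied instead of continuing with empty data.
        return nil, fmt.Errorf("reading secret %v: %w", p, err)
    }
    s := &fi.Secret{}
    if err := json.Unmarshal(data, s); err != nil {
        return nil, fmt.Errorf("parsing secret %v: %w", p, err)
    }
    return s, nil
}

The call site would then also need to check the returned error instead of discarding it.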

I patched nodeup with logging of the errors and restarted kops-configuration.service to see what was happening:

nodeup[7896]: I1208 21:46:17.449886    7896 s3fs.go:380] Read error: AccessDenied: Access Denied
nodeup[7896]:         status code: 403, request id: <redacted>, host id: <redacted>

Looking at the node IAM policy (nodes.dev.k8s.local) it was obvious that it had no access to the dockerconfig secret on s3://<redacted>-kops-state/dev.k8s.local/secrets/dockerconfig:

{
    "Statement": [
        {
            "Action": [
                "s3:Get*"
            ],
            "Effect": "Allow",
            "Resource": [
                "arn:aws:s3:::<redacted>-kops-state/dev.k8s.local/cluster-completed.spec",
                "arn:aws:s3:::<redacted>-kops-state/dev.k8s.local/igconfig/node/*"
            ]
        }
    ]
}

Some questions:

  1. Does Gossip DNS mean no kops-controller bootstrapping and therefore no secrets from there?
  2. If so, and that means that dockerconfig is not supported with Gossip DNS, shouldn't it be documented somewhere?
  3. Is the swallowing/shadowing of errors in nodeup intentional, or should I file a separate issue about it?

@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle stale
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

@k8s-ci-robot added the lifecycle/stale label on Mar 8, 2024
@k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle rotten
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

@k8s-ci-robot added the lifecycle/rotten label and removed the lifecycle/stale label on Apr 7, 2024
@k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Reopen this issue with /reopen
  • Mark this issue as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close not-planned

@k8s-ci-robot (Contributor)

@k8s-triage-robot: Closing this issue, marking it as "Not Planned".

In response to the /close not-planned command in the previous comment.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.

@k8s-ci-robot closed this as not planned on May 7, 2024