
Low-privilege, self-service deployments in shared clusters #233

Open
mkmik opened this issue Sep 2, 2019 · 12 comments
Labels
enhancement · help wanted (Feature requests approved by maintainers that are not included in the project roadmap)

Comments

@mkmik
Collaborator

mkmik commented Sep 2, 2019

Currently we assume a cluster admin will be willing to install the sealed-secrets controller globally.
This assumption doesn't hold for all of our users.

Many people end up on a shared k8s cluster where they get access to only one namespace (or a handful of them).

We should allow such users to self-service and deploy their own instance of the sealed-secrets controller that will serve their own sealed secrets resources in their own namespace.

It's not yet clear if we should limit this to only one namespace or whether the controller should operate on all sealed secrets resources the RBAC rules allow such a service account to access.

In any case we need to devise a mechanism for multiple controllers to avoid stomping on each other. Since each controller will have a different set of private keys, the effects of stomping on each other will not primarily be correctness issues as such, but polluted logs, spurious k8s events, and possibly unwanted side effects on work-queue retries, etc.
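One conceivable disambiguation mechanism (purely hypothetical — this is not an existing sealed-secrets feature) would be to mark each SealedSecret with the controller instance that owns it, so other instances skip it instead of failing to decrypt it and generating noise:

```yaml
# Hypothetical sketch: an annotation naming the controller instance that
# should process this resource; other instances would ignore it. The
# annotation key, names, and namespace below are all assumptions.
apiVersion: bitnami.com/v1alpha1
kind: SealedSecret
metadata:
  name: db-credentials
  namespace: team-a
  annotations:
    sealedsecrets.bitnami.com/controller-instance: team-a  # hypothetical
spec:
  encryptedData:
    password: AgBy...  # placeholder for ciphertext sealed with team-a's public key
```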

@mkmik mkmik changed the title Low-privilege, multi-tenant deployment Low-privilege, self-service deployments in shared clusters Sep 2, 2019
@jmichalek132

Hi, in my case I don't need to avoid multiple controllers; however, I cannot install sealed secrets globally.

@chrisob

chrisob commented Aug 19, 2020

Not to add an unnecessary 👍, but IMHO this is a really important issue, as multi-tenant setups are more and more common (e.g. OpenShift).

@ThomasVitt
Contributor

Same here! My customer has a large multi-tenant Red Hat OpenShift cluster, and I won't get cluster-admin rights, only rights for a dozen namespaces. The sealed-secrets controller won't stop complaining about not having a cluster-wide view of sealed secrets.

@mkmik
Collaborator Author

mkmik commented Jan 12, 2021

See related #501

@mkmik
Collaborator Author

mkmik commented Jan 12, 2021

@ThomasVitt do you have the rights to install a CRD?

@ThomasVitt
Contributor

No, but the CRD has been installed by the cluster admins.
I found the "--all-namespaces" flag, but it doesn't help much, because I want the controller to watch a handful of namespaces which I control, not just one.

@mkmik
Collaborator Author

mkmik commented Jan 13, 2021

@ThomasVitt for now you have to run N copies of the controller, one in each namespace.

It's technically possible for us to add support for watching a specific set of namespaces; this is a common problem with Kubernetes controllers/operators.
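The per-namespace deployments can be sketched like this (the Helm release/chart names and the namespace names are assumptions; the loop only prints the install commands so they can be reviewed before being run against a real cluster):

```shell
# One controller instance per namespace you control; each instance will
# generate its own key pair on first start.
NAMESPACES="team-a team-b team-c"
for ns in $NAMESPACES; do
  echo "helm install sealed-secrets sealed-secrets/sealed-secrets --namespace $ns"
done
```

When sealing, kubeseal can then be pointed at the instance in a given namespace with its `--controller-namespace` (and, if needed, `--controller-name`) flag, so each secret is encrypted with that instance's public key.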

@ThomasVitt
Contributor

Thanks for the quick reply!
OK then I'll deploy the controller N times and wait for the new version containing this feature :-D

@mkmik
Collaborator Author

mkmik commented Jan 13, 2021

The biggest hassle is that each controller will have its own private keys, so you need to make sure you use the right public key to encrypt each secret.

One temporary workaround is to just pick a "main" namespace (the one whose public key you tell your users to seal their secrets with) and copy its private keys to the other namespaces.

See the backup section/FAQ of the README for instructions on how to locate and download/transfer the sealed-secrets private key secret.
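That key-copying workaround could be sketched as follows. The label selector is the one the README's backup instructions use to find the controller's key secrets; the "main"/target namespace names and the `sealed-secrets-controller` deployment name are assumptions, and the whole thing is wrapped in a function so nothing touches a cluster until you call it:

```shell
# Copy the sealing key secrets from one namespace to another, then restart
# the target controller so it reloads its keys.
copy_sealing_keys() {
  src="$1"; dst="$2"
  kubectl -n "$src" get secret \
    -l sealedsecrets.bitnami.com/sealed-secrets-key -o yaml \
    | sed "s/namespace: $src/namespace: $dst/" \
    | kubectl -n "$dst" apply -f -
  kubectl -n "$dst" rollout restart deployment sealed-secrets-controller
}

# e.g.: copy_sealing_keys main team-b
```

The `sed` rewrite of the namespace field is crude but works for the exported secret manifests; a stricter approach would strip the metadata and re-create the secrets explicitly.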

@childofthewired

The deployment of the controller seems to have issues in OpenShift Dedicated as well.

With limited administrative rights, users don't inherit the ability to grant access to the CRD API groups:

Error from server (Forbidden): error when creating "STDIN": clusterroles.rbac.authorization.k8s.io "secrets-unsealer" is forbidden: user "bob" (groups=["dedicated-admins" "system:authenticated:oauth" "system:authenticated"]) is attempting to grant RBAC permissions not currently held:
{APIGroups:[""], Resources:["events"], Verbs:["create" "patch"]}
{APIGroups:[""], Resources:["secrets"], Verbs:["get" "create" "update" "delete"]}
{APIGroups:["bitnami.com"], Resources:["sealedsecrets"], Verbs:["get" "list" "watch"]}
{APIGroups:["bitnami.com"], Resources:["sealedsecrets/status"], Verbs:["update"]}
Error from server (NotFound): error when creating "STDIN": clusterroles.rbac.authorization.k8s.io "secrets-unsealer" not found
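The error above is about creating a cluster-scoped ClusterRole. A sketch of an alternative for a namespaced deployment: the same rules expressed as a namespaced Role, which a namespace admin can usually create, since RBAC only requires them to already hold those permissions within that namespace (the role name and namespace below are assumptions):

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: secrets-unsealer
  namespace: team-a
rules:
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["create", "patch"]
  - apiGroups: [""]
    resources: ["secrets"]
    verbs: ["get", "create", "update", "delete"]
  - apiGroups: ["bitnami.com"]
    resources: ["sealedsecrets"]
    verbs: ["get", "list", "watch"]
  - apiGroups: ["bitnami.com"]
    resources: ["sealedsecrets/status"]
    verbs: ["update"]
```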

@github-actions github-actions bot added the Stale label Jan 28, 2022
@juan131 juan131 added help wanted Feature requests approved by maintainers that are not included in the project roadmap and removed Stale labels Feb 3, 2022
@bitnami-labs bitnami-labs deleted a comment from github-actions bot Feb 3, 2022
@RichardNixon52

Any updates?

@varkrish

Any update on this? I think the ExternalSecrets controller has this kind of feature.

The ExternalSecret manifest allows scoping the access of the kubernetes-external-secrets controller. This allows deploying multiple kubernetes-external-secrets instances in the same cluster, with each instance accessing a set of ExternalSecrets.
