Support GKE / change the default controller namespace #99
Conversation
Separating it also means we have a little bit more clarity over access to the encryption private key, and it is entirely separate from any system components which might have admin permissions (say, the Kubernetes dashboard). Fixes bitnami-labs#90.
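To make the access-control point concrete, here is a rough sketch (not part of this PR; the role name and service-account name are assumptions) of how a dedicated namespace lets RBAC scope access to the sealing key:

```sh
# Grant the controller's service account access to Secrets only within the
# dedicated namespace; nothing here grants anything in kube-system.
kubectl create role sealing-key-admin -n sealed-secrets \
  --verb=get,list,create,update --resource=secrets
kubectl create rolebinding sealing-key-admin -n sealed-secrets \
  --role=sealing-key-admin \
  --serviceaccount=sealed-secrets:sealed-secrets-controller
```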
Love it, thanks for doing all that work on the documentation!
Just a suggestion for simplifying the migration process...
README.md
Or on Mac OS:
```sh
$ sed -i '' 's/kube-system/sealed-secrets/g' master.key
```
Really, OS X doesn't have a "normal" `sed`? Perhaps we can do `sed <master.key >moved.key` or something that works on both platforms..
Yeah, annoying indeed. I'll see if I can come up with a generic command.
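A portable variant that avoids the GNU/BSD `sed -i` incompatibility could simply write to a new file (a sketch; `moved.key` is just an example name):

```sh
# Works identically with GNU sed (Linux) and BSD sed (macOS): no in-place
# editing, just rewrite the namespace into a new file.
sed 's/kube-system/sealed-secrets/g' master.key > moved.key
```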
```sh
$ sed -i '' 's/kube-system/sealed-secrets/g' master.key
```
I think we should create the secret here, before installing the new controller. That way we don't need to restart the controller post-install.
Something like:
3. Load the existing master secret key into the new namespace:
```sh
$ kubectl create -f moved.key
```
What I actually think we should do is combine these 3 steps into one, since I think it's easier to see what's going on as a single step (please disagree if you don't think so):
1. Copy your existing master key to the new namespace:
```sh
$ kubectl create namespace sealed-secrets
$ kubectl get secret -n kube-system sealed-secrets-key -o yaml |
sed 's/kube-system/sealed-secrets/' |
kubectl create -f -
```
2. Install the new controller.
3. Delete the old kube-system controller.
4. Verify unsealing new/modified keys is working as expected, then delete the old kube-system master key.
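For illustration, steps 2-4 might look roughly like this (a sketch; the deployment and secret names are assumed from the stock manifests and may differ):

```sh
# 2. Install the controller into the new sealed-secrets namespace.
kubectl apply -f controller.yaml

# 3. Remove the old controller from kube-system.
kubectl delete deployment -n kube-system sealed-secrets-controller

# 4. After verifying that sealing/unsealing still works, remove the old master key.
kubectl delete secret -n kube-system sealed-secrets-key
```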
@anguslees will having two controllers in different namespaces create any race conditions on a live cluster? Hopefully not, since that would mean we can simplify/reorder these steps.
I've gone ahead and updated this, can always re-order 3+4 if there's any possibility for race conditions.
We already have the possibility of 2 controllers running simultaneously for a brief period during a "normal" controller upgrade (same namespace/Deployment upgrade). This should be fine (at worst we decrypt a SealedSecret twice).
Travis is failing because we need to create the new namespace. Need to add something like the following at ~L27 in controller-norbac.jsonnet: `ns: kube.Namespace($.namespace.metadata.namespace),`
Looks like I need a way to force the namespace to the top of the YAML file with kubecfg or similar. The build fails, but the controller.yaml generated locally includes the namespace definition. Unless kubectl handles that, in which case it must be a CI generation problem.
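A quick local check of the generated manifest might look like this (a sketch; it assumes `kubecfg` is on the PATH and the file name matches this repo):

```sh
# Render the jsonnet and confirm a Namespace object is emitted, and where it
# appears relative to the namespaced resources.
kubecfg show controller-norbac.jsonnet | grep -n -B1 -A2 'kind: Namespace'
```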
That's very odd:
Does kubectl wait for the namespace to be available for use?
Wat 🤦♂️ I turned on extra kubectl debugging in #100. I learned:
I will fix this in kubecfg, but I'd like to not block this PR on waiting for a kubecfg release.
Whatever we do, we should update the install instructions similarly, since this CI failure is showing us what users will experience when they try the same. I think this means I'd prefer (3) (generate the right yaml somehow), since that contains the hackery within that small build step and doesn't affect users.
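For reference, one way to sidestep the ordering problem in CI would be to create the namespace in a separate, earlier step (a sketch, not necessarily what this PR ends up doing; the file name is assumed):

```sh
# Ensure the namespace exists before the namespaced resources are created;
# ignore the AlreadyExists error so the script stays re-runnable.
kubectl create namespace sealed-secrets || true
kubectl apply -f controller.yaml
```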
@anguslees how about e36869b?
lgtm, but CI is still choking with:
I don't see anywhere obvious where we're double-creating the namespace, and I've run out of debugging time for today :( I'll come back to it tomorrow, unless you find something new before then. Wow, this looked like such a straightforward change :( Fwiw, I suspect we should change our install instructions / CI script to say `kubectl apply -f` rather than `kubectl create -f`, even though that really only papers over the problem.
Is this still being worked on and considered for merging into a release?
@lewisdawson I looked into this for a while but couldn't spot the problem. I suspect parts of the CI setup need to be reworked slightly. If you have any ideas on how to fix that part, I think the rest is mergeable.
This fixes CI for me:
@anguslees are you happy for us to change the create to apply? You mention above it's only papering over the problem. I tend to agree, but I'd prefer to have GKE support sooner.
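For context, the create/apply difference being discussed (a sketch; the file name is only illustrative):

```sh
# `kubectl create` fails with an AlreadyExists error if any resource (e.g. the
# namespace) already exists, so re-running the same script breaks.
kubectl create -f controller.yaml

# `kubectl apply` is idempotent: it creates missing resources and updates
# existing ones, so repeated runs succeed.
kubectl apply -f controller.yaml
```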
@anguslees I've updated this PR so the CI is working again, let me know if you want any other changes or you'd like me to squash/rebase it etc.
Hello @c-knowles @anguslees, Ricardo Carvalho
@ricardompcarvalho perhaps try reversing steps 3 and 4 to delete the old controller first; as long as you've done the backup in step 1, it should not be dangerous. If that works better I can update the PR before it gets merged.
@c-knowles thanks for answering, but I don't understand exactly what you mean by "revert"!!!
@ricardompcarvalho reverse rather than revert; to clarify, I mean run step 4 first and then step 3 after. To try to answer your other questions:
@c-knowles thanks once more.
@ricardompcarvalho - we installed directly into the new sealed-secrets namespace. I suggest fetching the public cert once and saving that for offline use. Otherwise you will need to add the `--controller-namespace` flag to every kubeseal invocation.
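A sketch of that workflow (the flags exist in kubeseal; the file names and namespace are assumptions based on this thread):

```sh
# Fetch the controller's public certificate once and keep it for offline use.
kubeseal --controller-namespace sealed-secrets --fetch-cert > sealed-secrets-cert.pem

# Later, seal secrets without contacting the cluster by passing the saved cert.
kubeseal --cert sealed-secrets-cert.pem < mysecret.json > mysealedsecret.json
```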
I'm not against the spirit of this PR, but it has become stale and we need more discussion in #90. It seems that nowadays you can use a namespace of your choice for the controller. On the other hand, a dedicated namespace is usually the most foolproof way of ensuring that your RBAC rules do protect your private key properly. Anyway, let's keep the discussion in #90 and possibly in #233.
For what it's worth, what is here does work; I've used it. No need to install elsewhere and migrate on the above question: the new default here is a namespace specific to sealed-secrets, which ensures RBAC etc. is more easily managed.
@c-knowles sorry, could you please rephrase? My non-native brain still does a poor job parsing some English sentences.
Sure, the main things I wanted to convey: what is in this PR does work (I've used it), there's no need to install elsewhere and migrate, and a namespace dedicated to sealed-secrets makes the RBAC side easier to manage.
Happy to close it off as you did; I just wanted to give an update as it hadn't had any action and I'm no longer actively using this project.
Thanks. Yeah, as I said, I'd personally lean towards defaulting to a dedicated namespace in principle, but as this PR no longer applies cleanly on master I'd like to first understand the implications before putting in some work to adapt it against HEAD. For example, one pet peeve of mine is that it should be possible to use sealed secrets in both "system" mode and in "bring your own controller" mode, and as part of that journey we might have the sealed secrets controller create new sealing keys per namespace. If we have that, the question of whether a dedicated namespace should be the default becomes a different one.