
Support GKE / change the default controller namespace #99

Closed
wants to merge 8 commits

Conversation

@cknowles commented May 9, 2018

Separating it also means we have a bit more clarity over access to the encryption private key, and it's entirely separate from any system components which might have admin permissions (say, the Kubernetes dashboard).

Fixes #90.

Contributor

@anguslees anguslees left a comment


Love it, thanks for doing all that work on the documentation!

Just a suggestion for simplifying the migration process...

README.md Outdated

Or on Mac OS:
```sh
$ sed -i '' 's/kube-system/sealed-secrets/g' master.key
```
Contributor


Really, OS X doesn't have a "normal" sed? Perhaps we can do `sed <master.key >moved.key` or something else that works on both platforms.
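A portable variant of that suggestion, writing to a new file instead of editing in place (`moved.key` is a hypothetical output filename):

```shell
# Works with both GNU sed (Linux) and BSD sed (macOS): avoid the
# incompatible -i flag entirely and redirect to a new file instead.
sed 's/kube-system/sealed-secrets/g' <master.key >moved.key
```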

Author


Yeah, annoying indeed. I'll see if I can come up with a generic command.

```sh
$ sed -i '' 's/kube-system/sealed-secrets/g' master.key
```

Contributor


I think we should create the secret here, before installing the new controller. That way we don't need to restart the controller post-install.

Something like:

3. Load the existing master secret key into the new namespace:
    ```sh
    $ kubectl create -f moved.key
    ```

What I actually think we should do is combine these 3 steps into one, since I think it's easier to see what's going on as a single step (please disagree if you don't think so):

1. Copy your existing master key to the new namespace:
    ```sh
    $ kubectl create namespace sealed-secrets
    $ kubectl get secret -n kube-system sealed-secrets-key -o yaml |
       sed 's/kube-system/sealed-secrets/' |
       kubectl create -f -
    ```
2. Install the new controller.
3. Delete the old kube-system controller.
4. Verify unsealing new/modified keys is working as expected, then delete the old kube-system master key.
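Step 4's verification could be sketched like this, with hypothetical secret/file names (`smoke-test`, `smoke-sealed.json`); the `--dry-run=client` spelling assumes a newer kubectl (older versions use plain `--dry-run`):

```shell
# Seal a throwaway secret against the controller in its new namespace,
# then confirm the controller unseals it into a regular Secret.
kubectl create secret generic smoke-test --from-literal=foo=bar \
  --dry-run=client -o json |
  kubeseal --controller-namespace sealed-secrets >smoke-sealed.json
kubectl create -f smoke-sealed.json
kubectl get secret smoke-test
```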

Author


@anguslees will having two controllers in different namespaces create any race conditions on a live cluster? Hopefully not, since that would mean we can simplify/reorder these steps.

Author


I've gone ahead and updated this; we can always re-order steps 3 and 4 if there's any possibility of race conditions.

Contributor


We already have the possibility of 2 controllers running simultaneously for a brief period during a "normal" controller upgrade (same namespace/Deployment upgrade). This should be fine (at worst we decrypt a SealedSecret twice).

@anguslees
Contributor

Travis is failing because we need to create the new namespace.

Need to add something like the following, at ~L27 in controller-norbac.jsonnet:

```
ns: kube.Namespace($.namespace.metadata.namespace),
```

@cknowles
Author

cknowles commented May 10, 2018

Looks like I need a way to force the namespace to the top of the YAML file with kubecfg or similar. The build fails, but the controller.yaml generated locally includes the namespace definition. Unless kubectl handles that, in which case it must be a CI generation problem.

@anguslees
Contributor

That's very odd:

```
namespace "sealed-secrets" created
...
Error from server (NotFound): error when creating "controller.yaml": namespaces "sealed-secrets" not found
```

@cknowles
Author

Does kubectl wait for the namespace to be available for use?

@anguslees
Contributor

Wat 🤦‍♂️ I turned on extra kubectl debugging in #100. I learned:

  • kubectl delays the error responses and prints them after trying to create everything - it does not abort at the first error. So the ordering of output messages/errors is misleading.
  • kubectl does no sorting of input, it just tries them in order (I knew this).
  • kubecfg does not sort the show output in dependency order. I distinctly remember adding this code a long time ago, and just assumed that this was how kubecfg behaved - but I can't find any mention of it now in the kubecfg git log. This is the real error here :(

I will fix this in kubecfg, but I'd like to not block this PR on waiting for a kubecfg release.
Our options are:

  1. Use `kubecfg update controller.yaml` rather than `kubectl create -f`. This is tempting, but I would like to test our published install instructions here, so we can uncover problems (like this one!)
  2. Run an explicit separate `kubectl create namespace sealed-secrets` command first.
  3. Change the way we generate the controller-*.yaml files so that we place the Namespace before any namespaced resources, without relying on kubecfg to do this for us. This probably means (temporarily) removing the Namespace from jsonnet, and adding it using a shell `cat` or similar in the make rule.
  4. Wait until a fixed kubecfg is released that sorts the namespace early in YAML output.

Whatever we do, we should update the install instructions similarly - since this CI failure is showing us what users will experience when they try the same. I think this means I'd prefer (3) (generate the right yaml somehow), since that contains the hackery within that small build step and doesn't affect users.
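Option 3 could look something like the following in the make rule; `namespace.yaml` and `controller-generated.yaml` are illustrative names, not the repo's actual files:

```shell
# Emit a standalone Namespace manifest and concatenate it ahead of the
# generated resources, so kubectl sees (and creates) the namespace first.
printf 'apiVersion: v1\nkind: Namespace\nmetadata:\n  name: sealed-secrets\n---\n' > namespace.yaml
cat namespace.yaml controller-generated.yaml > controller.yaml
```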

@cknowles
Author

@anguslees how about e36869b?

@anguslees
Contributor

anguslees commented May 10, 2018

lgtm, but CI is still choking with:

```
Error from server (AlreadyExists): error when creating "controller.yaml": namespaces "sealed-secrets" already exists
```

I don't see anywhere obvious where we're double-creating the namespace, and I've run out of debugging time for today :( I'll come back to it tomorrow, unless you find something new before then.

Wow, this looked like such a straightforward change :( Fwiw, I suspect we should change our install instructions / CI script to say `kubectl apply -f` rather than `create` - and this would also paper over issues like the above. I would rather understand the issue better first, but that's an option ...
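For reference, the difference between the two commands: `kubectl create` fails with AlreadyExists when an object is already present, while `kubectl apply` patches it in place, so re-running is harmless. A minimal sketch:

```shell
# First run creates the objects; repeating the same command updates them
# in place instead of erroring with AlreadyExists.
kubectl apply -f controller.yaml
kubectl apply -f controller.yaml
```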

@lewisdawson

Is this still being worked on and considered for merging into a release?

@cknowles
Author

@lewisdawson I looked into this for a while but couldn't spot the problem. I suspect parts of the CI setup need to be reworked slightly. If you have any ideas how to fix that part I think the rest is mergeable.

@ghost

ghost commented Aug 1, 2018

This fixes CI for me:

```diff
diff --git a/.travis.yml b/.travis.yml
index 2480d3e..c6ca177 100644
--- a/.travis.yml
+++ b/.travis.yml
@@ -93,7 +93,7 @@ script:
       minikube update-context
       minikube status
       while ! kubectl cluster-info; do sleep 3; done
-      kubectl create -f $INT_SSC_CONF
+      kubectl apply -f $INT_SSC_CONF
       kubectl rollout status deployment/sealed-secrets-controller -n sealed-secrets -w
       make integrationtest CONTROLLER_IMAGE=$CONTROLLER_IMAGE
     fi
```

@cknowles
Author

cknowles commented Sep 4, 2018

@anguslees are you happy for us to change the `create` to `apply`? You mentioned above it's only papering over the problem. I tend to agree, but I'd prefer to have GKE support sooner.

@cknowles
Author

cknowles commented Oct 2, 2018

@anguslees I've updated this PR so the CI is working again, let me know if you want any other changes or you'd like me to squash/rebase it etc.

@ricardompcarvalho

Hello @c-knowles @anguslees,
I'm trying to migrate sealed secrets from the kube-system namespace to the sealed-secrets namespace, but I get an error.
I'm following the README section "Migration from SealedSecret <0.8.0 to 0.8.0+", but step 3 gives me an error like: "sealed-secrets-controller" already exists.
Can someone help me, please?

Ricardo Carvalho

@cknowles
Author

@ricardompcarvalho perhaps try reversing steps 3 and 4 to delete the old controller first; as long as you’ve done the backup in step 1 it should not be dangerous. If that works better, I can update the PR before it gets merged.

@ricardompcarvalho

@c-knowles thanks for answering, but I don't understand exactly what you mean by "reversing".
Reversing does not work, because it is installed in kube-system, and when running kubectl create it tries to create in kube-system and not in sealed-secrets. Another question I have: do I have to create the sealed-secrets namespace, or is it created automatically when we run kubectl?

@cknowles
Author

@ricardompcarvalho reverse rather than revert, to clarify I mean run step 4 first and then 3 after. To try to answer your other questions:

  • Unless you are using a CLI binary built from this branch (which is not merged or released yet), the default namespace will still be kube-system. If you want to do the same using a stable release, you will need to add the flag whose default I updated in this PR (controller-namespace).
  • Generally kubectl will not auto-create any objects, the namespace has an explicit create step documented in this PR.
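For example, with a stable release of kubeseal (whose default controller namespace is still kube-system), the flag would be passed explicitly; the file names here are placeholders:

```shell
# Tell kubeseal where the controller lives when it is not in kube-system.
kubeseal --controller-namespace sealed-secrets <mysecret.json >mysealedsecret.json
```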

@ricardompcarvalho

@c-knowles thanks once more.
I'm working with a stable Jenkins hosted on GKE and I want to add sealed-secrets to my setup. I have a few questions, if I may:

  • Must I first install sealed-secrets in the kube-system namespace, and only then do the migration?

  • If I'm working with a different CLI, will it not work?

@karlskewes
Contributor

karlskewes commented Apr 5, 2019

@ricardompcarvalho - we installed directly into sealed-secrets namespace in each of our clusters.
Just update all the yaml files with whatever namespace you like before applying them and you'll be good to go. You can test it with minikube or similar.

I suggest fetching the public cert once and saving it for offline use. Otherwise you will need to add the `--namespace <custom_namespace>` flag to your `kubeseal ...` commands.
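The offline-cert workflow mentioned above could look like this, assuming the controller runs in a custom `sealed-secrets` namespace (`pub-cert.pem` and the JSON file names are placeholders):

```shell
# One-time, while you have cluster access: fetch the sealing certificate.
kubeseal --controller-namespace sealed-secrets --fetch-cert >pub-cert.pem
# Later: seal offline against the saved cert; no cluster access or
# namespace flag needed.
kubeseal --cert pub-cert.pem <mysecret.json >mysealedsecret.json
```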

@mkmik
Collaborator

mkmik commented Sep 10, 2019

I'm not against the spirit of this PR, but it has become stale and we need more discussion in #90.

It seems that nowadays you can use kube-system on GKE, so this is less blocking.
That said, a lot of people I've talked to feel a bit uneasy polluting the kube-system namespace.

On the other hand that's usually the most foolproof way of ensuring that your RBAC rules do protect your private key properly. Anyway let's keep the discussion in #90 and possibly in #233.

@mkmik mkmik closed this Sep 10, 2019
@cknowles
Author

For what it’s worth, what is here does work; I’ve used it. On the question above: there is no need to install elsewhere and migrate, since the new default here is a namespace specific to sealed-secrets, which ensures RBAC etc. is more easily managed.

@mkmik
Collaborator

mkmik commented Sep 10, 2019

@c-knowles sorry could you please rephrase? My non-native brain still does a poor job parsing some English sentences.

@cknowles
Author

Sure, the main things I wanted to convey:

  • PR was tested installed from scratch plus the migration path
  • There was a question above about installing to kube-system first and then migrating but there is no need for this
  • Separation eases RBAC management since it’s now separated from kube-system

Happy to close it off as you did; I just wanted to give an update, as the PR hadn’t had any action and I’m no longer actively using this project.

@mkmik
Collaborator

mkmik commented Sep 10, 2019

Thanks. Yeah, as I said I'd personally lean towards defaulting to a dedicated namespace in principle but as this PR no longer applies cleanly on master I'd like to first understand the implications before putting some work to adapt against HEAD.

For example, one pet peeve of mine is that it should be possible to use sealed secrets both in "system" mode and in "bring your own controller" mode, and as part of that journey we might have the sealed-secrets controller create new sealing keys per namespace. If we have that, the question of whether a sealed-secrets namespace is "protected enough" in the average user's RBAC config becomes moot.

Development

Successfully merging this pull request may close these issues.

Move out of kube-system namespace
7 participants