
Spec: Config: Passwd is empty in oc get machineconfig #578

Closed
kikisdeliveryservice opened this issue Oct 30, 2018 · 11 comments

Comments

@kikisdeliveryservice
Contributor

Version

$ openshift-install version
v0.3.0-69-g657fb433d5164b0f712c90a20b54f765ad3b381c

Platform (aws|libvirt|openshift):

libvirt

What happened?

I'm working on some MCD changes to update SSH keys. However, when I laid down the MCD YAML file, I added some logging to update.go and saw that oldConfig.Spec.Config.Passwd.Users is empty after a cluster is first spun up.

I double checked with oc get machineconfig and the whole spec:config:passwd section is empty:
```yaml
spec:
  config:
    ignition:
      config: {}
      security:
        tls: {}
      timeouts: {}
      version: 2.2.0
    networkd: {}
    passwd: {}
```

What you expected to happen?

I expected the initial config's spec:config:passwd:users to have one user, "core", with an SSH key.
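
For comparison, a minimal sketch of what the populated section was expected to look like in the Ignition 2.2.0 format (the key value below is a placeholder):

```yaml
spec:
  config:
    ignition:
      version: 2.2.0
    passwd:
      users:
        - name: core
          sshAuthorizedKeys:
            - ssh-rsa AAAA...   # placeholder public key
```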

How to reproduce it (as minimally and precisely as possible)?

Create a cluster with SSH keys, then run oc get machineconfig and note that the spec:config:passwd section is empty:

$ oc get machineconfig
$ oc get machineconfig xxxxxx -o yaml

Anything else we need to know?

This is blocking work on adding the ability to update SSH keys in the MCD, since the MCD cannot add users, only change existing users' SSH keys. As such, we need an initial config with a user and an SSH key.
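
To make the constraint concrete, here is a rough sketch (not the actual MCD code) of the kind of check implied here: SSH keys can only be reconciled for users already present in the old config.

```go
package sketch

import (
	"fmt"

	igntypes "github.com/coreos/ignition/config/v2_2/types"
)

// verifyUsers is a hypothetical sketch of the constraint described above:
// SSH keys can be updated for users that already exist in the old config
// (expected to be just "core"), but new users cannot be added.
func verifyUsers(oldUsers, newUsers []igntypes.PasswdUser) error {
	existing := map[string]bool{}
	for _, u := range oldUsers {
		existing[u.Name] = true
	}
	for _, u := range newUsers {
		if !existing[u.Name] {
			return fmt.Errorf("cannot add user %q: only SSH keys of existing users can be updated", u.Name)
		}
	}
	return nil
}
```

If the old config's passwd section is empty, every user in the new config (including "core") would fail a check like this, which is why the missing section blocks the SSH key work.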

References

cc @abhinavdahiya since @sdemos & I wanted to loop you in so we can figure out the best way to solve this.

@abhinavdahiya
Contributor

The installer does not use the MCO to distribute keys right now. [See here](https://github.com/openshift/installer/blob/master/pkg/asset/ignition/machine/node.go#L13-L46).
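
Roughly, that installer code injects the SSH key straight into the pointer Ignition config, along these lines (a paraphrased sketch, not the exact installer source):

```go
package sketch

import (
	igntypes "github.com/coreos/ignition/config/v2_2/types"
)

// pointerIgnitionConfig sketches how the linked installer code puts the
// admin SSH key directly into the node's Ignition config, bypassing the
// MCO-rendered machineconfig (which is why its passwd section stays empty).
func pointerIgnitionConfig(sshKey string) igntypes.Config {
	return igntypes.Config{
		Ignition: igntypes.Ignition{Version: "2.2.0"},
		Passwd: igntypes.Passwd{
			Users: []igntypes.PasswdUser{{
				Name:              "core",
				SSHAuthorizedKeys: []igntypes.SSHAuthorizedKey{igntypes.SSHAuthorizedKey(sshKey)},
			}},
		},
	}
}
```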

@kikisdeliveryservice
Contributor Author

kikisdeliveryservice commented Oct 30, 2018

@abhinavdahiya right, we're aware of the workaround, but we're hoping to figure out a way to populate the initial config with the core user/SSH key so that we can update SSH keys. Whether that happens here or in the MCO repo (or both) is open, and we wanted to get your thoughts on it. :)

@sdemos

sdemos commented Oct 30, 2018

@abhinavdahiya yeah, we know about that. this is part of the work to make the ssh keys work within the existing system instead of being hacked onto the side.

with the workaround you linked to, the ssh keys successfully get to the node when it is initially provisioned, but it breaks the MCD's ability to reconcile them in the future since that section isn't in the machineconfig. so the issue here is to figure out how to fix up the rest of the system to properly embed the ssh keys in the machineconfig so that the mcd can deal with them.

my initial thought was to have the base machineconfig contain the core user and then have the ssh keys as machineconfig snippets that get merged by the mcc, but I don't know if that would work for bootstrapping during the install phase, which is why we are bringing it up (and why the issue is here in particular).
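
A hypothetical sketch of what such a snippet could look like (the name and key value are placeholders, and this assumes the mcc merges it into the rendered config for the pool):

```yaml
apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfig
metadata:
  name: 00-worker-ssh              # placeholder name
  labels:
    machineconfiguration.openshift.io/role: worker
spec:
  config:
    ignition:
      version: 2.2.0
    passwd:
      users:
        - name: core
          sshAuthorizedKeys:
            - ssh-rsa AAAA...      # placeholder public key
```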

@abhinavdahiya
Contributor

abhinavdahiya commented Oct 31, 2018

The MCO currently discovers its configuration using the cluster-config-v1 configmap's install-config key, both during bootstrap (on-disk file) and inside the cluster (kube API object).

So I would create a secret for the ssh key and label it.

  • bootstrap mco can read this from disk
  • cluster mco can use label selector / list of objects to collect all the ssh keys.

Then the MCO can plumb the information through to the MCC, like we do for root-ca, etcd-ca, etc., so that the MCC can create appropriate machineconfig objects.
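
A sketch of the kind of labeled secret this proposes (the name, namespace, and label key here are all hypothetical):

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: core-user-ssh                            # hypothetical name
  namespace: openshift-machine-config-operator   # hypothetical namespace
  labels:
    machineconfiguration.openshift.io/ssh: ""    # hypothetical label for the selector
type: Opaque
stringData:
  ssh-authorized-key: ssh-rsa AAAA...            # placeholder public key
```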

@wking
Member

wking commented Oct 31, 2018

So I would create a secret for the ssh key and label it.

Do we need the ability to tune this per-pool (i.e. some master or worker pools have a different SSH keyset than others)? If not, isn't the existing cluster-config-v1 already sufficient? I don't think we need secrets for the admin's public SSH key(s). On the other hand, we probably do need a secret for the admin's password. I expect we should not be setting the password in a non-secret config-map (and I haven't checked yet; maybe we already aren't).

@sdemos

sdemos commented Oct 31, 2018

I don't think we need secrets for the admin's public SSH key(s).

I agree. I think it's fine if they are just in the config. I just wasn't sure when the initial config gets rendered, because I know there are a bunch of templates in the MCO repo that end up configuring the machine, and obviously the ssh keys can't just go there; but if a per-cluster mechanism already exists, that seems like the way to go.

also, in case you haven't seen it, the work here (openshift/machine-config-operator#115) is allowing the mcd to lay down ssh keys from the config, so it'll be as simple as having them included there, and they will automatically be reconciled over the lifetime of the cluster (as well as being handled by ignition for the initial provisioning).

@sdemos

sdemos commented Oct 31, 2018

@wking looking at the things you linked to, it looks like that means the cluster config has the ssh keys in it. where is the disconnect then? does the MCO (or maybe the MCC) need to pull the info from the cluster config into the MachineConfig and the ignition that the MCS serves to machines when they are provisioning with ignition?

another part of the problem here is that the existing MachineConfigs don't have the core user in them, so they aren't really representing the current state of the machine. we could special-case it in the MCD in some way, but I think it would be better to put the core user into the passwd section of the machineconfig so we can properly reconcile it if the ssh keys for that user change.

@kikisdeliveryservice
Contributor Author

Right, as @sdemos said, the MCD is only going to update SSH keys for an existing Passwd.User, which, going by everything I've seen, should be "core".

@kikisdeliveryservice
Contributor Author

So after poking around, I think the MCO is getting the correct cluster-config that has the SSH keys (kube-system cluster-config-v1), but the MCO isn't using all of the data from the cluster-config at the moment. I'm going to work on this, do some plumbing in the MCO, and open a PR once I have something. Will cc everyone to take a look.
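
For reference, that configmap can be inspected with the command below; the SSH key lives inside its install-config data:

$ oc get configmap cluster-config-v1 -n kube-system -o yaml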

@crawford
Contributor

crawford commented Jan 4, 2019

Closing due to inactivity.

@openshift-ci-robot
Contributor

@crawford: Closing this issue.

In response to this:

Closing due to inactivity.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
