Flavored tokens --> flavored nodes #3335
Comments
TBH I'm not 100% sure I fully understand the use case explained above, so I can only add some background considerations based on my knowledge of kubeadm/CABPK:
More information about alternative options for kubeadm join can be found here |
@brianthelion could you please expand on your user story with some examples of what you'd do with one token vs another? |
/milestone Next |
@fabriziopandini @ncdc @vincepri Thank you for the follow-up. The basis for the use-case is that we're stuck with certain operational constraints, the most challenging of which is that third parties are administering our nodes. When we hand out bootstrap tokens, we have strong guarantees about who we're handing them out to. Next, we want to limit any given administrator's nodes' access to workloads on the basis of whether or not they're trusted. (Caveat: I'm basing the following solely on my reading of the docs, issue tracker, and a bit of code.) My use-case appears to highlight an important gap in the chain of trust, namely, that the only (strong-sense) identity token in the system is the bootstrap token, but there's no method for associating it (again, strongly) with the node UUID. Unless I'm missing something, this means that true authc and authz on nodes is fundamentally impossible. |
The split between machine administrator and cluster administrator sounds interesting, but personally I would like this use case to be developed into a proposal, because IMO there are many open points to be addressed around it, e.g.: |
Thanks @brianthelion for the additional info. If it's not too much trouble, I'd like to ask for even more details... Are you trying to only schedule certain pods to certain nodes, or avoid scheduling certain pods to certain nodes? What makes one specific node different from any other? How do you distinguish? You mention chain of trust and node authentication and authorization. What part of your setup needs to authorize a node, and how is it doing that today? Any more information you can provide would be greatly appreciated. Thanks! |
Ideally, it would be as simple as
Naively, it would seem sufficient for the node to store its bootstrap token for posterity.
(Not for me.)
(Not for me.)
Both. See next.
We use
See previous. Since our pods are deployed into potentially sensitive network environments, we need to establish strong guarantees about who the node is acting as an agent for. Today, we rely on heuristics about which network domain the node sits in to make an educated guess. This is not secure and it is not going to scale.
I'm being a little circumspect (read: "cagey") about our use-case because I think its complexity may distract from the basics of the problem, which is actually pretty straightforward: there does not appear to be ANY mechanism -- secure, hacky, or otherwise -- for maintaining a record of the node's identity as defined by the initial bootstrap token handover from cluster administrator to node administrator. |
There is a direct link from Machine.status.nodeRef to the node's name. Does that help at all? |
@ncdc From a UX perspective, it's not unreasonable for the cluster administrator to demand to know the node name before handing a bootstrap token off to the node administrator, but as @fabriziopandini mentioned, that's often just not feasible in automated provisioning scenarios. |
@brianthelion if you're able to share, what sort of environment are you operating in - cloud / on prem / VMs / bare metal? Are you using a custom scheduler? How do you envision the node being characterized as sweet or sour? Where is this information stored? Who/what sets it (securely)? Who/what consumes a node's flavor? (I hope you don't mind all the questions - I'm having a hard time seeing the big picture and I'm trying to learn - thanks for your patience!) |
Our product is a software appliance that runs on-prem in a pre-packaged QEMU/KVM virtual machine. The only things the VM image is missing when it's downloaded by the node administrator -- the "keys to the car" -- are an ssh key and a Kube bootstrap token. The node administrator has to supply both pre-boot via
Not as of now, no.
This is a purely out-of-band decision based on business criteria that are opaque to the cluster API. Suffice it to say that there's some oracle somewhere on our side that holds a list of the names of node administrators and whether they should get a
Prior to bootstrapping the node, it is only stored in our "secret list" as above. After bootstrapping the node, "flavor" should appear as one of the cluster API's available node attributes.
Unknown; this is what currently appears to be missing from the bootstrapping process.
For us, it would be a
Nope, no problem. It may be that there's an easy answer here that I'm just missing due to lack of Kube knowledge. Q&A will help ferret that out. |
Thanks! Does the node admin have access to the cluster-api management cluster? Are they creating ...? So if you're using the ... You also wrote that you want the node admin to provide the Kubernetes bootstrap token to cloud-init for kubeadm to use when joining. Is that a token you expect the node admin to generate? |
No. More on that at the bottom.
No, nothing fancy in the provisioning process as of yet.
Yes, you've got it.
This seems like a potential workaround, but for two hang-ups: (1) the node's name (or UID) would have to be known at token creation time; (2) the name (or UID) would effectively become the node's credential, and it's not exactly a cryptographically secure one. You'd really like something more secure than this.
We have an API call that wraps the |
Maybe the node's name doesn't matter? You have a cluster admin who creates a Machine with a specific flavor, and as soon as Machine.status.nodeRef is set, your controller labels the node with the flavor. Would that work? (i.e. there is no flavor token)
Are you not using the Cluster API kubeadm-based bootstrap provider? |
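For illustration, here is a minimal sketch of the kind of controller suggested above, assuming the flavor travels as a label on the Machine and the controller has a client for the workload cluster. The label key, function shape, and error handling are illustrative assumptions, not Cluster API APIs.

```go
package flavorcontroller

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	clusterv1 "sigs.k8s.io/cluster-api/api/v1beta1"
	"sigs.k8s.io/controller-runtime/pkg/client"
)

// flavorLabel is a hypothetical label key; it is not defined by Cluster API.
const flavorLabel = "example.com/flavor"

// syncNodeFlavor copies the Machine's flavor label onto the Node it backs,
// once Cluster API has populated Machine.status.nodeRef.
func syncNodeFlavor(ctx context.Context, workloadClient client.Client, machine *clusterv1.Machine) error {
	flavor, ok := machine.Labels[flavorLabel]
	if !ok || machine.Status.NodeRef == nil {
		// Nothing to do until both the flavor and the node reference exist.
		return nil
	}

	node := &corev1.Node{}
	if err := workloadClient.Get(ctx, client.ObjectKey{Name: machine.Status.NodeRef.Name}, node); err != nil {
		return err
	}

	if node.Labels == nil {
		node.Labels = map[string]string{}
	}
	if node.Labels[flavorLabel] == flavor {
		return nil // already labeled
	}
	node.Labels[flavorLabel] = flavor
	return workloadClient.Update(ctx, node)
}
```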
Sounds potentially promising. I'll need to learn more about the
Our API is just calling |
We match a Machine to a Node based on equivalent providerID values between the two. For example, when you create a node in AWS, you create a Machine and an AWSMachine. The AWS provider asks AWS to run an instance and records the instance ID in AWSMachine.spec.providerID. This is then copied to Machine.spec.providerID. As soon as Cluster API is able to connect to the new cluster's apiserver, it starts looking at nodes, trying to find one whose spec.providerID matches the value the Machine has. Once we have this match, we set Machine.status.nodeRef to the name of the node that matches.
It sounds like you're not currently using Cluster API but have another way to set up clusters? Is that accurate? |
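The matching described above can be sketched roughly as follows; the function name and error messages are illustrative, but the providerID comparison is the core idea.

```go
package flavorcontroller

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	clusterv1 "sigs.k8s.io/cluster-api/api/v1beta1"
	"sigs.k8s.io/controller-runtime/pkg/client"
)

// findNodeForMachine mirrors the matching described above: the Node whose
// spec.providerID equals the Machine's spec.providerID is the Machine's Node.
func findNodeForMachine(ctx context.Context, workloadClient client.Client, machine *clusterv1.Machine) (*corev1.Node, error) {
	if machine.Spec.ProviderID == nil {
		return nil, fmt.Errorf("machine %s has no providerID yet", machine.Name)
	}

	nodes := &corev1.NodeList{}
	if err := workloadClient.List(ctx, nodes); err != nil {
		return nil, err
	}
	for i := range nodes.Items {
		if nodes.Items[i].Spec.ProviderID == *machine.Spec.ProviderID {
			return &nodes.Items[i], nil
		}
	}
	return nil, fmt.Errorf("no node found for providerID %q", *machine.Spec.ProviderID)
}
```

Once a match is found, Cluster API records it in Machine.status.nodeRef, which is what the labeling controller sketched earlier keys off of.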
I'm struggling a bit to understand the use cases where we would want to support Clusters that have mixed workloads across different security boundaries. That seems more like something that would be better handled by federating workloads across multiple clusters or by using something similar to the virtual cluster approach that is being investigated by wg-multitenancy. That said, I think there are several different ways this could be achieved through the existing mechanisms, namely the Kubeadm config templates related to a given MachineDeployment, especially since I wouldn't expect a MachineDeployment to span nodes across security boundaries. |
Issues go stale after 90d of inactivity. If this issue is safe to close now please do so with /close. Send feedback to sig-testing, kubernetes/test-infra and/or fejta. |
/lifecycle frozen |
I finally got around to implementing your suggestion, above, of using a controller for this. When everything is correct, the kubelet joins just fine, and our controller further "flavors" the node by checking our token and then applying node annotations. However, there appear to be some issues when the kubelet needs to be rejected. See #4848. |
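One possible shape for such a flavoring step, with the token check abstracted behind a hypothetical verifier interface and an assumed annotation key (this is a sketch of the pattern, not the poster's actual controller):

```go
package flavorcontroller

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/kubernetes"
)

// flavorVerifier abstracts whatever out-of-band oracle decides a token's
// flavor; the interface is hypothetical.
type flavorVerifier interface {
	Verify(ctx context.Context, token string) (flavor string, err error)
}

// flavorNode annotates a Node with its flavor after the token check passes,
// or returns an error so the caller can reject the node.
func flavorNode(ctx context.Context, cs kubernetes.Interface, verifier flavorVerifier, nodeName, token string) error {
	flavor, err := verifier.Verify(ctx, token)
	if err != nil {
		return fmt.Errorf("rejecting node %s: %w", nodeName, err)
	}
	patch := []byte(fmt.Sprintf(`{"metadata":{"annotations":{"example.com/flavor":%q}}}`, flavor))
	_, err = cs.CoreV1().Nodes().Patch(ctx, nodeName, types.MergePatchType, patch, metav1.PatchOptions{})
	return err
}
```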
@vincepri: Closing this issue. In response to this:
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository. |
Preface
I've chosen the term "flavor" here instead of one of the existing idioms -- "label", "taint", "role", "annotation", etc. -- to indicate that I'm agnostic to implementation mechanism. An existing operator and/or an obvious implementation path for this may already exist. If so, a pointer to a relevant doc would be very helpful; it has proven extremely difficult to Google for.
User Story
First, I want to be able to generate multiple "flavors" of bootstrap tokens. For now, let's assume that there are only two, "sweet" and "sour".

Second, I need an operator that can detect whether a joining node has a "sweet" token or a "sour" token and map that flavor to the node as well. Node flavor will then be used by downstream logic for differential processing.
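As a rough sketch of what "flavored" tokens could look like under one possible design, the flavor could be recorded as a label on the bootstrap-token Secret that kubeadm-style joins already use. The label key and helper below are hypothetical, and a label alone does not cryptographically bind the flavor to the node (which is exactly the gap discussed in the comments above).

```go
package flavortokens

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// createFlavoredBootstrapToken creates a standard bootstrap-token Secret in
// kube-system and records the flavor as a label on that Secret, so a
// controller can later look up which flavor a joining node presented.
func createFlavoredBootstrapToken(ctx context.Context, cs kubernetes.Interface, tokenID, tokenSecret, flavor string) error {
	secret := &corev1.Secret{
		ObjectMeta: metav1.ObjectMeta{
			Name:      fmt.Sprintf("bootstrap-token-%s", tokenID),
			Namespace: metav1.NamespaceSystem,
			Labels:    map[string]string{"example.com/token-flavor": flavor}, // hypothetical flavor marker
		},
		Type: corev1.SecretTypeBootstrapToken,
		StringData: map[string]string{
			"token-id":                       tokenID,
			"token-secret":                   tokenSecret,
			"usage-bootstrap-authentication": "true",
			"usage-bootstrap-signing":        "true",
		},
	}
	_, err := cs.CoreV1().Secrets(metav1.NamespaceSystem).Create(ctx, secret, metav1.CreateOptions{})
	return err
}
```

The node would then join with the usual "<token-id>.<token-secret>" string; mapping the flavor onto the resulting Node is the second half of the user story.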
Detailed Description
A review of old issues reveals that this has been flirted with quite a bit in the past, but most implementation suggestions called on the kubelet to set its own flavor in one way or another. For my use-case, this is NOT acceptable -- the kubelet cannot be trusted in this regard.
One perfectly admissible solution here would be for the cluster to pass TWO tokens to the node, one standard bootstrap token and a secondary "flavored" token. The flavored token must be secure in nature, though.
Anything else you would like to add:
There are plenty of examples of node label operators in the wild relying on all manner of attributes of the joining node, but none of the operator implementations appear to access the original bootstrap token. Apologies if I've missed something.
/kind feature