KEP-4969: Cluster Domain Downward API #4972
base: master
Conversation
nightkr commented Nov 21, 2024
- One-line PR description: Initial KEP draft
- Issue link: Cluster Domain Downward API #4969
- Other comments:
Welcome @nightkr!
Hi @nightkr. Thanks for your PR. I'm waiting for a kubernetes member to verify that this patch is reasonable to test. If it is, they should reply with `/ok-to-test`. Once the patch is verified, the new status will be reflected by the `ok-to-test` label. I understand the commands that are listed here. Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.
just two minor nitpicks/typos
Currently there is no way for cluster workloads to query for this domain name,
leaving them either use relative domain names or take it as manual configuration.
Suggested change:
Currently, there is no way for cluster workloads to query this domain name,
leaving them to either use relative domain names or configure it manually.
I left "query for this" as-is, because I read the revised variant as "ask what the domain name (that I already know) means" rather than "ask what the domain name is".
Currently there is no way for cluster workloads to query for this domain name,
leaving them either use relative domain names or take it as manual configuration.

This KEP proposes adding a new Downward API for that workloads can use to request it.
Suggested change:
This KEP proposes adding a new Downward API that workloads can use to request it.
- `nodePropertyRef` (@aojea)
- `runtimeConfigs` (@thockin)
I guess this is out of scope for this KEP, but this change sets a precedent for other configs that could be passed down into the Pod.
I wonder if someone with more experience than me has an idea/vision of what that could look like, which may then determine what name to decide on?
Looking forward, I suspect that both cases will become relevant eventually. This property just occupies an awkward spot since it doesn't really have one clean owner.
We have tried not to define magic words, neither as env vars nor as sources, but as sources is far less worrisome than as env.
If this is the path we go, it's not so bad. I don't know if I would call it `clusterPropertyRef` since there's no cluster property to refer to. But something like `runtimeProperty` isn't egregious.
/ok-to-test
/retest
clusterPropertyRef: clusterDomain

`foo` can now perform the query by running `curl http://bar.$NAMESPACE.svc.$CLUSTER_DOMAIN/`.
Extra credit: define a command line argument that relies on interpolating $(CLUSTER_DOMAIN)
I honestly forgot that we do env expansion on `command` and `args` - I had to go look it up. It SHOULD work.
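As an illustration of the "extra credit" above, here is a hedged sketch that interpolates the proposed variable into `args` using the existing `$(VAR_NAME)` expansion. Note that `clusterPropertyRef` is the API proposed by this KEP and does not exist yet, and the image and Service name are just placeholders:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: foo
spec:
  restartPolicy: Never
  containers:
    - name: client
      image: curlimages/curl   # placeholder; any image providing curl works
      env:
        - name: NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        - name: CLUSTER_DOMAIN
          valueFrom:
            # Proposed by this KEP; not part of any released Kubernetes API.
            clusterPropertyRef: clusterDomain
      # $(VAR) references in command/args are expanded by the kubelet before
      # the container starts, so no shell is needed for the interpolation.
      command: ["curl"]
      args: ["http://bar.$(NAMESPACE).svc.$(CLUSTER_DOMAIN)/"]
```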
environments, since `node-b` might not be able to resolve `cluster.local`
FQDNs correctly.

For this KEP to make sense, this would have to be explicitly prohibited.
I don't know if that's true.
For Pod consumers, the downward API is implemented by the kubelet, so each kubelet can expose its local view of the cluster domain.
We would still strongly recommend against having multiple cluster domains defined across your cluster - anything else sounds really unwise - but technically it can be made to work.
Yeah, it's one of those.. it would be implementable without making such a declaration, but it would likely be another thing leading to pretty confusing behaviour. Maybe the language can be softened somewhat.
We can mention the need to emphasize that the existing thing, already a bad idea, is even more of a bad idea.
Even if we write a document that says "setting these differently within a cluster is prohibited" that does ~nothing to enforce that clusters will actually follow that behavior, and what clusters actually do is what drives consistency.
We have conformance, but I wouldn't be terribly enthused about a test that attempts to validate every node, those are meant to catch API breaks, not runtime configuration.
I don't see why a pod wouldn't work fine, all that needs to be true is the cluster domain reported to the pod needs to be routable on the network to get to services. The consistency part doesn't actually seem relevant.
<!--
What are the caveats to the proposal?
What are some important details that didn't come across above?
Go in to as much detail as necessary here.
This might be a good place to talk about core concepts and how they relate.
-->
A related detail.
- A kubelet that is formally aware of its cluster domain could report back either the cluster domain value, or a hash of it, via `.status`.
- A kubelet that is formally aware of its cluster domain could report back either the cluster domain value, or a hash of it, via a somethingz HTTP endpoint.
- A kubelet that is formally aware of its cluster domain could report back either the cluster domain value, or a hash of it, via a Prometheus static-series label, eg `kubelet_cluster_domain{domain_name="cluster.example"} 1`.

If we decide to make the API server aware of cluster domain, adding that info could help with troubleshooting and general observability.
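Purely as an illustration of the first option (nothing like this exists in the Node API today), it could look roughly like:

```yaml
# Hypothetical Node status excerpt; clusterDomain is not a real status field.
apiVersion: v1
kind: Node
metadata:
  name: node-a
status:
  clusterDomain: cluster.example   # or a hash of it, as suggested above
```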
It's already available via the kubelet's /configz (mentioned in Alternatives). I don't have a strong opinion either way on adding it to the Node status.
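For reference, one way to read that today is through the API server's node proxy, assuming RBAC access to the `nodes/proxy` subresource and that the configz payload keeps its current `kubeletconfig` wrapper (node name is illustrative):

```console
# Read the kubelet's effective configuration through the API server proxy;
# clusterDomain is one of the fields in the returned kubeletconfig object.
$ kubectl get --raw "/api/v1/nodes/node-a/proxy/configz" | jq .kubeletconfig.clusterDomain
"cluster.local"
```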
Co-authored-by: Tim Bannister <[email protected]>
Co-authored-by: Tim Bannister <[email protected]>
The ConfigMap written by k3s[^prior-art-k3s] could be blessed, requiring that
all other distributions also provide it. However, this would require additional
migration effort from each distribution.

Additionally, this would be problematic to query for: users would have to query
it manually using the Kubernetes API (since ConfigMaps cannot be mounted across
Namespaces), and users would require RBAC permission to query wherever it is stored.
I thought centralized management was a non goal though? I recommend highlighting that this alternative isn't aligned with this KEP's goals.
In terms of the non-goal I was mostly referring to configuring the kubelet itself (which would have been a retread of #281). Maybe that could be clarified.
I'd clarify that. I took the non-goal to mean that the value might be aligned across the cluster, but that this KEP explicitly avoids having Kubernetes help with that.
This roughly shares the arguments for/against as [the ConfigMap alternative](#alternative-configmap),
although it would allow more precise RBAC policy targeting.
I thought centralized management was a non goal though? I recommend highlighting that this alternative isn't aligned with this KEP's goals.
# The following PRR answers are required at alpha release
# List the feature gate name and the components for which it must be enabled
feature-gates:
  - name: MyFeature
We could pick the feature gate name even at provisional stage.
Sure. `PodClusterDomain` would align with `PodHostIPs` (the only other downward API feature gate listed on https://kubernetes.io/docs/reference/command-line-tools-reference/feature-gates/), but happy to hear other takes.
SGTM, especially for provisional.
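If that name sticks, the corresponding kep.yaml stanza would presumably look something like the sketch below; the component list is a guess at this stage, not something settled in this thread:

```yaml
feature-gates:
  - name: PodClusterDomain
    components:
      - kube-apiserver   # assumed: validation of the new downward API field
      - kubelet          # assumed: resolves the value when starting the Pod
```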
## Design Details

A new Downward API `clusterPropertyRef: clusterDomain` would be introduced, which can be projected into an environment variable or a volume file.
What's this a reference to?
I was mostly trying to be consistent with `fieldRef` and `resourceFieldRef`, which also don't quite correspond to the pod/container objects (since they're not quite 1:1), but I'm certainly not married to it.
It's actually a node property, and making it a cluster-wide property is an explicit non-goal of the KEP.
Tried to clarify the non-goal in c3f33dd
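For the volume-file form mentioned in the Design Details excerpt above, a hedged sketch by analogy with today's `downwardAPI` volume items; `clusterPropertyRef` is the field this KEP proposes, it does not exist yet, and its exact placement would be settled during API review:

```yaml
volumes:
  - name: podinfo
    downwardAPI:
      items:
        - path: cluster_domain
          # Proposed: today only fieldRef and resourceFieldRef are valid here.
          clusterPropertyRef: clusterDomain
```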
    fieldRef: metadata.namespace
- name: CLUSTER_DOMAIN
  valueFrom:
    clusterPropertyRef: clusterDomain
it is really a node/kubelet property, not a cluster property; it is a kubelet config option (`--cluster-domain string`): https://kubernetes.io/docs/reference/command-line-tools-reference/kubelet/
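For context, this is where the value lives today: each kubelet is configured with it via that flag or the equivalent `KubeletConfiguration` field. The values below are common defaults, shown for illustration only:

```yaml
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
# The DNS zone this kubelet uses when setting up Pod DNS (search paths, etc.).
clusterDomain: cluster.local
clusterDNS:
  - 10.96.0.10
```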
That's the question, is it a property of the kubelet or is it a property of the cluster that just-so-happens to currently be configured at the kubelet level?
As far as I can tell, I'd argue it seems to be the latter. Back when we discussed it in SIG-Network, @thockin mentioned that support for different clusterDomains across kubelets was always theoretical.
My point is that realistically you can not assume consistency ... we very much want it to be a cluster level property, but that is when you enter into chicken-and-egg problems. Kubelet must be able to work disconnected, and it is common for some distros to use this property to bootstrap a cluster (kubeadm, openshift, ...), so how can you configure a global dns domain if the cluster was not created? What happens if the domain changes once the kubelet connects? See https://docs.google.com/document/d/1Dx7Qu5rHGaqoWue-JmlwYO9g_kgOaQzwaeggUsLooKo/edit?tab=t.0#heading=h.rkh0f6t1c3vc
My main concern is: what happens if tomorrow people start to have clusters with split domains? They can perfectly do it.
Maybe I'm missing something, but as I understand it kubeadm will always initialize it to the value it gets from its ClusterConfiguration before it even launches the kubelet?
Some properties will always need to be consistent for the kubelet to be in a valid state (apiserver URL, certificates, etc).
My main concern is that what happens if tomorrow people start to have clusters with split domains? they can perfectly do it
They could also have clusters with overlapping pod CIDRs. At some point we have to delineate what's a valid state for the cluster to be in and what isn't.
I'm fine with declaring that split domain configurations are valid and supported (or that there is an intention to work towards that at some point). But that's not the impression that I got from either you or @thockin so far.
I very much hope that one day we will have the time and wherewithal to revisit DNS. The existing schema was intended as a demonstration, and that's how much thought went into it. We've been making it work ever since, but I think we can do better.
I think that if we wanted to do that, we would need pods to opt-in to an alternate DNS configuration. We have `PodDNSConfig` which allows one to configure it manually, so it is already sort of possible, but that should be opaque to us - we can't reasonably go poking around in there and making assumptions (e.g. parsing search paths).
The net result is that the cluster zone (not sure why we didn't use that term) is neither a cluster config nor a kubelet config -- it's a pod config, or at least the pod is an indirection into one of the others.
Maybe I'm missing something, but as I understand it kubeadm will always initialize it to the value it gets from its ClusterConfiguration before it even launches the kubelet?
kubeadm has a cluster-wide shared kubelet config, I don't think the point was "This is how kubeadm works" it was "clusters do not all work the same, and there is no portable guarantee that this is a cluster-wide concept as opposed to a kubelet option (and many clusters do not use kubeadm)".
Some properties will always need to be consistent for the kubelet to be in a valid state (apiserver URL, certificates, etc).
Eh? Taking this example ... You could easily make a cluster where different kubelets have different API server URLs, certificates, etc. Some clusters even do use a local proxy for example.
We don't make any assumptions about this AFAIK, those details are local to this kubelet instance.
They could also have clusters with overlapping pod CIDRs. At some point we have to delineate what's a valid state for the cluster to be in and what isn't.
Well, taking that example, pod CIDRs aren't necessarily a property managed by Kubernetes core either, lots of clusters use BYO IPAM for pod IPs.
I'm fine with declaring that split domain configurations are valid and supported (or that there is an intention to work towards that at some point). But that's not the impression that I got from either you or @thockin so far.
I read this more as: We haven't declared this one way or another yet, maybe let's not declare that this is a cluster property because that's not necessarily true and we're not prepared to enforce it (e.g. through some sort of conformance test).
`SERVICES_DOMAIN` seems like a more accurate name, though of course some users may already be familiar with the kubelet option.
Also, as long as the reported domain works for the pod to reach services, I don't think it matters that we attempt to assume if it's identical across machines or not.
If your goal is to have one pod read something to then be consumed by other pods, then we are in fact trying to read cluster-wide config by way of a kubelet option, and THAT seems like a broken concept without actually moving it into something cluster-scoped.
Why should this KEP _not_ be implemented?
-->

## Alternatives
@nightkr I just realized that there is already a way of exposing the fqdn to the pods https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/#pod-s-hostname-and-subdomain-fields, it may be a bit hacky and could help to shape this KEP too, or maybe it is enough to satisfy this demand.
If you set the subdomain field in the pod spec, you can get the FQDN
The Pod spec also has an optional subdomain field which can be used to indicate that the pod is part of sub-group of the namespace. For example, a Pod with spec.hostname set to "foo", and spec.subdomain set to "bar", in namespace "my-namespace", will have its hostname set to "foo" and its fully qualified domain name (FQDN) set to "foo.bar.my-namespace.svc.cluster.local" (once more, as observed from within the Pod).
Deploy this pod
apiVersion: v1
kind: Pod
metadata:
  name: fqdn-pod
spec:
  subdomain: x
  containers:
    - name: my-container
      image: busybox:stable
      command: ["sleep", "infinity"]
The FQDN is available internally to the Pod
$ kubectl exec -it fqdn-pod -- hostname -f
fqdn-pod.x.default.svc.cluster.local
I was worried about running into length limitations, but that doesn't seem to be an issue in practice. It was perfectly happy to regurgitate `dummy-deploy-566c59b7dd-dlvv4.qwerasdfzxcvqwerasdfzxcvqwerasdfzxcvqwerasdfzxcvqwerasdfzxcvqwe.ohgodwhydidicreatethissuperlongnamespacenamethisissocursed.svc.cluster.local` without any issues.
This can work, but I don't love it. If you need a solution NOW, probably do this.
It can also be retrieved from the kubelet's `/configz` endpoint, however this is
[considered unstable](https://github.com/kubernetes/kubernetes/blob/9d967ff97332a024b8ae5ba89c83c239474f42fd/staging/src/k8s.io/component-base/configz/OWNERS#L3-L5).
IMHO if you depend on that to get the cluster domain field and it changes breaking consumers, I would consider that a regression and we'll fix it.
I don't think that comment means that this endpoint or functionality is going to disappear; it is more about the schema of the config. We have a similar thing with kube-proxy config that is v1alpha1, but at this point, after more than X years, it seems to me those are now stable APIs we can not change.
cc: @thockin
#### Story 1

The Pod `foo` needs to access its sibling Service `bar` in the same namespace.
It adds two `env` bindings:
Would it be simpler to have a downward API binding to request the FQDN of a particular service (which would include both the namespace and the cluster domain name in a single value).
env:
  - name: SERVICE1
    valueFrom:
      serviceReference:
        name: bar
  - name: SERVICE2
    valueFrom:
      serviceReference:
        namespace: otherns
        name: blah

→

SERVICE1=bar.myns.svc.cluster.local
SERVICE2=blah.otherns.svc.cluster.local
IMO tricky without ReferenceGrant. Users might expect that the service reference actually honors the existence of the other Service, or might make API changes to build that expectation.
That also wouldn't really work for our (@stackabletech) use case/current API contract.
Our operators generate configmaps with all the details you'll need to connect to the service managed by the operators (including URLs). Our operators' pod manifests don't know what specific CR objects they'll end up managing.
In that case, it would be good to update the user story (or add a second user story) explaining that. The user stories are there to help explain why the particular solution you chose is the best solution.
This also becomes problematic for TLS, since there is no way to distinguish
which of these two cases a certificate applies to.
I'm not sure what you mean by this. A TLS certificate always has the FQDN in it, doesn't it?
Technically no (subjectAltName can be an email address, an IP address, a URI, maybe a few other things).
But if it's a DNS name then the client validates against the FQDN and not any other domain. Clients that do not are asking for trouble.
I just tested this. At least `curl` tests the certificate's names (CN and SANs) against the verbatim hostname in the URL, before expanding into the FQDN.
So if your FQDN is `foo.bar`, the search domain is `bar`, and you run `curl https://foo`, then the certificate will be validated against `foo`, not `foo.bar`.
To `curl`, the FQDN there is `foo.`
In the given context, `foo.` does not resolve. `nslookup foo` returns `foo.bar.`. If we consider `foo.` a valid FQDN for the query then we have watered down the term so far that it has become meaningless.
For the KEP text, let's try to clarify what we're saying here.
Co-authored-by: Tim Bannister <[email protected]>
[APPROVALNOTIFIER] This PR is NOT APPROVED
This pull-request has been approved by: nightkr
The full list of commands accepted by this bot can be found here.
Needs approval from an approver in each of these files:
Approvers can indicate their approval by writing `/approve` in a comment.
/cc @bowei
Ok, I had a lot of fun trying to figure out this puzzle. All the solutions we are exploring sound like workarounds when setting the domain name. The OCI specification introduced the `domainname` field.
The hack in #4972 (comment) works because we mount the /etc/hosts, not because it appends
So, if we agree to set the UTS namespace
@thockin WDYT, I see you already touched on this a long time ago moby/moby#14282 (comment)
I like the UTS namespace idea, but bear in mind Windows nodes (and Pods) have DNS but don't have UTS namespaces. Also, the API server doesn't - right now - know the cluster domain name, so defaulting within the Pod API is tricky.
Windows does a lot of things differently and does not implement all pod features, so I would not mix it here.
Domain name is a kubelet property; as discussed in other places it is hard to move it to a cluster global property (#4972 (comment)). My point is that the kubelet sends domainname via the CRI API with its configured cluster domain; the question is how to expose that.
If we set However, even if a Pod has a custom domain name |
Using the NIS domain could lead to confusion, since it's currently just forwarded from the host system (tested under Arch, v1.31.4+k3s1):
I am not sure what all the implications of setting domain name are. I will have to go read more. For example, should it include namespace? Does nsswitch consider it?
@nightkr that does not match my observation and the definition of uts_namespaces(7).
Is it possible that the runtime of that k3s distro is not setting the UTS namespaces for the pods and inheriting the host UTS namespace?
@thockin that is why I do think this should be opt-in; this subsystem is complicated and multiple distros take different decisions, from the
that IIUIC it is possible to configure the pod to use the NIS domain name if you configure your nsswitch.conf for doing that
@aojea At least in the k3s environment I tested against, the domain is copied from the host when creating the container. If I change the host's domain afterwards then the container's domain stays the same (until it is deleted and recreated, anyway).
In #4972 (comment) Antonio argues for using the NIS domainname. With my 30 years of Linux sysadmin experience, I cannot tell you the exact implications of that. It's so poorly documented that it feels like asking for trouble.
That said, docker does it. The standard Linux tools at least seem to look at it.
$ docker run --privileged -ti ubuntu sh -c "hostname; hostname -f; domainname; dnsdomainname"
0958e719bd8b
0958e719bd8b
(none)
$ docker run --privileged -ti --hostname foo ubuntu sh -c "hostname; hostname -f; domainname; dnsdomainname"
foo
foo
(none)
$ docker run --privileged -ti --hostname foo.example.com ubuntu sh -c "hostname; hostname -f; domainname; dnsdomainname"
foo.example.com
foo.example.com
(none)
example.com
$ docker run --privileged -ti --hostname foo --domainname example.com ubuntu sh -c "hostname; hostname -f; domainname; dnsdomainname"
foo
foo.example.com
example.com
example.com
$ docker run --privileged -ti --hostname foo.bar --domainname example.com ubuntu sh -c "hostname; hostname -f; domainname; dnsdomainname"
foo.bar
hostname: No address associated with hostname
example.com
dnsdomainname: No address associated with hostname
Using --hostname and --domainname is the only thing that got close to what I expected. I straced these commands and, frankly, it's all over the place as to what's happening. :)
Where this goes wrong for me is that these are trying to divine the Pod's FQDN, when we have never defined that in kubernetes. I think we could/should but see other comments about DNS schema and evolution. If we had an FQDN for pods it would PROBABLY be something like `<hostname>.<namespace>.pod.<zone>`. You are not trying to learn the pod's FQDN, you are trying to learn the zone.
So I have another alternative. It's the first time I write it so bear with me.
This belongs in `status`.
Why not put the effective DNS Config (`PodDNSConfig` plus) into status?

type PodDNSStatus struct {
    Hostname    string
    Zone        string
    Nameservers []string
    Searches    []string
    Options     []PodDNSConfigOption
}

Then this becomes just one more `fieldRef: status.dns.zone` or something like that.
I get the freedom to one day fix DNS. People who use custom DNS configs get their own data here. It can vary by kubelet if someone wants to do that.
Now, shoot me down?
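If something along those lines were adopted, consuming it would presumably stay a plain downward API reference. A hypothetical sketch mirroring the struct above (neither `status.dns` nor this fieldPath exists in current Kubernetes):

```yaml
env:
  - name: CLUSTER_ZONE
    valueFrom:
      fieldRef:
        # Hypothetical field path; status.dns is not part of the Pod API today.
        fieldPath: status.dns.zone
```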
## Summary

All Kubernetes Services (and many Pods) have Fully Qualified Domain Names (FQDNs)
Strictly speaking, Services do not have a DNS name. Services have a name. Most Kube clusters use DNS to expose services, by mapping them into a DNS zone with the schema you listed. This is not actually a requirement, though it is so pervasive it probably is de facto required.
Can you pass conformance without cluster DNS? I never checked.
After 10 years of Hyrum's Law, perhaps not, but it doesn't mean we should couple further :)
Can you pass conformance without cluster DNS? I never checked.

When we test services we are generally using clusterIP.
... but we do have `[It] [sig-network] DNS should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]` :-)
Which checks for services by DNS, though it's relying on the search paths (or an implementation that just provides these narrowly, I suppose):
https://github.com/kubernetes/kubernetes/blob/0e9ca10eebdead0ef9ef54a6754ce70161c2b1e9/test/e2e/network/dns.go#L103-L109
(all conformance tests are in https://github.com/kubernetes/kubernetes/blob/0e9ca10eebdead0ef9ef54a6754ce70161c2b1e9/test/conformance/testdata/conformance.yaml)
You definitely need functioning DNS that resolves to things, but you don't specifically need something like `.cluster.local` AFAICT; you could be resolving entries like `kubernetes.default` directly, in theory, and still pass all the ones I've peeked at.
Currently, there is no way for cluster workloads to query for this domain name,
leaving them to either use relative domain names or configure it manually.
...because it is not actually part of Kubernetes. Except that we conflated it with kubelet and pod `hostname` and `subdomain` fields and ... :)
With CoreDNS we do have an A record for each Pod, but only for Pods that back a Service - see https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/#a-aaaa-records-1 |
- `nodePropertyRef` (@aojea)
- `runtimeConfigs` (@thockin)

This also implies a decision about who "owns" the setting, the cluster as a whole or the individual kubelet.
That actually seems orthogonal to exposing this to the pods. We could expose the service domain as reported to kubelet without requiring this.
@sftim you can't say "we have an A record for each Pod" and then "but only for some Pods". :) What I really meant is that we (kubernetes) do not define a "standard" name for all pods, only for pods in the context of a service.
Thought I'd qualified that adequately. Each Pod that backs some Service where the cluster uses CoreDNS with the right options. And this is very much not good enough. We also can't use this to set FQDN because one Pod can back multiple Services.
Pinging - time is running out for 1.33