Move DNS records from base_domain to cluster_domain #1169
Conversation
@@ -6,11 +6,13 @@ locals {
  public_zone_id = "${data.aws_route53_zone.base.zone_id}"

  zone_id = "${var.private_zone_id}"

+ cluster_domain = "${var.cluster_name}.${var.base_domain}"
This package could be adjusted to take the unsplit cluster domain if we replaced this with Go logic to snip subdomains off the full cluster name until we found a public zone. Or we could restructure the install-config to make the public zone's domain part of the AWS-specific platform configuration. Both of those are probably more work than we want to sink into initial exploration, but I thought I'd file a note to track future polish possibilities ;).
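For reference, a minimal Go sketch of that subdomain-snipping idea, assuming the aws-sdk-go Route 53 client; the function name and error messages are illustrative, not existing installer code:

package dnsexample

import (
	"fmt"
	"strings"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/service/route53"
)

// findPublicZone snips one label at a time off clusterDomain until it finds
// a public Route 53 hosted zone with that exact name. Pagination and most
// error handling are elided; this is only a sketch of the approach.
func findPublicZone(client *route53.Route53, clusterDomain string) (*route53.HostedZone, error) {
	domain := strings.TrimSuffix(clusterDomain, ".")
	for domain != "" {
		out, err := client.ListHostedZonesByName(&route53.ListHostedZonesByNameInput{
			DNSName: aws.String(domain),
		})
		if err != nil {
			return nil, err
		}
		for _, zone := range out.HostedZones {
			// Route 53 zone names carry a trailing dot.
			name := strings.TrimSuffix(aws.StringValue(zone.Name), ".")
			if name == domain && (zone.Config == nil || !aws.BoolValue(zone.Config.PrivateZone)) {
				return zone, nil
			}
		}
		// Snip the leftmost label and retry with the parent domain.
		idx := strings.Index(domain, ".")
		if idx < 0 {
			break
		}
		domain = domain[idx+1:]
	}
	return nil, fmt.Errorf("no public zone found for %s", clusterDomain)
}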
@@ -1,6 +1,8 @@
 locals {
  new_worker_cidr_range = "${cidrsubnet(data.aws_vpc.cluster_vpc.cidr_block,1,1)}"
  new_master_cidr_range = "${cidrsubnet(data.aws_vpc.cluster_vpc.cidr_block,1,0)}"

+ cluster_domain = "${var.cluster_name}.${var.base_domain}"
This module could be adjusted to take the unsplit cluster domain and use it where it currently uses `var.cluster_name` in tag values and resource names. That would address #762, although we might have to use the length-limited cluster ID for the load-balancer names or somehow introduce uniqueness there. I've just filed #1170 making cluster-names in `metadata.json` libvirt-specific, so this resource naming is the last #762 blocker.
@blrm was looking at finding the resources that would conflict if we try to use the same cluster name for the same base domain.
I'd guess:
$ git describe
unreleased-master-177-g4907cba
$ git grep ' name *= .*cluster_name' data/data/aws | cat
data/data/aws/bootstrap/main.tf: name = "${var.cluster_name}-bootstrap-profile"
data/data/aws/bootstrap/main.tf: name = "${var.cluster_name}-bootstrap-role"
data/data/aws/bootstrap/main.tf: name = "${var.cluster_name}-bootstrap-policy"
data/data/aws/iam/main.tf: name = "${var.cluster_name}-worker-profile"
data/data/aws/iam/main.tf: name = "${var.cluster_name}-worker-role"
data/data/aws/iam/main.tf: name = "${var.cluster_name}_worker_policy"
data/data/aws/main.tf: name = "${var.cluster_name}-etcd-${count.index}"
data/data/aws/main.tf: name = "_etcd-server-ssl._tcp.${var.cluster_name}"
data/data/aws/master/main.tf: name = "${var.cluster_name}-master-profile"
data/data/aws/master/main.tf: name = "${var.cluster_name}-master-role"
data/data/aws/master/main.tf: name = "${var.cluster_name}_master_policy"
data/data/aws/route53/base.tf: name = "${var.cluster_name}-api.${var.base_domain}"
data/data/aws/route53/base.tf: name = "${var.cluster_name}-api.${var.base_domain}"
data/data/aws/vpc/master-elb.tf: name = "${var.cluster_name}-int"
data/data/aws/vpc/master-elb.tf: name = "${var.cluster_name}-ext"
data/data/aws/vpc/master-elb.tf: name = "${var.cluster_name}-api-int"
data/data/aws/vpc/master-elb.tf: name = "${var.cluster_name}-api-ext"
data/data/aws/vpc/master-elb.tf: name = "${var.cluster_name}-services"
but I don't see a problem with removing those in stages.
I have some PRs in flight, #1155 and openshift/machine-config-operator#357.
The installer needs to move towards [1], where the private zone is named `cluster_name.base_domain` (i.e. `cluster_domain`) because of [2], but the public zone still remains `base_domain`, as that cannot be created by the installer. This means that cluster-ingress-operator cannot use the public r53 zone with the same name as the `base_domain` from `DNS.config.openshift.io` [3], as that will be set to the `cluster_domain`. This changes the public zone discovery to find the public zone which is the nearest parent domain of `cluster_domain`. [1]: openshift/installer#1169 [2]: openshift/installer#1136 [3]: https://github.com/openshift/api/blob/d67473e7f1907b74d1f27706260eecf0bc9f2a52/config/v1/types_dns.go#L28
PR in flight: openshift/cluster-ingress-operator#117
Force-pushed from 8df4ede to 80f7750.
@@ -54,7 +54,7 @@ func (d *DNS) Generate(dependencies asset.Parents) error {
 			// not namespaced
 		},
 		Spec: configv1.DNSSpec{
-			BaseDomain: installConfig.Config.BaseDomain,
+			BaseDomain: installConfig.Config.ClusterDomain(),
Do we instead need to change `DNSSpec` like:
type DNSSpec struct {
BaseDomain string `json:"baseDomain"`
ClusterDomain string `json:"clusterDomain"`
}
Otherwise downstream consumers (e.g. the ingress operator's DNS manager) are left to guess the public zone.
Do we instead need to change `DNSSpec` like:
type DNSSpec struct {
BaseDomain string `json:"baseDomain"`
ClusterDomain string `json:"clusterDomain"`
}
I think the `BaseDomain` field correctly represents what is required. From the doc:
// baseDomain is the base domain of the cluster. All managed DNS records will
// be sub-domains of this base.
//
// For example, given the base domain `openshift.example.com`, an API server
// DNS record may be created for `cluster-api.openshift.example.com`.
Otherwise downstream consumers (e.g. the ingress operator's DNS manager) are left to guess the public zone.
while the Ingress spec only specifies the domain used for routes, from the doc:
// domain is used to generate a default host name for a route when the
// route's host name is empty. The generated host name will follow this
// pattern: "<route-name>.<route-namespace>.<domain>".
but does not specify how/where to realize that information for ingress's DNS manager.
So what if we extend the `DNSSpec` to:
type DNSSpec struct {
BaseDomain string `json:"baseDomain"`
PublicZone *DNSZone `json:"publicZone"`
PrivateZone *DNSZone `json:"privateZone"`
}
// DNSZone describes a dns zone for a provider.
// A zone can be identified by an identifier or tags.
type DNSZone struct {
// id is the identifier that can be used to find the dns zone
// +optional
ID *string `json:"id"`
// tags is a map of tags that can be used to query the dns zone
// +optional
Tags map[string]string `json:"tags"`
}
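A minimal sketch of how the installer side might populate such a spec, assuming the `PublicZone`/`PrivateZone` fields land as proposed above (with `ID` as a `*string`); the tag key used here is a hypothetical placeholder, not a settled convention:

package dnsexample

import (
	configv1 "github.com/openshift/api/config/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// makeDNSConfig pins the pre-existing public zone by ID and points at the
// cluster's private zone (created later by Terraform) via tags.
func makeDNSConfig(clusterDomain, publicZoneID, clusterID string) *configv1.DNS {
	return &configv1.DNS{
		ObjectMeta: metav1.ObjectMeta{Name: "cluster"},
		Spec: configv1.DNSSpec{
			BaseDomain: clusterDomain,
			// The public zone already exists, so it can be identified directly.
			PublicZone: &configv1.DNSZone{ID: &publicZoneID},
			// The private zone is created for the cluster, so identify it by tags.
			PrivateZone: &configv1.DNSZone{
				Tags: map[string]string{"openshiftClusterID": clusterID},
			},
		},
	}
}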
Order of operations to get ingress-operator updated and get installer tests passing might be:
- Merge config/v1: add public and private zones to DNSSpec (api#202) to add the zone IDs
- PR to installer to populate the zone IDs (no change to `DNS.Spec.BaseDomain`)
- PR to ingress-operator to redo the DNS management in terms of the new API from api#202; this eliminates ingress-operator's usage of `DNS.Spec.BaseDomain` entirely

Then installer PRs which want to change the meaning of `DNS.Spec.BaseDomain` are okay (although if no other operators are consuming it, maybe it needs to be removed, or at least moved to `DNS.Status.BaseDomain`?)
openshift/cluster-ingress-operator#121 covers step 3.
Based on [1], `DNS` should be the source of truth for operators that manage DNS for their components. `DNS` currently specifies the `BaseDomain` that should be used to make sure all the records are subdomains of `BaseDomain`. A missing piece for operators that need to create DNS records is where those records should be created. For example, the `cluster-ingress-operator` creates DNS records in public and private r53 zones on AWS by listing all zones that match `BaseDomain` [2]. The ingress operator is currently making the assumption that the public zone matching the `BaseDomain` is *probably* the correct zone. With the installer changes in [3], which create the private r53 zone `cluster_name.base_domain` and use the public zone `base_domain`, the `BaseDomain` in `DNSSpec` will be set to the `cluster_domain` (`cluster_name.base_domain`), as all records must be subdomains of the `cluster_domain`. This breaks the previous assumption for the `cluster-ingress-operator` or any other operator. Clearly there is a gap to be filled regarding where the DNS records should be created. The installer knows which public and private zones should be used and can provide that information to the operators. `DNSSpec` is extended to include a required `PrivateZone` field and an optional `PublicZone` field to tell operators where the corresponding records should be created. A `DNSZone` struct is also added to allow defining the DNS zone either by an `ID` or by a string-to-string map of `Tags`. `ID` allows the installer to specify the public zone, as it is predetermined, while `Tags` allow the installer to point to a private DNS zone that will be created for the cluster. [1]: https://github.com/openshift/api/blob/d67473e7f1907b74d1f27706260eecf0bc9f2a52/config/v1/types_dns.go#L9 [2]: https://github.com/openshift/cluster-ingress-operator/blob/e7517023201c485428b3cdb3a86612230cf49e0a/pkg/dns/aws/dns.go#L111 [3]: openshift/installer#1169
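For illustration, a hedged sketch of how a consuming operator might resolve the proposed `DNSZone`, preferring an explicit ID and falling back to a tag lookup; lookupByTags stands in for a provider-specific query (e.g. filtering Route 53 zones by their tags) and is not a real API:

package dnsexample

import (
	"fmt"

	configv1 "github.com/openshift/api/config/v1"
)

// resolveZoneID returns the zone ID to use for record management, assuming
// the DNSZone shape proposed in this thread (ID as *string plus Tags).
func resolveZoneID(zone *configv1.DNSZone, lookupByTags func(map[string]string) (string, error)) (string, error) {
	if zone == nil {
		return "", fmt.Errorf("no DNS zone configured")
	}
	if zone.ID != nil && *zone.ID != "" {
		// The installer pinned the zone explicitly (e.g. the public zone).
		return *zone.ID, nil
	}
	if len(zone.Tags) > 0 {
		// Fall back to a tag-based lookup (e.g. the cluster's private zone).
		return lookupByTags(zone.Tags)
	}
	return "", fmt.Errorf("DNS zone has neither id nor tags")
}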
/lgtm
The public zone is now a parent domain of the private zone. The public zone discovery now tries to find the public zone that is the nearest parent domain of the private zone's domain.
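A minimal sketch of that nearest-parent selection, assuming zone names without trailing dots; this is just the string matching, not the actual ingress-operator change:

package dnsexample

import "strings"

// nearestParentZone picks, from the candidate public zone names, the one
// that is the longest ("nearest") parent suffix of the private zone's domain.
func nearestParentZone(privateDomain string, publicZones []string) (string, bool) {
	best, found := "", false
	for _, zone := range publicZones {
		if privateDomain == zone || strings.HasSuffix(privateDomain, "."+zone) {
			if len(zone) > len(best) {
				best, found = zone, true
			}
		}
	}
	return best, found
}

For example (hypothetical names), with a private zone of mycluster.devcluster.example.com and candidates example.com and devcluster.example.com, it picks devcluster.example.com.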
Force-pushed from 6a84e7b to e8ff089.
rebased around #1157
Looks good. Just a small nit (that I unfortunately forgot to submit when I looked over this the other day).
@@ -19,7 +19,7 @@ variable "cluster_id" {

 variable "cluster_domain" {
   type        = "string"
-  description = "The domain name of the cluster."
+  description = "The domain name of the cluster. All DNS recoreds must be under this domain."
s/recoreds/records/
Force-pushed from e8ff089 to a0a77ad.
/lgtm
[APPROVALNOTIFIER] This PR is APPROVED. This pull-request has been approved by: abhinavdahiya, staebler, wking.
/retest
openshift/cluster-authentication-operator#72 has been merged. /retest
last e2e-aws failed due to router flake.
Flaky tests:
[Conformance][Area:Networking][Feature:Router] The HAProxy router should expose prometheus metrics for a route [Suite:openshift/conformance/parallel/minimal]
Failing tests:
[Conformance][Area:Networking][Feature:Router] The HAProxy router should enable openshift-monitoring to pull metrics [Suite:openshift/conformance/parallel/minimal]

No more auth errors. /retest
Known failures:
/retest
openshift/installer#1169 changes the URL for the api. Once that merges, we will need this patch to update the script that exposes the api.
This logic became a method in 1ab1cd3 (types: add ClusterDomain helper for InstallConfig, 2019-01-31, openshift#1169); so we can drop the validation-specific helper which was from bf3ee03 (types: validate cluster name in InstallConfig, 2019-02-14, openshift#1255). Or maybe we never needed the validation-specific helper ;).
The issue was reported in #1136
On AWS, we are currently creating a private zone for `base_domain` and creating all the necessary records in that private zone. When users create a new cluster under the same domain, we create a new private zone for the `base_domain` again. This setup creates two zones, each with authority on `base_domain`, and therefore each cluster cannot resolve the api or other endpoints of the other cluster.

A solution is to create private zones for a cluster with authority on `cluster_domain = cluster_name.base_domain`. This allows each cluster to maintain authority on a subdomain of the base domain and allows each to resolve the other.

Some of our current dns records look like:
`cluster_name-api.base_domain` -> for api and ignition server.
`cluster_name-etcd-{idx}.base_domain` -> for each master with etcd.

To make sure that moving records from base_domain to cluster_domain does not make our dns names very long, the new records look like:
`api.cluster_domain` -> for api and ignition server.
`etcd-{idx}.cluster_domain` -> for each master with etcd.

This keeps all the records exactly the same length as before, as we are moving cluster_name and replacing a `-` with a `.`.
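As a quick illustration of the length claim (hypothetical names, not installer code):

package main

import "fmt"

func main() {
	// The API record moves from "<cluster_name>-api.<base_domain>"
	// to "api.<cluster_name>.<base_domain>"; only a "-" becomes a ".".
	clusterName, baseDomain := "mycluster", "example.com"
	oldAPI := clusterName + "-api." + baseDomain       // mycluster-api.example.com
	newAPI := "api." + clusterName + "." + baseDomain  // api.mycluster.example.com
	fmt.Println(oldAPI, newAPI, len(oldAPI) == len(newAPI)) // lengths are identical
}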
tasks:
- `base_domain` to be under `cluster_domain` with new schema
- `base_domain` to be under `cluster_domain` with new schema
- `base_domain` to be under `cluster_domain` with new schema
- `*.app.cluster_domain`
- `DNS.config.openshift.io` and consumers.