Move DNS records from base_domain to cluster_domain #1169
First changed file (the Terraform VPC module):

```diff
@@ -1,6 +1,8 @@
 locals {
   new_private_cidr_range = "${cidrsubnet(data.aws_vpc.cluster_vpc.cidr_block,1,1)}"
   new_public_cidr_range  = "${cidrsubnet(data.aws_vpc.cluster_vpc.cidr_block,1,0)}"
+
+  cluster_domain = "${var.cluster_name}.${var.base_domain}"
 }
 
 resource "aws_vpc" "new_vpc" {
```

Review comment on the new `cluster_domain` local:

This module could be adjusted to take the unsplit cluster domain and use it where it currently uses `"${var.cluster_name}.${var.base_domain}"`.

Reply:

I'd guess:

```console
$ git describe
unreleased-master-177-g4907cba
$ git grep ' name *= .*cluster_name' data/data/aws | cat
data/data/aws/bootstrap/main.tf: name = "${var.cluster_name}-bootstrap-profile"
data/data/aws/bootstrap/main.tf: name = "${var.cluster_name}-bootstrap-role"
data/data/aws/bootstrap/main.tf: name = "${var.cluster_name}-bootstrap-policy"
data/data/aws/iam/main.tf: name = "${var.cluster_name}-worker-profile"
data/data/aws/iam/main.tf: name = "${var.cluster_name}-worker-role"
data/data/aws/iam/main.tf: name = "${var.cluster_name}_worker_policy"
data/data/aws/main.tf: name = "${var.cluster_name}-etcd-${count.index}"
data/data/aws/main.tf: name = "_etcd-server-ssl._tcp.${var.cluster_name}"
data/data/aws/master/main.tf: name = "${var.cluster_name}-master-profile"
data/data/aws/master/main.tf: name = "${var.cluster_name}-master-role"
data/data/aws/master/main.tf: name = "${var.cluster_name}_master_policy"
data/data/aws/route53/base.tf: name = "${var.cluster_name}-api.${var.base_domain}"
data/data/aws/route53/base.tf: name = "${var.cluster_name}-api.${var.base_domain}"
data/data/aws/vpc/master-elb.tf: name = "${var.cluster_name}-int"
data/data/aws/vpc/master-elb.tf: name = "${var.cluster_name}-ext"
data/data/aws/vpc/master-elb.tf: name = "${var.cluster_name}-api-int"
data/data/aws/vpc/master-elb.tf: name = "${var.cluster_name}-api-ext"
data/data/aws/vpc/master-elb.tf: name = "${var.cluster_name}-services"
```

but I don't see a problem with removing those in stages.
```diff
@@ -9,7 +11,7 @@ resource "aws_vpc" "new_vpc" {
   enable_dns_support = true
 
   tags = "${merge(map(
-    "Name", "${var.cluster_name}.${var.base_domain}",
+    "Name", "${local.cluster_domain}",
   ), var.tags)}"
 }
 
```
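(Aside: the two `cidrsubnet(...)` locals in the first hunk split the VPC CIDR into two equal halves for the public and private ranges. A rough Go illustration of that arithmetic, assuming a 10.0.0.0/16 VPC block; `cidrSubnet` is a hand-rolled stand-in for Terraform's built-in, not installer code:)

```go
package main

import (
	"fmt"
	"net/netip"
)

// cidrSubnet mimics Terraform's cidrsubnet(prefix, newbits, netnum) for
// IPv4: it extends the prefix by newbits bits and sets those bits to netnum.
func cidrSubnet(prefix netip.Prefix, newbits, netnum int) netip.Prefix {
	addr := prefix.Addr().As4()
	for i := 0; i < newbits; i++ {
		if netnum>>(newbits-1-i)&1 == 1 {
			pos := prefix.Bits() + i
			addr[pos/8] |= 1 << (7 - pos%8)
		}
	}
	return netip.PrefixFrom(netip.AddrFrom4(addr), prefix.Bits()+newbits)
}

func main() {
	vpc := netip.MustParsePrefix("10.0.0.0/16")
	fmt.Println(cidrSubnet(vpc, 1, 0)) // 10.0.0.0/17   -> new_public_cidr_range
	fmt.Println(cidrSubnet(vpc, 1, 1)) // 10.0.128.0/17 -> new_private_cidr_range
}
```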
Second changed file (the Go asset that renders the cluster DNS config):

```diff
@@ -65,7 +65,7 @@ func (d *DNS) Generate(dependencies asset.Parents) error {
 			// not namespaced
 		},
 		Spec: configv1.DNSSpec{
-			BaseDomain: installConfig.Config.BaseDomain,
+			BaseDomain: installConfig.Config.ClusterDomain(),
 		},
 	}
 
```
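For context, `ClusterDomain()` presumably just joins the cluster name onto the base domain; a minimal sketch with the type trimmed to what the sketch needs (field layout assumed, not the installer's actual definition):

```go
package types

import "fmt"

// InstallConfig is trimmed to the two fields this sketch uses; the real
// type carries much more.
type InstallConfig struct {
	Name       string // metadata.name, i.e. the cluster name
	BaseDomain string
}

// ClusterDomain joins the cluster name and base domain, producing the value
// the diff above now feeds into DNSSpec.BaseDomain.
func (c *InstallConfig) ClusterDomain() string {
	return fmt.Sprintf("%s.%s", c.Name, c.BaseDomain)
}
```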
Comment thread on the `BaseDomain` change:

Do we instead need to change `DNSSpec`?

```go
type DNSSpec struct {
	BaseDomain    string `json:"baseDomain"`
	ClusterDomain string `json:"clusterDomain"`
}
```

Otherwise downstream consumers (e.g. the ingress operator's DNS manager) are left to guess the public zone.

Reply:

I think the […] while the Ingress spec only specifies the […] but does not specify how/where to realize that information for ingress's DNS manager.

Reply:

So what if we extend the `DNSSpec`?

```go
type DNSSpec struct {
	BaseDomain  string   `json:"baseDomain"`
	PublicZone  *DNSZone `json:"publicZone"`
	PrivateZone *DNSZone `json:"privateZone"`
}

// DNSZone describes a dns zone for a provider.
// A zone can be identified by an identifier or tags.
type DNSZone struct {
	// id is the identifier that can be used to find the dns zone
	// +optional
	ID *string `json:"id"`

	// tags is a map of tags that can be used to query the dns zone
	// +optional
	Tags map[string]string `json:"tags"`
}
```

Reply:

Order of operations to get ingress-operator updated and get installer tests passing might be: […]

Then installer PRs which want to change the meaning of […].

Reply:

openshift/cluster-ingress-operator#121 covers step 3.
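To make the proposal concrete, here is a hedged sketch of how a consumer such as the ingress operator's DNS manager might resolve the proposed `DNSZone` reference (the type from the snippet above): an explicit ID wins, otherwise fall back to a tag query. `byTags` is a hypothetical stand-in for a provider-specific lookup (e.g. against Route 53).

```go
import "errors"

// resolveZoneID turns a DNSZone reference into a concrete zone identifier.
// byTags stands in for a provider-specific tag query and is hypothetical.
func resolveZoneID(z DNSZone, byTags func(tags map[string]string) (string, error)) (string, error) {
	if z.ID != nil {
		// An explicit identifier takes precedence over tag discovery.
		return *z.ID, nil
	}
	if len(z.Tags) > 0 {
		return byTags(z.Tags)
	}
	return "", errors.New("DNSZone specifies neither an id nor tags")
}
```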
Further review comment:

This package could be adjusted to take the unsplit cluster domain if we replaced this with Go logic to snip subdomains off the full cluster name until we found a public zone. Or we could restructure the install-config to make the public zone's domain part of the AWS-specific platform configuration. Both of those are probably more work than we want to sink into initial exploration, but I thought I'd file a note to track future polish possibilities ;).
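For what the "snip subdomains" idea could look like, here is a hedged Go sketch; `lookupZone` is a hypothetical stand-in for a provider query such as Route 53's ListHostedZonesByName, not installer code:

```go
package main

import (
	"fmt"
	"strings"
)

// findPublicZone walks up the cluster domain one label at a time until
// lookupZone reports an existing public zone, e.g.
// mycluster.devcluster.example.com -> devcluster.example.com -> example.com.
func findPublicZone(clusterDomain string, lookupZone func(domain string) (bool, error)) (string, error) {
	for domain := clusterDomain; strings.Contains(domain, "."); domain = domain[strings.Index(domain, ".")+1:] {
		ok, err := lookupZone(domain)
		if err != nil {
			return "", err
		}
		if ok {
			return domain, nil
		}
	}
	return "", fmt.Errorf("no public zone found at or above %q", clusterDomain)
}
```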