
Move DNS records from base_domain to cluster_domain #1169

Merged · 6 commits · Feb 16, 2019

Conversation

abhinavdahiya (Contributor) commented Jan 31, 2019

The issue was reported in #1136

On AWS, we currently create a private zone for base_domain and create all the necessary records in that private zone. When users create a new cluster under the same domain, we create another private zone for the same base_domain. This setup leaves two zones, each with authority over base_domain, so neither cluster can resolve the API or other endpoints of the other cluster.

A solution is to create each cluster's private zone with authority over cluster_domain = cluster_name.base_domain. Each cluster then maintains authority over only its own subdomain of base_domain, and the clusters can resolve each other.

Some of our current DNS records look like:
cluster_name-api.base_domain -> for the API and ignition server.
cluster_name-etcd-{idx}.base_domain -> for each master running etcd.

To make sure that moving records from base_domain to cluster_domain does not make our DNS names any longer, the new records look like:
api.cluster_domain -> for the API and ignition server.
etcd-{idx}.cluster_domain -> for each master running etcd.
This keeps every record exactly the same length as before: cluster_name moves into the domain, and the - becomes a dot.
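
For reference, this PR adds a ClusterDomain helper on the install config (used later in the diff as installConfig.Config.ClusterDomain()). A minimal sketch of such a helper, with field names assumed for illustration:

package types

import "fmt"

// InstallConfig is trimmed to the two fields this sketch needs; the
// real type carries much more, and these field names are assumptions.
type InstallConfig struct {
	Name       string // the cluster name, e.g. "mycluster"
	BaseDomain string // the base domain, e.g. "openshift.example.com"
}

// ClusterDomain returns cluster_name.base_domain, the domain the
// cluster's private zone is authoritative for; records such as
// api.cluster_domain and etcd-{idx}.cluster_domain live beneath it.
func (c *InstallConfig) ClusterDomain() string {
	return fmt.Sprintf("%s.%s", c.Name, c.BaseDomain)
}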

Tasks:

  • move libvirt records from base_domain to be under cluster_domain with the new schema
  • move aws records from base_domain to be under cluster_domain with the new schema
  • move openstack records from base_domain to be under cluster_domain with the new schema
  • changes are required in the Machine Config Operator to use the new API server URL
  • changes are required in the Ingress Operator to make sure it can create the *.app.cluster_domain records
  • possible changes required in DNS.config.openshift.io and its consumers.

@openshift-ci-robot openshift-ci-robot added do-not-merge/work-in-progress Indicates that a PR should not merge because it is a work in progress. size/M Denotes a PR that changes 30-99 lines, ignoring generated files. labels Jan 31, 2019
@openshift-ci-robot openshift-ci-robot added the approved Indicates a PR has been approved by an approver from all required OWNERS files. label Jan 31, 2019
@@ -6,11 +6,13 @@ locals {
public_zone_id = "${data.aws_route53_zone.base.zone_id}"

zone_id = "${var.private_zone_id}"

cluster_domain = "${var.cluster_name}.${var.base_domain}"
Member:

This package could be adjusted to take the unsplit cluster domain if we replaced this with Go logic to snip subdomains off the full cluster name until we found a public zone. Or we could restructure the install-config to make the public zone's domain part of the AWS-specific platform configuration. Both of those are probably more work than we want to sink into initial exploration, but I thought I'd file a note to track future polish possibilities ;).
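
A rough sketch of the first idea, snipping labels off the cluster domain until a lookup finds a public zone (the lookup callback stands in for a Route 53 zone-by-name query; all names here are illustrative, not the installer's actual code):

package dnsutil

import "strings"

// findPublicZone tries the full cluster domain and then each parent
// domain until lookup reports a matching public zone.
func findPublicZone(clusterDomain string, lookup func(domain string) bool) (string, bool) {
	domain := clusterDomain
	for {
		if lookup(domain) {
			return domain, true
		}
		i := strings.Index(domain, ".")
		if i < 0 {
			return "", false // ran out of labels without a match
		}
		domain = domain[i+1:] // snip the leftmost label and retry
	}
}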

@@ -1,6 +1,8 @@
locals {
new_worker_cidr_range = "${cidrsubnet(data.aws_vpc.cluster_vpc.cidr_block,1,1)}"
new_master_cidr_range = "${cidrsubnet(data.aws_vpc.cluster_vpc.cidr_block,1,0)}"

cluster_domain = "${var.cluster_name}.${var.base_domain}"
Member:

This module could be adjusted to take the unsplit cluster domain and use it where it currently uses var.cluster_name in tag values and resource names. That would address #762, although we might have to use the length-limited cluster ID for the load-balancer names or somehow introduce uniqueness there. I've just filed #1170 making cluster-names in metadata.json libvirt-specific, so this resource naming is the last #762 blocker.

abhinavdahiya (Contributor Author):

#762 @blrm was looking at finding the resources that would conflict if we try to use the same cluster name under the same base domain. I remember him mentioning that it was more than this module.

@blrm do you have some notes/findings that could provide more detail on what might collide in #762 😇

Member:

@blrm was looking at finding the resources that would conflict if we try to use the same cluster name under the same base domain.

I'd guess:

$ git describe
unreleased-master-177-g4907cba
$ git grep ' name *= .*cluster_name' data/data/aws | cat
data/data/aws/bootstrap/main.tf:  name = "${var.cluster_name}-bootstrap-profile"
data/data/aws/bootstrap/main.tf:  name  = "${var.cluster_name}-bootstrap-role"
data/data/aws/bootstrap/main.tf:  name  = "${var.cluster_name}-bootstrap-policy"
data/data/aws/iam/main.tf:  name = "${var.cluster_name}-worker-profile"
data/data/aws/iam/main.tf:  name  = "${var.cluster_name}-worker-role"
data/data/aws/iam/main.tf:  name  = "${var.cluster_name}_worker_policy"
data/data/aws/main.tf:  name    = "${var.cluster_name}-etcd-${count.index}"
data/data/aws/main.tf:  name    = "_etcd-server-ssl._tcp.${var.cluster_name}"
data/data/aws/master/main.tf:  name = "${var.cluster_name}-master-profile"
data/data/aws/master/main.tf:  name  = "${var.cluster_name}-master-role"
data/data/aws/master/main.tf:  name  = "${var.cluster_name}_master_policy"
data/data/aws/route53/base.tf:  name    = "${var.cluster_name}-api.${var.base_domain}"
data/data/aws/route53/base.tf:  name    = "${var.cluster_name}-api.${var.base_domain}"
data/data/aws/vpc/master-elb.tf:  name                             = "${var.cluster_name}-int"
data/data/aws/vpc/master-elb.tf:  name                             = "${var.cluster_name}-ext"
data/data/aws/vpc/master-elb.tf:  name     = "${var.cluster_name}-api-int"
data/data/aws/vpc/master-elb.tf:  name     = "${var.cluster_name}-api-ext"
data/data/aws/vpc/master-elb.tf:  name     = "${var.cluster_name}-services"

but I don't see a problem with removing those in stages.

abhinavdahiya (Contributor Author) commented Jan 31, 2019

changes are required in the Machine Config Operator to use the new API server URL

I have some PRs in flight, #1155 and openshift/machine-config-operator#357

@openshift-ci-robot openshift-ci-robot added the needs-rebase Indicates a PR cannot be merged because it has merge conflicts with HEAD. label Feb 1, 2019
abhinavdahiya added a commit to abhinavdahiya/cluster-ingress-operator that referenced this pull request Feb 5, 2019
The installer needs to move towards [1], where the private zone is named `cluster_name.base_domain`, i.e. `cluster_domain`, because of [2], but the public zone remains `base_domain`, as that zone cannot be created by the installer.

This means the cluster-ingress-operator cannot assume the public r53 zone has the same name as the `BaseDomain` from `DNS.config.openshift.io` [3], as that field will be set to the `cluster_domain`.

This changes the public zone discovery to find the public zone that is the nearest parent domain of `cluster_domain`.

[1]: openshift/installer#1169
[2]: openshift/installer#1136
[3]: https://github.com/openshift/api/blob/d67473e7f1907b74d1f27706260eecf0bc9f2a52/config/v1/types_dns.go#L28
abhinavdahiya (Contributor Author):

changes are required in the Ingress Operator to make sure it can create the *.app.cluster_domain records

PR in flight openshift/cluster-ingress-operator#117

@@ -54,7 +54,7 @@ func (d *DNS) Generate(dependencies asset.Parents) error {
// not namespaced
},
Spec: configv1.DNSSpec{
BaseDomain: installConfig.Config.BaseDomain,
BaseDomain: installConfig.Config.ClusterDomain(),
ironcladlou (Contributor) commented Feb 7, 2019:

Do we instead need to change DNSSpec like:

type DNSSpec struct {
  BaseDomain string `json:"baseDomain"`
  ClusterDomain string `json:"clusterDomain"`
}

Otherwise downstream consumers (e.g. the ingress operator's DNS manager) are left to guess the public zone.

abhinavdahiya (Contributor Author) replied Feb 7, 2019:

Do we instead need to change DNSSpec like:

type DNSSpec struct {
  BaseDomain string `json:"baseDomain"`
  ClusterDomain string `json:"clusterDomain"`
}

I think the BaseDomain field correctly represents what is required. From the doc:

	// baseDomain is the base domain of the cluster. All managed DNS records will
	// be sub-domains of this base.
	//
	// For example, given the base domain `openshift.example.com`, an API server
	// DNS record may be created for `cluster-api.openshift.example.com`.

Otherwise downstream consumers (e.g. the ingress operator's DNS manager) are left to guess the public zone.

while the Ingress spec only specifies the domain used for routes. From the doc:

	// domain is used to generate a default host name for a route when the
	// route's host name is empty.  The generated host name will follow this
	// pattern: "<route-name>.<route-namespace>.<domain>".

but it does not specify how or where the ingress operator's DNS manager should realize that information.

abhinavdahiya (Contributor Author):

So what if we extend the DNSSpec to

type DNSSpec struct {
  BaseDomain string `json:"baseDomain"`
  PublicZone *DNSZone `json:"publicZone"`
  PrivateZone *DNSZone `json:"privateZone"`
}

// DNSZone describes a dns zone for a provider.
// A zone can be identified by an identifier or tags.
type DNSZone struct {
  // id is the identifier that can be used to find the dns zone
  // +optional
  ID *string `json:"id"`
  // tags is a map of tags that can be used to query the dns zone
  // +optional
  Tags map[string]string `json:"tags"`
}
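
Under that proposal the installer, which knows both zones up front, could populate the config so operators no longer have to guess. A hedged sketch using the DNSSpec and DNSZone types above (every concrete value and the tag key are illustrative placeholders):

func exampleDNSSpec() DNSSpec {
	publicZoneID := "Z000EXAMPLE" // pre-existing public zone, pinned by ID
	return DNSSpec{
		BaseDomain: "mycluster.openshift.example.com", // the cluster_domain
		PublicZone: &DNSZone{ID: &publicZoneID},
		// The private zone is created for the cluster later, so it is
		// referenced by the tags the installer will apply to it.
		PrivateZone: &DNSZone{Tags: map[string]string{
			"openshiftClusterID": "example-cluster-id",
		}},
	}
}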

Contributor:

Order of operations to get the ingress-operator updated and the installer tests passing might be:

  1. Merge config/v1: add public and private zones to DNSSpec api#202 to add the zone IDs
  2. PR to the installer to populate the zone IDs (no change to DNS.Spec.BaseDomain)
  3. PR to the ingress-operator to redo the DNS management in terms of the new API from config/v1: add public and private zones to DNSSpec api#202 (this eliminates the ingress-operator's usage of DNS.Spec.BaseDomain entirely)

Then installer PRs which want to change the meaning of DNS.Spec.BaseDomain are okay (although if no other operators are consuming it, maybe it needs to be removed, or at least moved to DNS.Status.BaseDomain?)


abhinavdahiya added a commit to abhinavdahiya/api that referenced this pull request Feb 7, 2019
Based on [1], `DNS` should be the source of truth for operators that manage DNS for their components.
`DNS` currently specifies the `BaseDomain` that should be used to make sure all the records are subdomains of `BaseDomain`.

A missing piece for operators that need to create DNS records is where those records should be created. For example, the `cluster-ingress-operator` creates DNS records in public and private r53 zones on AWS by listing all zones that match `BaseDomain` [2]. The ingress operator is currently assuming that the public zone matching the `BaseDomain` is *probably* the correct zone.

With the installer changes [3] that create the private r53 zone as `cluster_name.base_domain` and use the public zone `base_domain`, the `BaseDomain` in `DNSSpec` will be set to the `cluster_domain` (`cluster_name.base_domain`), as all records must be subdomains of the `cluster_domain`. This breaks the previous assumption for the `cluster-ingress-operator` and any other operator.

Clearly there is a gap to be filled regarding where the DNS records should be created. The installer knows which public and private zones should be used and can provide that information to the operators.

`DNSSpec` is extended to include a required `PrivateZone` field and an optional `PublicZone` field that tell operators where the corresponding records should be created.

A `DNSZone` struct is also added to allow defining a DNS zone either by an `ID` or by a string-to-string map of `Tags`. `ID` lets the installer specify the public zone, which is predetermined, while `Tags` let the installer point to the private DNS zone that will be created for the cluster.

[1]: https://github.com/openshift/api/blob/d67473e7f1907b74d1f27706260eecf0bc9f2a52/config/v1/types_dns.go#L9
[2]: https://github.com/openshift/cluster-ingress-operator/blob/e7517023201c485428b3cdb3a86612230cf49e0a/pkg/dns/aws/dns.go#L111
[3]: openshift/installer#1169
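
On the consuming side, an operator could then resolve the zone to write into directly from this config instead of matching zone names. A rough sketch building on the DNSZone type above (lookupZoneByTags is a hypothetical stand-in for a provider query, e.g. listing Route 53 hosted zones filtered by tags):

// resolveZoneID prefers the pinned ID and falls back to a tag lookup.
func resolveZoneID(zone *DNSZone, lookupZoneByTags func(map[string]string) (string, bool)) (string, bool) {
	if zone == nil {
		return "", false // no zone configured
	}
	if zone.ID != nil {
		return *zone.ID, true
	}
	if len(zone.Tags) > 0 {
		return lookupZoneByTags(zone.Tags)
	}
	return "", false // zone has neither id nor tags
}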
wking (Member) commented Feb 15, 2019

/lgtm

@openshift-ci-robot openshift-ci-robot added the lgtm Indicates that a PR is ready to be merged. label Feb 15, 2019
@openshift-ci-robot openshift-ci-robot removed the lgtm Indicates that a PR is ready to be merged. label Feb 15, 2019
abhinavdahiya (Contributor Author):

rebased around #1157

staebler (Contributor) left a comment:

Looks good. Just a small nit (that I unfortunately forgot to submit when I looked over this the other day).

@@ -19,7 +19,7 @@ variable "cluster_id" {

variable "cluster_domain" {
type = "string"
description = "The domain name of the cluster."
description = "The domain name of the cluster. All DNS recoreds must be under this domain."
Contributor:

s/recoreds/records/

abhinavdahiya (Contributor Author):

done with e8ff089 -> a0a77ad

staebler (Contributor):

/lgtm

@openshift-ci-robot openshift-ci-robot added the lgtm Indicates that a PR is ready to be merged. label Feb 15, 2019
openshift-ci-robot:

[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: abhinavdahiya, staebler, wking

The full list of commands accepted by this bot can be found here.

The pull request process is described here

Needs approval from an approver in each of these files:
  • OWNERS [abhinavdahiya,staebler,wking]

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

abhinavdahiya (Contributor Author):

/retest

crawford (Contributor):

openshift/cluster-authentication-operator#72 has been merged.

/retest

abhinavdahiya (Contributor Author):

The last e2e-aws run failed due to a router flake.

Flaky tests:

[Conformance][Area:Networking][Feature:Router] The HAProxy router should expose prometheus metrics for a route [Suite:openshift/conformance/parallel/minimal]

Failing tests:

[Conformance][Area:Networking][Feature:Router] The HAProxy router should enable openshift-monitoring to pull metrics [Suite:openshift/conformance/parallel/minimal]

No more auth errors.
/hold cancel

/retest

@openshift-ci-robot openshift-ci-robot removed the do-not-merge/hold Indicates that a PR should not merge because someone has issued a /hold command. label Feb 16, 2019
crawford (Contributor):

Known failures:

Flaky tests:

[Conformance][Area:Networking][Feature:Router] The HAProxy router should expose prometheus metrics for a route [Suite:openshift/conformance/parallel/minimal]

Failing tests:

[Conformance][Area:Networking][Feature:Router] The HAProxy router should enable openshift-monitoring to pull metrics [Suite:openshift/conformance/parallel/minimal]

/retest

@openshift-merge-robot openshift-merge-robot merged commit b4c027f into openshift:master Feb 16, 2019
flaper87 pushed a commit to imain/ocp-doit that referenced this pull request Feb 20, 2019
openshift/installer#1169 changes the URL for the API. Once that merges, we will need this patch to update the script that exposes the API.
wking added a commit to wking/openshift-installer that referenced this pull request Apr 4, 2020
This logic became a method in 1ab1cd3 (types: add ClusterDomain
helper for InstallConfig, 2019-01-31, openshift#1169); so we can drop the
validation-specific helper which was from bf3ee03 (types: validate
cluster name in InstallConfig, 2019-02-14, openshift#1255).  Or maybe we never
needed the validation-specific helper ;).
vrutkovs pushed a commit to vrutkovs/installer that referenced this pull request Apr 13, 2020
Labels
approved Indicates a PR has been approved by an approver from all required OWNERS files. lgtm Indicates that a PR is ready to be merged. size/L Denotes a PR that changes 100-499 lines, ignoring generated files.
9 participants