aws loadbalancer service created on cluster built with tectonic only available in one az #786
Hmm, very surprised that this is happening, as Tectonic invokes the cloud provider with defaults, and I would expect cross-AZ to be enabled by default.
We will need to investigate further; thanks for reporting.
Seems like this might be related to these VPC tags:
@s-urbaniak Did this break with the changes in #469 maybe?
I believe this issue goes back quite a ways. I can't be 100% sure without creating a 1.5.6 cluster again, but I'm pretty sure I saw it in 1.5.6 as well; I just didn't spend any time looking into it until now. I don't know whether the person who opened the referenced Kubernetes issue was running Tectonic Installer, but he opened it a year ago and his output was exactly the same as what I am seeing.
I have seen this issue in both Tectonic 1.5 and 1.6 when running in an existing VPC. Kubernetes seems to expect the subnets to be tagged for cluster ownership or it won't find them. I have been manually tagging the subnets as a workaround.
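For anyone else hitting this, a rough sketch of that tagging workaround with boto3 is below. The subnet IDs, cluster name, and region are placeholders, and the tag key is an assumption: the AWS cloud provider of this era discovered subnets via a `KubernetesCluster=<cluster-name>` tag, while newer Kubernetes versions look for `kubernetes.io/cluster/<cluster-name>`, so check which one your version expects.

```python
import boto3

# Placeholder values -- substitute your own subnet IDs, region, and the
# cluster name Tectonic was installed with.
SUBNET_IDS = ["subnet-aaaa1111", "subnet-bbbb2222", "subnet-cccc3333"]
CLUSTER_NAME = "my-tectonic-cluster"

ec2 = boto3.client("ec2", region_name="us-east-1")

# Tag every subnet that worker nodes live in so the cloud provider's subnet
# discovery picks them up when it builds the ELB for a LoadBalancer Service.
ec2.create_tags(
    Resources=SUBNET_IDS,
    Tags=[{"Key": "KubernetesCluster", "Value": CLUSTER_NAME}],
)
```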
And, I have an existing VPC, so there you have it.
I tagged the subnets and it is now working. Thanks for the workaround.
I have just created a LoadBalancer service using the manifest you provided above, on Tectonic. Because I have masters and workers in two AZs, the ELB itself is also in two AZs.
I had some back and forth with the people over in the kubernetes repo because I wasn't convinced that it was a tectonic issue, but apparently it is.
kubernetes/kubernetes#28586 (comment)
What is happening is that when I create a Service of type LoadBalancer on the Tectonic-managed cluster, the load balancer ends up in only one AZ, so nodes in the other AZs show up as out of service. I can manually add the AZs to the ELB, but that defeats the purpose.
I'm running version 1.6.2
The Service is created as such:
The relevant parts of the ELB are:
So, while there are 3 instances, only 2 are in service because the third is in another AZ. My cluster has 4 AZs that nodes can live in, but only us-east-1e is attached to the ELB. If all my nodes end up in different AZs, the service will fail.
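A rough sketch of that manual stopgap (attaching the missing AZs to the classic ELB by hand) using boto3; the ELB name and AZ list are placeholders, not values from the cluster above:

```python
import boto3

# Placeholder values -- substitute the ELB name Kubernetes generated for the
# Service and the AZs your worker nodes actually run in.
ELB_NAME = "a1234567890abcdef1234567890abcd"
MISSING_AZS = ["us-east-1a", "us-east-1b", "us-east-1c"]

elb = boto3.client("elb", region_name="us-east-1")

# Show which AZs the classic ELB currently serves.
desc = elb.describe_load_balancers(LoadBalancerNames=[ELB_NAME])
print(desc["LoadBalancerDescriptions"][0]["AvailabilityZones"])

# Attach the missing AZs so instances in them can return to InService.
elb.enable_availability_zones_for_load_balancer(
    LoadBalancerName=ELB_NAME,
    AvailabilityZones=MISSING_AZS,
)
```

As noted above, doing this by hand defeats the purpose; tagging the subnets so the cloud provider finds them on its own is the actual workaround.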