Add optional LoadBalancer service for HA mode #433

Open
wants to merge 7 commits into base: main

Conversation

@tomhjp (Contributor) commented Dec 14, 2020

This adds an optional LoadBalancer-type service when operating in HA mode, specifically to address cross-cluster replication, though it should be more widely useful as well.

Still TODO: Acceptance tests.

Unit tests pass locally:

docker build -t vault-helm-test -f test/docker/Test.dockerfile .
docker run --rm -v `pwd`:/root vault-helm-test bats -t /root/test/unit/server-ha-lb.bats
1..15
ok 1 server/ha-lb-Service: generic annotations
ok 2 server/ha-lb-Service: disable with ha.enabled false
ok 3 server/ha-lb-Service: disable with ha.lb.enabled false
ok 4 server/ha-lb-Service: disable with server.service.enabled false
ok 5 server/ha-lb-Service: type LoadBalancer
ok 6 server/ha-lb-Service: clusterIP empty by default
ok 7 server/ha-lb-Service: externalTrafficPolicy Local and publishNotReadyAddresses false as defaults
ok 8 server/ha-lb-Service: externalTrafficPolicy can be set
ok 9 server/ha-lb-Service: publishNotReadyAddresses can be set
ok 10 server/ha-lb-Service: port and targetPort will be 8200 by default
ok 11 server/ha-lb-Service: https-internal port and targetPort will be 8201
ok 12 server/ha-lb-Service: https-rep port and targetPort will be 8202
ok 13 server/ha-lb-Service: port and targetPort can be set
ok 14 server/ha-lb-Service: vault port name is http, when tlsDisable is true
ok 15 server/ha-lb-Service: vault port name is https, when tlsDisable is false
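
For reference, enabling the new HA LoadBalancer service would look roughly like this in values.yaml. The key paths (in particular server.ha.lb) are inferred from the unit test names above rather than taken from the chart, so treat this as a sketch:

server:
  service:
    enabled: true
  ha:
    enabled: true
    # "lb" key path inferred from the test names above; the actual
    # path in this PR may differ.
    lb:
      enabled: true
      # Defaults exercised by the tests: externalTrafficPolicy Local,
      # publishNotReadyAddresses false, port/targetPort 8200.
      externalTrafficPolicy: Local
      publishNotReadyAddresses: false
      port: 8200
      targetPort: 8200

Per the test output, the service also exposes https-internal (8201) and https-rep (8202) alongside the main Vault port.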

@tomhjp mentioned this pull request Dec 14, 2020
@tomhjp (Contributor, Author) commented Dec 14, 2020

Thanks for the quick review, @jasonodonnell. Ready for another round now.

@tomhjp requested a review from jasonodonnell, December 14, 2020 15:56
@david-becher commented

Is there any progress here? Cross-cluster replication needs a LoadBalancer that exposes both ports 8200 and 8201. I even think this could/should be the default for the UI LoadBalancer. I would really like to see this chart support cross-cluster replication by exposing 8201 as well. Only being able to set up replication within the same K8s cluster somewhat defeats the purpose of replication in the first place...

@bkonick commented Apr 7, 2021

> Is there any progress here? Cross-cluster replication needs a LoadBalancer that exposes both ports 8200 and 8201. I even think this could/should be the default for the UI LoadBalancer. I would really like to see this chart support cross-cluster replication by exposing 8201 as well. Only being able to set up replication within the same K8s cluster somewhat defeats the purpose of replication in the first place...

@david-becher I actually have both 8200 and 8201 exposed by defining this in the values YAML (using v0.9.1 of the chart):

server:
  service:
    annotations:
      service.beta.kubernetes.io/aws-load-balancer-internal: "true"
    type: "LoadBalancer"

That yields me three distinct LoadBalancer services all listening on 8200 and 8201:

NAME                     TYPE           CLUSTER-IP   EXTERNAL-IP               PORT(S)                         AGE
service/vault            LoadBalancer   10.x.x.x     blah1.elb.amazonaws.com   8200:32183/TCP,8201:31241/TCP   29d
service/vault-active     LoadBalancer   10.x.x.x     blah2.elb.amazonaws.com   8200:32663/TCP,8201:30630/TCP   29d
service/vault-internal   ClusterIP      None         <none>                    8200/TCP,8201/TCP               29d
service/vault-standby    LoadBalancer   10.x.x.x     blah3.elb.amazonaws.com   8200:31777/TCP,8201:30724/TCP   29d

@david-becher commented

Hi @bkonick

That's correct and I have used this before, but it creates three distinct LoadBalancers when I actually only need one LB. Maybe I have not grasped the full concept of how to use all the different LBs, but having only one LB would reduce the cost of the cloud resources involved.

Maybe the chart should be adapted so that each of the services can be configured individually as LoadBalancer or ClusterIP, letting you pick and choose only the LBs that are actually needed.
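
For illustration only, that proposal might look something like this in values.yaml. None of these per-service keys exist in the chart at this point; they are purely hypothetical names to show the idea:

server:
  service:
    # Hypothetical per-service settings (not actual chart options):
    # choose which services are created and what type each one gets.
    general:
      type: ClusterIP
    active:
      type: LoadBalancer
      annotations:
        service.beta.kubernetes.io/aws-load-balancer-internal: "true"
    standby:
      enabled: false

That way only the load balancer that is actually needed (for example the one in front of the active node) gets provisioned.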

@bkonick commented Apr 7, 2021

> Hi @bkonick
>
> That's correct and I have used this before, but it creates three distinct LoadBalancers when I actually only need one LB. Maybe I have not grasped the full concept of how to use all the different LBs, but having only one LB would reduce the cost of the cloud resources involved.
>
> Maybe the chart should be adapted so that each of the services can be configured individually as LoadBalancer or ClusterIP, letting you pick and choose only the LBs that are actually needed.

We use vault for client access, as it includes all pods (active and perf standby). We use vault-active for replication, as that traffic needs to be sent only to the active pod. We don't really use vault-standby, though. We have used this pattern successfully with a replication chain that includes:

  • NA <=> EU performance replication
  • NA <=> second NA site DR replication
  • EU <=> second EU site DR replication

I think at minimum both vault and vault-active are desired, unless you don't plan on using performance standby pods, in which case you would just want vault-active.

I agree that having full control over which services actually get provisioned would be helpful.
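
As an aside, cross-cluster replication also requires each cluster to advertise API and cluster addresses that the peer cluster can actually reach, and the chart's server.ha.config block (raw Vault HCL) is one place to set them. A hedged sketch, with a placeholder hostname standing in for the LB fronting the active node:

server:
  ha:
    enabled: true
    config: |
      # ... existing listener and storage stanzas ...

      # Advertise externally reachable addresses so a peer cluster can
      # reach this cluster's API (8200) and replication/cluster (8201)
      # ports. The hostname below is a placeholder for the LB in front
      # of the active node.
      api_addr     = "https://vault-active.example.com:8200"
      cluster_addr = "https://vault-active.example.com:8201"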

@hashicorp-cla commented Mar 12, 2022

CLA assistant check
All committers have signed the CLA.
