
feat(transport): Connect lazily in the load balanced channel #493

Merged (3 commits) on Nov 18, 2020

Conversation

LukeMathWalker (Contributor) commented:

Motivation

If the first connection to the endpoint in an `Insert` change fails, the service goes into a failed state and the channel is dropped:

[2020-11-17T19:49:49.555562881+00:00] DEBUG: test/19660 on luca-XPS-15-9570: connecting to 127.0.0.124:5000
[2020-11-17T19:49:49.555756890+00:00] DEBUG: test/19660 on luca-XPS-15-9570: updating from discover
[2020-11-17T19:49:49.555879512+00:00] DEBUG: test/19660 on luca-XPS-15-9570: service failed
    error: load balancer discovery error: error trying to connect: tcp connect error: Connection refused (os error 111)

This is inconsistent with the behavior we observe when the connection breaks after having successfully connected at least once - the channel remains open and we try to reconnect without disrupting calls.

Solution

This PR makes the connection step lazy by default in the load balanced channel.
An alternative approach would be to let the caller choose between connecting eagerly (and failing if the first connection fails) or lazily - maybe as an additional parameter on `balance_channel`? Or a separate `balance_channel_lazy`? Or a `lazy` property on the `Endpoint`?
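For illustration, here is a minimal sketch of the behavior this PR aims for, assuming the lazy default; the key `"primary"` and the address are made up for the example:

```rust
use tonic::transport::{Channel, Endpoint};
use tower::discover::Change;

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    // `balance_channel` returns the channel plus a sender for discovery changes.
    let (channel, tx) = Channel::balance_channel(16);

    // Insert an endpoint that may not be reachable yet. With the lazy default
    // from this PR the channel stays alive instead of entering a failed state;
    // the TCP connection is only attempted once a request is actually issued.
    let endpoint = Endpoint::from_static("http://127.0.0.124:5000");
    tx.send(Change::Insert("primary", endpoint)).await?;

    // Build a gRPC client on `channel` and issue calls as usual; if the
    // endpoint comes up later, requests succeed without recreating the channel.
    let _ = channel;
    Ok(())
}
```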

Why are we touching this?

We have been trying to build a client-side load balanced gRPC client on top of tonic's load balanced channel, using a background task that probes DNS on a schedule (a rough sketch follows). We'd be happy to add it as an example to tonic itself (including the test that got us here 😅).
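Purely illustrative and not part of this PR - one way such a background task could look, assuming a current tokio runtime; `probe_dns`, the `SocketAddr` key type, and the 10-second interval are all made up for the example:

```rust
use std::collections::HashSet;
use std::net::SocketAddr;
use std::time::Duration;

use tokio::net::lookup_host;
use tokio::sync::mpsc::Sender;
use tonic::transport::Endpoint;
use tower::discover::Change;

// Re-resolve a DNS name on a schedule and feed Insert/Remove changes into
// the load balanced channel obtained from `Channel::balance_channel`.
async fn probe_dns(host: String, port: u16, tx: Sender<Change<SocketAddr, Endpoint>>) {
    let mut known: HashSet<SocketAddr> = HashSet::new();
    loop {
        if let Ok(addrs) = lookup_host((host.as_str(), port)).await {
            let current: HashSet<SocketAddr> = addrs.collect();
            // Newly resolved addresses become new endpoints.
            for addr in current.difference(&known) {
                if let Ok(endpoint) = Endpoint::from_shared(format!("http://{}", addr)) {
                    let _ = tx.send(Change::Insert(*addr, endpoint)).await;
                }
            }
            // Addresses that disappeared from DNS are removed from the balancer.
            for addr in known.difference(&current) {
                let _ = tx.send(Change::Remove(*addr)).await;
            }
            known = current;
        }
        tokio::time::sleep(Duration::from_secs(10)).await;
    }
}
```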

@LucioFranco (Member) left a comment:

Thanks!
