
Allow configuring onReady threshold #5433

Closed
ghost opened this issue Mar 5, 2019 · 13 comments
Comments

ghost commented Mar 5, 2019

Please answer these questions before submitting your issue.

What version of gRPC are you using?

v1.17.1, though this hasn't changed in later versions

What did you expect to see?

Right now isReady applies backpressure once 32 KiB of data is buffered:

@VisibleForTesting
public static final int DEFAULT_ONREADY_THRESHOLD = 32 * 1024;

However, our use cases involve larger payloads (1-2 MiB per message), so sending even a single message immediately applies backpressure. We'd like to maximize throughput, so we're OK with more memory usage (to a point). In our own benchmarks, sending a couple of messages past the point where isReady flips to false increases throughput by roughly 50%.

While selectively ignoring the backpressure signal is an acceptable workaround, it'd be more convenient to set the threshold directly; that way we could also bound how much data is actually enqueued.

Would it be acceptable to make this setting not static final (or at least not final)? Or is there another way to adjust the backpressure threshold?
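For context, the mechanism under discussion can be modeled with a short sketch (plain Java; the class and method names are illustrative, not gRPC's actual internals): a stream counts queued-but-unsent bytes and reports ready only while that count is below the threshold.

```java
// Illustrative model of the onReady threshold, not gRPC internals.
// A stream is "ready" while fewer than `threshold` bytes are queued.
final class OutboundBuffer {
    private final int threshold;
    private long queuedBytes;

    OutboundBuffer(int threshold) { // e.g. 32 * 1024, the gRPC default
        this.threshold = threshold;
    }

    void enqueue(int messageBytes) {
        queuedBytes += messageBytes;
    }

    void onWrittenToWire(int messageBytes) {
        queuedBytes -= messageBytes;
    }

    // Mirrors the isReady signal: false once the threshold is reached.
    boolean isReady() {
        return queuedBytes < threshold;
    }
}
```

With the default 32 KiB threshold, enqueuing a single 1 MiB message immediately flips isReady to false, which is exactly the situation described above; a larger threshold trades memory for fewer backpressure stalls.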

ejona86 (Member) commented Mar 6, 2019

Would it be acceptable to make this setting not static final (or at least not final)?

No, no, no. That's not how we roll.

The way to do this on client-side would be to add a CallOption that suggests an outbound buffer size. It wouldn't be guaranteed to be followed (for example, InProcess would ignore it), but OkHttp and Netty would normally observe it. On server-side I guess it'd be a method on ServerCall.

I do wonder if there's something a bit smarter we could be doing here, but nothing obvious stands out.
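The per-call hint suggested here can be sketched in plain Java (the key name and plumbing are hypothetical stand-ins, not gRPC's API): the caller attaches an optional outbound-buffer-size hint to the call, and a transport that supports the hint reads it, falling back to the default otherwise.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.Optional;

// Hypothetical per-call options carrying an outbound-buffer-size hint.
final class FakeCallOptions {
    private final Map<String, Object> values = new HashMap<>();

    FakeCallOptions withOnReadyThreshold(int bytes) {
        values.put("onReadyThreshold", bytes);
        return this;
    }

    Optional<Integer> onReadyThreshold() {
        return Optional.ofNullable((Integer) values.get("onReadyThreshold"));
    }
}

final class FakeTransport {
    static final int DEFAULT_ONREADY_THRESHOLD = 32 * 1024;

    // A transport honors the hint if present; others (e.g. in-process)
    // are free to ignore it, as noted in the comment above.
    static int effectiveThreshold(FakeCallOptions options) {
        return options.onReadyThreshold().orElse(DEFAULT_ONREADY_THRESHOLD);
    }
}
```

The key point of this shape is that the hint is advisory: nothing breaks if a transport ignores it, which matches the "wouldn't be guaranteed to be followed" framing above.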

ghost (Author) commented Mar 6, 2019

Okay, that's understandable (and better than setting it globally).

Implementation-wise, it looks like there'd also be a method on Stream analogous to setMessageCompression to provide the hint. It looks reasonably straightforward; would you be open to a contribution, or should we go through the RFC process?

ejona86 (Member) commented Mar 6, 2019

You can just have the ClientTransports read the CallOptions directly, if you'd like. Or you can follow the setMessageCompression pattern, although it will likely be a bit more work.

Let me confirm with some of the other languages that 1) they agree this is a language-specific feature (given how our API works) and 2) they don't have any better ideas. After that, it's fine without a gRFC.

ghost (Author) commented Mar 12, 2019

Wanted to follow up on this - was there any consensus on whether this requires a gRFC? (I understand if you just haven't gotten around to it yet, though.)

ejona86 (Member) commented Mar 12, 2019

I haven't gotten to it yet. It's on my list to do today or tomorrow (as I leave for vacation after that).

ejona86 changed the title from "Expose DEFAULT_ONREADY_THRESHOLD?" to "Allow configuring onReady threshold" on Mar 13, 2019
ejona86 (Member) commented Mar 13, 2019

So, I talked with others and it seems this will be a cross-language thing. That means we'll do a gRFC and such. Go has a similar threshold to Java (they copied Java after hearing how we solved the problem). It seems C might want to do something similar in the future.

It does seem that the "proper" value of the threshold is dependent on the network bandwidth, so there should probably be some auto-tuning in place even if it's unclear how it should work. However, even with auto-tuning we might expose an API to set the size in order to sacrifice performance for the sake of memory/prompt flow control.
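The bandwidth-dependence mentioned above is the classic bandwidth-delay-product argument: to keep the pipe full, the sender should be able to buffer roughly bandwidth × round-trip time. A sketch of how a tuner might derive a threshold from those two measurements (the formula is the standard BDP calculation; the class is illustrative, not an actual gRPC proposal):

```java
// Illustrative bandwidth-delay-product sizing, not a gRPC algorithm.
final class ThresholdTuner {
    static final int MIN_THRESHOLD = 32 * 1024; // gRPC's current default

    // threshold ≈ bandwidth (bytes/s) × RTT (s), floored at the default.
    static long suggestThreshold(long bytesPerSecond, double rttSeconds) {
        long bdp = (long) (bytesPerSecond * rttSeconds);
        return Math.max(MIN_THRESHOLD, bdp);
    }
}
```

For example, a 1 Gbit/s link (about 125 MB/s) with a 20 ms RTT suggests roughly a 2.5 MB buffer, far above the 32 KiB default, which illustrates why a fixed threshold can throttle high-bandwidth senders.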

ejona86 modified the milestones: Unscheduled, Next (Mar 13, 2019)
ghost (Author) commented Mar 13, 2019

Thanks for checking @ejona86. We'll start working on the RFC.

There are already mechanisms to auto-tune the flow control window at least (though it seems you can't enable it in Java without an internal API), so it might be more a question of how to distribute that bandwidth between calls.

ejona86 (Member) commented Mar 13, 2019

The existing auto-tuning is sort of the "opposite" of what we need in two regards: it is tuning for the network, not the local communication, and it is for receiving, not sending.

jduo (Contributor) commented Feb 27, 2024

@dfawley now that grpc/proposal#15 is closed, are you open to external PRs for solving this issue? Or do you prefer to keep this internal?

dfawley (Member) commented Feb 27, 2024

@ejona86 can comment on Java. For Go I am happy to review a PR that implements a configuration knob for this buffer size.

ejona86 (Member) commented Feb 27, 2024

I'd be happy to review a PR that adds this to Java. Client and server APIs can be done in separate PRs or together; either way.

Something similar to what I mentioned in #5433 (comment) and #5433 (comment).

jduo (Contributor) commented Mar 2, 2024

I've started a draft PR based on what's been discussed in this issue (#10977).

jduo (Contributor) commented Mar 22, 2024

This issue can be closed with the completion of #10977.

github-actions bot locked as resolved and limited conversation to collaborators on Jun 21, 2024