Reuse message buffer for streaming #1774
Conversation
Thank you for your pull request. Before we can look at your contribution, we need to ensure all contributors are covered by a Contributor License Agreement. After the following items are addressed, please respond with a new comment here, and the automated system will re-verify.
Regards,
@MakMukhi
Looks good. Can you run our benchmarks (grpc-go/benchmark/benchmain/main.go, Line 24 in 0848a09):
go run benchmark/benchmain/main.go -benchtime=10s -workloads=all \
In particular, run them for streaming with different message sizes, and post the before and after results on the PR.
Also, the build is failing; can you look into that too? Thanks for the help. :)
// This may allocate more memory than needed, but avoids allocating memory every time in the case when
// each message is slightly larger than the previous one.
allocLen *= 2
}
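For context, here is roughly the shape of the loop that doubling line lives in — a minimal sketch under assumed names (`growBuffer` is not the PR's actual function):

```go
// A sketch of the reuse pattern around the doubling line above; the function
// name and exact structure are assumptions for illustration, not the PR's
// code. One scratch buffer is kept per stream and grown geometrically, so a
// run of slightly-growing messages triggers O(log n) allocations instead of
// one allocation per message.
func growBuffer(buf []byte, length int) []byte {
	if cap(buf) >= length {
		// The previous allocation is large enough; reuse it.
		return buf[:length]
	}
	allocLen := cap(buf)
	if allocLen == 0 {
		allocLen = 1
	}
	// This may allocate more memory than needed, but avoids allocating
	// every time when each message is slightly larger than the previous one.
	for allocLen < length {
		allocLen *= 2
	}
	return make([]byte, length, allocLen)
}
```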
How about:
allocLen := int(math.Pow(2, math.Ceil(math.Log2(float64(length)))+1))
@MakMukhi
I did a micro benchmark: using the math functions takes more than 100ns, while multiplying by 2 in a for loop takes less than 10ns, for growing from 64B to 1MB.
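A micro-benchmark along those lines might look like the following; the file layout and benchmark names are illustrative, not the ones actually run:

```go
package grpc_test

import (
	"math"
	"testing"
)

var sink int // stops the compiler from eliminating the computation

// Grow 64B to 1MB by repeated doubling.
func BenchmarkGrowByLoop(b *testing.B) {
	for i := 0; i < b.N; i++ {
		allocLen := 64
		for allocLen < 1<<20 {
			allocLen *= 2
		}
		sink = allocLen
	}
}

// Compute a power-of-two bound for 1MB with the math package instead.
func BenchmarkGrowByMath(b *testing.B) {
	for i := 0; i < b.N; i++ {
		sink = int(math.Pow(2, math.Ceil(math.Log2(float64(1<<20)))+1))
	}
}
```

Run with `go test -bench=Grow` to compare the two strategies on your own machine.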
rpc_util_test.go (outdated)
func TestBufferReuse(t *testing.T) {
	for _, test := range []struct {
		reuse   bool
		sameBuf bool
I don't suppose we need two bools here, one would do.
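For illustration, the single-bool version suggested above could look something like the sketch below; `recvTwoMessages` is a hypothetical helper standing in for the PR's actual receive path:

```go
// A sketch of the single-bool table, with assumed names throughout.
// recvTwoMessages (hypothetical) performs two receives on one stream with
// buffer reuse switched on or off, and returns both payload slices.
func TestBufferReuse(t *testing.T) {
	for _, test := range []struct {
		reuse bool
	}{
		{reuse: true},
		{reuse: false},
	} {
		first, second := recvTwoMessages(t, test.reuse)
		// With reuse enabled (and the second message no larger than the
		// first), both receives should land in the same backing array.
		sameBuf := len(first) > 0 && len(second) > 0 && &first[0] == &second[0]
		if sameBuf != test.reuse {
			t.Errorf("reuse=%v: same backing buffer = %v, want %v",
				test.reuse, sameBuf, test.reuse)
		}
	}
}
```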
@MakMukhi And the failed test is:
It seems unrelated to this PR, and it only fails on a particular Go version.
To focus on the issue I've found, I'm only posting the 1MB-response results for now. The results after:
The results before:
The result is odd: the reuse-buffer branch allocates more than before.
@MakMukhi It has nothing to do with doubling the length. I ran it many times; the 50_latency (median latency) was reduced by about 9%. Here are the results for different response sizes. After:
Before:
FYI: you could also do buffer re-use in the serverStream (server.go) with presumably similar benefit.
@dfawley Since some server applications need to maintain thousands of idle client connections, keeping a buffer per stream would increase memory usage dramatically in that case. So we need to add a server option to disable it.
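As a sketch of the kind of knob being proposed (all names assumed; grpc-go's real ServerOption values are constructed inside the grpc package, so this is only an illustration):

```go
// Illustrative only: a functional server option that would let servers with
// many idle connections opt out of per-stream buffer reuse. The option name
// and the serverOptions field are assumptions, not grpc-go API.
type serverOptions struct {
	reuseRecvBuffer bool // reuse on by default in this sketch
}

// ServerOption mutates serverOptions, mirroring the functional-option style.
type ServerOption func(*serverOptions)

// DisableRecvBufferReuse (hypothetical) trades extra per-message allocations
// for lower idle memory: no large scratch buffer stays pinned per stream.
func DisableRecvBufferReuse() ServerOption {
	return func(o *serverOptions) { o.reuseRecvBuffer = false }
}
```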
@dfawley With only the client-side buffer enabled:
With both the client-side and server-side buffers enabled:
Hey @coocood We recently came up with an idea to get rid of this buffer entirely. Currently, the transport reads data into individual buffers and puts them on
In light of this new plan, I'm going to go ahead and close this PR. We do greatly appreciate the time and effort you've put in.
@MakMukhi
It shouldn't take too long once started. But currently I'm working on a precursor to it and will only be able to get to it a few weeks down the line. Hopefully by the end of this quarter.
On Tue, Feb 6, 2018 at 8:34 PM, Ewan Chou wrote:
@MakMukhi
The new design sounds good.
How long does it take to implement?
For #1455