Tracking Issue: reduce allocations during bulk data transfers #3526
Interesting article on how to reduce allocations using a …
pprof update, running in a branch that includes the above PRs (#3644, #3646, #3648, #3655). Again, the scenario is the transfer of a single 1 GB file on a single stream.

Server: We're down from 540 MB to 128 MB, that's 76%.

Client: We're down from 514 MB to 147 MB, that's 72%.

I believe we can get rid of the allocation in …
Impressive work, Marten!!

#3655 looks fixed.

Do we know the allocation overhead (if any) for opening/closing streams? That would be common in the RPC use case.
I just added a new benchmark test that opens and accepts streams in #3697. Looks like we're allocating almost 4 kB per stream.
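A minimal sketch of how such a per-stream measurement can look with Go's benchmark harness; `openAndAcceptStream` is a hypothetical stand-in for the actual setup in #3697, which requires two connected quic-go endpoints:

```go
package quic_test

import "testing"

// openAndAcceptStream is a hypothetical stand-in for the benchmark body in
// #3697: open a stream on the client connection, accept it on the server,
// and close both ends.
func openAndAcceptStream() {
	// ... elided: needs two connected quic-go endpoints ...
}

func BenchmarkStreamOpenAccept(b *testing.B) {
	b.ReportAllocs() // report B/op and allocs/op, i.e. bytes allocated per stream
	for i := 0; i < b.N; i++ {
		openAndAcceptStream()
	}
}
```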
@zllovesuki If you have any ideas how to optimize that, I'd be happy to review a PR :)
A 1 GB transfer using v0.29.0 currently creates (as measured by pprof's alloc tool, which measures total allocations over the lifetime of the process) about 500 - 600 MB of allocations, both on the sender and on the receiver side.
This is a problem for performance, because allocating memory consumes resources, and more importantly, all of this memory has to be garbage-collected, putting a lot of pressure on the GC.
We need to drastically reduce the amount of allocations. Target is a reduction of an order of magnitude.
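For reference, this is one way such an allocation profile can be captured; a minimal sketch assuming the process exposes the standard net/http/pprof endpoints (the actual measurement setup isn't shown in this issue):

```go
package main

import (
	"log"
	"net/http"
	_ "net/http/pprof" // registers the /debug/pprof/* handlers
)

func main() {
	// Run the transfer while this endpoint is up, then inspect cumulative
	// allocations (what pprof's alloc tool reports) with:
	//   go tool pprof -sample_index=alloc_space http://localhost:6060/debug/pprof/heap
	log.Fatal(http.ListenAndServe("localhost:6060", nil))
}
```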
The worst offenders at the moment seem to be:
- wire.{Extended}Header. This is a struct representing a QUIC packet header.
- ackhandler.Packet structs (used to track sent packets until they're acknowledged / declared lost): reduce allocations of ackhandler.Packet #3525
- wire.AckFrame: use a sync.Pool for ACK frames #3547
- unix.ParseSocketControlMessage. This will require us to either rewrite that code ourselves, or upstream a fix to golang.org/x/sys (both?)
  - Go issue: x/sys/unix: add ParseOneSocketControlMessage golang/go#54714
  - PR: use the new zero-allocation control message parsing function from x/sys #3609
- bytes.Buffer when packing packets. Allocating new bytes.Buffers is responsible for 32 MB of allocations. We can just append to a []byte instead (see the sketch after this list): serialize frames by appending to a byte slice, not to a bytes.Buffer #3530
- bytes.Reader for every packet when parsing frames. This is responsible for 32 MB of allocations on the receiver side: use a single bytes.Reader for frame parsing #3536
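To make the append-based serialization concrete, here is a minimal sketch of the pattern from #3530; `frame` and `pingFrame` are illustrative stand-ins, not quic-go's actual types:

```go
package main

import "fmt"

// frame is a minimal appender interface; quic-go's real frame interface differs.
type frame interface {
	// Append serializes the frame by appending to b and returns the grown slice.
	Append(b []byte) ([]byte, error)
}

// pingFrame is an illustrative stand-in for a real QUIC frame type.
type pingFrame struct{}

func (pingFrame) Append(b []byte) ([]byte, error) {
	return append(b, 0x01), nil // PING frame type byte
}

func main() {
	frames := []frame{pingFrame{}, pingFrame{}}
	buf := make([]byte, 0, 1452) // one reusable packet buffer, no bytes.Buffer per packet
	for _, f := range frames {
		var err error
		if buf, err = f.Append(buf); err != nil {
			panic(err)
		}
	}
	fmt.Printf("packed %d bytes\n", len(buf))
}
```

Because the caller owns the slice, the same packet buffer can be reused across packets, which is what eliminates the per-packet bytes.Buffer allocation.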
Traces: server and client pprof allocation profiles.