x/net/http2: frameScratchBuffer can consume lots of memory for idle conns #38049
Every new connection has its own set of scratch buffers, which can total at most 4 × 512 KB.
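For context, here is a simplified sketch of the per-connection caching being described, based on the behavior stated in this thread. It is illustrative only, not the upstream x/net/http2 code, although the names mirror the ones used in the transport:

```go
package http2sketch

import "sync"

const maxAllocFrameSize = 512 << 10 // each scratch buffer is capped at 512 KiB

type clientConn struct {
	mu           sync.Mutex
	maxFrameSize uint32   // peer's advertised SETTINGS_MAX_FRAME_SIZE
	freeBuf      [][]byte // up to 4 cached buffers, kept for the connection's lifetime
}

// frameScratchBuffer returns a buffer sized to the smaller of the peer's max
// frame size and 512 KiB, reusing a cached buffer when one is large enough.
func (cc *clientConn) frameScratchBuffer() []byte {
	cc.mu.Lock()
	defer cc.mu.Unlock()
	size := int(cc.maxFrameSize)
	if size > maxAllocFrameSize {
		size = maxAllocFrameSize
	}
	for i, buf := range cc.freeBuf {
		if len(buf) >= size {
			cc.freeBuf[i] = nil
			return buf[:size]
		}
	}
	return make([]byte, size)
}

// putFrameScratchBuffer keeps at most four buffers per connection; cached
// buffers are only released when the connection itself is closed.
func (cc *clientConn) putFrameScratchBuffer(buf []byte) {
	const maxBufs = 4
	cc.mu.Lock()
	defer cc.mu.Unlock()
	for i, b := range cc.freeBuf {
		if b == nil {
			cc.freeBuf[i] = buf
			return
		}
	}
	if len(cc.freeBuf) < maxBufs {
		cc.freeBuf = append(cc.freeBuf, buf)
	}
	// Otherwise the buffer is dropped and left to the garbage collector.
}
```

Because `freeBuf` belongs to the connection, a process that holds many mostly-idle connections keeps all of those buffers alive until the connections themselves close, which is what the 4 × 512 KB per connection figure refers to.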
OK, so if I understand correctly, the maximum memory should be around 2 MB × the number of clients? I will run some tests to check that. Is it possible to change this buffer to 2 × 512 KB, for example?
Sorry, it's hard-coded.
Will await tests from @zoic21.
Given that go-http-tunnel has vendored the http2 code, you could experiment and adjust the value. I would, however, be curious how it affects throughput as well as memory.
I already changed that to the latest version of net/http2; here is my go.mod:

Same result.
We were leaking memory steadily in `golang.org/x/net/http2.(*ClientConn).frameScratchBuffer` every time we retrieved a new set of credentials. I believe this was because we were creating a new `*vault.Config` every time we retrieved credentials from vault so that we would pick up any new changes to `VAULT_CACERT`. This had the unfortunate side effect of creating, amongst other things, a new pool of connections for each new config. Each connection has a set of 4 scratch buffers with a maximum size of 512KB each. I believe that by creating new connection pools we were proliferating these buffers. See: golang/go#38049. I haven't investigated and confirmed this theory fully, but in any case, creating one `*vault.Config` and simply reading config values from the environment with `config.ReadEnvironment()` stops the leak. I've also added the UW operational endpoint, which includes pprof, so that we can profile these kinds of issues live in the future.
We were leaking memory steadily in `golang.org/x/net/http2.(*ClientConn).frameScratchBuffer` every time we retrieved a new set of credentials. I believe this is because we were never closing the response body when we were retrieving a lease from vault. From my testing, it seems that the rate at which memory was leaking was also exacerbated by the fact that we were creating a new `*vault.Config` every time we retrieved credentials from vault. Presumably this was because we were creating a new pool of connections (and therefore a new set of scratch buffers) for each config, although I haven't fully validated this hypothesis. See: golang/go#38049. Ensuring that the response body is closed after decoding the json from it and using one `*vault.Config` seems to stop the leak. I've also added the UW operational endpoint so that we can profile these issues live in future.
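For anyone hitting the same symptom, the fix described above comes down to the standard Go pattern of always closing the response body. A minimal sketch under assumed names (the `lease` type, its JSON tag, and the URL are placeholders, not the actual Vault client code):

```go
package main

import (
	"encoding/json"
	"fmt"
	"log"
	"net/http"
)

// lease is a placeholder for whatever structure the JSON response decodes into.
type lease struct {
	ID string `json:"lease_id"`
}

// fetchLease makes one request and always closes the response body; leaving
// bodies open pins the HTTP/2 stream and its connection resources.
func fetchLease(client *http.Client, url string) (*lease, error) {
	resp, err := client.Get(url)
	if err != nil {
		return nil, err
	}
	defer resp.Body.Close() // closed even on the error paths below

	if resp.StatusCode != http.StatusOK {
		return nil, fmt.Errorf("unexpected status: %s", resp.Status)
	}
	var l lease
	if err := json.NewDecoder(resp.Body).Decode(&l); err != nil {
		return nil, err
	}
	return &l, nil
}

func main() {
	// Placeholder URL; the real code talks to Vault's API instead.
	l, err := fetchLease(http.DefaultClient, "https://vault.example.com/v1/sys/leases/lookup")
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(l.ID)
}
```

Deferring `resp.Body.Close()` right after the error check ensures the stream is released even when decoding fails.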
The graph in #38049 (comment) looks like what I'd expect from this implementation: initial growth as buffers are allocated and retained, eventually leveling off once every conn has allocated all its buffers. So I don't believe there is a memory leak here. (If you leak `ClientConn`s themselves, that's a different problem.) The per-`ClientConn` buffer cache does seem wasteful for connections that are idle, though. The variable buffer size is something else that might be worth looking at: the buffer is sized to the smaller of the peer's maximum frame size and 512 KB, so a server that advertises a large frame size makes every buffer the full 512 KB. There are a few different approaches we could take and I don't have a good sense yet for which one I favor, but the current state, where idle connections can each retain up to 2 MB of scratch buffers indefinitely, is not great.
In the current HTTP/2 server implementation, the default maximum read frame size advertised by the server is 1 MiB, which is larger than the client's 512 KiB buffer cap. As a result, the frame buffer size is always 512 KiB on the client side, which means the total buffer size per client connection can reach 2 MiB (there can be 4 cached buffers). This is not very friendly to devices with small amounts of RAM. Can we make the two constants (the 512 KiB cap and the number of cached buffers) configurable rather than hard-coded?
I think we should just have reasonable behavior here without requiring users to fiddle with tuning knobs to avoid wasting a bunch of idle memory. This probably means having a single shared buffer pool rather than maintaining a per-`ClientConn` cache.
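A minimal sketch of the direction being suggested: one process-wide `sync.Pool` shared by all connections instead of a per-`ClientConn` cache. This is an illustration of the idea, not the code from the CL mentioned below, and the function names are made up:

```go
package http2sketch

import "sync"

const maxAllocFrameSize = 512 << 10 // 512 KiB, the same cap as before

// One pool shared by every connection in the process, so idle connections
// no longer pin their own buffers; the GC may reclaim pooled buffers when
// memory is tight.
var scratchBufPool = sync.Pool{
	New: func() interface{} {
		b := make([]byte, maxAllocFrameSize)
		return &b
	},
}

// getScratchBuffer borrows a buffer from the shared pool, sliced to the
// requested frame size (capped at 512 KiB), and returns a function that
// hands the buffer back to the pool.
func getScratchBuffer(size int) (buf []byte, put func()) {
	if size > maxAllocFrameSize {
		size = maxAllocFrameSize
	}
	bp := scratchBufPool.Get().(*[]byte)
	return (*bp)[:size], func() { scratchBufPool.Put(bp) }
}
```

A caller would do `buf, put := getScratchBuffer(n)` and call `put()` once the frame has been written, so buffers stay pinned only while a request is actually in flight rather than for the lifetime of a connection.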
Change https://golang.org/cl/325169 mentions this issue:
I believe there is a memory leak somewhere in the HTTP/2 client: my staging boxes run out of memory after 2 hours, with frameScratchBuffer amassing 4 GB+ of memory. I have tried various things and failed to see what's going on.
Use a sync.Pool, not per ClientConn.

Co-authored with [email protected]

Fixes golang/go#38049

Change-Id: I93f06b19857ab495ffcf15d7ed2aaa2a6cb31515
Reviewed-on: https://go-review.googlesource.com/c/net/+/325169
Run-TryBot: Brad Fitzpatrick <[email protected]>
Trust: Brad Fitzpatrick <[email protected]>
Reviewed-by: Damien Neil <[email protected]>
TryBot-Result: Go Bot <[email protected]>
What version of Go are you using (`go version`)?

Does this issue reproduce with the latest release?

Yes

What operating system and processor architecture are you using (`go env`)?

What did you do?
To begin with, I'm new to the Go community, so maybe the issue is on my side... I use the project https://github.com/mmatczuk/go-http-tunnel to build an HTTP/2 reverse proxy. I tested with about 700 connected clients (but I get the same issue with 0). Memory grows with every request; after some analysis with pprof I got:
And deeper:

The same request a few minutes later shows 517.31 MB (I am continuously making requests to the HTTP/2 server).
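For anyone who wants to reproduce this kind of analysis, here is a hedged sketch of exposing the profiling endpoints in a long-running Go process (the listen address is a placeholder):

```go
package main

import (
	"log"
	"net/http"
	_ "net/http/pprof" // registers the /debug/pprof/* handlers on http.DefaultServeMux
)

func main() {
	// Serve the profiling endpoints on a side port (placeholder address).
	go func() {
		log.Println(http.ListenAndServe("localhost:6060", nil))
	}()

	// ... the HTTP/2 reverse proxy would run here ...
	select {}
}
```

The heap profile can then be pulled with `go tool pprof http://localhost:6060/debug/pprof/heap`, which is where allocations retained by `frameScratchBuffer` show up.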
What did you expect to see?
Memory usage stabilizing over time.
What did you see instead?
Memory grows without stopping
I am just discovering the Go universe, and I hope this is a bad analysis on my part, but if someone could help me it would be much appreciated.

Thanks in advance.