Creates a custom memcached client to reuse memory #285
Just to clarify, by current memcached client you refer to …
I refer to this one, yes, but it's hidden deep inside Mimir.
It's mostly the items data sent back to users of the library. Take a profile of the store-gateway; it should be one of the top allocators. The author left a TODO there too, if I remember correctly.
gotcha! 👍
Would be awesome to have a way to reuse them when the item is not needed anymore.
Definitely agree. I think this is a point of improvement that could have quite a positive impact. Maybe we can simply add a … WDYT?
That's the spirit! I think the hardest part is the usage. If you go for it, the Prometheus code base has a pool that is bucketed and should work nicely. Maybe the buckets need to be configurable.
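The bucketed pool idea mentioned here can be sketched as follows. This is a minimal illustration of the technique, not the actual Prometheus `pool` package API; all names are hypothetical:

```go
package main

import (
	"fmt"
	"sync"
)

// BucketedPool keeps one sync.Pool per exponentially growing bucket size,
// so Get(sz) returns a slice whose capacity is the smallest bucket >= sz.
type BucketedPool struct {
	buckets []sync.Pool
	sizes   []int
}

// NewBucketedPool creates buckets minSize, minSize*factor, ... up to maxSize.
func NewBucketedPool(minSize, maxSize, factor int) *BucketedPool {
	p := &BucketedPool{}
	for sz := minSize; sz <= maxSize; sz *= factor {
		sz := sz // capture for the closure below
		p.sizes = append(p.sizes, sz)
		p.buckets = append(p.buckets, sync.Pool{
			New: func() interface{} { return make([]byte, 0, sz) },
		})
	}
	return p
}

// Get returns a zero-length slice with at least sz capacity.
func (p *BucketedPool) Get(sz int) []byte {
	for i, bktSize := range p.sizes {
		if sz <= bktSize {
			return p.buckets[i].Get().([]byte)[:0]
		}
	}
	return make([]byte, 0, sz) // larger than the biggest bucket: allocate directly
}

// Put returns a slice to its bucket; oddly sized slices are left to the GC.
func (p *BucketedPool) Put(b []byte) {
	for i, bktSize := range p.sizes {
		if cap(b) == bktSize {
			p.buckets[i].Put(b[:0])
			return
		}
	}
}

func main() {
	pool := NewBucketedPool(64, 1024, 2)
	buf := pool.Get(100) // served from the 128-byte bucket
	fmt.Println(cap(buf) >= 100)
	pool.Put(buf)
}
```

Making the bucket boundaries configurable would just mean exposing `minSize`, `maxSize`, and `factor` in the client's options.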
When you get a chance, take a look at these changes: https://github.com/ortuman/gomemcache/commit/8bb94a59999fc52836308966af8ec058d3d706f0. Also, I've done some benchmarking and seen slightly better results when reusing buffers. I guess the true value of this solution will be more noticeable in the long term, by minimizing the number of GC pauses.
The benchmark is wrong: they both make the slice. I think you should …
The first approach does not implicitly invoke … Anyway, I like the idea of using a bucketed buffer pool instead. This would help to reduce fragmentation, and thus reduce GC interventions. Take a look at ortuman/gomemcache@71beaed139c365c02400465e32bf80614da6247a and tell me how you feel about it. Also, here's a table of preliminary benchmark results:
It's better, but I don't think you need a bytes.Buffer: you can use the slice and re-slice it to the required length.
Oh! You're 100% right. Just changed it @ https://github.com/ortuman/gomemcache/commit/ab4be5bd29b01f1f7687db3b4333890a514ac6a2
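The re-slicing suggestion above can be sketched like this; `resize` is a hypothetical helper for illustration, not part of the library:

```go
package main

import "fmt"

// resize returns a slice of length n, reusing buf's backing array when
// its capacity is large enough, and allocating a fresh slice otherwise.
// This avoids wrapping pooled buffers in a bytes.Buffer.
func resize(buf []byte, n int) []byte {
	if cap(buf) >= n {
		return buf[:n] // reuse the existing backing array, no allocation
	}
	return make([]byte, n) // grow only when needed
}

func main() {
	buf := make([]byte, 0, 1024)
	buf = resize(buf, 100)
	fmt.Println(len(buf), cap(buf)) // length 100, capacity still 1024
}
```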
Thanks @ortuman for working on this! Working on your personal repo is fine for experimenting, but if we'll end up vendoring this in Mimir then please fork …
Sure thing! As you mentioned, I just wanted to experiment on my own repo before doing a formal fork. If this goes on, I'll follow your suggestion. Thanks!
Note Mimir currently uses the https://github.com/themihai/gomemcache fork, in order to add a circuit-breaker (only in query-frontend right now).
👋 Hey, was passing by: while investigating a different issue on dev, I noticed on a profile that a lot of time is spent parsing the response lines. The most obvious improvement is that we could use `n, err := fmt.Fscanf(bytes.NewReader(line), pattern, dest...)` instead of `n, err := fmt.Sscanf(string(line), pattern, dest...)`, to avoid copying every single line just to scan it. Apart from that, I'd consider implementing a custom parser for that line format instead of relying on `fmt`.
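A hand-rolled parser for the memcached `VALUE <key> <flags> <bytes>` response header could look roughly like this sketch, which sidesteps `fmt`'s reflection-based scanning entirely; `parseValueLine` is an illustrative name, not the client's actual code:

```go
package main

import (
	"bytes"
	"fmt"
	"strconv"
)

// parseValueLine parses a memcached text-protocol response header of the
// form "VALUE <key> <flags> <bytes>" without using fmt or copying the line.
func parseValueLine(line []byte) (key string, flags uint32, size int, err error) {
	fields := bytes.Fields(line)
	if len(fields) < 4 || !bytes.Equal(fields[0], []byte("VALUE")) {
		return "", 0, 0, fmt.Errorf("malformed line: %q", line)
	}
	f, err := strconv.ParseUint(string(fields[2]), 10, 32)
	if err != nil {
		return "", 0, 0, err
	}
	sz, err := strconv.Atoi(string(fields[3]))
	if err != nil {
		return "", 0, 0, err
	}
	return string(fields[1]), uint32(f), sz, nil
}

func main() {
	key, flags, size, err := parseValueLine([]byte("VALUE foo 42 1024"))
	fmt.Println(key, flags, size, err)
}
```

`strconv` parses integers directly from the field bytes (one small string conversion each), so the hot path avoids both the `string(line)` copy and `fmt`'s scanning machinery.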
Just for reference, it would be great to have the GATS command implemented too: #718
Some more data: I was checking continuous profiling on dev and realised that the store-gateway spends 12% of its time just parsing the memcached response strings.
I wanted to check the size of this problem, so I grabbed another profile.
So copying the line is ~0.7% of all allocations; probably not the big win. Agreed that a custom parser can perform better.
I coded this up: bradfitz/gomemcache@master...bboreham:faster-scanline
(Note I still copy the …)
Awesome! Since we're using a fork, can you send a PR to the fork? However, the fork hasn't been updated in 4 years, so maybe it's just easier for you to bring in the changes required.
<fx: time passes...> I have created https://github.com/grafana/gomemcache/ with the above change. |
We're now pooling memory in Mimir when reading from the chunks cache, passing a request-scoped pool from the …
The current memcached client always throws away the buffer, which creates a lot of allocations.
It would be great to have a reusable buffer, although this might be difficult since there are a lot of cache interfaces used along the way.
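The reuse pattern the issue asks for can be sketched with a `sync.Pool`: the client fills a pooled buffer instead of allocating a fresh one per item, and the caller returns it when done. A minimal sketch, assuming hypothetical `getItem`/`releaseItem` helpers (not the library's API — handing ownership back is exactly the hard part across the cache interfaces):

```go
package main

import (
	"fmt"
	"sync"
)

// bufPool hands out reusable byte slices so each fetched item does not
// force a fresh allocation that the GC must later reclaim.
var bufPool = sync.Pool{
	New: func() interface{} { return make([]byte, 0, 4096) },
}

// getItem copies a fetched value into a pooled buffer.
func getItem(value []byte) []byte {
	buf := bufPool.Get().([]byte)[:0]
	return append(buf, value...) // fills the pooled buffer instead of allocating
}

// releaseItem must be called by the library's user once the item's bytes
// are no longer referenced; forgetting it just falls back to GC behaviour.
func releaseItem(b []byte) {
	bufPool.Put(b[:0])
}

func main() {
	item := getItem([]byte("cached-value"))
	fmt.Println(string(item))
	releaseItem(item)
}
```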