store-gateway: Use a pooling memory allocator for loading chunks from memcached #3772
Some initial notes:
Are you thinking we remove the […]? If the former, how many other projects use this client? Do we care about making a breaking change like this?
We should talk to Bryan and other projects using that fork (Mimir currently uses an older commit that doesn't include it). I think that pooling would be much harder to use for the Mimir case; instead, passing the allocator directly to […] would be simpler.
Yeah, let's definitely talk to the other projects using this. I'm not sure if anyone in Grafana is using the pooling behavior yet, so it might be pretty easy to remove without disruption. My $0.02 is that the lifecycle of the memory being used for cache results is very request-oriented, so it makes more sense to have the caller control the lifecycle of the memory instead of needing to return pooled bytes to the memcached client. The client will end up hidden behind many layers of abstraction (dskit wrappers, store-gateway wrappers). Additionally, doing it via passing the allocator (or whatever we call it) allows us to do arena-style memory management.
💯
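The "arena-style memory management" mentioned above could look roughly like the sketch below: all result buffers for one request come from a single slab that the caller resets when the request finishes. Names and sizes here are illustrative assumptions, not Mimir's actual implementation.

```go
package main

import "fmt"

// arena is a rough request-scoped allocator: the caller owns its lifecycle
// and releases every buffer at once via reset().
type arena struct {
	slab []byte
	off  int
}

func newArena(size int) *arena {
	return &arena{slab: make([]byte, size)}
}

// alloc returns n bytes carved from the slab, falling back to a fresh
// heap allocation when the slab is exhausted.
func (a *arena) alloc(n int) []byte {
	if a.off+n > len(a.slab) {
		return make([]byte, n)
	}
	b := a.slab[a.off : a.off+n : a.off+n]
	a.off += n
	return b
}

// reset makes the whole slab reusable for the next request.
func (a *arena) reset() { a.off = 0 }

func main() {
	a := newArena(64)
	b1 := a.alloc(16)
	b2 := a.alloc(16)
	copy(b1, "first")
	copy(b2, "second")
	fmt.Println(a.off) // 32
	a.reset()
	fmt.Println(a.off) // 0
}
```

Because the arena is per-request, nothing ever has to be handed back to the memcached client, which is the point made in the comment above.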
This change adds the ability to pass a variable number of Option callbacks to read cache calls. Currently, this allows the underlying cache client to make use of a caller supplied memory allocator. Only the memcached client does so at the moment. See grafana/mimir#3772 Signed-off-by: Nick Pillitteri <[email protected]>
This change adds the ability to pass a memory allocator to caching methods via their context argument. This allows the underlying client to use memory allocators better suited to their workloads than GC. This is only used by the Memcached client at the moment. See grafana/mimir#3772 Signed-off-by: Nick Pillitteri <[email protected]>
This change adds the ability to pass a memory allocator to caching methods using one or more Option arguments. This allows the underlying client to use memory allocators better suited to their workloads than GC. This is only used by the Memcached client at the moment. See grafana/mimir#3772 Signed-off-by: Nick Pillitteri <[email protected]>
This commit of dskit adds the ability to pass per-call options to cache calls. This allows callers to use specific memory allocators better suited to their workloads than GC. See #3772 Signed-off-by: Nick Pillitteri <[email protected]>
This commit of dskit adds the ability to pass per-call options to cache calls. This allows callers to use specific memory allocators better suited to their workloads than GC. See #3772 Signed-off-by: Nick Pillitteri <[email protected]>
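The per-call `Option` approach the commit messages describe is the standard Go functional-options pattern. A minimal sketch, assuming hypothetical names (`WithAllocator`, `Fetch`) that may differ from the real dskit API:

```go
package main

import "fmt"

// Allocator returns a buffer of at least the requested size. The caller
// supplies one per call and controls the memory's lifecycle.
type Allocator func(size int) []byte

type options struct {
	alloc Allocator
}

// Option configures a single cache call.
type Option func(*options)

// WithAllocator lets the caller supply memory for results.
func WithAllocator(a Allocator) Option {
	return func(o *options) { o.alloc = a }
}

// Fetch sketches how a cache client would consume per-call options,
// falling back to plain GC-managed allocation when no allocator is given.
func Fetch(keys []string, opts ...Option) map[string][]byte {
	cfg := options{alloc: func(n int) []byte { return make([]byte, n) }}
	for _, o := range opts {
		o(&cfg)
	}
	results := make(map[string][]byte, len(keys))
	for _, k := range keys {
		buf := cfg.alloc(len(k))
		copy(buf, k) // stand-in for the value read from memcached
		results[k] = buf
	}
	return results
}

func main() {
	// Caller-supplied allocator: here just a counter to show it was used.
	calls := 0
	res := Fetch([]string{"chunk-1", "chunk-2"}, WithAllocator(func(n int) []byte {
		calls++
		return make([]byte, n)
	}))
	fmt.Println(len(res), calls) // 2 2
}
```

Variadic options keep the existing call sites compiling unchanged, which is why this shape is friendlier than changing method signatures outright.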
Store-gateways spend a significant portion of time reading data from memcached. Due to the way the memcached client works, each read requires the client to allocate enough space for the data being read. This memory is then thrown away after the request finishes. This creates a lot of garbage and extra work for memory that is only briefly used.
Modeling an approach based on the work done in #3756, we should attempt to pool memory used for reading from the chunks cache to reduce garbage generated and improve request speed.
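The pooling idea can be sketched with `sync.Pool`: reuse result buffers across reads instead of allocating a fresh slice per memcached response. The names and the 16 KiB default size below are illustrative assumptions, not the actual Mimir code.

```go
package main

import (
	"fmt"
	"sync"
)

// bufPool holds reusable result buffers. Pointers to slices are pooled to
// avoid an extra allocation when storing the slice header in the pool.
var bufPool = sync.Pool{
	New: func() any {
		b := make([]byte, 0, 16*1024) // assumed typical chunk size
		return &b
	},
}

// readValue simulates reading a value of length n from memcached into a
// pooled buffer. The caller must return the buffer with putBuf when done.
func readValue(n int) *[]byte {
	bp := bufPool.Get().(*[]byte)
	if cap(*bp) < n {
		*bp = make([]byte, 0, n) // grow when the pooled buffer is too small
	}
	*bp = (*bp)[:n]
	return bp
}

// putBuf resets the buffer's length and returns it to the pool.
func putBuf(bp *[]byte) {
	*bp = (*bp)[:0]
	bufPool.Put(bp)
}

func main() {
	bp := readValue(1024)
	fmt.Println(len(*bp), cap(*bp) >= 1024) // 1024 true
	putBuf(bp)
}
```

The trade-off discussed in the comments below is that with a pool the buffer must eventually be handed back to the client, whereas a caller-supplied allocator keeps the lifecycle entirely on the request path.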
From a dev cell, ~1/3 of memory allocated is by the memcached client:
![Screenshot 2022-12-19 at 13-28-38 Explore - Phlare (fire-dev-001) - Grafana](https://user-images.githubusercontent.com/1127373/208495338-14f3bf62-efac-45cc-b016-5a636bc19476.png)
Tasks

grafana/gomemcache
- Allow result memory to be allocated from per-call `Allocator` (gomemcache#8)

grafana/dskit
- Use Grafana Memcached client fork (grafana/gomemcache) instead of upstream (dskit#248)
- Wire the per-call allocator through `Cache` and memcached