Adding fetch_multi to Cache Strategy #827
As pointed out by @thibaudgg on my last PR #810 (Adding Fragment Cache to AMS), `fetch_multi` would be a great optimisation for the new cache implementation (#693).

One feature of the new implementation is the individual cache strategy: it enables your application to re-use the cache across different responses. For example, an object cached during an `index` response can be re-used in a `show` response, and vice versa. AMS will also retrieve these cached objects individually, and that's where `fetch_multi` comes in; it improves performance by retrieving multiple cache keys for a collection in a single call.

Although it sounds like a simple improvement, it isn't easy: it will change how the `Adapter` fetches the cache and how it integrates with `FragmentCache`.

I'm planning to work on this, but would like to hear some thoughts :)
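For a rough idea of the mechanics, here is a minimal sketch of serializing a collection with `Rails.cache.fetch_multi` (available since Rails 4.1). This is not AMS's actual implementation; `serialize_collection`, `PostSerializer`, and the key scheme are illustrative assumptions:

```ruby
# A minimal sketch of the idea (not AMS's actual implementation).
# Assumes a Rails app (Rails >= 4.1, where the cache store supports
# fetch_multi) and a hypothetical PostSerializer.

def cache_key_for(post)
  # Hypothetical key scheme: id plus updated_at timestamp, so the
  # entry is naturally invalidated whenever the record changes.
  "posts/#{post.id}-#{post.updated_at.to_i}"
end

def serialize_collection(posts)
  by_key = posts.index_by { |post| cache_key_for(post) }

  # One round trip to the store (e.g. a Redis MGET) instead of one
  # read per record. The block runs only for keys that were missing
  # from the cache; its return value is written back to the cache.
  cached = Rails.cache.fetch_multi(*by_key.keys) do |key|
    PostSerializer.new(by_key[key]).as_json
  end

  # fetch_multi returns a hash keyed by cache key.
  by_key.keys.map { |key| cached[key] }
end
```

The single round trip is what makes this attractive for Redis-backed caches, where per-key reads otherwise dominate response time.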
Comments

Great news, thanks @joaomdmoura!

@thibaudgg I don't think it's mandatory (we are already running some production tests), but yeah, it's definitely a great optimisation! Really useful!

@joaomdmoura you're right, "mandatory" is too strong, but it's definitely great to have.

Yeah @thibaudgg, we discussed it yesterday and we'll move forward with this strategy! 😄 I'm not sure if

Great, keep up the good work!

The related thoughtbot article is pretty good, even though it targets 0.8: https://robots.thoughtbot.com/fast-json-apis-in-rails-with-key-based-caches-and

Any updates on this yet? I noticed that my controller spends most of its time talking to Redis to fetch the records. This would be a great improvement!

Wanna help?