Feature: implement caching mechanism #6267
I like the idea, but I would be curious as to what @particlebanana thinks. Adding a cache layer is something that is not necessarily needed, in my opinion, since it can easily be done above the ORM in an application. This is just my two cents on the matter.
I've thought about adding caching, and it's on the roadmap somewhere after associations and transactions are wrapped up. I think it could belong in the ORM if it's a simple, lightweight cache layer; if you need anything more complicated, it should probably live at a layer above the ORM. Because we have this concept of adapters, it should be some kind of pluggable interface where any adapter can be used as a cache adapter.

I would prefer you not have to specify a getter/setter for the cache object. You should be able to just specify a cache adapter somewhere, and all find queries will use it unless otherwise specified. Behind the scenes it could work the way you described, using the hash of the options as a key, but the user-facing API shouldn't change. I'd gladly accept some ideas on what the API would look like and how it would work behind the scenes. If anyone wants to try and tackle it, that would be awesome. I think we should all agree on what it would look like before any code is written, though.
This is why I thought it would be good to ask. :) I think what you've proposed is a good idea (global cache settings), however I could see some cases where it wouldn't be ideal: you may want to handle caching differently for different models, so it would be necessary to have per-model configurations available for things like cache expiration. I'm very much in favour of the adapter approach as long as it doesn't limit verbosity too much - caching is about optimization and as such should be tuneable without too much trouble.
@lwansbrough still interested in jumping in on this? It's something we'll be able to help on after v0.10 is stable (built-in transactions support will probably come first, though). But in the meantime, we could ideate on it, and even get started.
I may be able to contribute some time to this, sure.
Having long experience with Yii, I can describe how it works in that framework. We could allow a config/cache.js with several cache adapters, one being the default. That would cache to the default adapter and expire after 60000 milliseconds. Yii also has Cache Dependencies, which let you add several dependencies on when to expire the cache, based on DB, memory, expressions, etc. Here is Yii's way of doing that: http://www.yiiframework.com/doc/guide/1.1/en/caching.overview Specifically, check 2. Query Caching, here: http://www.yiiframework.com/doc/guide/1.1/en/caching.data
Caching could be done at the policy layer instead of globally. In the policies config, requests could be routed through a "cache" policy, and customized per-controller if needed.
@tjwebb do you have any code up of this setup? I'm looking to throw a cache layer in front of my Models, and this sounds like a good route.

I would really recommend not implementing your cache at the request level. If you're going to do that, you may as well use the cache built into nginx or something similar, unless you've got some sort of authentication system in place. Even then, putting the caching interface at the request level inside the application seems like a somewhat bad idea. If you're in need of a quick fix, sure. Cache should be implemented at the data-manager or ORM-query level.

@lwansbrough currently I'm implementing it at the controller level using a service, but it does seem a little cumbersome. I don't have quite enough experience to extend Waterline to include a cache layer at the ORM level.

Neither. :( It'd be one of the bigger features to develop, I think. I've been wanting to take a look at this for a while, but I've been too busy with the actual work that needs to be done to worry about the framework itself.

Yea, I love nginx cache, but mix session-based auth in and it might not make sense. Sails already supports redis as a session cache; I think leveraging this to extend it to work as a model cache is a sensible approach. Manual caching is just sometimes necessary, and it's better to implement it as a framework feature than as ad-hoc bandaids whenever the need arises. Postgres, for example, relies heavily on the OS page cache, meaning that you're sharing this cache with every other application on the system.
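A sketch of the redis-style model cache with expiry hinted at above. To keep the example self-contained, a Map stands in for a real redis client (which would give the same behavior via SETEX/TTL); the class and method names are assumptions:

```javascript
// Model cache with per-entry expiry, mimicking redis TTL semantics.
class ModelCache {
  constructor(ttlMs) {
    this.ttlMs = ttlMs;
    this.store = new Map(); // key -> { value, expiresAt }
  }
  set(key, value) {
    this.store.set(key, { value, expiresAt: Date.now() + this.ttlMs });
  }
  get(key) {
    const entry = this.store.get(key);
    if (!entry) return undefined;
    if (Date.now() > entry.expiresAt) { // expired: behave like redis TTL
      this.store.delete(key);
      return undefined;
    }
    return entry.value;
  }
}

const cache = new ModelCache(60000);
cache.set('user:1', { id: 1, name: 'lwansbrough' });
console.log(cache.get('user:1').name); // 'lwansbrough'
```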
Been thinking about it. Here's my brain-dump on the topic. I think this project would be high-value for the amount of time it will take to implement. I'd suggest starting by implementing a write-around cache, probably at the collection level. Junction tables can inherit caching from their dominant collection. One option is to create the idea of a "cache adapter" which implements certain methods that enable caching when composed with another adapter, i.e. taking advantage of the already-existing-and-sort-of-working syntax. A look at sequelize's (in-progress) solution may also be worthwhile. Perhaps we could use https://www.npmjs.com/package/json-stable-stringify to help create a deterministic hash of the query options.
I'd be open to using redis as an optional caching layer: you should take a look.
Where do you see waterline-cursor benefiting from this? Doesn't that module do the in-memory joins? (Perhaps I'm misunderstanding something!) I'm thinking about query/result caching. I guess it could benefit waterline-cursor in the sense that waterline-cursor may not have to be used as often!

Yea, e.g. what if you wanted to do a join via waterline-cursor? I like the general idea of a key/value adapter spec.
Would be awesome.

+1 for @stuk88's idea.

+1 as well to adding a cache.

Though it won't help much in pushing things forward, still... 👍
Thanks for posting, @lwansbrough. I'm a repo bot-- nice to meet you! It has been 30 days since there have been any updates or new comments on this page. If this issue has been resolved, feel free to disregard the rest of this message. On the other hand, if you are still waiting on a patch, please:
Thanks so much for your help!
Can anyone reopen this one? It's a Feature issue... I guess Feature issues should never be auto-closed?
+1, please reopen.
This is linked to on the Roadmap. We are trying to keep issues as only issues and track features using the Roadmap. Closing this doesn't mean it's no longer valid. Caching is very cool and something we will keep in mind as features progress.
This is now available in offshore 0.0.4; documentation can be found here.
How do you integrate this into a Sails installation?
Hey @particlebanana et al, I'm thinking it would be very useful to provide a mechanism for specifying caching methods. Personally, I think this functionality belongs within Waterline, due to the importance of the way in which a cached object's key is generated. I was considering doing this outside of Waterline, but then I wouldn't have access to which properties were being populated, etc., in order to build a corresponding key. Here's what I'm thinking:
Specifying a cache for the query would look something like this:
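The original code blocks did not survive extraction. As a hedged guess at the shape being proposed, the per-query option might look like the following; the `.cache()` method, its options, and the tiny stub query builder are all assumptions, not real Waterline API:

```javascript
// Minimal chainable query-builder stub, just to make the proposed
// user-facing shape concrete: find(...).populate(...).cache(...).exec(...)
function find(criteria) {
  const q = { criteria, joins: [], cacheOpts: null };
  return {
    populate(assoc) { q.joins.push(assoc); return this; },
    cache(opts) { q.cacheOpts = opts; return this; },
    exec(cb) { cb(null, q); } // a real ORM would run (or skip) the query here
  };
}

find({ name: 'lwansbrough' })
  .populate('posts')
  .cache({ adapter: 'redis', ttl: 60000 }) // hypothetical per-query setting
  .exec(function (err, query) {
    console.log(query.cacheOpts.ttl); // 60000
  });
```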
The key would be a hashed representation of the normalized criteria/joins/options etc., with the value of course being the model.
remove and update could also be specified for User.update and User.remove queries.

I will probably end up doing this anyway, but I want to request comments in case you've got a better solution for this.
Cheers