add central configuration for Apollo Client cache sizes #11408
Conversation
size-limit report 📦
I think this is generally really great! Since this will be a public API, I had a few suggestions for future-proofing this a bit more, but otherwise I like the approach you've taken here. Really cool!
}

/**
 * The cache sizes used by various Apollo Client caches.
Suggested change:
- * The cache sizes used by various Apollo Client caches.
+ * The cache sizes used by various internal Apollo Client caches. These internal
+ * caches are used to speed up potentially expensive, repeated operations.
It might make sense to add the `internal` modifier here so we don't confuse those that may think this is related to the `InMemoryCache`. I also think it might make sense just to go into a bit more detail on why they exist. It adds some nice context for the rest of the description down below, especially the paragraph that begins with "As a result, these cache sizes..."
 * Note that these caches are all derivative and if an item is cache-collected,
 * it's not the end of the world - the cached item will just be recalculated.
Suggested change:
- * Note that these caches are all derivative and if an item is cache-collected,
- * it's not the end of the world - the cached item will just be recalculated.
+ * Note that these caches are all derivative so if an item is cache-collected,
+ * the cached item will be recalculated.
Adding "its not the end of the world" makes it sound like a bad thing that shouldn't happen. I think removing that statement still makes this make sense, so I'd opt just to leave it out.
 * This method is called from `transformDocument`, which is called from
 * `QueryManager` with a user-provided DocumentNode.
 * It is also called with already-transformed DocumentNodes, assuming the
 * user provided additional transforms.
Suggested change:
- * This method is called from `transformDocument`, which is called from
- * `QueryManager` with a user-provided DocumentNode.
- * It is also called with already-transformed DocumentNodes, assuming the
- * user provided additional transforms.
+ * This cache holds the results from transformed DocumentNodes to prevent
+ * unnecessary recomputation. Document transforms are used whenever a query
+ * is executed in the client.
Again, I would try and avoid some of the internal mechanics of where it's called and instead provide a broader overview here. `DocumentTransform`s are also called in `ObservableQuery`, but I don't think we need to provide a reference to every location it could be used (and who knows, we may add to it in the future, so I'd hate to keep this comment up-to-date).
It's difficult - here I provide this much detail because this cache is not a DocumentTransform cache and should not be confused with those.
It's also important to distinguish here that they are called with a user-provided DocumentNode (so they require a smaller cache size than DocumentTransforms would).
It's less important for cache size what the result is, and much more important what the input/cache key is going to be.
 * The cache size here should be chosen with other DocumentTransforms in mind.
 * For example, if there was a DocumentTransform that would take `n` DocumentNodes,
 * and returned a differently-transformed DocumentNode depending if the app is
 * online or offline, then we assume that the cache returns `2*n` documents.
 *
 * No user-provided DocumentNode will actually be "the last one", as we run the
 * `defaultDocumentTransform` before *and* after the user-provided transforms.
 *
 * So if we assume that the user-provided transforms receive `n` documents and
 * return `n` documents, the cache size should be `2*n`.
 *
 * If we assume that the user-provided transforms receive `n` documents and
 * return `2*n` documents, the cache size should be `3*n`.
 *
 * This size should also then be used in every other cache that mentions that
 * it operates on a "transformed" DocumentNode.
I wonder if we should word this more towards the "when you `.concat`/`.split` transforms together" type approach. I think what you have in this section has some really good information and consideration, but I think it's a bit difficult to understand what some of it means without knowing the behind-the-scenes implementation.
For example, `defaultDocumentTransform` is not mentioned anywhere in the client API or in our docs. Unless you knew how we use your custom transform in `QueryManager`, that statement makes no sense (i.e. "where did `defaultDocumentTransform` come from?").
I think you could generalize some of this, while simultaneously keeping some of this useful information by saying something like this:
The cache size here should be chosen with other DocumentTransforms in mind.
Document transforms can be combined through concatenation, where each
transform will process the document node and pass it to the next transform
in the chain. This means that a single document node may pass through
several DocumentTransforms, each one caching its result along the way.
// additional math stuff you have up here
Don't feel like you need to take exactly what I've worded here, but I'm just trying to get at the fact that I'd love to talk about this more from the perspective of the public API (`.concat`, etc.) and try and reduce the amount we talk about the internal implementation (i.e. `defaultDocumentTransform`). A link to the public docs page might also be useful here if you want to provide additional context on some of the stuff going on here.
 * recommended to set this to a high value.
 */
executeSubSelectedArray: number;
}
I'll avoid too many comments on each of these tsdocs, but hopefully you sense a theme in general. Since this will be a public API, I'd love to try and generalize some of these so that users can understand the purposes of each without needing to have explicit knowledge of the implementation details and specific APIs used behind the scenes.
Yeah, it's really a difficult line to draw here - the implementation details are not too important, but it is important what the input to these caches will be, and in which sequence data will traverse from one to the next - having too small a cache size in an "early" cache will artificially blow up caches down the line.
Let's talk all of that through in person later, maybe we'll find a better way of communicating that :)
 * You can directly modify this object, but any modification will
 * only have an effect on caches that are created after the modification.
 *
 * So for global caches, such as `parser`, `canonicalStringify` and `print`,
 * you might need to call `.reset` on them, which will essentially re-create them.
 *
 * Alternatively, you can set `globalThis[Symbol.for("apollo.cacheSize")]` before
 * you load the Apollo Client package:
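A minimal sketch of the `globalThis` approach the comment describes (the exact shape of the override object and the key names are assumptions for illustration, based on the global caches named above):

```ts
// This must run before the Apollo Client package is evaluated, e.g. in a
// small module that your application imports first.
// Key names (`print`, `parser`) are illustrative assumptions only.
(globalThis as any)[Symbol.for("apollo.cacheSize")] = {
  print: 5_000,
  parser: 2_000,
};
```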
💯
Co-authored-by: Jerel Miller <[email protected]>
"ae-internal-missing-underscore": { | ||
"logLevel": "none", | ||
"addToApiReportFile": false | ||
}, | ||
|
This suppresses this warning:
// Warning: (ae-internal-missing-underscore) The name "AutoCleanedStrongCache" should be prefixed with an underscore because the declaration is marked as @internal
I assume we don't want to prefix all our internal APIs with `_` :)
 * encountered, but rather to hold a reasonable number of values that can be
 * assumed to be on the screen at any given time.
 *
 * We assume a "base value" of 1000 here, which is already very generous.
Suggested change:
- * We assume a "base value" of 1000 here, which is already very generous.
+ * We assume a "base value" of 1000 user-supplied Document Nodes here, which is already very generous.
🎉
Getting this merged and will do more comment tweaking in #11415
This PR adds a central configuration object for cache size configuration to Apollo Client.
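As a rough sketch of what using that configuration object might look like (the `cacheSizes` export name, the import path, and the `print` size key are assumptions for illustration, not the final API; the `.reset` mechanism is the one mentioned in the doc comments above):

```ts
import { cacheSizes, print } from "@apollo/client/utilities";

// Changing a size only affects caches created after the modification...
cacheSizes.print = 5_000;

// ...so global caches such as `print` may need an explicit reset to pick up
// the new size (resetting essentially re-creates the cache).
print.reset();
```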
Most of this is explained in very long comments in the code, but I also made this diagram (which will need a lot of zooming, sorry!), so I'll add it at the end of this comment:
Open Questions:

What should we do with `resultCacheMaxSize` (added in add resultCacheMaxSize #8107 for historical context)? Currently it governs `executeSelectionSet`, `executeSubSelectedArray` and `maybeBroadcastWatch`, which are some of the biggest caches, and has no maximum value. (I decided to mark `resultCacheMaxSize` as `@deprecated` and add three more options for these caches, but always give `resultCacheMaxSize` precedence if it is specified.)
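A hypothetical sketch of that precedence rule (the interface, option names, and default value are illustrative only, not the PR's actual implementation):

```ts
// Sketch: the deprecated option, when explicitly set, wins over the new
// per-cache options; otherwise the new options apply, then a default.
interface ResultCacheOptions {
  /** @deprecated prefer the individual cache size options below */
  resultCacheMaxSize?: number;
  executeSelectionSet?: number;
  executeSubSelectedArray?: number;
  maybeBroadcastWatch?: number;
}

type ResultCacheKey =
  | "executeSelectionSet"
  | "executeSubSelectedArray"
  | "maybeBroadcastWatch";

function resolveCacheSize(
  options: ResultCacheOptions,
  key: ResultCacheKey,
  defaultSize: number
): number {
  return options.resultCacheMaxSize ?? options[key] ?? defaultSize;
}
```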
Real-world values

Here are some example sizes that I got by plugging this into the Spotify showcase and clicking around until I visited pretty much every screen a few times:
Caches and their relationships
I'll go over everything to double-check if I didn't miss any caches, but this is already a first draft :)
Checklist: