add a "memory management" documentation page #11415

Merged
merged 20 commits on Dec 20, 2023

Changes from 1 commit
progress
phryneas committed Dec 18, 2023
commit 21f8eead621472b6de55cc94c1427d19962743e1
2 changes: 1 addition & 1 deletion docs/shared/ApiDoc/DocBlock.js
@@ -138,7 +138,7 @@ export function Example({
if (!value) return null;
return (
<MaybeCollapsible collapsible={collapsible}>
<b>{mdToReact(value)}</b>
{mdToReact(value)}
</MaybeCollapsible>
);
}
76 changes: 71 additions & 5 deletions docs/source/caching/memory-management.mdx
@@ -2,13 +2,15 @@
title: Memory management
api_doc:
- "@apollo/client!CacheSizes:interface"
- "@apollo/client!ApolloClient:class"
---

import { InterfaceDetails } from '../../shared/ApiDoc';
import { Remarks, PropertySignatureTable, Example } from '../../shared/ApiDoc';

## Cache Sizes

For better performance, Apollo Client caches a lot of internally calculated values.
For better performance, Apollo Client caches (or, in other words, memoizes) a lot
of internally calculated values.
In most cases, these values are cached in WeakCaches, which means that if the
source object is garbage-collected, the cached value will be garbage-collected,
too.
@@ -52,6 +54,70 @@ cacheSizes.print = 100;
print.reset();
```

### Cache Details

<InterfaceDetails canonicalReference="@apollo/client!CacheSizes:interface" headingLevel={3}/>
### Choosing good cache sizes

<Remarks
canonicalReference="@apollo/client!CacheSizes:interface"
/>

To choose good sizes for our memoization caches, you need to know what they
use as source values, and have a general understanding of the data flow inside of
Apollo Client.
Comment on lines +65 to +67
Contributor

Suggested change
To choose good sizes for our memoization caches, you need to know what they
use as source values, and have a general understanding of the data flow inside of
Apollo Client.
To choose appropriate sizes for memoization caches, you need to know what the caches use as source values and understand data flow inside Apollo Client at a high level.

Can we add a relevant link for "understand data flow inside Apollo Client" to give folks a sense of what they need to know?

Member Author

That really should be an intro to the next paragraph that tries to give that overview :/ Do you have any idea how to bring that out better?

Contributor

Ah. Well the next sentences describe the source types, but the "data flow" part is unclear to me. Is that what's described when talking about transforming inputs to outputs?


For most memoized values, the source value will be a parsed GraphQL document -
a `DocumentNode`. Here, we need to distinguish between two types of documents:

* User-supplied `DocumentNode`s: These are `DocumentNode` objects that are created
  by the user, for example by using the `gql` template literal tag.
  These are the `QUERY`, `MUTATION` or `SUBSCRIPTION` variables that you pass,
  e.g., into your `useQuery` hook, or as the `query` option to `client.query`.
* Transformed `DocumentNode`s: These are `DocumentNode` objects that are derived
  from user-supplied `DocumentNode`s, e.g. by applying `DocumentTransform`s to them.

As a rule of thumb, you should set the cache sizes for caches using a Transformed
`DocumentNode` to at least the same size as for caches using a user-supplied
`DocumentNode`. If your application uses a custom `DocumentTransform` that does
not always transform the same input to the same output, you should set the cache
size for caches using a Transformed `DocumentNode` to a higher value than for
caches using a user-supplied `DocumentNode`.
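
A minimal sketch of that rule of thumb, assuming `cacheSizes` is importable from
`@apollo/client/utilities` and treating the choice of keys as illustrative (see the
`CacheSizes` reference under "Cache options" below for what each cache stores):

```ts
import { cacheSizes } from "@apollo/client/utilities";

// Illustrative only: with a custom, non-deterministic DocumentTransform, give
// caches keyed by transformed documents more room than caches keyed by
// user-supplied documents. Which cache uses which kind of document is an
// assumption here; the key names follow the default limits shown further below.
cacheSizes.parser = 1000; // user-supplied documents
cacheSizes["documentTransform.cache"] = 4000; // transformed documents
cacheSizes["queryManager.getDocumentInfo"] = 4000; // transformed documents
```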

By default, Apollo Client uses a "base value" of 1000 for caches using
Contributor

Is "base value" the same as a default value? Is there a unit we can provide?

Member Author

The unit is "whatever we put into the cache" - we maybe could say "objects" of "cached values".
"Base value" here is the value we use to scale all other values of:
1000 for caches using user-provided DocumentNodes -> 2000 or 4000 for transformed ones.
For a base value of 500 it would be
500 for caches using user-provided DocumentNodes -> 1000 or 2000 for transformed ones.

Contributor

Got it, will put in a suggestion that reflects this info.

user-supplied `DocumentNode` instances, and scales other cache sizes relative
to that.

This should be plenty for almost all applications out there, but you might want
to tweak them if you have different requirements.
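
If you do tweak them, here is a rough sketch of scaling everything down to a base
value of 500: the key names and default proportions come from the `limits` object
in the `getMemoryInternals` example further below, and 500 as an alternative base
value is taken from the review discussion above.

```ts
import { cacheSizes } from "@apollo/client/utilities";

// Hypothetical memory-constrained setup: halve the defaults, i.e. use a base
// value of 500 instead of 1000 while keeping the relative proportions.
cacheSizes.parser = 500;
cacheSizes.print = 1000;
cacheSizes["documentTransform.cache"] = 1000;
cacheSizes["inMemoryCache.executeSelectionSet"] = 5000;
cacheSizes["inMemoryCache.executeSubSelectedArray"] = 2500;
```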

#### Measuring cache usage

As it can be hard to estimate good cache sizes for your application, Apollo Client
exposes an API for cache usage measurement.<br />
This way, you can click around in your application and then take a look at the
actual usage of the memoizing caches.

Keep in mind that this API is primarily meant for use with our DevTools
(we will release an integration for this soon), and we might change it at any
point in time.<br />
It is also only included in development builds, not in production builds.

So please only use this for manual measurements, and don't rely on it in production
code or tests.

<Example
canonicalReference="@apollo/client!ApolloClient#getMemoryInternals:member"
index={0}
/>

<Example
collapsible
canonicalReference="@apollo/client!ApolloClient#getMemoryInternals:member"
index={1}
/>
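
In addition to the documented examples above, here is a small development-only
sketch of the kind of manual check this enables (assuming `client` is your
`ApolloClient` instance; the `limits`/`sizes` shape follows the example output
documented on `getMemoryInternals`):

```ts
// Development-only: after clicking around in the app for a while, compare the
// current cache fill levels against the configured limits.
const internals = client.getMemoryInternals?.();
if (internals) {
  console.log("configured limits:", internals.limits);
  console.log("current sizes:", internals.sizes);
}
```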

### Cache options

<PropertySignatureTable
canonicalReference="@apollo/client!CacheSizes:interface"
methods
properties
/>
75 changes: 74 additions & 1 deletion src/core/ApolloClient.ts
@@ -750,10 +750,83 @@ export class ApolloClient<TCacheShape> implements DataProxy {

/**
* @experimental
* @internal
* This is not a stable API - it is used in development builds to expose
* information to the DevTools.
* Use at your own risk!
* For more details, see [Memory Management](https://www.apollographql.com/docs/react/caching/memory-management/#measuring-cache-usage)
*
* @example
* ```ts
* console.log(client.getMemoryInternals())
* ```
* will log something in the form of
* @example
* ```json
*{
* limits: {
* parser: 1000,
* canonicalStringify: 1000,
* print: 2000,
* 'documentTransform.cache': 2000,
* 'queryManager.getDocumentInfo': 2000,
* 'PersistedQueryLink.persistedQueryHashes': 2000,
* 'fragmentRegistry.transform': 2000,
* 'fragmentRegistry.lookup': 1000,
* 'fragmentRegistry.findFragmentSpreads': 4000,
* 'cache.fragmentQueryDocuments': 1000,
* 'removeTypenameFromVariables.getVariableDefinitions': 2000,
* 'inMemoryCache.maybeBroadcastWatch': 5000,
* 'inMemoryCache.executeSelectionSet': 10000,
* 'inMemoryCache.executeSubSelectedArray': 5000
* },
* sizes: {
* parser: 26,
* canonicalStringify: 4,
* print: 14,
* addTypenameDocumentTransform: [
* {
* cache: 14,
* },
* ],
* queryManager: {
* getDocumentInfo: 14,
* documentTransforms: [
* {
* cache: 14,
* },
* {
* cache: 14,
* },
* ],
* },
* fragmentRegistry: {
* findFragmentSpreads: 34,
* lookup: 20,
* transform: 14,
* },
* cache: {
* fragmentQueryDocuments: 22,
* },
* inMemoryCache: {
* executeSelectionSet: 4345,
* executeSubSelectedArray: 1206,
* maybeBroadcastWatch: 32,
* },
* links: [
* {
* PersistedQueryLink: {
* persistedQueryHashes: 14,
* },
* },
* {
* removeTypenameFromVariables: {
* getVariableDefinitions: 14,
* },
* },
* ],
* },
* }
*```
*/
public getMemoryInternals?: typeof getApolloClientMemoryInternals;
}
15 changes: 10 additions & 5 deletions src/utilities/caching/sizes.ts
@@ -9,13 +9,18 @@ declare global {
/**
* The cache sizes used by various Apollo Client caches.
*
* Note that these caches are all derivative and if an item is cache-collected,
* it's not the end of the world - the cached item will just be recalculated.
* @remarks
* All configurable caches hold derivative (memoized) values and if an item is
* cache-collected, that only means a small performance hit, but it will not
* cause data loss, and a smaller cache size might save you memory.
*
* As a result, these cache sizes should not be chosen to hold every value ever
* encountered, but rather to hold a reasonable number of values that can be
* assumed to be on the screen at any given time.
*
* encountered, but rather to hold a reasonable number of values.
* To prevent too much recalculation, cache sizes should at least be chosen
* big enough to hold memoized values for all hooks/queries that are
* on the screen at any given time.
*/
/*
Contributor

It won't let me leave a comment on the line above, but I would suggest something like this:

You should choose cache sizes appropriate for storing a reasonable number of values rather than every value. To prevent too much recalculation, choose cache sizes that are at least large enough to hold memoized values for all hooks/queries on the screen at any given time.

* We assume a "base value" of 1000 here, which is already very generous.
* In most applications, it will be very unlikely that 1000 different queries
* are on screen at the same time.