Offline mutation support #18
Offline mutations are still a work in progress. This is a non-trivial problem to solve. Apollo Client still hasn't implemented this feature, despite it being a major pain point for years. That said, the Ferry architecture should allow for them. I just haven't gotten around to implementing the feature yet. If you are interested in contributing to this feature, I'd be happy to help you get oriented with the relevant parts.
Wow, didn't know that.
@awaik sounds good. Let me know if there's something you have trouble understanding.
lib/src/client/client.dart
Do I understand correctly that you suggest
@awaik We will need to create an offline mutation queue to persist mutations while offline, along with a way to deserialize them into the original mutation type. I'd recommend using Hive for persistence since we already have a Hive data store implementation. Mutations should only be run once each time they are called. This means that if a mutation is added to the queryController and the client immediately goes offline before the network response is received, the mutation should NOT be stored and rerun when the client comes back online. Therefore, I was thinking of the following:
I'm sure there are a lot of edge cases that aren't addressed by the above.
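As a starting point, a rough Dart sketch of what a Hive-backed mutation queue might look like. All class and method names here are hypothetical, not part of ferry; only the Hive `Box` calls (`openBox`, `add`, `deleteAt`, `values`) are real API:

```dart
import 'package:hive/hive.dart';

/// Hypothetical offline mutation queue persisted with Hive.
/// Only the Hive calls are real API; everything else is illustrative.
class OfflineMutationQueue {
  final Box<Map> _box;

  OfflineMutationQueue._(this._box);

  static Future<OfflineMutationQueue> open() async {
    // Hive boxes survive restarts, so queued mutations outlive the app.
    final box = await Hive.openBox<Map>('offline_mutations');
    return OfflineMutationQueue._(box);
  }

  /// Persist a mutation before handing it to the network layer.
  Future<int> enqueue(String operationName, Map<String, dynamic> variables) =>
      _box.add({'operation': operationName, 'variables': variables});

  /// Remove a mutation once any network response has been received,
  /// so it is never replayed after an already-executed run.
  Future<void> removeAt(int index) => _box.deleteAt(index);

  /// Everything still pending, to be replayed when connectivity returns.
  Iterable<Map> get pending => _box.values;
}
```

Deserializing the stored map back into the original generated mutation type is the part this sketch elides, and is one of the harder design questions raised above.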
I think we should also provide hooks to allow custom logic to be run throughout the mutation execution lifecycle. At a minimum, these hooks should include:
By default, onConnect would just add the mutation back to the queryController, but this could be overridden with a custom callback.
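To make the hook idea concrete, here is one possible (entirely hypothetical) shape for such a callback container; none of these names exist in ferry:

```dart
/// Hypothetical hook container for the mutation execution lifecycle.
/// This only illustrates the idea discussed above.
typedef MutationHook = void Function(Object request);

class OfflineMutationHooks {
  /// Runs when a mutation is written to the offline queue.
  final MutationHook? onEnqueue;

  /// Runs when connectivity is restored. The default behavior would be
  /// to re-add the mutation to the queryController, but a custom
  /// callback could implement conflict resolution or drop the mutation.
  final MutationHook? onConnect;

  /// Runs when the replayed mutation completes (successfully or not).
  final MutationHook? onComplete;

  const OfflineMutationHooks({this.onEnqueue, this.onConnect, this.onComplete});
}
```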
@smkhalsa I think there's a problem with this design: you often won't know the device lost its connection until the network fails after the query is sent. There are also very slow connections, or the server going down, to consider.

I have written a client and offline-capable cache out of necessity for my job. I've just come across Ferry, and it seems to have everything I'm looking for except the offline part. I'd like to bring the offline part to this repo, but I'm not sure what I'd need to learn. I've done everything manually without normalization etc., and have only recently begun to understand how all the pieces fit together and why you created the various gql packages.

In a nutshell, the cache I made is basically a hashmap of Query+Variables => Response, stored as JSON data in files. The app that uses it makes some minor efforts to repeat queries/variables to maximize cache hits. This has actually worked really well. Does the HiveStore store data normalized?

The offline part is that any mutation that fails due to a network error is put into a queue. Duplicate mutations (query+variables) are not queued. Each offline-capable mutation must be able to make (and undo) changes to the full data graph in memory. The optimistic response might only include a created entity, for example, and not reflect the side effect of associating that entity with another. I can try to give a more precise description if you think this is doable.
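For context, the denormalized cache described above (Query+Variables => Response) could be sketched roughly like this; the names are illustrative, and the real version described stores the JSON in files rather than memory:

```dart
import 'dart:convert';

/// Illustrative in-memory version of the denormalized cache described
/// above: a map keyed by query text plus variables, holding raw JSON
/// responses. Not part of ferry.
class DenormalizedCache {
  final _entries = <String, String>{};

  String _key(String query, Map<String, dynamic> variables) =>
      '$query|${jsonEncode(variables)}';

  void write(String query, Map<String, dynamic> variables,
      Map<String, dynamic> response) {
    _entries[_key(query, variables)] = jsonEncode(response);
  }

  Map<String, dynamic>? read(String query, Map<String, dynamic> variables) {
    final raw = _entries[_key(query, variables)];
    return raw == null ? null : jsonDecode(raw) as Map<String, dynamic>;
  }
}
```

The trade-off against a normalized store is visible here: two queries that return the same entity get separate cache entries, which is why repeating identical query/variable pairs matters for cache hits.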
After some thought I think the normalized cache would work as long as the
@jifalops Thanks for the feedback.
Yes, I've thought about this, and we'd definitely need to address errors on inflight mutations.
Enqueuing every mutation that fails due to a network error wouldn't work for every use case. For example, in some cases, it might be important for a mutation to get run exactly once, and the server may have successfully executed the original query, but the client never received the response (due to going offline immediately after sending the mutation, for example). In this case, the mutation would get enqueued and run a second time when the client comes back online. We also have to consider that network retries are traditionally handled by
So what's missing for you that is causing you to consider
In
The
It's not clear how my approach will scale, and it was largely created because of a lack of existing tools at the time. I like the idea of streamed responses and a normalized cache. In fact, I've been moving naturally towards a normalized in-memory cache from the denormalized file cache, and it's sort of glaring at times that this should be done by a package.
Rather than providing a single canonical solution for offline mutation support, I'm considering implementing a plugin architecture that would allow third-party plugins to intercept and arbitrarily process requests. Since Ferry is essentially just a series of Stream transformations, the system would allow plugins to run custom transformations on the stream, both before and after the request is resolved. Using this system, a
Obviously, this architecture could also enable virtually endless additional features to be added to the client.
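The stream-transformation idea can be sketched as follows. Everything here is hypothetical and simplified; ferry's real mechanism is TypedLink from the gql ecosystem, not these classes:

```dart
import 'dart:async';

/// Sketch of the plugin idea: a plugin is a pair of stream
/// transformations applied before and after a request is resolved.
class Plugin<Req, Res> {
  Stream<Req> transformRequests(Stream<Req> requests) => requests;
  Stream<Res> transformResponses(Stream<Res> responses) => responses;
}

/// A hypothetical offline-mutation plugin: buffer requests while
/// offline and re-emit them once connectivity returns.
class OfflineBufferPlugin<Req, Res> extends Plugin<Req, Res> {
  final bool Function() isOnline;
  final _buffer = <Req>[];

  OfflineBufferPlugin(this.isOnline);

  @override
  Stream<Req> transformRequests(Stream<Req> requests) async* {
    await for (final req in requests) {
      if (isOnline()) {
        // Flush anything queued while we were offline, oldest first.
        yield* Stream.fromIterable(_buffer.toList());
        _buffer.clear();
        yield req;
      } else {
        _buffer.add(req);
      }
    }
  }
}
```

Because a plugin only sees streams, it needs no knowledge of the client's internals, which is what makes the "reimplement core features as plugins" idea below plausible.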
Actually, much of the existing core ferry functionality could be reimplemented as plugins, including:
#67 is a refactor that implements the plugin architecture described above.
I've added a basic Offline Mutation Plugin in #67. You can see how to instantiate it here |
@smkhalsa thanks for your work on this plugin, I've just moved over from
Thanks for the quick response @smkhalsa, I just had a read of the doc on typed links and a look at the test for the offline mutation typed link. I'm wondering what the easiest way to extend the client is to add this link in? It seems the default client is a chain of links, and I'd like to maintain that, as I assume it includes functionality that I'd need/like to keep rather than re-invent a completely custom chain. EDIT: The test seems to only create a
The
Description
With the optimistic cache, perform a mutation without an internet connection. After that, turn the connection back on.
Error code
Screenshots
It took 60 seconds to reproduce the issue, sorry for the long animated GIF.