Async KV Store Persister #1470
Oops, missed that this is really just a specific instance of #1367, though one that we can fix easily.
I'd be interested in picking this up soon to get some exposure to the Tokio/Persist side of things.
I think this is going to snowball into about 3-4 PRs and a large cleanup of the …
Alright, go ahead!
Doubt: …
Yep, basically. We actually already support this, but it's a bit unsafe: monitor update functions can return `TemporaryFailure` …
Once we do this, we should also update the docs on `ChannelMonitorUpdateStatus`'s in-progress variant. #1106 (comment)
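For context, the flow being referred to works roughly like this. A minimal sketch, where `MonitorPersist`, `UpdateStatus`, and `write_to_database` are hypothetical stand-ins rather than LDK's actual API, though the `TemporaryFailure` status itself is LDK's:

```rust
// Sketch of the "return TemporaryFailure, complete later" flow. All names
// here except `TemporaryFailure` are illustrative, not LDK's exact API.
pub enum UpdateStatus {
    Completed,        // the write finished synchronously
    TemporaryFailure, // i.e. in-progress: the write continues in the background
}

pub trait MonitorPersist {
    /// Starts persisting; may return before the write is durable.
    fn persist_monitor(&self, key: &str, data: Vec<u8>) -> UpdateStatus;
}

struct AsyncPersister {
    handle: tokio::runtime::Handle,
}

impl MonitorPersist for AsyncPersister {
    fn persist_monitor(&self, key: &str, data: Vec<u8>) -> UpdateStatus {
        let key = key.to_owned();
        // Hand the IO to the runtime instead of blocking with locks held.
        let _ = self.handle.spawn(async move {
            write_to_database(&key, &data).await;
            // Once durable, the real API requires telling LDK the update
            // completed (via ChainMonitor); forgetting to do so is the
            // "a bit unsafe" part.
        });
        UpdateStatus::TemporaryFailure
    }
}

async fn write_to_database(_key: &str, _data: &[u8]) { /* remote KV write */ }
```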
This has turned into a very large project :(. #1678 is the first step, but this is basically the 0.1 milestone at this point, so I'm just gonna tag it as such.
While we intend to ship fixes for most known async-persist issues in 0.1, it doesn't make sense to hold 0.1 for the async persistence keystore API as well. Instead, we should target this for the following release.
Sensei ran into an issue where ldk-net-tokio (or some other async context) calls into ChannelManager, which handles some message and then goes to update the ChannelMonitor. Because Sensei is using the new KVStorePersister interface to simplify its persistence, it ends up in a sync context on a Tokio worker thread and has to block on a future to talk to a (possibly remote) database. It thus returns control to Tokio by blocking on that future while we're still holding the ChannelManager and ChainMonitor locks, causing most other future execution to block.
While Tokio should probably be willing to poll the IO driver in the `block_on` call from Sensei, even if it did, it could still decide to run a different future which ends up blocking on the mutex we already hold, and we're back to a deadlock.
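For illustration, a minimal sketch of the problematic pattern. All type names here are hypothetical stand-ins, not Sensei's or LDK's actual code, and `futures::executor::block_on` stands in for whatever sync-to-async bridge is used:

```rust
use std::sync::Mutex;

// Stand-in for a (possibly remote) database client; hypothetical.
struct RemoteStore;
impl RemoteStore {
    async fn put(&self, _key: &str, _value: &[u8]) { /* network IO here */ }
}

// Sync persister in the spirit of KVStorePersister: `persist` cannot be
// async, so it bridges to the async store with a blocking executor.
struct KvPersister {
    store: RemoteStore,
}
impl KvPersister {
    fn persist(&self, key: &str, value: &[u8]) {
        // Blocks this Tokio worker thread until the write completes. If
        // progress on the write (or on any task that needs our caller's
        // lock) requires this thread, nothing moves: deadlock.
        futures::executor::block_on(self.store.put(key, value));
    }
}

struct Monitorish {
    persister: KvPersister,
    state: Mutex<Vec<u8>>,
}
impl Monitorish {
    fn update_channel(&self) {
        let guard = self.state.lock().unwrap(); // lock held across IO
        self.persister.persist("monitors/0", &guard);
    } // any other task waiting on `state` is stuck until the IO finishes
}
```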
Really, if you're storing a monitor update via some async future, you should use the `TemporaryFailure` return to do the monitor update asynchronously instead of blocking on IO with locks held, but we don't have good utilities for doing so today. We need to (a) create an `AsyncKVStorePersister` (it either has to be runtime-specific, because it needs the ability to spawn background tasks, or, as an alternative, we could make the user provide a `spawn` method which we can call), and (b) adapt the background processor to use it for async manager/graph/score persistence.
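A rough sketch of what (a) might look like with the user-provided `spawn` variant; the trait and method names (`AsyncKVStorePersister`, `Spawner`) are illustrative, not a settled LDK API:

```rust
use std::future::Future;
use std::pin::Pin;

type PersistFuture = Pin<Box<dyn Future<Output = std::io::Result<()>> + Send>>;

// Runtime-agnostic spawn hook supplied by the user, e.g. wrapping
// `tokio::spawn`. Hypothetical name, not part of LDK today.
pub trait Spawner {
    fn spawn(&self, fut: PersistFuture);
}

// Hypothetical async analogue of KVStorePersister: `persist` returns a
// future instead of blocking, so no lock needs to be held across the IO.
pub trait AsyncKVStorePersister {
    fn persist(&self, key: &str, value: Vec<u8>) -> PersistFuture;
}

// E.g. the background processor could kick off manager/graph/scorer writes
// without blocking its own loop.
pub fn persist_in_background<P: AsyncKVStorePersister, S: Spawner>(
    persister: &P,
    spawner: &S,
    key: &str,
    value: Vec<u8>,
) {
    // Start the write and return immediately; completion (and any retry or
    // error handling) happens on the user's runtime.
    spawner.spawn(persister.persist(key, value));
}
```

A Tokio user would implement `Spawner` as a one-liner around `tokio::spawn(fut);`, while the trait itself stays executor-agnostic, which is the trade-off the "runtime-specific vs. user-provided `spawn`" question is about.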