Wasi Preview 2 Async Race Condition #6811
Comments
Thanks for the report! @alexcrichton and @elliottt and I had talked about this hypothetical but I hadn't realized how trivial it is to trip over. We'll work on a mechanism to hold onto the …
Thinking about this more, I think the right thing to do is to just remove the call to `abort` inside the `Drop` handler. If we remove the call to `abort` …
The main consequence of letting things run in the background, though, was that each wasm guest could keep work sitting in the background for arbitrarily long, meaning that work could accumulate over time by accident before it's all flushed out and finished. The thinking was that we could add the ability for a host to explicitly opt in to daemonizing the background work to keep it going, but additionally allow hosts to learn about the background work via some object too. In light of this we may want to revisit that default, though, since I think it's true that almost all the time having background work won't harm anyone or anything. So while I think it'd still be good to package up the background work for an embedder to get access to, @pchickey what do you think about switching to not-abort-by-default?
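For context, here is a minimal, self-contained sketch of the abort-on-drop pattern these comments refer to, written against plain tokio. The `AbortOnDropJoinHandle` wrapper below is illustrative only, not wasmtime's actual implementation:

```rust
use tokio::task::JoinHandle;

// Illustrative sketch of the abort-on-drop pattern (not wasmtime's real
// type): wrap a spawned task's handle so the task is cancelled as soon as
// the handle is dropped.
struct AbortOnDropJoinHandle<T>(JoinHandle<T>);

impl<T> Drop for AbortOnDropJoinHandle<T> {
    fn drop(&mut self) {
        // The call under discussion: dropping the handle cancels the
        // background task, even if it still has buffered output pending.
        self.0.abort();
    }
}

#[tokio::main]
async fn main() {
    let handle = AbortOnDropJoinHandle(tokio::spawn(async {
        // Stands in for a background writer that hasn't flushed yet.
        tokio::time::sleep(std::time::Duration::from_millis(10)).await;
        println!("flushed"); // may never run
    }));
    // Dropping the handle aborts the task; whether "flushed" ever prints
    // is a race, which is exactly the hazard described in this issue.
    drop(handle);
    tokio::time::sleep(std::time::Duration::from_millis(50)).await;
}
```

Removing the `abort()` call from `drop` would "daemonize" the task, which is the trade-off weighed in the comment above.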
This method flushes output and terminates background tasks. Background tasks still terminate as part of Drop!

The problem with the current implementation is that there is no way to wait for output buffered in host background tasks to flush before aborting those tasks as part of dropping the Store/Table. This means that, e.g., for a trivial component that prints "hello world\n" to stdout and returns, if the Store and Table drop immediately after execution of the component completes, there is a race and the output may not happen at all.

I don't really love this design, but we got backed into a corner because all of the alternatives we could think up are worse:

* We can't just get rid of the abort-on-drop behavior completely ("daemonize" the tasks), because that means streams connected to, e.g., a stalled client connection would consume resources forever, which is not acceptable in some embeddings.
* We can't ensure flushing on drop of a table/store, because it requires an await, and Rust does not have async drop.
* We can't add an explicit destructor to a table/store that terminates tasks, with tasks "daemonizing" if the destructor is not called, because then cancelling the future executing a component before the explicit destructor is called would end up daemonizing the task.
* We could configure all AsyncWriteStreams (and any other stream impls that spawn a task) at creation, or at insertion into the table, with whether they should daemonize on drop or not, but this would mean plumbing a bunch of config into places it currently is not.

Therefore, the only behavior we could come up with was to keep the abort-on-drop behavior for background tasks, and add methods to ensure that background tasks are joined (finished) gracefully. This means that both sync and async users of WasiCtx will need to call the appropriate method to wait on background tasks (sketched below). This is easy enough for users to miss, but we decided that the alternatives are worse.

Closes #6811
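The thread doesn't show the exact API the PR adds, so the following is a purely hypothetical sketch of the embedder-side contract described above; `join_background_tasks` is an invented name, not wasmtime's real method, and `MyCtx` stands in for the embedder's store data:

```rust
// Hypothetical sketch only: `join_background_tasks` is an invented name
// for whatever method the PR adds; `MyCtx` stands in for the embedder's
// WasiView store data. The contract: explicitly await background host
// tasks so Drop's abort can no longer race with buffered output.
async fn run(mut store: wasmtime::Store<MyCtx>) -> anyhow::Result<()> {
    // ... instantiate the component and call its export as usual ...

    // Flush and join spawned host tasks (e.g. the stdout writer) before
    // the Store and its Table drop and abort whatever is still running.
    store.data_mut().join_background_tasks().await?; // hypothetical
    Ok(())
}
```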
Moving streams to async has introduced a race condition. The `AsyncWriteStream` often gets dropped before the receiver inside the spawned task has time to receive a message. This makes sense, as there's nothing to tie the spawned task's completion to the `Store` that contains the `AsyncWriteStream`. It's very easy to drop the `Store` before any handlers are allowed to run to completion.

Test Case

Take the following "hello world" case:
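The code block for this test case did not survive into this copy of the issue. Below is an approximate reconstruction of the kind of embedding involved; the wasmtime `preview2` API (`WasiCtxBuilder`, `Table`, `command::add_to_linker`) changed between releases around this time, so the exact names and signatures here are assumptions rather than a verbatim repro:

```rust
use wasmtime::component::{Component, Linker};
use wasmtime::{Config, Engine, Store};
use wasmtime_wasi::preview2::{command, Table, WasiCtx, WasiCtxBuilder, WasiView};

struct Ctx {
    table: Table,
    wasi: WasiCtx,
}

impl WasiView for Ctx {
    fn table(&self) -> &Table { &self.table }
    fn table_mut(&mut self) -> &mut Table { &mut self.table }
    fn ctx(&self) -> &WasiCtx { &self.wasi }
    fn ctx_mut(&mut self) -> &mut WasiCtx { &mut self.wasi }
}

#[tokio::main]
async fn main() -> anyhow::Result<()> {
    let mut config = Config::new();
    config.wasm_component_model(true).async_support(true);
    let engine = Engine::new(&config)?;

    // inherit_stdio wires guest stdout to host stdout through an
    // AsyncWriteStream, which hands writes off to a spawned task.
    let mut table = Table::new();
    let wasi = WasiCtxBuilder::new().inherit_stdio().build(&mut table)?;
    let mut store = Store::new(&engine, Ctx { table, wasi });

    let component = Component::from_file(&engine, "example.wasm")?;
    let mut linker = Linker::new(&engine);
    command::add_to_linker(&mut linker)?;

    let instance = linker.instantiate_async(&mut store, &component).await?;
    let hello = instance.get_typed_func::<(), ()>(&mut store, "hello")?;
    hello.call_async(&mut store, ()).await?;

    Ok(())
    // `store` drops here, aborting the spawned stdout task, which races
    // with that task receiving and flushing the buffered "Hello, World!".
}
```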
The `example.wasm` file is just a simple wasm component with one export, `hello`, which uses Rust's `println!` to print "Hello, World!" to stdout. This technically works, but only by coincidence: if we watch when "Hello, World!" gets flushed to stdout, it happens after the spawned task handling the IO is aborted in the Drop handler.
Steps to Reproduce
You can reproduce this by running the above example with a local checkout of `wasmtime`. Simply add a logging statement to the drop handler of `AsyncWriteStream` and you'll see that the drop (and thus the cancellation of the spawned task) happens before the output reaches stdout.

I've found this to be reliably reproducible in a user-visible way whenever the wasm binary prints more than one line of text to stdout: the first line usually gets flushed successfully, but the second does not. Adding something that slows down the runtime can eliminate the bug; for example, adding `tokio::time::sleep(std::time::Duration::from_millis(1)).await;` after the function call gets rid of the race condition (see the sketch below).

I first caught this bug when upgrading Spin to the latest version of `wasmtime`. The exported function would complete and the containing `Store` would be dropped before the underlying `AsyncWriteStream`'s spawned task had a chance to receive the bytes to be written to the inner `AsyncWrite` type. Only every ~10th invocation would see the spawned task actually process the bytes before the `AsyncWriteStream`'s drop handler canceled the task.
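In terms of the embedding sketch above, that mitigation is a single extra await between the call and the end of scope; the sleep line is taken verbatim from this report, while the surrounding lines assume the reconstructed example:

```rust
hello.call_async(&mut store, ()).await?;
// Mitigation, not a fix: yield long enough for the spawned stdout task to
// drain its channel before `store` drops and aborts it. The race is
// timing-dependent, so this only shrinks the window.
tokio::time::sleep(std::time::Duration::from_millis(1)).await;
```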
Expected Results

I would expect some mechanism for ensuring that ongoing async writes complete before the `Store` is dropped.

Versions and Environment
This is the case on the latest version of `wasmtime` as of this writing.