Deadlock with futures mpsc and rayon_wait #396
Have you tried this lately? It works fine for me on the current rayon-futures master, at least.
Still deadlocks on master for me, on a 4-core machine running macOS 10.12.6. Here's the version I ran, slightly modified to compile with rayon-futures from git:

Cargo.toml

[package]
name = "korv-test"
version = "0.1.0"
authors = ["Ulf Nilsson <[email protected]>"]

[dependencies]
rayon = { git = "https://github.com/rayon-rs/rayon" }
rayon-futures = { git = "https://github.com/rayon-rs/rayon" }
futures = "0.1.16"

main.rs

extern crate futures;
extern crate rayon;
extern crate rayon_futures;
use futures::{Sink, Stream};
use futures::sync::mpsc as futures_mpsc;
use rayon_futures::ScopeFutureExt;
use std::{thread, time};
fn main() {
    let size = 5;
    rayon::scope(|scope| {
        let mut prev_receiver = None;
        for i in 0..size {
            let (sender, receiver) = futures_mpsc::channel(1);
            scope.spawn(move |_| {
                do_work(i, prev_receiver.take(), sender);
            });
            prev_receiver = Some(receiver);
        }
    });
}
fn do_work(
    number: usize,
    mut receiver: Option<futures_mpsc::Receiver<()>>,
    mut sender: futures_mpsc::Sender<()>,
) {
    if let Some(receiver) = receiver.take() {
        let receiver_future = receiver.into_future();
        println!("{} blocked", number);
        //let _ = rayon::spawn_future(receiver_future).rayon_wait();
        let _ = rayon::ThreadPool::global().spawn_future(receiver_future).rayon_wait();
        println!("{} unblocked", number);
    }
    thread::sleep(time::Duration::from_millis(10));
    let _ = sender.start_send(());
    println!("{} sent", number);
}

Here's the output from running with RAYON_LOG enabled.
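(A note on reproducing: rayon sizes its global pool from the RAYON_NUM_THREADS environment variable, so something like RAYON_NUM_THREADS=3 RAYON_LOG=1 cargo run should hit the failing thread counts; whether RAYON_LOG=1 is the exact form the logger expects is an assumption about the rayon build of the time.)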
Hmm, OK, this reproduces for me on Linux, but not every time. 1 or 2 threads are consistently fine, 3-7 threads hang most of the time (but not always!), and then 8 threads seems OK again.
FWIW, instead of throwing away the inner scope handle in the spawn closure, you can pass it along and call the scope's own spawn_future there, as the reduced example below does.
This seems to be a dependency inversion, e.g. with future B depending on future A via mpsc. Suppose a given thread blocks in rayon_wait and, while waiting, steals work whose completion depends on the blocked task resuming: the stolen work now sits higher on the same stack, so neither side can make progress.
Reduced example:

extern crate futures;
extern crate rayon;
extern crate rayon_futures;
use futures::{Sink, Stream};
use futures::sync::mpsc as futures_mpsc;
use rayon_futures::ScopeFutureExt;
fn main() {
    let config = rayon::Configuration::new().num_threads(1);
    let pool = rayon::ThreadPool::new(config).unwrap();
    pool.scope(|scope| {
        let (mut sender1, receiver1) = futures_mpsc::channel(1);
        let (mut sender2, receiver2) = futures_mpsc::channel(1);
        scope.spawn(move |_| {
            sender1.start_send(()).unwrap();
        });
        scope.spawn(move |scope| {
            scope.spawn_future(receiver2.into_future()).rayon_wait().unwrap();
        });
        scope.spawn(move |scope| {
            scope.spawn_future(receiver1.into_future()).rayon_wait().unwrap();
            sender2.start_send(()).unwrap();
        });
    });
}

With a single thread, the spawns will run LIFO: the third spawn runs first and blocks in rayon_wait on receiver1, stealing the second spawn, which blocks in rayon_wait on receiver2 and in turn steals the first spawn. That one sends on sender1, completing receiver1's future, but the third spawn's rayon_wait is now buried beneath the second spawn's frame on the same stack, and sender2 only fires once the third spawn resumes: deadlock.
With more threads, the order can be arbitrarily mixed, since a worker pops its own queue LIFO while remote threads steal FIFO. So fixing this isn't just a matter of spawning in a better order.
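For what it's worth, here is a minimal sketch of that local LIFO order, reusing the Configuration API from the reduced example above; the printed order is a scheduler implementation detail rather than a documented guarantee:

extern crate rayon;

fn main() {
    // One worker thread, so every spawned job is popped from that worker's own deque.
    let config = rayon::Configuration::new().num_threads(1);
    let pool = rayon::ThreadPool::new(config).unwrap();
    pool.scope(|scope| {
        for i in 0..3 {
            scope.spawn(move |_| println!("task {}", i));
        }
    });
    // Typically prints task 2, task 1, task 0: the owning worker pops its deque
    // newest-first (LIFO), whereas a stealing thread would take oldest-first (FIFO).
}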
Hmm, interesting. Yes. I originally thought we would solve these sorts of things by growing additional threads, but I suppose this example shows that even that isn't always a solution. (I can't tell you how many times I have rediscovered this exact deadlock pattern -- that is, one caused by the implicit edge of using the same stack -- in my life.)
This code was removed in #716. |
Original issue description:

I'm spawning some tasks using rayon, where each task is partially dependent on the previous task. futures::sync::mpsc together with rayon::spawn_future seemed like a good solution for managing this dependency, but I'm having problems with deadlocks. Running the provided code with two threads in the rayon thread pool works as expected, but with three threads it deadlocks quite reliably.