Commit f578d74

automata: reduce regex contention somewhat
> **Context:** A `Regex` uses internal mutable space (called a `Cache`) while executing a search. Since a `Regex` really wants to be easily shared across multiple threads simultaneously, it follows that a `Regex` either needs to provide search functions that accept a `&mut Cache` (thereby making synchronization a problem for the caller to solve) or it needs to do the synchronization itself. While there are lower level APIs in `regex-automata` that do the former, they are less convenient. The higher level APIs, especially in the `regex` crate proper, need to do some kind of synchronization to give a search the mutable `Cache` that it needs.
>
> The current approach to that synchronization essentially uses a `Mutex<Vec<Cache>>` with an optimization for the "owning" thread that lets it bypass the `Mutex`. The owning thread optimization makes it so the single threaded use case essentially doesn't pay for any synchronization overhead, and that all works fine. But once the `Regex` is shared across multiple threads, that `Mutex<Vec<Cache>>` gets hit. And if you're doing a lot of regex searches on short haystacks in parallel, that `Mutex` comes under extremely heavy contention. To the point that a program can slow down by enormous amounts.
>
> This PR attempts to address that problem.
>
> It's worth pointing out that this issue can be worked around.
>
> The simplest work-around is to clone a `Regex` and send it to other threads instead of sharing a single `Regex`. This won't use any additional memory (a `Regex` is reference counted internally), but it will force each thread to use the "owner" optimization described above. This does mean, for example, that you can't conveniently share a `Regex` across multiple threads with a `lazy_static`/`OnceCell`/`OnceLock`/whatever.
>
> The other work-around is to use the lower level search APIs on a `meta::Regex` in the `regex-automata` crate. Those APIs accept a `&mut Cache` explicitly. In that case, you can use the `thread_local` crate, an actual `thread_local!`, or something else entirely.

I wish I could say this PR was a home run that fixed the contention issues with `Regex` once and for all, but it's not. It just makes things a fair bit better by switching from one stack to eight stacks for the pool, plus a couple of other heuristics. The stack is chosen by doing `self.stacks[thread_id % 8]`. It's a pretty dumb strategy, but it limits extra memory usage while at least reducing contention. Obviously, it works a lot better for the 8-16 thread case, and while it helps with the 64-128 thread case too, things are still pretty slow there.

A benchmark for this problem is described in #934. We compare 8 and 16 threads, and for each thread count, we compare a `cloned` and a `shared` approach. The `cloned` approach clones the regex before sending it to each thread, whereas the `shared` approach shares a single regex across multiple threads. The `cloned` approach is expected to be fast (and it is) because it forces each thread into the owner optimization. The `shared` approach, however, hits the shared stack behind a mutex and suffers majorly from contention.

Here's what that benchmark looks like before this PR for 64 threads (on a 24-core CPU):
```
$ hyperfine "REGEX_BENCH_WHICH=cloned REGEX_BENCH_THREADS=64 ./target/release/repro" "REGEX_BENCH_WHICH=shared REGEX_BENCH_THREADS=64 ./tmp/repro-master"
Benchmark 1: REGEX_BENCH_WHICH=cloned REGEX_BENCH_THREADS=64 ./target/release/repro
  Time (mean ± σ):       9.0 ms ±   0.6 ms    [User: 128.3 ms, System: 5.7 ms]
  Range (min … max):     7.7 ms …  11.1 ms    278 runs

Benchmark 2: REGEX_BENCH_WHICH=shared REGEX_BENCH_THREADS=64 ./tmp/repro-master
  Time (mean ± σ):      1.938 s ±  0.036 s    [User: 4.827 s, System: 41.401 s]
  Range (min … max):    1.885 s …  1.992 s    10 runs

Summary
  'REGEX_BENCH_WHICH=cloned REGEX_BENCH_THREADS=64 ./target/release/repro' ran
  215.02 ± 15.45 times faster than 'REGEX_BENCH_WHICH=shared REGEX_BENCH_THREADS=64 ./tmp/repro-master'
```

And here's what it looks like after this PR:

```
$ hyperfine "REGEX_BENCH_WHICH=cloned REGEX_BENCH_THREADS=64 ./target/release/repro" "REGEX_BENCH_WHICH=shared REGEX_BENCH_THREADS=64 ./target/release/repro"
Benchmark 1: REGEX_BENCH_WHICH=cloned REGEX_BENCH_THREADS=64 ./target/release/repro
  Time (mean ± σ):       9.0 ms ±   0.6 ms    [User: 127.6 ms, System: 6.2 ms]
  Range (min … max):     7.9 ms …  11.7 ms    287 runs

Benchmark 2: REGEX_BENCH_WHICH=shared REGEX_BENCH_THREADS=64 ./target/release/repro
  Time (mean ± σ):      55.0 ms ±   5.1 ms    [User: 1050.4 ms, System: 12.0 ms]
  Range (min … max):    46.1 ms …  67.3 ms    57 runs

Summary
  'REGEX_BENCH_WHICH=cloned REGEX_BENCH_THREADS=64 ./target/release/repro' ran
  6.09 ± 0.71 times faster than 'REGEX_BENCH_WHICH=shared REGEX_BENCH_THREADS=64 ./target/release/repro'
```

So instead of things getting over 215x slower in the 64 thread case, it "only" gets about 6x slower. Closes #934.
1 parent 9a505a1 commit f578d74

File tree

1 file changed: regex-automata/src/util/pool.rs (+168 −19 lines)
```diff
@@ -268,6 +268,64 @@ mod inner {
     /// do.
     static THREAD_ID_DROPPED: usize = 2;
 
+    /// The number of stacks we use inside of the pool. These are only used for
+    /// non-owners. That is, these represent the "slow" path.
+    ///
+    /// In the original implementation of this pool, we only used a single
+    /// stack. While this might be okay for a couple threads, the prevalence of
+    /// 32, 64 and even 128 core CPUs has made it untenable. The contention
+    /// such an environment introduces when threads are doing a lot of searches
+    /// on short haystacks (a not uncommon use case) is palpable and leads to
+    /// huge slowdowns.
+    ///
+    /// This constant reflects a change from using one stack to the number of
+    /// stacks that this constant is set to. The stack for a particular thread
+    /// is simply chosen by `thread_id % MAX_POOL_STACKS`. The idea behind
+    /// this setup is that there should be a good chance that accesses to the
+    /// pool will be distributed over several stacks instead of all of them
+    /// converging to one.
+    ///
+    /// This is not a particularly smart or dynamic strategy. Fixing this to a
+    /// specific number has at least two downsides. First is that it will help,
+    /// say, an 8 core CPU more than it will a 128 core CPU. (But, crucially,
+    /// it will still help the 128 core case.) Second is that this may wind
+    /// up being a little wasteful with respect to memory usage. Namely, if a
+    /// regex is used on one thread and then moved to another thread, then it
+    /// could result in creating a new copy of the data in the pool even though
+    /// only one is actually needed.
+    ///
+    /// And that memory usage bit is why this is set to 8 and not, say, 64.
+    /// Keeping it at 8 limits, to an extent, how much unnecessary memory can
+    /// be allocated.
+    ///
+    /// In an ideal world, we'd be able to have something like this:
+    ///
+    /// * Grow the number of stacks as the number of concurrent callers
+    /// increases. I spent a little time trying this, but even just adding an
+    /// atomic addition/subtraction for each pop/push for tracking concurrent
+    /// callers led to a big perf hit. Since even more work would seemingly be
+    /// required than just an addition/subtraction, I abandoned this approach.
+    /// * The maximum amount of memory used should scale with respect to the
+    /// number of concurrent callers and *not* the total number of existing
+    /// threads. This is primarily why the `thread_local` crate isn't used,
+    /// as some environments spin up a lot of threads. This led to multiple
+    /// reports of extremely high memory usage (often described as memory
+    /// leaks).
+    /// * Even more ideally, the pool should contract in size. That is, it
+    /// should grow with bursts and then shrink. But this is a pretty thorny
+    /// issue to tackle and it might be better to just not.
+    /// * It would be nice to explore the use of, say, a lock-free stack
+    /// instead of using a mutex to guard a `Vec` that is ultimately just
+    /// treated as a stack. The main thing preventing me from exploring this
+    /// is the ABA problem. The `crossbeam` crate has tools for dealing with
+    /// this sort of problem (via its epoch based memory reclamation strategy),
+    /// but I can't justify bringing in all of `crossbeam` as a dependency of
+    /// `regex` for this.
+    ///
+    /// See this issue for more context and discussion:
+    /// https://github.com/rust-lang/regex/issues/934
+    const MAX_POOL_STACKS: usize = 8;
+
     thread_local!(
         /// A thread local used to assign an ID to a thread.
         static THREAD_ID: usize = {
```
```diff
@@ -291,6 +349,17 @@ mod inner {
         };
     );
 
+    /// This puts each stack in the pool below into its own cache line. This is
+    /// an absolutely critical optimization that tends to have the most impact
+    /// in high contention workloads. Without forcing each mutex-protected
+    /// stack into its own cache line, high contention exacerbates the
+    /// performance problem by causing "false sharing." By putting each mutex
+    /// in its own cache line, we avoid the false sharing problem and the
+    /// effects of contention are greatly reduced.
+    #[derive(Debug)]
+    #[repr(C, align(64))]
+    struct CacheLine<T>(T);
+
     /// A thread safe pool utilizing std-only features.
     ///
     /// The main difference between this and the simplistic alloc-only pool is
```
```diff
@@ -299,12 +368,16 @@ mod inner {
     /// This makes the common case of running a regex within a single thread
     /// faster by avoiding mutex unlocking.
     pub(super) struct Pool<T, F> {
-        /// A stack of T values to hand out. These are used when a Pool is
-        /// accessed by a thread that didn't create it.
-        stack: Mutex<Vec<Box<T>>>,
         /// A function to create more T values when stack is empty and a caller
         /// has requested a T.
         create: F,
+        /// Multiple stacks of T values to hand out. These are used when a Pool
+        /// is accessed by a thread that didn't create it.
+        ///
+        /// Conceptually this is `Mutex<Vec<Box<T>>>`, but sharded out to make
+        /// it scale better under high contention work-loads. We index into
+        /// this sequence via `thread_id % stacks.len()`.
+        stacks: Vec<CacheLine<Mutex<Vec<Box<T>>>>>,
         /// The ID of the thread that owns this pool. The owner is the thread
         /// that makes the first call to 'get'. When the owner calls 'get', it
         /// gets 'owner_val' directly instead of returning a T from 'stack'.
```
```diff
@@ -354,9 +427,17 @@ mod inner {
     unsafe impl<T: Send, F: Send + Sync> Sync for Pool<T, F> {}
 
     // If T is UnwindSafe, then since we provide exclusive access to any
-    // particular value in the pool, it should therefore also be considered
-    // RefUnwindSafe. Also, since we use std::sync::Mutex, we get poisoning
-    // from it if another thread panics while the lock is held.
+    // particular value in the pool, the pool should therefore also be
+    // considered UnwindSafe.
+    //
+    // We require `F: UnwindSafe + RefUnwindSafe` because we call `F` at any
+    // point on demand, so it needs to be unwind safe on both dimensions for
+    // the entire Pool to be unwind safe.
+    impl<T: UnwindSafe, F: UnwindSafe + RefUnwindSafe> UnwindSafe for Pool<T, F> {}
+
+    // If T is UnwindSafe, then since we provide exclusive access to any
+    // particular value in the pool, the pool should therefore also be
+    // considered RefUnwindSafe.
     //
     // We require `F: UnwindSafe + RefUnwindSafe` because we call `F` at any
     // point on demand, so it needs to be unwind safe on both dimensions for
```
```diff
@@ -375,9 +456,13 @@ mod inner {
             // 'Pool::new' method as 'const' too. (The alloc-only Pool::new
             // is already 'const', so that should "just work" too.) The only
             // thing we're waiting for is Mutex::new to be const.
+            let mut stacks = Vec::with_capacity(MAX_POOL_STACKS);
+            for _ in 0..stacks.capacity() {
+                stacks.push(CacheLine(Mutex::new(vec![])));
+            }
             let owner = AtomicUsize::new(THREAD_ID_UNOWNED);
             let owner_val = UnsafeCell::new(None); // init'd on first access
-            Pool { stack: Mutex::new(vec![]), create, owner, owner_val }
+            Pool { create, stacks, owner, owner_val }
         }
     }
 
```
```diff
@@ -401,6 +486,9 @@ mod inner {
             let caller = THREAD_ID.with(|id| *id);
             let owner = self.owner.load(Ordering::Acquire);
             if caller == owner {
+                // N.B. We could also do a CAS here instead of a load/store,
+                // but ad hoc benchmarking suggests it is slower. And a lot
+                // slower in the case where `get_slow` is common.
                 self.owner.store(THREAD_ID_INUSE, Ordering::Release);
                 return self.guard_owned(caller);
             }
```
```diff
@@ -444,37 +532,82 @@ mod inner {
                     return self.guard_owned(caller);
                 }
             }
-            let mut stack = self.stack.lock().unwrap();
-            let value = match stack.pop() {
-                None => Box::new((self.create)()),
-                Some(value) => value,
-            };
-            self.guard_stack(value)
+            let stack_id = caller % self.stacks.len();
+            // We try to acquire exclusive access to this thread's stack, and
+            // if so, grab a value from it if we can. We put this in a loop so
+            // that it's easy to tweak and experiment with a different number
+            // of tries. In the end, I couldn't see anything obviously better
+            // than one attempt in ad hoc testing.
+            for _ in 0..1 {
+                let mut stack = match self.stacks[stack_id].0.try_lock() {
+                    Err(_) => continue,
+                    Ok(stack) => stack,
+                };
+                if let Some(value) = stack.pop() {
+                    return self.guard_stack(value);
+                }
+                // Unlock the mutex guarding the stack before creating a fresh
+                // value since we no longer need the stack.
+                drop(stack);
+                let value = Box::new((self.create)());
+                return self.guard_stack(value);
+            }
+            // We're only here if we couldn't get access to our stack, so just
+            // create a new value. This seems like it could be wasteful, but
+            // waiting for exclusive access to a stack when there's high
+            // contention is brutal for perf.
+            self.guard_stack_transient(Box::new((self.create)()))
         }
 
         /// Puts a value back into the pool. Callers don't need to call this.
         /// Once the guard that's returned by 'get' is dropped, it is put back
         /// into the pool automatically.
         fn put_value(&self, value: Box<T>) {
-            let mut stack = self.stack.lock().unwrap();
-            stack.push(value);
+            let caller = THREAD_ID.with(|id| *id);
+            let stack_id = caller % self.stacks.len();
+            // As with trying to pop a value from this thread's stack, we
+            // merely attempt to get access to push this value back on the
+            // stack. If there's too much contention, we just give up and throw
+            // the value away.
+            //
+            // Interestingly, in ad hoc benchmarking, it is beneficial to
+            // attempt to push the value back more than once, unlike when
+            // popping the value. I don't have a good theory for why this is.
+            // I guess if we drop too many values then that winds up forcing
+            // the pop operation to create new fresh values and thus leads to
+            // less reuse. There's definitely a balancing act here.
+            for _ in 0..10 {
+                let mut stack = match self.stacks[stack_id].0.try_lock() {
+                    Err(_) => continue,
+                    Ok(stack) => stack,
+                };
+                stack.push(value);
+                return;
+            }
         }
 
         /// Create a guard that represents the special owned T.
         fn guard_owned(&self, caller: usize) -> PoolGuard<'_, T, F> {
-            PoolGuard { pool: self, value: Err(caller) }
+            PoolGuard { pool: self, value: Err(caller), discard: false }
         }
 
         /// Create a guard that contains a value from the pool's stack.
         fn guard_stack(&self, value: Box<T>) -> PoolGuard<'_, T, F> {
-            PoolGuard { pool: self, value: Ok(value) }
+            PoolGuard { pool: self, value: Ok(value), discard: false }
+        }
+
+        /// Create a guard that contains a value from the pool's stack with an
+        /// instruction to throw away the value instead of putting it back
+        /// into the pool.
+        fn guard_stack_transient(&self, value: Box<T>) -> PoolGuard<'_, T, F> {
+            PoolGuard { pool: self, value: Ok(value), discard: true }
         }
     }
 
     impl<T: core::fmt::Debug, F> core::fmt::Debug for Pool<T, F> {
         fn fmt(&self, f: &mut core::fmt::Formatter<'_>) -> core::fmt::Result {
             f.debug_struct("Pool")
-                .field("stack", &self.stack)
+                .field("stacks", &self.stacks)
                 .field("owner", &self.owner)
                 .field("owner_val", &self.owner_val)
                 .finish()
```
```diff
@@ -490,6 +623,12 @@ mod inner {
         /// in the special case of `Err(THREAD_ID_DROPPED)`, it means the
         /// guard has been put back into the pool and should no longer be used.
         value: Result<Box<T>, usize>,
+        /// When true, the value should be discarded instead of being pushed
+        /// back into the pool. We tend to use this under high contention, and
+        /// this allows us to avoid inflating the size of the pool. (Because
+        /// under contention, we tend to create more values instead of waiting
+        /// for access to a stack of existing values.)
+        discard: bool,
     }
 
     impl<'a, T: Send, F: Fn() -> T> PoolGuard<'a, T, F> {
```
```diff
@@ -557,7 +696,17 @@ mod inner {
         #[inline(always)]
         fn put_imp(&mut self) {
             match core::mem::replace(&mut self.value, Err(THREAD_ID_DROPPED)) {
-                Ok(value) => self.pool.put_value(value),
+                Ok(value) => {
+                    // If we were told to discard this value then don't bother
+                    // trying to put it back into the pool. This occurs when
+                    // the pop operation failed to acquire a lock and we
+                    // decided to create a new value in lieu of contending for
+                    // the lock.
+                    if self.discard {
+                        return;
+                    }
+                    self.pool.put_value(value);
+                }
                 // If this guard has a value "owned" by the thread, then
                 // the Pool guarantees that this is the ONLY such guard.
                 // Therefore, in order to place it back into the pool and make
```
