Fixed nontermination issue in par operators #2239

Merged: 1 commit into typelevel:series/3.x on Aug 16, 2021

Conversation

@djspiewak (Member) commented on Aug 16, 2021

Fixes #2238

This is a relatively high-priority fix, tbh, since par operations could be non-terminating in 3.2.2 with the current implementation. The performance impact is pretty real though:

Before:

[info] Benchmark                      (cpuTokens)  (size)   Mode  Cnt    Score    Error  Units
[info] ParallelBenchmark.parTraverse        10000    1000  thrpt    5  644.762 ± 49.298  ops/s
[info] ParallelBenchmark.traverse           10000    1000  thrpt    5  236.523 ±  9.855  ops/s

After:

[info] Benchmark                      (cpuTokens)  (size)   Mode  Cnt    Score    Error  Units
[info] ParallelBenchmark.parTraverse        10000    1000  thrpt    5  596.967 ± 20.901  ops/s
[info] ParallelBenchmark.traverse           10000    1000  thrpt    5  237.569 ± 10.316  ops/s

So it's about 7.5% slower in a naive relative comparison. A better comparison is to look at the overhead cost by measuring against an idealized traverse. I have 16 physical threads, meaning that an (impossible) overhead-free parTraverse implementation would score about 3784.368 ops/s on the first run and 3801.104 ops/s on the second. The actual throughput was 644.762 and 596.967 respectively, meaning the overhead accounts for 83.0% of the runtime before this change and 84.3% after, an increase of about 1.3 percentage points (roughly 1.6% in relative terms). That's unfortunate (also it sucks that the overhead is so high), but it's generally within the bounds of sanity.
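
To make the arithmetic explicit, here is a small Scala snippet (illustrative only, not part of this PR) that reproduces those overhead figures, assuming the idealized throughput is simply the sequential traverse score scaled across the 16 physical threads:

```scala
// Back-of-the-envelope check of the overhead figures quoted above (values
// copied from the benchmark output; this snippet is illustrative only).
// The "idealized" throughput is the sequential traverse score scaled by
// the 16 physical threads, i.e. an impossible zero-overhead best case.
val threads = 16.0

val idealBefore = 236.523 * threads              // ≈ 3784.368 ops/s
val idealAfter  = 237.569 * threads              // ≈ 3801.104 ops/s

val overheadBefore = 1.0 - 644.762 / idealBefore // ≈ 0.830 (83.0%)
val overheadAfter  = 1.0 - 596.967 / idealAfter  // ≈ 0.843 (84.3%)

// ≈ 1.3 percentage points absolute, or roughly 1.6% relative
val overheadIncrease = (overheadAfter - overheadBefore) / overheadBefore
```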

In theory, an alternative implementation using two Deferreds (one for each fiber) and guaranteeCase within the body of each fiber would probably be slightly faster, but it would also require some more complex dynamic dispatch machinery similar to the original proposal in #2155. It's probably worth measuring that alternative implementation to see if it's a meaningful improvement.
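
For concreteness, a rough, unmeasured sketch of what that two-Deferred approach could look like for a binary both is below. It assumes cats-effect 3's IO, Deferred, guaranteeCase, and race; the name bothViaDeferred and the exact cancelation handling are purely illustrative, and this is not the code merged in this PR.

```scala
import cats.effect.{IO, Outcome}
import cats.syntax.all._

// Rough sketch only: each fiber publishes its Outcome through its own
// Deferred via guaranteeCase, and the parent races the two signals so a
// failure or cancelation on either side can short-circuit and cancel the
// other fiber.
def bothViaDeferred[A, B](ioa: IO[A], iob: IO[B]): IO[(A, B)] =
  IO.uncancelable { poll =>
    for {
      da <- IO.deferred[Outcome[IO, Throwable, A]]
      db <- IO.deferred[Outcome[IO, Throwable, B]]

      fa <- ioa.guaranteeCase(oc => da.complete(oc).void).start
      fb <- iob.guaranteeCase(oc => db.complete(oc).void).start

      cancelBoth = fa.cancel *> fb.cancel

      // whichever fiber terminates first decides whether we can short-circuit
      first <- poll(IO.race(da.get, db.get)).onCancel(cancelBoth)

      pair <- first match {
        case Left(Outcome.Succeeded(pureA)) =>
          // left succeeded: wait for the right outcome before building the pair
          poll(db.get).onCancel(cancelBoth).flatMap {
            case Outcome.Succeeded(pureB) => (pureA, pureB).tupled
            case Outcome.Errored(e)       => IO.raiseError(e)
            case Outcome.Canceled()       => poll(IO.canceled) *> IO.never
          }
        case Left(Outcome.Errored(e)) => fb.cancel *> IO.raiseError(e)
        case Left(Outcome.Canceled()) => fb.cancel *> poll(IO.canceled) *> IO.never

        case Right(Outcome.Succeeded(pureB)) =>
          poll(da.get).onCancel(cancelBoth).flatMap {
            case Outcome.Succeeded(pureA) => (pureA, pureB).tupled
            case Outcome.Errored(e)       => IO.raiseError(e)
            case Outcome.Canceled()       => poll(IO.canceled) *> IO.never
          }
        case Right(Outcome.Errored(e)) => fa.cancel *> IO.raiseError(e)
        case Right(Outcome.Canceled()) => fa.cancel *> poll(IO.canceled) *> IO.never
      }
    } yield pair
  }
```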

@vasilmkd (Member) commented:

Another annoyance I just thought of: IO won't side-channel the second exception in the case where both spawned fibers fail.
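
A hypothetical illustration of the concern (not taken from this PR): if both sides of a par operation fail, only one exception can surface through the IO's result, and the other currently has nowhere to go:

```scala
import cats.effect.IO
import cats.syntax.all._

// Hypothetical example: both sides of a par operation fail. Only one of
// the two exceptions can surface through the IO's result; the other would
// ideally be reported through the runtime's side channel rather than
// being dropped silently.
val boom1 = new RuntimeException("left failed")
val boom2 = new RuntimeException("right failed")

val program: IO[(Unit, Unit)] =
  (IO.raiseError[Unit](boom1), IO.raiseError[Unit](boom2)).parTupled
// `program` fails with one of boom1/boom2; the result type has no place
// for the second exception.
```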

@djspiewak (Member, Author) commented:

Oh that's a good point. I'll think about how to resolve that.

@djspiewak (Member, Author) commented:

Btw, given the priority of this fix, I think we shouldn't hold it up for the side-channel fix. Let's file that and fix it in a follow-up.

@vasilmkd (Member) commented:

Yup, I'm just waiting for the CI.

@djspiewak merged commit d6817ff into typelevel:series/3.x on Aug 16, 2021
Development

Successfully merging this pull request may close these issues.

Parallel map2/map2Eval fails to short-circuit on error