
Improve group by hash performance: avoid group-key/-state clones for hash-groupby #4651

Conversation

crepererum (Contributor) commented:

Which issue does this PR close?

-

Rationale for this change

Profiling large aggregations that involve strings showed a significant share of CPU cycles spent in `clone`. It turns out we don't need to clone that much data.

What changes are included in this PR?

Move group keys and group state out of the accumulators instead of cloning them.
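The idea can be sketched roughly like this (a simplified illustration, not DataFusion's actual types: `ScalarValue`, `GroupState`, and `take_group_keys` here are hypothetical stand-ins): when the per-group state is consumed at output time, the group-by keys can be moved out by value instead of cloned, so heap data such as string buffers is transferred rather than copied.

```rust
// Simplified stand-in for a scalar value that may own heap data.
#[derive(Debug, Clone, PartialEq)]
enum ScalarValue {
    Utf8(String),
    UInt64(u64),
}

// Simplified stand-in for per-group accumulator state.
struct GroupState {
    group_by_values: Vec<ScalarValue>,
}

// Consuming `states` by value lets us move each key vector out; the
// `String` allocations inside are transferred, not duplicated.
fn take_group_keys(states: Vec<GroupState>) -> Vec<Vec<ScalarValue>> {
    states
        .into_iter()
        .map(|state| state.group_by_values)
        .collect()
}

fn main() {
    let states = vec![
        GroupState {
            group_by_values: vec![
                ScalarValue::Utf8("a".to_string()),
                ScalarValue::UInt64(1),
            ],
        },
        GroupState {
            group_by_values: vec![
                ScalarValue::Utf8("b".to_string()),
                ScalarValue::UInt64(2),
            ],
        },
    ];
    let keys = take_group_keys(states);
    assert_eq!(keys.len(), 2);
    assert_eq!(keys[0][0], ScalarValue::Utf8("a".to_string()));
    println!("moved {} group keys without cloning", keys.len());
}
```

The same contents come out either way; the difference is only that the clone path copies every string buffer while the move path reuses them.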

Are these changes tested?

`cargo bench -p datafusion --bench aggregate_query_sql -- --baseline avoid_hash_group_scalarvalue_copy`

aggregate_query_no_group_by 15 12
                        time:   [681.31 µs 682.55 µs 683.93 µs]
                        change: [-1.3475% -0.9347% -0.4997%] (p = 0.00 < 0.05)
                        Change within noise threshold.
Found 7 outliers among 100 measurements (7.00%)
  3 (3.00%) high mild
  4 (4.00%) high severe

aggregate_query_no_group_by_min_max_f64
                        time:   [623.32 µs 624.53 µs 625.77 µs]
                        change: [-1.5852% -1.1752% -0.7877%] (p = 0.00 < 0.05)
                        Change within noise threshold.
Found 5 outliers among 100 measurements (5.00%)
  1 (1.00%) low severe
  1 (1.00%) high mild
  3 (3.00%) high severe

aggregate_query_no_group_by_count_distinct_wide
                        time:   [2.4782 ms 2.4970 ms 2.5157 ms]
                        change: [-0.4410% +0.4967% +1.5074%] (p = 0.35 > 0.05)
                        No change in performance detected.
Found 1 outliers among 100 measurements (1.00%)
  1 (1.00%) low mild

Benchmarking aggregate_query_no_group_by_count_distinct_narrow: Warming up for 3.0000 s
Warning: Unable to complete 100 samples in 5.0s. You may wish to increase target time to 8.6s, enable flat sampling, or reduce sample count to 50.
aggregate_query_no_group_by_count_distinct_narrow
                        time:   [1.6871 ms 1.6945 ms 1.7022 ms]
                        change: [-1.6426% -0.7138% +0.2828%] (p = 0.16 > 0.05)
                        No change in performance detected.
Found 4 outliers among 100 measurements (4.00%)
  1 (1.00%) low mild
  2 (2.00%) high mild
  1 (1.00%) high severe

aggregate_query_group_by
                        time:   [2.2208 ms 2.2368 ms 2.2537 ms]
                        change: [-1.7788% -0.7814% +0.2329%] (p = 0.13 > 0.05)
                        No change in performance detected.
Found 2 outliers among 100 measurements (2.00%)
  1 (1.00%) high mild
  1 (1.00%) high severe

Benchmarking aggregate_query_group_by_with_filter: Warming up for 3.0000 s
Warning: Unable to complete 100 samples in 5.0s. You may wish to increase target time to 5.7s, enable flat sampling, or reduce sample count to 60.
aggregate_query_group_by_with_filter
                        time:   [1.1228 ms 1.1256 ms 1.1288 ms]
                        change: [-3.8888% -3.0339% -2.2444%] (p = 0.00 < 0.05)
                        Performance has improved.
Found 5 outliers among 100 measurements (5.00%)
  1 (1.00%) high mild
  4 (4.00%) high severe

aggregate_query_group_by_u64 15 12
                        time:   [2.2511 ms 2.2662 ms 2.2822 ms]
                        change: [-1.3890% -0.4107% +0.6080%] (p = 0.43 > 0.05)
                        No change in performance detected.
Found 2 outliers among 100 measurements (2.00%)
  2 (2.00%) high mild

Benchmarking aggregate_query_group_by_with_filter_u64 15 12: Warming up for 3.0000 s
Warning: Unable to complete 100 samples in 5.0s. You may wish to increase target time to 5.7s, enable flat sampling, or reduce sample count to 60.
aggregate_query_group_by_with_filter_u64 15 12
                        time:   [1.1191 ms 1.1208 ms 1.1227 ms]
                        change: [-2.0168% -1.7182% -1.3923%] (p = 0.00 < 0.05)
                        Performance has improved.
Found 7 outliers among 100 measurements (7.00%)
  3 (3.00%) low mild
  3 (3.00%) high mild
  1 (1.00%) high severe

aggregate_query_group_by_u64_multiple_keys
                        time:   [14.791 ms 15.115 ms 15.444 ms]
                        change: [-7.4168% -4.2996% -1.0317%] (p = 0.01 < 0.05)
                        Performance has improved.

aggregate_query_approx_percentile_cont_on_u64
                        time:   [3.7578 ms 3.7899 ms 3.8222 ms]
                        change: [-1.6803% -0.4507% +0.7961%] (p = 0.49 > 0.05)
                        No change in performance detected.
Found 1 outliers among 100 measurements (1.00%)
  1 (1.00%) high mild

aggregate_query_approx_percentile_cont_on_f32
                        time:   [3.2097 ms 3.2302 ms 3.2508 ms]
                        change: [-1.2514% -0.2948% +0.6973%] (p = 0.55 > 0.05)
                        No change in performance detected.
Found 3 outliers among 100 measurements (3.00%)
  1 (1.00%) low mild
  2 (2.00%) high mild

Are there any user-facing changes?

Faster group-bys.

@github-actions github-actions bot added the core Core DataFusion crate label Dec 15, 2022
@alamb alamb left a comment:

Looks great to me. Thank you @crepererum

cc @Dandandan and @tustvold

```rust
accumulators
    .group_states
    .iter()
    .map(|group_state| group_state.group_by_values[i].clone()),
```
Contributor:
👍

```rust
.into_iter()
.map(|group_state| {
    (
        VecDeque::from(group_state.group_by_values.to_vec()),
```
Contributor:

🤔 maybe we could always use a VecDeque and avoid this copy too 🤔

crepererum (Author):
It's not a copy. Due to move semantics, the Rust standard library just reuses the pointer to the allocated backing array (`From<Vec<T>> for VecDeque<T>` is documented to run in O(1) without reallocating the Vec's buffer).
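To illustrate the point, here is a quick standalone check (an illustration, not code from this PR): since the standard library documents `From<Vec<T>> for VecDeque<T>` as O(1) and non-reallocating, the elements stay at the same addresses after the conversion.

```rust
use std::collections::VecDeque;

fn main() {
    let v: Vec<u64> = (0..4).collect();
    // Address of the first element while the data is still a Vec.
    let ptr_before = v.as_ptr();

    // Per the std docs, this conversion does not reallocate or copy the
    // Vec's buffer; ownership of the allocation is simply moved.
    let d = VecDeque::from(v);

    // The contiguous front slice of the deque still starts at the same
    // address, showing the buffer was reused rather than copied.
    let ptr_after = d.as_slices().0.as_ptr();
    assert_eq!(ptr_before, ptr_after);
    assert_eq!(d, VecDeque::from(vec![0u64, 1, 2, 3]));
    println!("buffer reused at {:p}", ptr_after);
}
```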


@alamb alamb changed the title avoid group-key/-state clones for hash-groupby Improve group by hash performance: avoid group-key/-state clones for hash-groupby Dec 15, 2022
@Dandandan Dandandan merged commit 9667887 into apache:master Dec 16, 2022
@Dandandan (Contributor):

Nice @crepererum !


ursabot commented Dec 16, 2022

Benchmark runs are scheduled for baseline = 4ec559d and contender = 9667887. 9667887 is a master commit associated with this PR. Results will be available as each benchmark for each run completes.
Conbench compare runs links:
[Skipped ⚠️ Benchmarking of arrow-datafusion-commits is not supported on ec2-t3-xlarge-us-east-2] ec2-t3-xlarge-us-east-2
[Skipped ⚠️ Benchmarking of arrow-datafusion-commits is not supported on test-mac-arm] test-mac-arm
[Skipped ⚠️ Benchmarking of arrow-datafusion-commits is not supported on ursa-i9-9960x] ursa-i9-9960x
[Skipped ⚠️ Benchmarking of arrow-datafusion-commits is not supported on ursa-thinkcentre-m75q] ursa-thinkcentre-m75q
Buildkite builds:
Supported benchmarks:
ec2-t3-xlarge-us-east-2: Supported benchmark langs: Python, R. Runs only benchmarks with cloud = True
test-mac-arm: Supported benchmark langs: C++, Python, R
ursa-i9-9960x: Supported benchmark langs: Python, R, JavaScript
ursa-thinkcentre-m75q: Supported benchmark langs: C++, Java
