colexec: integrate flat, compact decimal datums #74590

Merged 2 commits into cockroachdb:master from nvanbenschoten/bigInt3 on Jan 11, 2022

Conversation

@nvanbenschoten nvanbenschoten (Member) commented Jan 8, 2022

Replaces #74369 and #57593.

This PR picks up the following changes to cockroachdb/apd:
- cockroachdb/apd#103
- cockroachdb/apd#104
- cockroachdb/apd#107
- cockroachdb/apd#108
- cockroachdb/apd#109
- cockroachdb/apd#110
- cockroachdb/apd#111

Release note (performance improvement): The memory representation of DECIMAL datums has been optimized to save space, avoid heap allocations, and eliminate indirection. This increases the speed of DECIMAL arithmetic and aggregation by up to 20% on large data sets.


At a high-level, those changes implement the "compact memory representation" for Decimals described in cockroachdb/apd#102 (comment) and later implemented in cockroachdb/apd#103.

Compared to the approach on master, the approach in cockroachdb/apd#103 is a) faster, b) free of indirection and per-value heap allocation, and c) smaller.

Compared to the alternate approach in cockroachdb/apd#102, the approach in cockroachdb/apd#103 is a) faster for most operations, b) more usable because values can be safely copied, and c) half the memory size (32 bytes per Decimal vs. 64).

The memory representation of the Decimal struct in this approach looks like:

type Decimal struct {
    Form     int8
    Negative bool
    Exponent int32
    Coeff    BigInt {
        _inner  *big.Int // nil when value fits in _inline
        _inline [2]uint
    }
} // sizeof = 32

With a two-word inline array, any value that would fit in a 128-bit integer (i.e. decimals with a scale-adjusted absolute value up to 2^128 - 1) fits in _inline. The indirection through _inner is only used for values larger than this.
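To make the inline/indirect split concrete, here is a minimal, illustrative Go sketch of the pattern (not the actual apd implementation; only the field shapes mirror the description above):

```go
package main

import (
	"fmt"
	"math/big"
)

// BigInt mirrors the shape described above: a two-word inline array for
// small magnitudes, with *big.Int indirection only as a fallback.
type BigInt struct {
	_inner  *big.Int // nil when the value fits in _inline
	_inline [2]uint  // little-endian words of the coefficient
}

// set stores |x| inline when it fits in two machine words (128 bits on
// 64-bit platforms), and spills to the heap otherwise.
func (b *BigInt) set(x *big.Int) {
	if words := x.Bits(); len(words) <= 2 {
		b._inner = nil
		b._inline = [2]uint{}
		for i, w := range words {
			b._inline[i] = uint(w)
		}
		return
	}
	b._inner = new(big.Int).Set(x) // rare: magnitude exceeds 2^128 - 1
}

func main() {
	var b BigInt
	b.set(big.NewInt(1234567890))
	fmt.Println(b._inner == nil) // true: stored inline, no heap allocation
}
```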

Before this change, the memory representation of the Decimal struct looked like:

type Decimal struct {
    Form     int64
    Negative bool
    Exponent int32
    Coeff    big.Int {
        neg bool
        abs []big.Word {
            data uintptr ---------------. 
            len  int64                  v
            cap  int64         [uint, uint, ...] // sizeof = variable, but around cap = 4, so 32 bytes
        }
    }
} // sizeof = 48 flat bytes + variable-length heap allocated array
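Since both layouts are flat structs, the quoted sizes are easy to sanity-check with unsafe.Sizeof against mock structs that mirror the two layouts. A sketch, assuming a 64-bit platform:

```go
package main

import (
	"fmt"
	"math/big"
	"unsafe"
)

// newDecimal mocks the apd v3 layout described above.
type newDecimal struct {
	Form     int8
	Negative bool
	Exponent int32
	Coeff    struct {
		_inner  *big.Int
		_inline [2]uint
	}
}

// oldDecimal mocks the previous layout, which embedded a math/big.Int.
type oldDecimal struct {
	Form     int64
	Negative bool
	Exponent int32
	Coeff    big.Int
}

func main() {
	fmt.Println(unsafe.Sizeof(newDecimal{})) // 32: fully flat
	fmt.Println(unsafe.Sizeof(oldDecimal{})) // 48, plus the heap-allocated word slice
}
```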

Performance impact

Speedup on TPC-DS dataset

The TPC-DS dataset is full of decimal columns, so it's a good playground to test this change. Unfortunately, the variance in the runtime performance of the TPC-DS queries themselves is high (many queries varied by 30-40% per attempt), so it was hard to get signal out of them. Instead, I imported the TPC-DS dataset with a scale factor of 10 and ran some custom aggregation queries against the largest table (web_sales, row count = 7,197,566):

Queries

# q1
select sum(ws_wholesale_cost + ws_ext_list_price) from web_sales;

# q2
select sum(2 * ws_wholesale_cost + ws_ext_list_price) - max(4 * ws_ext_ship_cost), min(ws_net_profit) from web_sales;

# q3
select max(ws_bill_customer_sk + ws_bill_cdemo_sk + ws_bill_hdemo_sk + ws_bill_addr_sk + ws_ship_customer_sk + ws_ship_cdemo_sk + ws_ship_hdemo_sk + ws_ship_addr_sk + ws_web_page_sk + ws_web_site_sk + ws_ship_mode_sk + ws_warehouse_sk + ws_promo_sk + ws_order_number + ws_quantity + ws_wholesale_cost + ws_list_price + ws_sales_price + ws_ext_discount_amt + ws_ext_sales_price + ws_ext_wholesale_cost + ws_ext_list_price + ws_ext_tax + ws_coupon_amt + ws_ext_ship_cost + ws_net_paid + ws_net_paid_inc_tax + ws_net_paid_inc_ship + ws_net_paid_inc_ship_tax + ws_net_profit) from web_sales;

Here's the difference in runtime of these three queries before and after this change on an n2-standard-4 instance:

name              old s/op   new s/op   delta
TPC-DS/custom/q1  7.21 ± 3%  6.59 ± 0%   -8.57%  (p=0.000 n=10+10)
TPC-DS/custom/q2  10.2 ± 0%   9.7 ± 3%   -5.42%  (p=0.000 n=10+10)
TPC-DS/custom/q3  21.9 ± 1%  17.3 ± 0%  -21.13%  (p=0.000 n=10+10)

Heap allocation reduction in TPC-DS

Part of the reason for this speedup is that the change significantly reduces heap allocations, because most decimal values are now stored inline. We can see this in q3 from above. Before the change, a heap profile looks like:

[heap profile screenshot (before)]

After the change, a heap profile looks like:

[heap profile screenshot (after)]

(The dominant source of heap allocations is now coldata.(*Nulls).Or; #74592 should help here.)

Heap allocation reduction in TPC-E

On the read-only portion of TPC-E (77% of the full workload in terms of txn mix), this change has a significant impact on total heap allocations. Before the change, math/big.nat.make was responsible for 51.07% of total heap allocations:

[heap profile screenshot (before)]

After the change, math/big.nat.make is responsible for only 1.1% of total heap allocations:

[heap profile screenshot (after)]

That equates to roughly a 50% reduction in heap allocations.

Microbenchmarks

name                                                                   old time/op    new time/op     delta
Aggregator/MIN/ordered/decimal/groupSize=1/numInputRows=1024-10          65.6µs ± 2%     42.5µs ± 0%  -35.15%  (p=0.000 n=9+8)
Aggregator/MIN/ordered/decimal/groupSize=2/numInputRows=1024-10          68.4µs ± 1%     48.4µs ± 1%  -29.20%  (p=0.000 n=9+10)
Aggregator/MIN/ordered/decimal/groupSize=1/numInputRows=32768-10         1.65ms ± 1%     1.20ms ± 1%  -27.31%  (p=0.000 n=9+10)
Aggregator/MIN/ordered/decimal/groupSize=1/numInputRows=1048576-10       51.4ms ± 1%     38.3ms ± 1%  -25.59%  (p=0.000 n=10+10)
Aggregator/MIN/ordered/decimal/groupSize=1/numInputRows=32-10            12.5µs ± 1%      9.4µs ± 2%  -24.72%  (p=0.000 n=9+10)
Aggregator/MIN/ordered/decimal/groupSize=2/numInputRows=32-10            12.5µs ± 1%      9.6µs ± 2%  -23.24%  (p=0.000 n=8+10)
Aggregator/MIN/ordered/decimal/groupSize=1/numInputRows=1-10             10.5µs ± 1%      8.0µs ± 1%  -23.22%  (p=0.000 n=9+9)
Aggregator/MIN/ordered/decimal/groupSize=32/numInputRows=32-10           12.4µs ± 1%      9.6µs ± 1%  -22.70%  (p=0.000 n=8+10)
Aggregator/MIN/ordered/decimal/groupSize=1024/numInputRows=1024-10       60.5µs ± 1%     47.1µs ± 2%  -22.24%  (p=0.000 n=10+10)
Aggregator/MIN/ordered/decimal/groupSize=128/numInputRows=1024-10        61.2µs ± 1%     47.7µs ± 1%  -22.09%  (p=0.000 n=9+10)
Aggregator/MIN/ordered/decimal/groupSize=32/numInputRows=1024-10         62.3µs ± 1%     48.7µs ± 2%  -21.91%  (p=0.000 n=9+10)
Aggregator/MIN/ordered/decimal/groupSize=2/numInputRows=32768-10         1.31ms ± 0%     1.03ms ± 1%  -21.53%  (p=0.000 n=9+10)
Aggregator/MIN/hash/decimal/groupSize=1024/numInputRows=1024-10          82.3µs ± 1%     64.9µs ± 1%  -21.12%  (p=0.000 n=10+10)
Aggregator/MIN/hash/decimal/groupSize=128/numInputRows=1024-10           86.6µs ± 1%     68.5µs ± 1%  -20.93%  (p=0.000 n=10+10)
Aggregator/MIN/hash/decimal/groupSize=32/numInputRows=1024-10            96.0µs ± 1%     77.1µs ± 1%  -19.73%  (p=0.000 n=10+10)
Aggregator/MIN/ordered/decimal/groupSize=2/numInputRows=1048576-10       41.2ms ± 0%     33.1ms ± 0%  -19.64%  (p=0.000 n=8+10)
Aggregator/MIN/hash/decimal/groupSize=32/numInputRows=32-10              17.5µs ± 1%     14.3µs ± 2%  -18.59%  (p=0.000 n=9+10)
Aggregator/MIN/hash/decimal/groupSize=1/numInputRows=1-10                14.8µs ± 3%     12.1µs ± 3%  -18.26%  (p=0.000 n=10+10)
Aggregator/MIN/hash/decimal/groupSize=2/numInputRows=32-10               20.0µs ± 1%     16.4µs ± 1%  -18.04%  (p=0.000 n=9+9)
Aggregator/MIN/hash/decimal/groupSize=1/numInputRows=32-10               20.9µs ± 1%     17.2µs ± 3%  -17.80%  (p=0.000 n=9+10)
Aggregator/MIN/ordered/decimal/groupSize=1024/numInputRows=32768-10       884µs ± 0%      731µs ± 0%  -17.30%  (p=0.000 n=10+9)
Aggregator/MIN/ordered/decimal/groupSize=1024/numInputRows=1048576-10    27.9ms ± 0%     23.1ms ± 0%  -17.27%  (p=0.000 n=9+9)
Aggregator/MIN/hash/decimal/groupSize=1/numInputRows=1024-10              218µs ± 2%      181µs ± 2%  -17.23%  (p=0.000 n=10+10)
Aggregator/MIN/ordered/decimal/groupSize=128/numInputRows=32768-10        911µs ± 1%      755µs ± 1%  -17.10%  (p=0.000 n=10+10)
Aggregator/MIN/ordered/decimal/groupSize=32/numInputRows=32768-10         957µs ± 1%      798µs ± 0%  -16.66%  (p=0.000 n=9+9)
Aggregator/MIN/hash/decimal/groupSize=1024/numInputRows=32768-10         1.54ms ± 1%     1.29ms ± 1%  -16.56%  (p=0.000 n=10+10)
Aggregator/MIN/hash/decimal/groupSize=2/numInputRows=1024-10              188µs ± 1%      157µs ± 2%  -16.33%  (p=0.000 n=10+10)
Aggregator/MIN/ordered/decimal/groupSize=128/numInputRows=1048576-10     28.8ms ± 0%     24.1ms ± 0%  -16.14%  (p=0.000 n=9+9)
Aggregator/MIN/ordered/decimal/groupSize=32/numInputRows=1048576-10      30.4ms ± 0%     25.7ms ± 1%  -15.26%  (p=0.000 n=10+10)
Aggregator/MIN/hash/decimal/groupSize=32/numInputRows=1048576-10          135ms ± 1%      114ms ± 1%  -15.21%  (p=0.000 n=10+9)
Aggregator/MIN/hash/decimal/groupSize=128/numInputRows=32768-10          1.79ms ± 1%     1.52ms ± 1%  -15.14%  (p=0.000 n=10+10)
Aggregator/MIN/hash/decimal/groupSize=1/numInputRows=32768-10            6.29ms ± 1%     5.50ms ± 1%  -12.62%  (p=0.000 n=10+10)
Aggregator/MIN/hash/decimal/groupSize=1024/numInputRows=1048576-10       62.2ms ± 0%     54.7ms ± 0%  -12.08%  (p=0.000 n=10+10)
Aggregator/MIN/hash/decimal/groupSize=32/numInputRows=32768-10           2.46ms ± 1%     2.17ms ± 1%  -11.88%  (p=0.000 n=10+9)
Aggregator/MIN/hash/decimal/groupSize=2/numInputRows=32768-10            5.64ms ± 0%     4.98ms ± 0%  -11.76%  (p=0.000 n=9+10)
Aggregator/MIN/hash/decimal/groupSize=1/numInputRows=1048576-10           354ms ± 2%      318ms ± 1%  -10.18%  (p=0.000 n=10+8)
Aggregator/MIN/hash/decimal/groupSize=128/numInputRows=1048576-10        91.8ms ± 1%     83.3ms ± 0%   -9.25%  (p=0.000 n=9+10)
Aggregator/MIN/hash/decimal/groupSize=2/numInputRows=1048576-10           396ms ± 1%      369ms ± 1%   -6.83%  (p=0.000 n=8+8)

name                                                                   old speed      new speed       delta
Aggregator/MIN/ordered/decimal/groupSize=1/numInputRows=1024-10         125MB/s ± 2%    193MB/s ± 0%  +54.20%  (p=0.000 n=9+8)
Aggregator/MIN/ordered/decimal/groupSize=2/numInputRows=1024-10         120MB/s ± 1%    169MB/s ± 1%  +41.24%  (p=0.000 n=9+10)
Aggregator/MIN/ordered/decimal/groupSize=1/numInputRows=32768-10        159MB/s ± 1%    219MB/s ± 1%  +37.57%  (p=0.000 n=9+10)
Aggregator/MIN/ordered/decimal/groupSize=1/numInputRows=1048576-10      163MB/s ± 1%    219MB/s ± 1%  +34.39%  (p=0.000 n=10+10)
Aggregator/MIN/ordered/decimal/groupSize=1/numInputRows=32-10          20.4MB/s ± 1%   27.2MB/s ± 2%  +32.85%  (p=0.000 n=9+10)
Aggregator/MIN/ordered/decimal/groupSize=1/numInputRows=1-10            764kB/s ± 2%    997kB/s ± 1%  +30.45%  (p=0.000 n=10+9)
Aggregator/MIN/ordered/decimal/groupSize=2/numInputRows=32-10          20.5MB/s ± 1%   26.8MB/s ± 2%  +30.28%  (p=0.000 n=8+10)
Aggregator/MIN/ordered/decimal/groupSize=32/numInputRows=32-10         20.7MB/s ± 1%   26.8MB/s ± 1%  +29.37%  (p=0.000 n=8+10)
Aggregator/MIN/ordered/decimal/groupSize=1024/numInputRows=1024-10      135MB/s ± 1%    174MB/s ± 2%  +28.61%  (p=0.000 n=10+10)
Aggregator/MIN/ordered/decimal/groupSize=128/numInputRows=1024-10       134MB/s ± 1%    172MB/s ± 1%  +28.35%  (p=0.000 n=9+10)
Aggregator/MIN/ordered/decimal/groupSize=32/numInputRows=1024-10        131MB/s ± 1%    168MB/s ± 2%  +28.06%  (p=0.000 n=9+10)
Aggregator/MIN/ordered/decimal/groupSize=2/numInputRows=32768-10        200MB/s ± 0%    255MB/s ± 1%  +27.45%  (p=0.000 n=9+10)
Aggregator/MIN/hash/decimal/groupSize=1024/numInputRows=1024-10         100MB/s ± 1%    126MB/s ± 1%  +26.78%  (p=0.000 n=10+10)
Aggregator/MIN/hash/decimal/groupSize=128/numInputRows=1024-10         94.6MB/s ± 1%  119.6MB/s ± 1%  +26.47%  (p=0.000 n=10+10)
Aggregator/MIN/hash/decimal/groupSize=32/numInputRows=1024-10          85.3MB/s ± 1%  106.3MB/s ± 1%  +24.58%  (p=0.000 n=10+10)
Aggregator/MIN/ordered/decimal/groupSize=2/numInputRows=1048576-10      204MB/s ± 0%    254MB/s ± 0%  +24.44%  (p=0.000 n=8+10)
Aggregator/MIN/hash/decimal/groupSize=32/numInputRows=32-10            14.6MB/s ± 1%   18.0MB/s ± 2%  +22.83%  (p=0.000 n=9+10)
Aggregator/MIN/hash/decimal/groupSize=1/numInputRows=1-10               544kB/s ± 3%    664kB/s ± 2%  +22.06%  (p=0.000 n=10+10)
Aggregator/MIN/hash/decimal/groupSize=2/numInputRows=32-10             12.8MB/s ± 1%   15.6MB/s ± 1%  +22.02%  (p=0.000 n=9+9)
Aggregator/MIN/hash/decimal/groupSize=1/numInputRows=32-10             12.3MB/s ± 1%   14.9MB/s ± 3%  +21.67%  (p=0.000 n=9+10)
Aggregator/MIN/ordered/decimal/groupSize=1024/numInputRows=32768-10     296MB/s ± 0%    358MB/s ± 0%  +20.92%  (p=0.000 n=10+9)
Aggregator/MIN/ordered/decimal/groupSize=1024/numInputRows=1048576-10   300MB/s ± 0%    363MB/s ± 0%  +20.87%  (p=0.000 n=9+9)
Aggregator/MIN/hash/decimal/groupSize=1/numInputRows=1024-10           37.5MB/s ± 2%   45.4MB/s ± 2%  +20.82%  (p=0.000 n=10+10)
Aggregator/MIN/ordered/decimal/groupSize=128/numInputRows=32768-10      288MB/s ± 1%    347MB/s ± 1%  +20.62%  (p=0.000 n=10+10)
Aggregator/MIN/ordered/decimal/groupSize=32/numInputRows=32768-10       274MB/s ± 1%    329MB/s ± 0%  +19.99%  (p=0.000 n=9+9)
Aggregator/MIN/hash/decimal/groupSize=1024/numInputRows=32768-10        170MB/s ± 1%    204MB/s ± 1%  +19.85%  (p=0.000 n=10+10)
Aggregator/MIN/hash/decimal/groupSize=2/numInputRows=1024-10           43.6MB/s ± 1%   52.1MB/s ± 2%  +19.52%  (p=0.000 n=10+10)
Aggregator/MIN/ordered/decimal/groupSize=128/numInputRows=1048576-10    292MB/s ± 0%    348MB/s ± 0%  +19.25%  (p=0.000 n=9+9)
Aggregator/MIN/ordered/decimal/groupSize=32/numInputRows=1048576-10     276MB/s ± 0%    326MB/s ± 1%  +18.00%  (p=0.000 n=10+10)
Aggregator/MIN/hash/decimal/groupSize=32/numInputRows=1048576-10       62.1MB/s ± 1%   73.3MB/s ± 1%  +17.94%  (p=0.000 n=10+9)
Aggregator/MIN/hash/decimal/groupSize=128/numInputRows=32768-10         147MB/s ± 1%    173MB/s ± 1%  +17.83%  (p=0.000 n=10+10)
Aggregator/MIN/hash/decimal/groupSize=1/numInputRows=32768-10          41.7MB/s ± 1%   47.7MB/s ± 1%  +14.44%  (p=0.000 n=10+10)
Aggregator/MIN/hash/decimal/groupSize=1024/numInputRows=1048576-10      135MB/s ± 0%    153MB/s ± 0%  +13.74%  (p=0.000 n=10+10)
Aggregator/MIN/hash/decimal/groupSize=32/numInputRows=32768-10          106MB/s ± 1%    121MB/s ± 1%  +13.48%  (p=0.000 n=10+9)
Aggregator/MIN/hash/decimal/groupSize=2/numInputRows=32768-10          46.5MB/s ± 0%   52.7MB/s ± 0%  +13.34%  (p=0.000 n=9+10)
Aggregator/MIN/hash/decimal/groupSize=1/numInputRows=1048576-10        23.7MB/s ± 2%   26.3MB/s ± 2%  +11.02%  (p=0.000 n=10+9)
Aggregator/MIN/hash/decimal/groupSize=128/numInputRows=1048576-10      91.3MB/s ± 0%  100.7MB/s ± 0%  +10.27%  (p=0.000 n=8+10)
Aggregator/MIN/hash/decimal/groupSize=2/numInputRows=1048576-10        21.2MB/s ± 1%   22.7MB/s ± 1%   +7.32%  (p=0.000 n=8+8)

name                                                                   old alloc/op   new alloc/op    delta
Aggregator/MIN/ordered/decimal/groupSize=1/numInputRows=32768-10          354kB ± 0%      239kB ± 0%  -32.39%  (p=0.000 n=9+9)
Aggregator/MIN/ordered/decimal/groupSize=2/numInputRows=32768-10          348kB ± 0%      239kB ± 0%  -31.23%  (p=0.000 n=10+10)
Aggregator/MIN/ordered/decimal/groupSize=1/numInputRows=1024-10           251kB ± 0%      177kB ± 0%  -29.44%  (p=0.000 n=10+10)
Aggregator/MIN/ordered/decimal/groupSize=2/numInputRows=1024-10           246kB ± 0%      177kB ± 0%  -28.28%  (p=0.000 n=10+10)
Aggregator/MIN/ordered/decimal/groupSize=32/numInputRows=32768-10         275kB ± 0%      198kB ± 0%  -28.06%  (p=0.000 n=10+10)
Aggregator/MIN/ordered/decimal/groupSize=32/numInputRows=1024-10          243kB ± 0%      177kB ± 0%  -27.15%  (p=0.000 n=9+10)
Aggregator/MIN/ordered/decimal/groupSize=128/numInputRows=1024-10         242kB ± 0%      177kB ± 0%  -27.09%  (p=0.000 n=10+10)
Aggregator/MIN/ordered/decimal/groupSize=1024/numInputRows=1024-10        242kB ± 0%      177kB ± 0%  -27.06%  (p=0.000 n=10+10)
Aggregator/MIN/ordered/decimal/groupSize=128/numInputRows=32768-10        268kB ± 0%      198kB ± 0%  -26.05%  (p=0.000 n=10+10)
Aggregator/MIN/ordered/decimal/groupSize=1024/numInputRows=32768-10       264kB ± 0%      198kB ± 0%  -25.04%  (p=0.000 n=10+10)
Aggregator/MIN/ordered/decimal/groupSize=1/numInputRows=32-10            75.1kB ± 0%     56.9kB ± 0%  -24.25%  (p=0.000 n=10+10)
Aggregator/MIN/ordered/decimal/groupSize=2/numInputRows=32-10            74.9kB ± 0%     56.9kB ± 0%  -24.12%  (p=0.000 n=10+10)
Aggregator/MIN/ordered/decimal/groupSize=32/numInputRows=32-10           74.8kB ± 0%     56.9kB ± 0%  -23.99%  (p=0.000 n=10+10)
Aggregator/MIN/ordered/decimal/groupSize=1/numInputRows=1-10             69.6kB ± 0%     53.1kB ± 0%  -23.66%  (p=0.000 n=10+10)
Aggregator/MIN/hash/decimal/groupSize=1/numInputRows=1-10                95.2kB ± 0%     75.9kB ± 0%  -20.23%  (p=0.000 n=10+10)
Aggregator/MIN/hash/decimal/groupSize=2/numInputRows=32-10                102kB ± 0%       82kB ± 0%  -20.04%  (p=0.000 n=8+10)
Aggregator/MIN/hash/decimal/groupSize=1/numInputRows=32-10                103kB ± 0%       83kB ± 0%  -19.95%  (p=0.000 n=7+10)
Aggregator/MIN/hash/decimal/groupSize=32/numInputRows=32-10               100kB ± 0%       80kB ± 0%  -19.90%  (p=0.000 n=10+10)
Aggregator/MIN/ordered/decimal/groupSize=32/numInputRows=1048576-10      1.14MB ± 0%     0.92MB ± 0%  -18.80%  (p=0.000 n=10+10)
Aggregator/MIN/hash/decimal/groupSize=1024/numInputRows=1024-10           271kB ± 0%      227kB ± 0%  -16.16%  (p=0.000 n=9+9)
Aggregator/MIN/ordered/decimal/groupSize=2/numInputRows=1048576-10       1.10MB ± 0%     0.92MB ± 0%  -15.92%  (p=0.000 n=10+10)
Aggregator/MIN/hash/decimal/groupSize=128/numInputRows=1024-10            280kB ± 1%      235kB ± 1%  -15.91%  (p=0.000 n=10+10)
Aggregator/MIN/ordered/decimal/groupSize=128/numInputRows=1048576-10     1.09MB ± 1%     0.92MB ± 0%  -15.67%  (p=0.000 n=10+10)
Aggregator/MIN/hash/decimal/groupSize=32/numInputRows=1024-10             291kB ± 0%      245kB ± 1%  -15.53%  (p=0.000 n=9+10)
Aggregator/MIN/hash/decimal/groupSize=1024/numInputRows=32768-10         1.11MB ± 0%     0.95MB ± 0%  -15.14%  (p=0.000 n=8+10)
Aggregator/MIN/hash/decimal/groupSize=128/numInputRows=32768-10          1.22MB ± 0%     1.04MB ± 0%  -14.77%  (p=0.000 n=10+10)
Aggregator/MIN/hash/decimal/groupSize=32/numInputRows=32768-10           1.65MB ± 0%     1.42MB ± 0%  -13.56%  (p=0.000 n=10+10)
Aggregator/MIN/hash/decimal/groupSize=1/numInputRows=1024-10              593kB ± 0%      513kB ± 0%  -13.36%  (p=0.000 n=9+8)
Aggregator/MIN/hash/decimal/groupSize=2/numInputRows=1024-10              520kB ± 0%      454kB ± 0%  -12.82%  (p=0.000 n=9+8)
Aggregator/MIN/ordered/decimal/groupSize=1/numInputRows=1048576-10       1.04MB ± 0%     0.92MB ± 0%  -11.06%  (p=0.000 n=10+10)
Aggregator/MIN/hash/decimal/groupSize=1024/numInputRows=1048576-10       2.48MB ± 0%     2.25MB ± 0%   -9.32%  (p=0.000 n=9+10)
Aggregator/MIN/ordered/decimal/groupSize=1024/numInputRows=1048576-10     967kB ± 0%      881kB ± 0%   -8.89%  (p=0.000 n=10+10)
Aggregator/MIN/hash/decimal/groupSize=128/numInputRows=1048576-10        7.86MB ± 0%     7.36MB ± 0%   -6.44%  (p=0.000 n=9+10)
Aggregator/MIN/hash/decimal/groupSize=1/numInputRows=32768-10            14.2MB ± 1%     13.4MB ± 1%   -5.83%  (p=0.000 n=10+10)
Aggregator/MIN/hash/decimal/groupSize=2/numInputRows=32768-10            12.3MB ± 0%     11.7MB ± 0%   -5.03%  (p=0.001 n=7+7)
Aggregator/MIN/hash/decimal/groupSize=32/numInputRows=1048576-10         27.2MB ± 1%     25.9MB ± 1%   -4.84%  (p=0.000 n=10+10)
Aggregator/MIN/hash/decimal/groupSize=1/numInputRows=1048576-10           465MB ± 0%      445MB ± 0%   -4.32%  (p=0.000 n=10+10)
Aggregator/MIN/hash/decimal/groupSize=2/numInputRows=1048576-10           403MB ± 0%      390MB ± 0%   -3.44%  (p=0.000 n=10+10)

name                                                                   old allocs/op  new allocs/op   delta
Aggregator/MIN/ordered/decimal/groupSize=1/numInputRows=1024-10           1.07k ± 0%      0.05k ± 0%  -95.70%  (p=0.000 n=10+10)
Aggregator/MIN/hash/decimal/groupSize=1/numInputRows=1048576-10            702k ± 0%        32k ± 0%  -95.46%  (p=0.000 n=10+10)
Aggregator/MIN/hash/decimal/groupSize=2/numInputRows=1048576-10            489k ± 0%        28k ± 0%  -94.33%  (p=0.000 n=10+10)
Aggregator/MIN/ordered/decimal/groupSize=1/numInputRows=32768-10          4.40k ± 0%      0.30k ± 0%  -93.15%  (p=0.000 n=10+10)
Aggregator/MIN/hash/decimal/groupSize=1024/numInputRows=1024-10           1.11k ± 0%      0.09k ± 0%  -92.02%  (p=0.000 n=9+10)
Aggregator/MIN/ordered/decimal/groupSize=2/numInputRows=1024-10             561 ± 0%         46 ± 0%  -91.80%  (p=0.000 n=10+10)
Aggregator/MIN/ordered/decimal/groupSize=2/numInputRows=32768-10          3.45k ± 0%      0.30k ± 0%  -91.28%  (p=0.000 n=10+10)
Aggregator/MIN/hash/decimal/groupSize=128/numInputRows=1024-10            1.19k ± 0%      0.15k ± 1%  -87.31%  (p=0.000 n=10+10)
Aggregator/MIN/hash/decimal/groupSize=1024/numInputRows=32768-10          4.87k ± 0%      0.70k ± 0%  -85.69%  (p=0.000 n=9+10)
Aggregator/MIN/hash/decimal/groupSize=1/numInputRows=32768-10             32.2k ± 0%       6.3k ± 0%  -80.40%  (p=0.000 n=10+10)
Aggregator/MIN/ordered/decimal/groupSize=32/numInputRows=32768-10         1.45k ± 3%      0.29k ± 0%  -79.66%  (p=0.000 n=10+10)
Aggregator/MIN/hash/decimal/groupSize=32/numInputRows=1024-10             1.39k ± 0%      0.30k ± 1%  -78.64%  (p=0.000 n=10+10)
Aggregator/MIN/hash/decimal/groupSize=2/numInputRows=32768-10             26.2k ± 0%       6.8k ± 1%  -73.95%  (p=0.000 n=10+10)
Aggregator/MIN/hash/decimal/groupSize=128/numInputRows=32768-10           6.64k ± 0%      1.95k ± 0%  -70.67%  (p=0.000 n=10+10)
Aggregator/MIN/hash/decimal/groupSize=1/numInputRows=1024-10              3.44k ± 1%      1.12k ± 1%  -67.48%  (p=0.000 n=10+10)
Aggregator/MIN/hash/decimal/groupSize=32/numInputRows=1048576-10          62.4k ± 0%      20.4k ± 0%  -67.32%  (p=0.000 n=10+10)
Aggregator/MIN/hash/decimal/groupSize=2/numInputRows=1024-10              2.95k ± 1%      1.05k ± 1%  -64.52%  (p=0.000 n=9+10)
Aggregator/MIN/hash/decimal/groupSize=32/numInputRows=32768-10            10.8k ± 0%       4.5k ± 0%  -58.21%  (p=0.000 n=10+10)
Aggregator/MIN/ordered/decimal/groupSize=128/numInputRows=32768-10          628 ± 3%        294 ± 0%  -53.21%  (p=0.000 n=10+10)
Aggregator/MIN/hash/decimal/groupSize=128/numInputRows=1048576-10         36.1k ± 0%      20.2k ± 0%  -44.06%  (p=0.000 n=9+10)
Aggregator/MIN/ordered/decimal/groupSize=32/numInputRows=1024-10           81.7 ± 3%       46.0 ± 0%  -43.67%  (p=0.000 n=9+10)
Aggregator/MIN/ordered/decimal/groupSize=32/numInputRows=1048576-10       14.4k ± 1%       8.2k ± 0%  -42.97%  (p=0.000 n=10+10)
Aggregator/MIN/ordered/decimal/groupSize=1/numInputRows=32-10              79.0 ± 0%       46.0 ± 0%  -41.77%  (p=0.000 n=10+10)
Aggregator/MIN/ordered/decimal/groupSize=2/numInputRows=1048576-10        13.7k ± 1%       8.2k ± 0%  -40.05%  (p=0.000 n=10+10)
Aggregator/MIN/hash/decimal/groupSize=1/numInputRows=32-10                  191 ± 1%        120 ± 1%  -37.52%  (p=0.000 n=7+10)
Aggregator/MIN/ordered/decimal/groupSize=128/numInputRows=1048576-10      12.9k ± 2%       8.2k ± 0%  -36.17%  (p=0.000 n=10+10)
Aggregator/MIN/hash/decimal/groupSize=2/numInputRows=32-10                  176 ± 2%        115 ± 1%  -34.33%  (p=0.000 n=10+9)
Aggregator/MIN/ordered/decimal/groupSize=1/numInputRows=1048576-10        12.3k ± 0%       8.2k ± 0%  -33.21%  (p=0.000 n=10+10)
Aggregator/MIN/hash/decimal/groupSize=1024/numInputRows=1048576-10        21.8k ± 0%      15.2k ± 0%  -30.13%  (p=0.000 n=10+10)
Aggregator/MIN/hash/decimal/groupSize=32/numInputRows=32-10                 118 ± 0%         84 ± 0%  -28.81%  (p=0.000 n=10+10)
Aggregator/MIN/ordered/decimal/groupSize=2/numInputRows=32-10              63.0 ± 0%       46.0 ± 0%  -26.98%  (p=0.000 n=10+10)
Aggregator/MIN/ordered/decimal/groupSize=128/numInputRows=1024-10          57.2 ±14%       46.0 ± 0%  -19.58%  (p=0.000 n=10+10)
Aggregator/MIN/ordered/decimal/groupSize=1024/numInputRows=1048576-10     9.69k ± 1%      8.23k ± 0%  -15.07%  (p=0.000 n=10+10)
Aggregator/MIN/ordered/decimal/groupSize=1024/numInputRows=32768-10         340 ± 2%        294 ± 0%  -13.43%  (p=0.000 n=10+10)
Aggregator/MIN/ordered/decimal/groupSize=1/numInputRows=1-10               48.0 ± 0%       46.0 ± 0%   -4.17%  (p=0.000 n=10+10)
Aggregator/MIN/ordered/decimal/groupSize=32/numInputRows=32-10             48.0 ± 0%       46.0 ± 0%   -4.17%  (p=0.000 n=10+10)
Aggregator/MIN/ordered/decimal/groupSize=1024/numInputRows=1024-10         48.0 ± 0%       46.0 ± 0%   -4.17%  (p=0.000 n=10+10)
Aggregator/MIN/hash/decimal/groupSize=1/numInputRows=1-10                  82.0 ± 0%       79.0 ± 0%   -3.66%  (p=0.000 n=10+10)

@nvanbenschoten nvanbenschoten requested reviews from jordanlewis, yuzefovich, adityamaru, stevendanna, and several teams (as code owners) January 8, 2022 01:02
@cockroach-teamcity (Member)

This change is Reviewable

@yuzefovich yuzefovich (Member) left a comment

Major kudos for working on this! Thanks for all of the detailed descriptions of the changes.

General thoughts:

  • should we prohibit the usage of math/big.Int and force everyone to use apd.BigInt?
  • similar to the second commit, I think we can remove execgen.OverloadHelper.TmpDecs. Curious whether you want to include the corresponding change in this PR; alternatively, leaving a TODO(yuzefovich) for it sounds good to me.

Reviewed 116 of 119 files at r1, 8 of 8 files at r3, 3 of 3 files at r4, all commit messages.
Reviewable status: :shipit: complete! 0 of 0 LGTMs obtained (waiting on @adityamaru, @jordanlewis, @nvanbenschoten, and @stevendanna)


-- commits, line 9 at r1:
nit: no release note.


go.mod, line 35 at r1 (raw file):

	github.com/cenkalti/backoff v2.2.1+incompatible
	github.com/client9/misspell v0.3.4
	github.com/cockroachdb/apd/v3 v3.0.0-20220107230737-bd134301dd9a

I've never updated the vendored stuff myself, so it'd be good if someone else took a look at that.


pkg/sql/colmem/allocator.go, line 398 at r3 (raw file):

// decimalEstimate is our guess for how much space a single apd.Decimal element
// will take up.
const decimalEstimate = 50

Let's adjust this estimate to be the size of the zero value of apd.Decimal.


pkg/sql/sem/builtins/aggregate_builtins.go, line 2007 at r3 (raw file):

	a.sxy.Set(&a.tmpSxy)

	size := int64(a.n.Sign() +

These should be Size, not Sign :)
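For reference, a sketch of the intended accounting, assuming apd/v3's (*Decimal).Size method (which reports a value's total memory footprint, including any heap-allocated coefficient, whereas Sign only returns -1, 0, or +1); only the fields visible in the snippet above are shown:

```go
size := int64(a.n.Size() +
	a.sxy.Size() +
	a.tmpSxy.Size()) // ...plus the aggregate's remaining decimal fields
```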


pkg/util/duration/duration.go, line 302 at r3 (raw file):

}

// FromBigInt converts an apb.BigInt number of nanoseconds to a duration. Inverse

nit: s/apb/apd/.

@shermanCRL shermanCRL requested a review from HonoreDB January 8, 2022 21:41
craig bot pushed a commit that referenced this pull request Jan 10, 2022
74341: sql/catalog: restore fast-path in FullIndexColumnIDs r=ajwerner a=nvanbenschoten

This commit restores a [fast-path](c9e116e#diff-19625608f4a6e23e6fe0818f3a621e716615765cb338d18fe34b43f0a535f06dL140)
in `FullIndexColumnIDs` which was lost in c9e116e. The fast-path avoided
the allocation of a `ColumnID` slice and an `IndexDescriptor_Direction`
slice in `FullIndexColumnIDs` when given a unique index. In such cases,
these slices are already stored on the `IndexDescriptor`.

```
name                   old time/op    new time/op    delta
KV/Scan/SQL/rows=1-10    94.9µs ±10%    94.9µs ± 8%    ~     (p=0.739 n=10+10)

name                   old alloc/op   new alloc/op   delta
KV/Scan/SQL/rows=1-10    20.1kB ± 0%    20.1kB ± 1%    ~     (p=0.424 n=10+10)

name                   old allocs/op  new allocs/op  delta
KV/Scan/SQL/rows=1-10       245 ± 0%       241 ± 0%  -1.63%  (p=0.000 n=10+8)
```
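The restored fast-path has roughly this shape (an illustrative sketch with stand-in types; the real descriptor code differs):

```go
package sketch

type ColumnID int32
type Direction int8

type IndexDescriptor struct {
	Unique           bool
	ColumnIDs        []ColumnID
	ColumnDirections []Direction
	ExtraColumnIDs   []ColumnID // suffix columns appended for non-unique indexes
}

func fullIndexColumnIDs(idx *IndexDescriptor) ([]ColumnID, []Direction) {
	if idx.Unique {
		// Fast-path: a unique index's key is exactly its own columns, and
		// both slices are already stored on the descriptor, so return them
		// without allocating.
		return idx.ColumnIDs, idx.ColumnDirections
	}
	// Slow path: the full key appends the extra (suffix) columns, which
	// requires building new slices.
	n := len(idx.ColumnIDs) + len(idx.ExtraColumnIDs)
	ids := make([]ColumnID, 0, n)
	dirs := make([]Direction, 0, n)
	ids = append(ids, idx.ColumnIDs...)
	ids = append(ids, idx.ExtraColumnIDs...)
	dirs = append(dirs, idx.ColumnDirections...)
	for range idx.ExtraColumnIDs {
		dirs = append(dirs, 0) // extra columns sort ascending
	}
	return ids, dirs
}
```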
----

This is part of a collection of assorted micro-optimizations:
- #74336
- #74337
- #74338
- #74339
- #74340
- #74341
- #74342
- #74343
- #74344
- #74345
- #74346
- #74347
- #74348

Combined, these changes have the following effect on end-to-end SQL query performance:
```
name                      old time/op    new time/op    delta
KV/Scan/SQL/rows=1-10       94.4µs ±10%    92.3µs ±11%   -2.20%  (p=0.000 n=93+93)
KV/Scan/SQL/rows=10-10       102µs ±10%      99µs ±10%   -2.16%  (p=0.000 n=94+94)
KV/Update/SQL/rows=10-10     378µs ±15%     370µs ±11%   -2.04%  (p=0.003 n=95+91)
KV/Insert/SQL/rows=1-10      133µs ±14%     132µs ±12%     ~     (p=0.738 n=95+93)
KV/Insert/SQL/rows=10-10     197µs ±14%     196µs ±13%     ~     (p=0.902 n=95+94)
KV/Update/SQL/rows=1-10      186µs ±14%     185µs ±14%     ~     (p=0.351 n=94+93)
KV/Delete/SQL/rows=1-10      132µs ±13%     132µs ±14%     ~     (p=0.473 n=94+94)
KV/Delete/SQL/rows=10-10     254µs ±16%     250µs ±16%     ~     (p=0.086 n=100+99)

name                      old alloc/op   new alloc/op   delta
KV/Scan/SQL/rows=1-10       20.1kB ± 0%    19.1kB ± 1%   -4.91%  (p=0.000 n=96+96)
KV/Scan/SQL/rows=10-10      21.7kB ± 0%    20.7kB ± 1%   -4.61%  (p=0.000 n=96+97)
KV/Delete/SQL/rows=10-10    64.0kB ± 3%    63.7kB ± 3%   -0.55%  (p=0.000 n=100+100)
KV/Update/SQL/rows=1-10     45.8kB ± 1%    45.5kB ± 1%   -0.55%  (p=0.000 n=97+98)
KV/Update/SQL/rows=10-10     105kB ± 1%     105kB ± 1%   -0.10%  (p=0.008 n=97+98)
KV/Delete/SQL/rows=1-10     40.8kB ± 0%    40.7kB ± 0%   -0.08%  (p=0.001 n=95+96)
KV/Insert/SQL/rows=1-10     37.4kB ± 1%    37.4kB ± 0%     ~     (p=0.698 n=97+96)
KV/Insert/SQL/rows=10-10    76.4kB ± 1%    76.4kB ± 0%     ~     (p=0.822 n=99+98)

name                      old allocs/op  new allocs/op  delta
KV/Scan/SQL/rows=1-10          245 ± 0%       217 ± 0%  -11.43%  (p=0.000 n=95+92)
KV/Scan/SQL/rows=10-10         280 ± 0%       252 ± 0%  -10.11%  (p=0.000 n=75+97)
KV/Delete/SQL/rows=10-10       478 ± 0%       459 ± 0%   -4.04%  (p=0.000 n=94+97)
KV/Delete/SQL/rows=1-10        297 ± 1%       287 ± 1%   -3.34%  (p=0.000 n=97+97)
KV/Update/SQL/rows=1-10        459 ± 0%       444 ± 0%   -3.27%  (p=0.000 n=97+97)
KV/Insert/SQL/rows=1-10        291 ± 0%       286 ± 0%   -1.72%  (p=0.000 n=82+86)
KV/Update/SQL/rows=10-10       763 ± 1%       750 ± 1%   -1.68%  (p=0.000 n=96+98)
KV/Insert/SQL/rows=10-10       489 ± 0%       484 ± 0%   -1.03%  (p=0.000 n=98+98)
```


74355: kv: protect Replica's lastToReplica and lastFromReplica fields with raftMu r=nvanbenschoten a=nvanbenschoten

This commit moves the Replica's lastToReplica and lastFromReplica from
under the `Replica.mu` mutex to the `Replica.raftMu` mutex. These are
strictly Raft-specific pieces of state, so we don't need fine-grained
locking around them. As a reward, we don't need to grab the `Replica.mu`
exclusively (or at all) when setting the fields in
`Store.withReplicaForRequest`.

The locking in `setLastReplicaDescriptors` showed up in a mutex profile
under a write-heavy workload. It was responsible for **3.44%** of mutex
wait time. Grabbing the mutex was probably also slowing down request
processing, as the exclusive lock acquisition had to wait for read locks
to be dropped.

<img width="1584" alt="Screen Shot 2021-12-30 at 9 45 08 PM" src="https://user-images.githubusercontent.com/5438456/147800455-8da74dfd-5fd3-4831-818c-7e3c65763435.png">

74592: coldata: operate on Nulls value, not reference r=yuzefovich a=nvanbenschoten

This commit changes `col.Vec.SetNulls` to accept a `Nulls` struct by value instead of by pointer. This lets us avoid a heap allocation on each call to `Nulls.Or`.

We saw this in the "after" heap profiles in #74590, which looked like:

<img width="1749" alt="Screen Shot 2022-01-07 at 7 17 32 PM" src="https://user-images.githubusercontent.com/5438456/148624263-777a6d93-4df7-40da-84a3-18d5e47ab633.png">

```
      File: cockroach
Type: alloc_objects
Time: Jan 8, 2022 at 12:17am (UTC)
Showing nodes accounting for 5943494, 100% of 5943494 total
----------------------------------------------------------+-------------
      flat  flat%   sum%        cum   cum%   calls calls% + context 	 	 
----------------------------------------------------------+-------------
                                            843873 48.47% |   github.com/cockroachdb/cockroach/pkg/sql/colexec/colexecproj.projPlusDecimalDecimalOp.Next.func1 /go/src/github.com/cockroachdb/cockroach/pkg/sql/colexec/colexecproj/proj_non_const_ops.eg.go:3938
                                            823389 47.29% |   github.com/cockroachdb/cockroach/pkg/sql/colexec/colexecproj.projPlusInt64Int64Op.Next.func1 /go/src/github.com/cockroachdb/cockroach/pkg/sql/colexec/colexecproj/proj_non_const_ops.eg.go:5732
                                             73736  4.24% |   github.com/cockroachdb/cockroach/pkg/sql/colexec/colexecproj.projPlusInt64DecimalOp.Next.func1 /go/src/github.com/cockroachdb/cockroach/pkg/sql/colexec/colexecproj/proj_non_const_ops.eg.go:5870
   1740998 29.29% 29.29%    1740998 29.29%                | github.com/cockroachdb/cockroach/pkg/col/coldata.(*Nulls).Or /go/src/github.com/cockroachdb/cockroach/pkg/col/coldata/nulls.go:350
----------------------------------------------------------+-------------
                                            819219 49.50% |   github.com/cockroachdb/cockroach/pkg/sql/colexec/colexecproj.projPlusInt64Int64Op.Next.func1 /go/src/github.com/cockroachdb/cockroach/pkg/sql/colexec/colexecproj/proj_non_const_ops.eg.go:5732
                                            704530 42.57% |   github.com/cockroachdb/cockroach/pkg/sql/colexec/colexecproj.projPlusDecimalDecimalOp.Next.func1 /go/src/github.com/cockroachdb/cockroach/pkg/sql/colexec/colexecproj/proj_non_const_ops.eg.go:3938
                                            131076  7.92% |   github.com/cockroachdb/cockroach/pkg/sql/colexec/colexecproj.projPlusInt64DecimalOp.Next.func1 /go/src/github.com/cockroachdb/cockroach/pkg/sql/colexec/colexecproj/proj_non_const_ops.eg.go:5870
   1654825 27.84% 57.14%    1654825 27.84%                | github.com/cockroachdb/cockroach/pkg/col/coldata.(*Nulls).Or /go/src/github.com/cockroachdb/cockroach/pkg/col/coldata/nulls.go:348
----------------------------------------------------------+-------------
```

This PR eliminates one of these two heap allocations.
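As a simplified illustration of the pointer-vs-value distinction at play here (stand-in types, not the real coldata API), returning a pointer forces the result to escape to the heap, while returning a value lets it live in the caller's frame:

```go
package sketch

// Nulls is a simplified stand-in for coldata.Nulls.
type Nulls struct {
	maybeHasNulls bool
	nulls         []uint64
}

// orByPointer returns a pointer, so the result escapes: the Nulls header
// must be heap-allocated on every call.
func orByPointer(a, b *Nulls) *Nulls {
	res := Nulls{maybeHasNulls: a.maybeHasNulls || b.maybeHasNulls}
	return &res
}

// orByValue copies the small header to the caller instead. Once SetNulls
// also takes a value, no per-call allocation is needed for the struct.
func orByValue(a, b Nulls) Nulls {
	return Nulls{maybeHasNulls: a.maybeHasNulls || b.maybeHasNulls}
}
```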

Co-authored-by: Nathan VanBenschoten <[email protected]>
@HonoreDB HonoreDB (Contributor) left a comment

CDC changes LGTM

@nvanbenschoten nvanbenschoten force-pushed the nvanbenschoten/bigInt3 branch 5 times, most recently from 7d50825 to c8fe6c3 on January 11, 2022 04:44
This commit picks up the following changes to `cockroachdb/apd`:
- cockroachdb/apd#103
- cockroachdb/apd#104
- cockroachdb/apd#107
- cockroachdb/apd#108
- cockroachdb/apd#109
- cockroachdb/apd#110
- cockroachdb/apd#111

Release note (performance improvement): The memory representation of
DECIMAL datums has been optimized to save space, avoid heap allocations,
and eliminate indirection. This increases the speed of DECIMAL arithmetic
and aggregation by up to 20% on large data sets.

`apd.Decimal` can now be entirely stack allocated during arithmetic, so
there's no longer any need for this.

With cockroachdb/apd#104, this does not introduce
any new heap allocations:
```
➜ (cd pkg/sql/sem/tree && goescape . | grep moved | wc -l)
     328
```
@nvanbenschoten nvanbenschoten (Member, Author) left a comment

TFTRs!

> should we prohibit the usage of math/big.Int and force everyone to use apd.BigInt?

I considered this, but there are a few edge cases where we have valid reasons to continue using big.Int. For instance, we still need it when interacting with big.Rat and with external libraries like github.com/jackc/pgtype. So for now, I think I'll leave it as an option, but we can crack down on it later.

> similar to the second commit, I think we can remove execgen.OverloadHelper.TmpDecs. Curious whether you want to include the corresponding change in this PR; alternatively, leaving a TODO(yuzefovich) for it sounds good to me.

This is a good idea. I think I'll leave it out of this PR to avoid expanding its scope any further, but I added a TODO to pkg/sql/colexec/execgen/overloads_util.go.

Reviewable status: :shipit: complete! 0 of 0 LGTMs obtained (waiting on @adityamaru, @jordanlewis, @stevendanna, and @yuzefovich)


-- commits, line 9 at r1:

Previously, yuzefovich (Yahor Yuzefovich) wrote…

nit: no release note.

Done.


pkg/sql/colmem/allocator.go, line 398 at r3 (raw file):

Previously, yuzefovich (Yahor Yuzefovich) wrote…

Let's adjust this estimate to be the size of the zero value of apd.Decimal.

Done.


pkg/sql/sem/builtins/aggregate_builtins.go, line 2007 at r3 (raw file):

Previously, yuzefovich (Yahor Yuzefovich) wrote…

These should be Size, not Sign :)

🤦‍♂️ Done.


pkg/util/duration/duration.go, line 302 at r3 (raw file):

Previously, yuzefovich (Yahor Yuzefovich) wrote…

nit: s/apb/apd/.

Done.

@nvanbenschoten (Member, Author)

I posted some benchmark results from BenchmarkAggregator in the PR description. The results generally look as expected: we see a 7-35% speedup, a 7-54% throughput increase, a 3-32% reduction in allocated heap bytes, and a 3-96% reduction in allocated heap objects.

@yuzefovich yuzefovich (Member) left a comment

Awesome! 🚀 :lgtm_strong:

Reviewed 82 of 82 files at r5, 5 of 5 files at r6, all commit messages.
Reviewable status: :shipit: complete! 1 of 0 LGTMs obtained (waiting on @adityamaru, @jordanlewis, and @stevendanna)

@nvanbenschoten (Member, Author)

bors r+

@craig craig bot (Contributor) commented Jan 11, 2022

Build succeeded:

@craig craig bot merged commit 51af1ca into cockroachdb:master Jan 11, 2022
@nvanbenschoten nvanbenschoten deleted the nvanbenschoten/bigInt3 branch January 11, 2022 19:29