flowinfra: make max_running_flows default depend on the number of CPUs #71787

Merged 1 commit on Oct 21, 2021

Conversation

@yuzefovich (Member) commented Oct 20, 2021

We think it makes sense to scale the default value for
max_running_flows based on how beefy the machine is, so we make it a
multiple of the number of available CPU cores. We do so in
a backwards-compatible fashion by treating positive values of
sql.distsql.max_running_flows as absolute values (the previous
meaning) and negative values as multiples of the number of CPUs.

The choice of 128 as the default multiple is driven by the old default
value of 500: with 4 CPUs we get 512, which is pretty close to the old
default.

Informs: #34229.

Release note (ops change): The meaning of the
sql.distsql.max_running_flows cluster setting has been extended so
that when the value is negative, it is multiplied by the number of
CPUs on the node to get the maximum number of concurrent remote flows on
the node. The default value is -128, meaning that a 4 CPU machine allows
up to 512 concurrent remote DistSQL flows while an 8 CPU machine allows
up to 1024. The previous default was 500.

@yuzefovich requested review from RaduBerinde and a team on October 20, 2021 at 22:13

@yuzefovich (Member, Author) commented:

I haven't run any benchmarks for the choice of 128 as the multiple; I based it on a recent stress test I did. There, I used m5.4xlarge instances (which have 16 vCPUs) and bumped max_running_flows to 2k (and more), at which point the CPU was saturated. I think we don't recommend running on machines smaller than 4 vCPUs, so that's also the baseline for keeping roughly the same default as before.

@michae2 (Collaborator) left a comment:

If we're changing the default to be a multiple of cores, it might be simpler semantically to change the meaning of the variable to be per-core, and then make the default a constant 128.

@RaduBerinde (Member) left a comment:

Yeah, it would be nice to make the 128 itself configurable.

What I did elsewhere (#70328) was keep the "absolute" meaning for positive values and add a "per-CPU" meaning for negative values (e.g. 500 means limit=500, -100 means 100*GOMAXPROCS). It's a bit hacky but handles previously changed values well.

@michae2 (Collaborator) left a comment:

> What I did elsewhere (#70328) was keep the "absolute" meaning for positive values and add a "per-CPU" meaning for negative values (e.g. 500 means limit=500, -100 means 100*GOMAXPROCS).

🤯 Nice.

@yuzefovich (Member, Author) left a comment:

Oh, these are interesting ideas; I implemented both suggestions.

@michae2 (Collaborator) left a comment:

:lgtm:

@yuzefovich (Member, Author) commented:

TFTRs!

bors r+

craig bot (Contributor) commented Oct 21, 2021:

Build succeeded:

@yuzefovich (Member, Author) commented:

blathers backport 21.2

blathers-crl bot commented Jan 25, 2022:

Encountered an error creating backports. Some common things that can go wrong:

  1. The backport branch might have already existed.
  2. There was a merge conflict.
  3. The backport branch contained merge commits.

You might need to create your backport manually using the backport tool.


error creating merge commit from 53273ea to blathers/backport-release-21.2-71787: POST https://api.github.com/repos/cockroachdb/cockroach/merges: 409 Merge conflict []

you may need to manually resolve merge conflicts with the backport tool.

Backport to branch 21.2 failed. See errors above.


🦉 Hoot! I am a Blathers, a bot for CockroachDB. My owner is otan.
