Add automaxprocs. #4301
Conversation
This will automate the setting of `GOMAXPROCS` in Kubernetes/Docker environments where it is not already set as part of the deployment.

**What changed?**

The `automaxprocs` library was added to set `GOMAXPROCS` to match the CPU limits set on a container. This is a no-op if the `GOMAXPROCS` environment variable is already set, or if the process is not running in a container.

**Why?**

Setting `GOMAXPROCS` to match resource limits (rather than the total core count of the node) allows Go to use the available cores more efficiently and reduces CPU throttling, eliminating it entirely if limits are set to an integer number of cores. This issue was highlighted during benchmarking, but it probably affects a large number of real-world Kubernetes deployments where CPU limits are set but `GOMAXPROCS` is not.

Example of correcting CPU throttling. Note: this was not done using this PR, as Docker builds are not automatically published for PRs; it was done by manually setting `GOMAXPROCS`.

**How did you test it?**

**Potential risks**

**Is hotfix candidate?**

No.
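For context, a minimal sketch of how this kind of integration typically looks, assuming the library's documented blank-import usage (the actual file and wiring in this PR may differ; `main.go` placement is an assumption):

```go
package main

import (
	// Blank import for its side effect: the package's init() reads the
	// container's cgroup CPU quota and sets GOMAXPROCS to match. It is a
	// no-op if the GOMAXPROCS environment variable is already set or if
	// the process is not running in a container.
	_ "go.uber.org/automaxprocs"
)

func main() {
	// ... start the server as usual; GOMAXPROCS has already been
	// adjusted, because init() runs before main().
}
```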
> This issue was highlighted during benchmarking
Do you have any benchmark results to compare before and after this change? I see the throttling disappear in the screenshot, but it looks like this change increased average latency in the example the library has in its docs. It'd be good to know how it would affect our latency profile.
I don't have the cluster now, but I can recreate it. Which metrics would you like compared before/after?
I think task processing p50 and p99 are good enough.
The setup is an 8-core node, with the CPU limit for history pods set to 1.
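To make the before/after concrete, a small sketch using the library's explicit `maxprocs.Set` entry point; the printed values assume this same setup (8-core node, CPU limit of 1):

```go
package main

import (
	"fmt"
	"runtime"

	"go.uber.org/automaxprocs/maxprocs"
)

func main() {
	// On an 8-core node, Go defaults GOMAXPROCS to the node's core count
	// (8), regardless of the container's CPU limit.
	fmt.Println("GOMAXPROCS before:", runtime.GOMAXPROCS(0))

	// maxprocs.Set reads the container's cgroup CPU quota and sets
	// GOMAXPROCS to match. It returns an undo function and an error.
	undo, err := maxprocs.Set(maxprocs.Logger(func(format string, args ...interface{}) {
		fmt.Printf(format+"\n", args...)
	}))
	if err != nil {
		fmt.Println("failed to set GOMAXPROCS:", err)
	}
	defer undo()

	// With the CPU limit set to 1, this should now report 1.
	fmt.Println("GOMAXPROCS after:", runtime.GOMAXPROCS(0))
}
```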