Change default minReadySeconds to 5 seconds #1780
Conversation
After tests on several platforms, we decided to change the default minReadySeconds of ScyllaCluster Pods from 10s to 5s.

The test consisted of spawning multiple single-node ScyllaClusters in parallel to overload kube-proxy reconciling Endpoints and iptables rules. After a ScyllaCluster reported Available=True,Progressing=False,Degraded=False, the test measured how long it took until it was possible to connect via the identity ClusterIP Service. This measures how big the discrepancy is between when we declare a ScyllaCluster Available and when it is actually available.

On different platforms and setups, the results were as follows (in seconds):

* GKE with kube-proxy iptables mode:
  ```
  0.004304-2.272   74.6%    █████▏  1067
  2.272-4.54       13.7%    █       196
  4.54-6.808       7.34%    ▌       105
  6.808-9.075      3.92%    ▎       56
  9.075-11.34      0.28%    ▏       4
  11.34-13.61      0.0699%  ▏       1
  13.61-15.88      0%       ▏
  15.88-18.15      0%       ▏
  18.15-20.41      0%       ▏
  20.41-22.68      0.0699%  ▏       1
  ```
* GKE with Dataplane V2 enabled (Cilium):
  ```
  0.004604-0.08347  94.3%  █████▏  943
  0.08347-0.1623    3.7%   ▏       37
  0.1623-0.2412     1.3%   ▏       13
  0.2412-0.3201     0.1%   ▏       1
  0.3201-0.3989     0.1%   ▏       1
  0.3989-0.4778     0.2%   ▏       2
  0.4778-0.5567     0.2%   ▏       2
  0.5567-0.6355     0.1%   ▏       1
  ```
* EKS with kube-proxy iptables mode:
  ```
  0.003163-0.129  95.6%  █████▏  956
  0.129-0.2549    0.9%   ▏       9
  0.2549-0.3807   0.3%   ▏       3
  0.3807-0.5066   0.8%   ▏       8
  0.5066-0.6324   1.4%   ▏       14
  0.6324-0.7583   0%     ▏
  0.7583-0.8841   0.6%   ▏       6
  0.8841-1.01     0.4%   ▏       4
  ```

After reproducing it locally, the root cause of the slowness in the GKE kube-proxy setup appears to be slow iptables commands. kube-proxy logs a trace when iptables execution takes long, and the logs contained many such traces, sometimes taking as long as 15s. Because there's an alternative available on GKE, we lower minReadySeconds to 5s to give kube-proxy enough time while not delaying rollouts too much.
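For context, minReadySeconds on a StatefulSet tells the controller to count a Pod as available only after it has been Ready for that many seconds, which is the buffer that gives kube-proxy time to catch up on Endpoints and iptables rules. Below is a minimal sketch of how a 5-second default could be applied to the StatefulSet generated for ScyllaCluster Pods; the constant and helper names are hypothetical and not taken from the operator's actual code:

```go
package scyllaclusterexample

import (
	appsv1 "k8s.io/api/apps/v1"
)

// defaultMinReadySeconds reflects the new default discussed in this PR (5s, down from 10s).
// The constant and the helper below are illustrative only.
const defaultMinReadySeconds int32 = 5

// applyMinReadySeconds sets the default on a generated StatefulSet, keeping any
// user-provided override (hypothetical parameter) if one is supplied.
func applyMinReadySeconds(sts *appsv1.StatefulSet, override *int32) {
	sts.Spec.MinReadySeconds = defaultMinReadySeconds
	if override != nil {
		sts.Spec.MinReadySeconds = *override
	}
}
```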
be16c80 to 2b5e8f4
/approve
/lgtm
thanks for all the effort on getting the stats for this
[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: tnozicka, zimnx

The full list of commands accepted by this bot can be found here. The pull request process is described here.

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing `/approve` in a comment.