Change default minReadySeconds to 5 seconds #1780

Merged

Conversation

@zimnx (Collaborator) commented Feb 29, 2024

After tests on several platforms, we decided to change the default minReadySeconds of ScyllaCluster Pods from 10s to 5s.

The test consisted of spawning multiple single-node ScyllaClusters in parallel to overload kube-proxy reconciling Endpoints and iptables rules. After a ScyllaCluster reported Available=True,Progressing=False,Degraded=False, the test measured how long it took until it was possible to connect via the identity ClusterIP Service. This measures how big the discrepancy is between when we call the ScyllaCluster Available and when it is actually available.
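To illustrate the measurement, here is a minimal Go sketch (not the actual test code) that repeatedly dials the identity ClusterIP Service and records how long the first successful TCP connect takes after the cluster is reported Available; the address and port below are placeholders:

```go
// Illustrative sketch only: the Service address and port are hypothetical,
// not taken from the real test suite.
package main

import (
	"fmt"
	"net"
	"time"
)

// timeToFirstConnect keeps dialing svcAddr until a TCP connection succeeds
// or the timeout expires, and returns how long the first success took.
func timeToFirstConnect(svcAddr string, timeout time.Duration) (time.Duration, error) {
	start := time.Now()
	deadline := start.Add(timeout)
	for time.Now().Before(deadline) {
		conn, err := net.DialTimeout("tcp", svcAddr, time.Second)
		if err == nil {
			conn.Close()
			return time.Since(start), nil
		}
		time.Sleep(100 * time.Millisecond)
	}
	return 0, fmt.Errorf("could not connect to %s within %s", svcAddr, timeout)
}

func main() {
	// Placeholder ClusterIP and CQL port.
	d, err := timeToFirstConnect("10.96.12.34:9042", 30*time.Second)
	if err != nil {
		panic(err)
	}
	fmt.Printf("first successful connect after %v\n", d)
}
```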

On different platforms and setups, the results were as follows (in seconds):

  • GKE with kube-proxy iptables mode:

    0.004304-2.272  74.6%    █████▏  1067
    2.272-4.54      13.7%    █       196
    4.54-6.808      7.34%    ▌       105
    6.808-9.075     3.92%    ▎       56
    9.075-11.34     0.28%    ▏       4
    11.34-13.61     0.0699%  ▏       1
    13.61-15.88     0%       ▏
    15.88-18.15     0%       ▏
    18.15-20.41     0%       ▏
    20.41-22.68     0.0699%  ▏       1
    
  • GKE with Dataplane V2 enabled (Cilium):

    0.004604-0.08347  94.3%  █████▏  943
    0.08347-0.1623    3.7%   ▏       37
    0.1623-0.2412     1.3%   ▏       13
    0.2412-0.3201     0.1%   ▏       1
    0.3201-0.3989     0.1%   ▏       1
    0.3989-0.4778     0.2%   ▏       2
    0.4778-0.5567     0.2%   ▏       2
    0.5567-0.6355     0.1%   ▏       1
    
  • EKS with kube-proxy iptables mode:

    0.003163-0.129  95.6%  █████▏  956
    0.129-0.2549    0.9%   ▏       9
    0.2549-0.3807   0.3%   ▏       3
    0.3807-0.5066   0.8%   ▏       8
    0.5066-0.6324   1.4%   ▏       14
    0.6324-0.7583   0%     ▏
    0.7583-0.8841   0.6%   ▏       6
    0.8841-1.01     0.4%   ▏       4
    

After reproducing it locally, the root cause of the slowness in the GKE kube-proxy setup seems to be slow iptables commands. kube-proxy logs a trace when an iptables execution takes long, and the logs contained lots of such traces, sometimes taking as much as 15s. Because there is an alternative on GKE, we lower minReadySeconds to 5s to give kube-proxy enough time without delaying rollouts too much.
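For reference, minReadySeconds is a standard field on the apps/v1 StatefulSet spec. The sketch below only illustrates what the new 5s default corresponds to on a StatefulSet, assuming the operator propagates the default to the StatefulSets it manages; the object name is a placeholder, not the operator's actual manifest:

```go
// Illustrative only: shows the apps/v1 MinReadySeconds field with the new default.
package main

import (
	"fmt"

	appsv1 "k8s.io/api/apps/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	sts := appsv1.StatefulSet{
		ObjectMeta: metav1.ObjectMeta{
			// Placeholder name.
			Name: "example-scylla-rack",
		},
		Spec: appsv1.StatefulSetSpec{
			// Pods must stay Ready for at least this long before they are
			// considered Available; lowered from 10 to 5 by this PR.
			MinReadySeconds: 5,
		},
	}
	fmt.Println(sts.Spec.MinReadySeconds)
}
```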

@zimnx zimnx added kind/feature Categorizes issue or PR as related to a new feature. priority/important-soon Must be staffed and worked on either currently, or very soon, ideally in time for the next release. labels Feb 29, 2024
@scylla-operator-bot scylla-operator-bot bot added approved Indicates a PR has been approved by an approver from all required OWNERS files. size/XS Denotes a PR that changes 0-9 lines, ignoring generated files. labels Feb 29, 2024
@tnozicka (Contributor) left a comment


/approve
/lgtm

thanks for all the effort on getting the stats for this

@scylla-operator-bot scylla-operator-bot bot added the lgtm Indicates that a PR is ready to be merged. label Feb 29, 2024

[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: tnozicka, zimnx

The full list of commands accepted by this bot can be found here.

The pull request process is described here.

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@scylla-operator-bot scylla-operator-bot bot merged commit 8c713c8 into scylladb:master Feb 29, 2024
12 checks passed