
Add default kube-api-qps, burst, and worker-threads values in CSI driver #826

Merged (1 commit) on Oct 17, 2024

Conversation

outscale-hmi (Contributor)

Is this a bug fix or a new feature?
For context: https://docs.google.com/document/d/1wyq_9-EFsr7U90JMYXOHxoJlChwWXJqWsah3ctmuDDo/edit?tab=t.0

Added values:
--kube-api-qps:

  • Definition: This flag throttles the rate at which the CSI driver interacts with the Kubernetes API. If the provisioner makes too many API requests, it can overwhelm the API server, leading to performance degradation.
  • Default Value: Kubernetes components usually default to a low QPS value (e.g., 5 or 10), but a higher value is often set for high-performance workloads.
    => Setting --kube-api-qps=20 means the CSI driver can send up to 20 requests per second to the API server.

--kube-api-burst:

  • Definition: This flag controls how many API requests can be sent in a short burst, above the sustained QPS rate.
  • Usage: This is useful when there is a sudden need for a higher rate of API requests, such as during volume provisioning spikes. Once the burst allowance is exhausted, requests are throttled back to the QPS rate.
  • Default Value: The default burst rate is usually higher than the QPS rate (e.g., a burst of 10 with a QPS of 5), which lets the system absorb short spikes without overwhelming the Kubernetes API.
    => Setting --kube-api-burst=100 allows the CSI provisioner to send up to 100 requests in a burst when necessary, which helps manage high-demand periods.
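The interaction between QPS and burst can be illustrated with a token-bucket model, which is what Kubernetes client rate limiters generally follow. The sketch below is purely illustrative (the real client-go limiter is Go code and more sophisticated): the bucket starts with `burst` tokens and refills at `qps` tokens per second, so a spike drains the burst allowance first and is then throttled to the QPS rate.

```python
import time

class TokenBucket:
    """Illustrative token-bucket limiter: `qps` tokens refill per second,
    capped at `burst` tokens. Not the actual client-go implementation."""

    def __init__(self, qps, burst):
        self.qps = qps
        self.burst = burst
        self.tokens = float(burst)       # start with a full burst allowance
        self.last = time.monotonic()

    def allow(self):
        now = time.monotonic()
        # Refill tokens earned since the last call, never exceeding `burst`.
        self.tokens = min(self.burst, self.tokens + (now - self.last) * self.qps)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

bucket = TokenBucket(qps=20, burst=100)
# A sudden spike of 150 requests: roughly the first 100 (the burst) pass
# immediately; the rest are throttled to ~20 per second.
allowed = sum(bucket.allow() for _ in range(150))
```

Tuning these two flags together is what matters: burst sets how large a spike passes unthrottled, QPS sets the sustained rate afterwards.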

--worker-threads:

  • Definition: This flag sets the number of concurrent worker threads that the CSI provisioner uses to handle volume provisioning tasks.
  • Usage: More worker threads mean that the CSI provisioner can handle multiple requests in parallel, such as creating volumes or managing snapshots. This can improve overall throughput and responsiveness, particularly in clusters with high storage activity.
  • Default Value: Provisioners often default to a smaller number of worker threads (e.g., 10). In high-performance environments, increasing the number of threads helps balance the workload better.
    => Setting --worker-threads=100 allows the CSI provisioner to handle up to 100 concurrent volume provisioning tasks, which can improve efficiency in larger clusters.
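The effect of a `--worker-threads`-style knob can be sketched as a bounded worker pool. This is an illustrative Python sketch only (the provisioner itself is Go, and `provision_volume` is a hypothetical stand-in for a provisioning task): the pool size bounds how many tasks run concurrently, regardless of how many are queued.

```python
from concurrent.futures import ThreadPoolExecutor

def provision_volume(name):
    # Hypothetical stand-in for one volume-provisioning task.
    return f"provisioned {name}"

WORKER_THREADS = 100  # analogous to --worker-threads=100
requests = [f"pvc-{i}" for i in range(250)]

# At most WORKER_THREADS tasks run in parallel; the remaining
# requests wait in the queue until a worker frees up.
with ThreadPoolExecutor(max_workers=WORKER_THREADS) as pool:
    results = list(pool.map(provision_volume, requests))
```

More workers improve throughput under load, but each in-flight task also consumes API-request budget, which is why this flag is tuned alongside QPS and burst.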

=> These settings optimize the interaction between the CSI driver and the Kubernetes API server. The ideal values depend on your cluster size, node count, and storage usage patterns: lower values may be sufficient for small clusters, while larger clusters may need higher ones.
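Since the template (per the diff below) only injects its defaults when none of the three flags appear in `.Values.sidecars.provisionerImage.additionalArgs`, overriding them from `values.yaml` might look like the following sketch (the flag values are illustrative, not recommendations):

```yaml
# values.yaml (sketch): any of these flags being present suppresses
# the chart's defaults for the provisioner sidecar.
sidecars:
  provisionerImage:
    additionalArgs:
      - --kube-api-qps=50
      - --kube-api-burst=200
      - --worker-threads=150
```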

@@ -142,6 +142,11 @@ spec:
{{- end }}
- --csi-address=$(ADDRESS)
- --v={{ .Values.verbosity }}
{{- if not (regexMatch "(-kube-api-qps)|(-kube-api-burst)|(-worker-threads)" (join " " .Values.sidecars.provisionerImage.additionalArgs)) }}
outscale-rce commented on Oct 16, 2024

Why use regexMatch and not contains?

outscale-hmi (Contributor, Author)

Using regexMatch here is more efficient and cleaner because it matches all three flags in a single, compact expression, whereas contains would require a separate check per flag.
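For comparison, a `contains`-based equivalent could be sketched as follows (Helm/Sprig syntax, same values path as the diff; the default flag values shown are the examples from the description, not necessarily the chart's actual defaults):

```yaml
{{- /* Single regexMatch check, as in this PR: */}}
{{- if not (regexMatch "(-kube-api-qps)|(-kube-api-burst)|(-worker-threads)" (join " " .Values.sidecars.provisionerImage.additionalArgs)) }}
- --kube-api-qps=20
- --kube-api-burst=100
- --worker-threads=100
{{- end }}

{{- /* contains-based alternative: one check per flag. */}}
{{- $args := join " " .Values.sidecars.provisionerImage.additionalArgs }}
{{- if not (or (contains "-kube-api-qps" $args) (contains "-kube-api-burst" $args) (contains "-worker-threads" $args)) }}
- --kube-api-qps=20
- --kube-api-burst=100
- --worker-threads=100
{{- end }}
```

Both guard the same condition; the regex version keeps it on one line at the cost of regex syntax.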

outscale-hmi merged commit d3b3525 into master on Oct 17, 2024
2 of 3 checks passed
jfbus deleted the newParatmeter branch on January 29, 2025