[Improvement] Improve performance with large number of secrets/configmaps. Strain on kube-apiserver #108

Closed
jwierzbo opened this issue Oct 27, 2020 · 8 comments

@jwierzbo

On our test cluster we have 2,500 Secret and 4,000 ConfigMap objects.
Once kubernetes-reflector has been installed and started, kube-apiserver needs many times more resources than before:
(screenshot: kube-apiserver resource usage)

In the logs of kube-apiserver I've observed a huge number of requests coming from kubernetes-reflector.
kubernetes-reflector itself also required quite a lot of resources (2 GB of memory).

Is there any way to reduce this load on kube-apiserver (e.g. define a list of monitored namespaces, extend the monitoring interval, etc.)?
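
To illustrate the kind of scoping I mean, here is a minimal client-go sketch (not reflector's actual code): a shared informer restricted to a single namespace and an opt-in label selector, so the apiserver only streams the matching Secrets instead of serving cluster-wide lists. The namespace `apps` and the label `reflect=true` are hypothetical.

```go
package main

import (
	"fmt"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/informers"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/cache"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Load kubeconfig from the default location (~/.kube/config).
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(config)

	// Watch only Secrets in the "apps" namespace that carry an opt-in label,
	// instead of listing/watching every Secret in the cluster.
	factory := informers.NewSharedInformerFactoryWithOptions(
		client,
		10*time.Minute, // resync interval; between resyncs the informer is event-driven
		informers.WithNamespace("apps"),
		informers.WithTweakListOptions(func(opts *metav1.ListOptions) {
			opts.LabelSelector = "reflect=true"
		}),
	)

	secrets := factory.Core().V1().Secrets().Informer()
	secrets.AddEventHandler(cache.ResourceEventHandlerFuncs{
		AddFunc:    func(obj interface{}) { fmt.Println("secret added") },
		UpdateFunc: func(oldObj, newObj interface{}) { fmt.Println("secret updated") },
	})

	stop := make(chan struct{})
	factory.Start(stop)
	factory.WaitForCacheSync(stop)
	<-stop // block; a real controller would tie this to a shutdown signal
}
```

With a scoped watch like this, steady-state traffic is only the event stream plus a periodic resync, rather than repeated full list requests.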

@winromulus self-assigned this Oct 27, 2020
@winromulus added the enhancement label Oct 27, 2020
@winromulus
Contributor

@jwierzbo, what version of reflector are you using? Normally reflector should be limited to 256 MB max (in the default chart).
As for limiting reflector's scope, there are no options right now, but I can put that on my list for this or next weekend.
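
If you want to double-check what limit your install actually has, here is a minimal sketch that prints the container resource limits of the reflector Deployment. The namespace "kube-system" and Deployment name "reflector" are assumptions; adjust them to match your install.

```go
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Load kubeconfig from the default location (~/.kube/config).
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(config)

	// Namespace and Deployment name are assumptions; change them to match your install.
	deploy, err := client.AppsV1().Deployments("kube-system").
		Get(context.TODO(), "reflector", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}

	// Print the resource requests/limits of every container in the pod template.
	for _, c := range deploy.Spec.Template.Spec.Containers {
		fmt.Printf("container %q: limits=%v requests=%v\n",
			c.Name, c.Resources.Limits, c.Resources.Requests)
	}
}
```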

@winromulus changed the title from "kube-apiserver performance issue" to "[Improvement] Improve performance with large number of secrets/configmaps. Strain on kube-apiserver" Oct 27, 2020
@jwierzbo
Author

I'm using the newest version, 5.4.17.

I've also tested it on a different, smaller cluster with 400 Secrets and 100 ConfigMaps, and in that case 1 GB of memory was sufficient.

@stale

stale bot commented Nov 5, 2020

Automatically marked as stale due to no recent activity. It will be closed if no further activity occurs. Thank you for your contributions.

stale bot added the stale label Nov 5, 2020
@stale

stale bot commented Nov 12, 2020

Automatically closed stale item.

stale bot closed this as completed Nov 12, 2020
@k-ayache

k-ayache commented Dec 3, 2020

Hi,
I have exactly the same issue with the same version, 5.4.17. Can someone please give more details about this?

@stv0g

stv0g commented Dec 20, 2020

Me too

@cwrau

cwrau commented Mar 10, 2021

We're having the same problem: the API server is using more than 2 cores and more than 200 MiB/s of traffic.

Without reflector, it uses 0.5 cores and 25 MiB/s.

Our cluster is maybe medium-sized.

@mshade

mshade commented Aug 20, 2021

Us too - etcd and the k8s apiserver receive a prohibitive amount of traffic caused by reflector.

winromulus added a commit that referenced this issue Oct 16, 2021
- New multi-arch pipeline with proper tagging convention
- Removed cert-manager extension (deprecated due to new support from cert-manager) Fixes: #191
- Fixed healthchecks. Fixes: #208
- Removed Slack support links (GitHub issues only). Fixes: #199
- Simplified startup and improved performance. Fixes: #194
- Huge improvements in performance and stability. Fixes: #187 #182 #166 #150 #138 #121 #108