OOM keda operator and metricServer #4687
Comments
Could you share the logs, please?
Of course.
I see that you are registering 106 custom CAs, is that correct?
How can I check this CA thing? I don't see that I'm registering 106 CAs. These are the deployments:
oh f**k, |
Could you share the ScaledObject that you are deploying?
Of course.
I see you are using 2.8.1; could you please update the version? I recall some critical issues were fixed since then.
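For reference, a minimal sketch of what the upgrade could look like if KEDA was installed from the kedacore Helm chart; the release name `keda`, the namespace `keda`, and the target chart version are assumptions, so adjust them to your install:

```bash
# Sketch only: assumes KEDA was installed via the kedacore Helm chart
# with release name "keda" in namespace "keda" (adjust to your setup).
helm repo add kedacore https://kedacore.github.io/charts
helm repo update
# <newer-chart-version> is a placeholder; pick any release newer than 2.8.1
helm upgrade keda kedacore/keda --namespace keda --version <newer-chart-version>
```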
Hey, I can't try this right now, but I have a memory profile.
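In case it helps anyone else capture the same data, here is a rough sketch of grabbing a heap profile from the operator. It assumes the keda-operator exposes a Go pprof endpoint (profiling may need to be enabled explicitly depending on your KEDA version) and `<pprof-port>` is a placeholder for whatever port it listens on:

```bash
# Sketch: port-forward to the operator's (assumed) pprof endpoint...
kubectl -n keda port-forward deploy/keda-operator <pprof-port>:<pprof-port>
# ...then, in another shell, capture and inspect a heap profile.
go tool pprof http://localhost:<pprof-port>/debug/pprof/heap
```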
Found out what the problem was. When I looked at the differences between the requests heading to the Kubernetes API server, I could see what version 2.8.1 queries. I just added the explanation here because I thought it would be useful to other people as well.
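For anyone who wants to do the same comparison, one hedged way to see which resources KEDA is listing/watching is to filter the API server audit log by the client's user agent. The audit log path and the "keda" user-agent match below are assumptions and will differ per cluster (on EKS the audit log goes to CloudWatch rather than a local file):

```bash
# Sketch: list audit events whose userAgent mentions "keda" to see which
# API resources the operator requests. Path and match string are assumptions.
jq -c 'select(.userAgent != null and (.userAgent | test("keda")))
       | {verb, uri: .requestURI, userAgent}' /var/log/kubernetes/audit.log
```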
@yuvalweber Thanks, appreciate that!
Report
For some reason, after deploying only one ScaledObject in my cluster (a very large cluster), the keda-operator started crashing due to OOM (before that it was using only 20Mi).
I am using the default KEDA spec, which means a memory request of 100Mi and a limit of 1000Mi.
Because of the OOM I raised the pod's limit to 2Gi (roughly as sketched below), and now it survives at around 600Mi of memory.
After that, the metrics server started crashing due to OOM as well; once I changed its configuration too, it managed to work, but it also jumped to a similar amount of memory.
My question is: how can I investigate what is causing this memory burst? With debug logs I can't see anything that seems related.
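As a stopgap while investigating, this is roughly what I did; a hedged sketch, assuming the chart-default deployment name `keda-operator`, namespace `keda`, and label `app=keda-operator` (these may differ in your install):

```bash
# Sketch: bump the keda-operator memory limit, mirroring the 2Gi value
# described above; names/paths are the chart defaults and may need adjusting.
kubectl -n keda patch deployment keda-operator --type=json -p='[
  {"op": "replace", "path": "/spec/template/spec/containers/0/resources/limits/memory", "value": "2Gi"}
]'
# Observe actual consumption over time (requires metrics-server):
kubectl -n keda top pod -l app=keda-operator --containers
```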
Expected Behavior
Memory consumption shouldn't jump to 30 times its previous level because of a single ScaledObject.
Actual Behavior
Memory usage jumps to a large amount.
Steps to Reproduce the Problem
Logs from KEDA operator
KEDA Version
2.8.1
Kubernetes Version
1.23
Platform
Amazon Web Services
Scaler Details
Prometheus
Anything else?
No response