Common Horizontal Pod Autoscaler events in AKS cluster #1137
Comments
Could this be caused by this issue with slow responses from the API (kubernetes/kubernetes#75752)? I've seen metrics become unavailable through `kubectl top nodes` with, e.g., 10 containers on a Windows node.
Hmmm, that issue is definitely suspicious-looking, but it is hard for me to say whether it would actually cause this. I do see `kubectl top nodes` return "unknown" on my Windows node pool currently.
Hi, have you set resource limits in the deployment YAML definition?
@cpunella Yes, I do have both CPU and RAM limits in all of my deployments. Does that somehow affect the HPA or access to metrics data?
It should be the metrics API issue. You can check the metrics-server pod logs; you will find something like:
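A minimal way to pull up those logs, assuming metrics-server runs in the `kube-system` namespace with the standard `k8s-app=metrics-server` label (as it does on AKS by default):

```shell
# Locate the metrics-server pod(s) in kube-system
kubectl get pods -n kube-system -l k8s-app=metrics-server

# Tail the logs and look for scraping/unable-to-fetch errors
kubectl logs -n kube-system -l k8s-app=metrics-server --tail=100
```

Errors about failing to scrape particular nodes (especially Windows nodes) would support the metrics-API theory above.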
Indeed @zhiweiv, I am seeing those events mixed in with a ton of other log noise. Thanks for the steps to confirm.
@brobichaud Yes, I found that if you don't set limits in the deployment, metrics are not collected...
But I do have limits on all of my deployments, yet metrics seem hit-or-miss...
Hi @brobichaud, use `apiVersion: autoscaling/v1` for the HPA, and make sure that in your deployment file you have allocated resources for your pods.
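As a sketch of that advice, an `autoscaling/v1` HPA paired with a deployment whose containers declare resources might look like this (names, replica counts, and thresholds are illustrative, not taken from the reporter's setup):

```yaml
apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: my-app-hpa          # illustrative name
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: my-app            # illustrative target deployment
  minReplicas: 2
  maxReplicas: 10
  targetCPUUtilizationPercentage: 60
```

In the target Deployment's pod spec, each container should declare `resources.requests` (the HPA computes CPU utilization as a percentage of the requested amount), for example:

```yaml
resources:
  requests:
    cpu: 250m
    memory: 256Mi
  limits:
    cpu: 500m
    memory: 512Mi
```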
Does anyone know if it is accurate to say that this should be addressed by this change: kubernetes/kubernetes#74991?
That's my understanding; @marosset to confirm.
I think so. |
If you do end up getting this into earlier k8s releases, could you update this issue?
This got fixed in kubernetes/kubernetes#87730, but that introduced another issue, kubernetes/kubernetes#90554, which is now also fixed and deployed in AKS from 1.16.9+, 1.17.5+, and 1.18.1+.
What happened:
I am frequently seeing warning events regarding metrics and the horizontal pod autoscaler as seen below:
It's unclear to me whether this is truly an AKS setup issue or really a k8s or HPA issue, so I'm starting here. I didn't see these when I had a raw AKS-Engine-based cluster not long ago.
What you expected to happen:
For the HPA to successfully acquire metrics on every call.
How to reproduce it (as minimally and precisely as possible):
Set up a simple AKS cluster with a Windows node pool, deploy some pods, and set up simple CPU-based autoscale rules, such as:
Monitor system events and you should see events similar to those referenced above.
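The repro steps above can be sketched with imperative kubectl commands (deployment name and thresholds are hypothetical, not the reporter's actual rules):

```shell
# Create a simple CPU-based autoscale rule for an illustrative deployment
kubectl autoscale deployment my-app --cpu-percent=60 --min=2 --max=10

# Watch for the HPA/metrics warning events described in this issue
kubectl get events --watch --field-selector type=Warning
```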
Anything else we need to know?:
I was not seeing these with a raw AKS-Engine-based cluster of similar configuration.
Environment:
Kubernetes version (use `kubectl version`): v1.14.3