Node is terminated too early when scale-down-unneeded-time is set to 10m #5952
Can you provide more logs of the workload? I am curious if there are other pieces it ran into. This is related to #5870 and #5618, which I fixed in #5890. The state being inconsistent for the deletion candidates in the unneeded nodes map results in some funky behavior. Once this change is patched in, I would be curious to see if you still observe the problem. Can you also try setting the flag max-scale-down-parallelism=2? max-scale-down-parallelism should always have the same value as max-empty-bulk-delete, so as to avoid inconsistent deletion thresholds. Read the issues I linked to learn more. |
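A minimal sketch of keeping those two flags aligned, assuming the standard Helm chart's extraArgs mapping; the release name, namespace, and values below are illustrative, not a confirmed configuration:

```sh
# Keep max-scale-down-parallelism and max-empty-bulk-delete at the same value
# so the two deletion limits cannot diverge (values are illustrative).
# Each extraArgs key is rendered as a --<key>=<value> flag on the container.
helm upgrade cluster-autoscaler autoscaler/cluster-autoscaler \
  --namespace kube-system --reuse-values \
  --set "extraArgs.max-scale-down-parallelism=2" \
  --set "extraArgs.max-empty-bulk-delete=2"
```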
/assign Bryce-Soghigian |
@Bryce-Soghigian Sorry for the late reply, somehow I missed that. Will provide some logs asap! |
@Bryce-Soghigian Attached are some logs from yesterday. We changed the config as you suggested; the issue is still there. |
@Bryce-Soghigian We monitored the issue over the last few days with the suggested change. After adding max-scale-down-parallelism=2 we are facing this issue more often. |
We are also facing this exact same issue in EKS 1.26. It started happening recently when we reduced the |
After observing the logs of several instances when this happened, I noticed that the common theme between the nodes (I looked at 3-5 available instances over the past few days) was that they had all been active for a relatively long period (~1 hour or more). I am not sure how this may be contributing to only these nodes facing the issue. I tried to look at the code (I am not so familiar with Go, but still gave it a shot) to figure out why this might be happening, and it appears the nodes are getting removed because of this, since I can see in my logs that this is getting executed just before the nodes get killed. I am unsure how/when the legacy code gets called (going by the name) and I may be totally wrong in my understanding, but I just wanted to share my observation. |
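One way to trace which scale-down path is removing a node is to raise the autoscaler's log verbosity and follow the messages for that node. A sketch, assuming the chart's extraArgs mapping; the deployment name, node name, and grep patterns are placeholders, and exact log phrases differ between versions:

```sh
# Raise klog verbosity so scale-down decisions are logged in more detail.
helm upgrade cluster-autoscaler autoscaler/cluster-autoscaler \
  --namespace kube-system --reuse-values \
  --set "extraArgs.v=4"

# Follow the decisions around the time a node disappears; <node-name> is a
# placeholder, and the deployment name depends on the Helm release.
kubectl -n kube-system logs deploy/cluster-autoscaler --since=1h \
  | grep -Ei "unneeded|scale.?down|<node-name>"
```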
@msardana94 Yes, I guess we found our solution. The problem was a bit hidden, since the logs in CloudTrail, among others, were absolutely not helpful (regarding node termination). As it turns out, our setup came into conflict with AZ rebalancing. The CA is not the evil one at this point, but it still helped us to identify the problem (by adjusting the settings and using more verbose logs). We had previously configured three availability zones per node group. Now we have changed this and split that single node group into three, configuring only one subnet per group. This way we avoid the automatic AZ rebalancing. The CA takes care of balancing the node groups (balance-similar-node-groups) at this point. |
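For anyone who would rather keep a multi-AZ node group, this can also be handled on the AWS side by suspending the ASG's AZRebalance process rather than splitting the group; a sketch with an illustrative ASG name:

```sh
# Stop EC2 Auto Scaling from terminating instances just to rebalance across AZs
# (this is the AWS-side behavior that removes nodes ahead of the CA timer).
aws autoscaling suspend-processes \
  --auto-scaling-group-name my-nodegroup-asg \
  --scaling-processes AZRebalance

# With one single-AZ node group per zone, the autoscaler can keep the groups
# balanced itself (chart extraArgs mapping assumed).
helm upgrade cluster-autoscaler autoscaler/cluster-autoscaler \
  --namespace kube-system --reuse-values \
  --set "extraArgs.balance-similar-node-groups=true"
```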
@mohanisch-sixt Thanks for sharing the solution. I just went through the gotchas mentioned in the CA readme relating to what you described. They mention some alternatives to handle this. Curious to know if you tried those out before deciding to split the node groups? |
The Kubernetes project currently lacks enough contributors to adequately respond to all issues. This bot triages un-triaged issues according to the following rules:
You can:
Please send feedback to sig-contributor-experience at kubernetes/community. /lifecycle stale |
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues. This bot triages un-triaged issues according to the following rules:
You can:
Please send feedback to sig-contributor-experience at kubernetes/community. /lifecycle rotten |
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs. This bot triages issues according to the following rules:
You can:
Please send feedback to sig-contributor-experience at kubernetes/community. /close not-planned |
@k8s-triage-robot: Closing this issue, marking it as "Not Planned". In response to this:
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository. |
Which component are you using?:
cluster-autoscaler
What version of the component are you using?:
Component version: 1.27.1 / Chart 9.29.0
What k8s version are you using (kubectl version)?:
v1.24.14-eks-c12679a
What environment is this in?:
AWS EKS
What did you expect to happen?:
The node is terminated only after it has been marked as unneeded for 10 minutes
What happened instead?:
The node is terminated earlier than expected, e.g. after 2 minutes
How to reproduce it (as minimally and precisely as possible):
Anything else we need to know?:
In our cluster we are running Jenkins with K8s agents. Sometimes we have jobs which have no resource consumption, as they are waiting for other jobs, or are just doing things with low resource consumption. We monitored this for a long time and figured out that a value of 0.06 for scale-down-utilization-threshold works well for us, as a node which has nothing to do has a utilization of about 0.053. In cases where a pod is scheduled which is "just running", we see this utilization as well, and it happens that the node gets marked as unneeded. In some cases these nodes are terminated after less than 10 minutes, although a 10-minute waiting time is configured. One example:
After the last line there is no newer information like "node terminated" or similar. It is just gone.
CA is configured as follows:
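For reference, the scale-down settings discussed in this thread map onto chart 9.29.0 roughly as below; this is a sketch assuming the standard extraArgs mapping, with placeholder cluster and region values, not the reporter's exact values file:

```sh
# Placeholder cluster name and region; the scale-down flags match the values
# described in this issue (10m unneeded time, 0.06 utilization threshold).
helm upgrade --install cluster-autoscaler autoscaler/cluster-autoscaler \
  --namespace kube-system --version 9.29.0 \
  --set "autoDiscovery.clusterName=<cluster-name>" \
  --set "awsRegion=<region>" \
  --set "extraArgs.scale-down-unneeded-time=10m" \
  --set "extraArgs.scale-down-utilization-threshold=0.06" \
  --set "extraArgs.balance-similar-node-groups=true"
```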