
Node is terminated too early when scale-down-unneeded-time is set to 10m #5952

Closed
mohanisch-sixt opened this issue Jul 14, 2023 · 13 comments

Comments

@mohanisch-sixt

Which component are you using?:
cluster-autoscaler

What version of the component are you using?:
1.27.1 / Chart 9.29.0

Component version:

What k8s version are you using (kubectl version)?:
"v1.24.14-eks-c12679a

What environment is this in?:

AWS EKS

What did you expect to happen?:
Node is terminated only after 10 minutes of being marked as no longer needed

What happened instead?:
Node is terminated earlier than expected, e.g. after 2 minutes

How to reproduce it (as minimally and precisely as possible):

Anything else we need to know?:
In our cluster we are running Jenkins with Kubernetes agents. Some jobs have almost no resource consumption because they are waiting for other jobs or only doing light work. We monitored this for a long time and found that a value of 0.06 for scale-down-utilization-threshold works well for us, since a node with nothing to do sits at about 0.053. When a pod is scheduled that is "just running", we see the same utilization, and the node can then be marked as unneeded. In some cases these nodes are terminated after less than 10 minutes, even though a waiting time of 10 minutes is configured.
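
(For context on these numbers: CA computes per-resource utilization as the sum of pod requests on the node divided by the node's allocatable capacity. As a purely illustrative example, not taken from this cluster, about 215m of daemonset CPU requests on a node with 4000m allocatable gives 215/4000 ≈ 0.054, which is why an otherwise idle node lands just below the 0.06 threshold.)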

One example:

I0712 06:30:07.697717       1 klogx.go:87] Node ip-172-25-12-34.eu-central-1.compute.internal - cpu utilization 0.053729
I0712 06:30:07.697837       1 cluster.go:155] ip-172-25-16-25.eu-central-1.compute.internal for removal
I0712 06:31:49.738129       1 nodes.go:126] ip-172-25-12-34.eu-central-1.compute.internal was unneeded for 1m42.382742246s

After the last line there is no further information like "node terminated" or similar. The node is just gone.

CA is configured as follows:

    skip-nodes-with-local-storage: true
    skip-nodes-with-custom-controller-pods: true
    cordon-node-before-terminating: true
    scale-down-utilization-threshold: 0.06
    scan-interval: 10s
    scale-down-unneeded-time: 10m
    skip-nodes-with-system-pods: true
    max-empty-bulk-delete: 2
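
For reference, with the Helm chart (9.29.0) these options are typically passed as extraArgs in values.yaml, where each key is rendered as a --flag on the cluster-autoscaler container; a minimal sketch covering only the flags above (everything else left at chart defaults):

    # values.yaml -- illustrative sketch, not the full configuration
    extraArgs:
      scan-interval: 10s
      scale-down-unneeded-time: 10m
      scale-down-utilization-threshold: 0.06
      max-empty-bulk-delete: 2
      skip-nodes-with-local-storage: true
      skip-nodes-with-system-pods: true
      skip-nodes-with-custom-controller-pods: true
      cordon-node-before-terminating: true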
@mohanisch-sixt mohanisch-sixt added the kind/bug Categorizes issue or PR as related to a bug. label Jul 14, 2023
@Bryce-Soghigian
Member

Bryce-Soghigian commented Jul 18, 2023

Can you provide more logs of the workload? I am curious whether it ran into anything else.

This is related to #5870 and #5618, which I fixed in #5890.

The state being inconsistent for the deletion candidates in the unneeded-nodes map results in some funky behavior. Once this change is patched in, I would be curious to see whether you still observe the problem.

Can you also try setting the flag max-scale-down-parallelism=2? max-scale-down-parallelism should always have the same value as max-empty-bulk-delete, so as to avoid inconsistent deletion thresholds. Read the issues I linked for more details.
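
A minimal sketch of keeping the two in sync, assuming the flags are passed via the chart's extraArgs (values here are illustrative):

    extraArgs:
      max-empty-bulk-delete: 2
      max-scale-down-parallelism: 2   # keep equal to max-empty-bulk-delete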

@Bryce-Soghigian
Member

/assign Bryce-Soghigian

@mohanisch-sixt
Author

@Bryce-Soghigian Sorry for the late reply, somehow I missed that. Will provide some logs ASAP!

@mohanisch-sixt
Author

mohanisch-sixt commented Sep 7, 2023

@Bryce-Soghigian Attached are some logs from yesterday. We changed the config as you suggested; the issue is still there.
At the time, a Jenkins pod was scheduled on that node. The pod had a local volume. My understanding is that this node should not be removed with this configuration.

17:56:15.404637       1 nodes.go:84] ip-172-22-33-123.eu-central-1.compute.internal is unneeded since 2023-09-06 17:55:02.405360998 +0000 UTC m=+14254.392769435 duration 1m12.991859892s
17:56:15.404824       1 nodes.go:126] ip-172-22-33-123.eu-central-1.compute.internal was unneeded for 1m12.991859892s
17:56:15.404414       1 klogx.go:87] Node ip-172-22-33-123.eu-central-1.compute.internal - cpu utilization 0.053729
17:56:05.370935       1 nodes.go:126] ip-172-22-33-123.eu-central-1.compute.internal was unneeded for 1m2.748095048s
17:56:05.370037       1 klogx.go:87] Node ip-172-22-33-123.eu-central-1.compute.internal - cpu utilization 0.053729
17:56:05.370402       1 nodes.go:84] ip-172-22-33-123.eu-central-1.compute.internal is unneeded since 2023-09-06 17:55:02.405360998 +0000 UTC m=+14254.392769435 duration 1m2.748095048s
17:55:55.095995       1 nodes.go:126] ip-172-22-33-123.eu-central-1.compute.internal was unneeded for 52.681483473s
17:55:55.095487       1 nodes.go:84] ip-172-22-33-123.eu-central-1.compute.internal is unneeded since 2023-09-06 17:55:02.405360998 +0000 UTC m=+14254.392769435 duration 52.681483473s
17:55:55.095218       1 klogx.go:87] Node ip-172-22-33-123.eu-central-1.compute.internal - cpu utilization 0.053729
17:55:45.061557       1 nodes.go:84] ip-172-22-33-123.eu-central-1.compute.internal is unneeded since 2023-09-06 17:55:02.405360998 +0000 UTC m=+14254.392769435 duration 42.648950529s
17:55:45.061369       1 klogx.go:87] Node ip-172-22-33-123.eu-central-1.compute.internal - cpu utilization 0.053729
17:55:45.061701       1 nodes.go:126] ip-172-22-33-123.eu-central-1.compute.internal was unneeded for 42.648950529s
17:55:35.016801       1 nodes.go:126] ip-172-22-33-123.eu-central-1.compute.internal was unneeded for 32.599023661s
17:55:35.016554       1 nodes.go:84] ip-172-22-33-123.eu-central-1.compute.internal is unneeded since 2023-09-06 17:55:02.405360998 +0000 UTC m=+14254.392769435 duration 32.599023661s
17:55:35.015860       1 klogx.go:87] Node ip-172-22-33-123.eu-central-1.compute.internal - cpu utilization 0.053729
17:55:24.973014       1 nodes.go:126] ip-172-22-33-123.eu-central-1.compute.internal was unneeded for 21.160108208s
17:55:24.972213       1 nodes.go:84] ip-172-22-33-123.eu-central-1.compute.internal is unneeded since 2023-09-06 17:55:02.405360998 +0000 UTC m=+14254.392769435 duration 21.160108208s
17:55:24.970961       1 klogx.go:87] Node ip-172-22-33-123.eu-central-1.compute.internal - cpu utilization 0.053729
17:55:13.535285       1 nodes.go:84] ip-172-22-33-123.eu-central-1.compute.internal is unneeded since 2023-09-06 17:55:02.405360998 +0000 UTC m=+14254.392769435 duration 11.113491158s
17:55:13.534998       1 klogx.go:87] Node ip-172-22-33-123.eu-central-1.compute.internal - cpu utilization 0.053729
17:55:13.535830       1 nodes.go:126] ip-172-22-33-123.eu-central-1.compute.internal was unneeded for 11.113491158s
17:55:02.710156       1 taints.go:162] Successfully added DeletionCandidateTaint on node ip-172-22-33-123.eu-central-1.compute.internal
17:55:02.681422       1 klogx.go:87] Node ip-172-22-33-123.eu-central-1.compute.internal - cpu utilization 0.053729
17:55:02.681546       1 nodes.go:84] ip-172-22-33-123.eu-central-1.compute.internal is unneeded since 2023-09-06 17:55:02.405360998 +0000 UTC m=+14254.392769435 duration 0s
17:55:02.681767       1 nodes.go:126] ip-172-22-33-123.eu-central-1.compute.internal was unneeded for 0s
17:54:52.373020       1 eligibility.go:154] Node ip-172-22-33-123.eu-central-1.compute.internal is not suitable for removal - memory utilization too big (0.514903)
17:54:42.335416       1 eligibility.go:154] Node ip-172-22-33-123.eu-central-1.compute.internal is not suitable for removal - memory utilization too big (0.514903)
17:54:32.197836       1 eligibility.go:154] Node ip-172-22-33-123.eu-central-1.compute.internal is not suitable for removal - memory utilization too big (0.652431)
17:54:22.156547       1 eligibility.go:154] Node ip-172-22-33-123.eu-central-1.compute.internal is not suitable for removal - memory utilization too big (0.652431)

@mohanisch-sixt
Author

@Bryce-Soghigian We monitored the issue over the last few days with the suggested change. After adding max-scale-down-parallelism=2 we are facing this issue even more often.

@msardana94

msardana94 commented Sep 22, 2023

We are also facing this exact same issue in EKS 1.26. It started happening recently when we reduced the scale-down-utilization-threshold to 0.2 (and also increased the scan-interval to 60s). @mohanisch-sixt were you able to figure out a solution for this?

@msardana94

msardana94 commented Sep 22, 2023

After observing the logs of several instances where this happened, I noticed that the common theme between the nodes (I looked at 3-5 instances over the past few days) was that they had all been active for a relatively long period (~1 hour or more). I am not sure how this contributes to only these nodes being affected.

I tried to look at the code (I am not very familiar with Go, but still gave it a shot) to figure out why this might be happening, and it appears the nodes are being removed because of this, since I can see in my logs that it is executed just before the nodes get killed. I am unsure how/when the legacy code path gets called (going by the name), and I may be completely wrong in my understanding, but I wanted to share the observation.

@mohanisch-sixt
Author

@msardana94 Yes, I think we found our solution. The problem was a bit hidden, since the logs in CloudTrail, among others, were not helpful at all (regarding node termination). As it turns out, our setup came into conflict with AZ rebalancing. The CA is not the culprit here, but it still helped us identify the problem (through adjusted settings and more verbose logs).

We had previously configured three availability zones per node group. We have now split this single node group into three groups with only one subnet each; this way we avoid the automatic AZ rebalancing. The CA takes care of balancing the node groups (balance-similar-node-groups) at this point.
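
For illustration, a minimal sketch of such a per-AZ split, written here as eksctl-style config purely as an example (names, sizes and tooling are assumptions, not our actual setup):

    # one node group per AZ instead of a single group spanning three AZs
    managedNodeGroups:
      - name: jenkins-agents-1a
        availabilityZones: ["eu-central-1a"]
        minSize: 1
        maxSize: 10
      - name: jenkins-agents-1b
        availabilityZones: ["eu-central-1b"]
        minSize: 1
        maxSize: 10
      - name: jenkins-agents-1c
        availabilityZones: ["eu-central-1c"]
        minSize: 1
        maxSize: 10

The cluster-autoscaler is then started with --balance-similar-node-groups=true so it spreads capacity across the three groups itself.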

@msardana94

@mohanisch-sixt Thanks for sharing the solution. I just went through the gotchas mentioned in the CA README related to what you described. They mention some alternatives for handling this. Curious to know whether you tried those out before deciding to split the node groups?

On creation time, the ASG will have the AZRebalance process enabled, which means it will actively work to balance the number of instances between AZs, and possibly terminate instances. If your applications could be impacted from sudden termination, you can either suspend the AZRebalance feature, or use a tool for automatic draining upon ASG scale-in such as the k8s-node-drainer. The aws/aws-node-termination-handler#95 will also support this use-case in the future.
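
For anyone who prefers the suspension route instead of splitting node groups, the AZRebalance process can be suspended on the ASG directly; a minimal sketch with the AWS CLI (the ASG name is a placeholder):

    aws autoscaling suspend-processes \
      --auto-scaling-group-name <your-node-group-asg> \
      --scaling-processes AZRebalance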

@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle stale
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Jan 29, 2024
@k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle rotten
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

@k8s-ci-robot k8s-ci-robot added lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels Feb 28, 2024
@towca towca added the area/core-autoscaler Denotes an issue that is related to the core autoscaler and is not specific to any provider. label Mar 21, 2024
@k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Reopen this issue with /reopen
  • Mark this issue as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close not-planned

@k8s-ci-robot k8s-ci-robot closed this as not planned (won't fix, can't repro, duplicate, stale) Apr 20, 2024
@k8s-ci-robot
Contributor

@k8s-triage-robot: Closing this issue, marking it as "Not Planned".

In response to this:

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Reopen this issue with /reopen
  • Mark this issue as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close not-planned

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
