What happened:
I tried using HorizontalRunnerAutoscaler with multiple RunnerDeployments existing in the same namespace, and found that the autoscaling controller does not work properly.
1. Install actions-runner-controller following the getting started guide
2. Create 2 RunnerDeployments, named example-runner-deployment and example-runner-deployment-2, with 2 replicas each
3. Create a HorizontalRunnerAutoscaler with maxReplicas: 4, scaleUpThreshold: '0.6', and scaleDownFactor: '1' for example-runner-deployment (a rough sketch of these manifests follows the list)
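A minimal sketch of the manifests, roughly along the lines of what I applied (the repository name is a placeholder, and the structure of the metrics block is my approximation of the schema rather than an exact copy):

```yaml
apiVersion: actions.summerwind.dev/v1alpha1
kind: RunnerDeployment
metadata:
  name: example-runner-deployment
spec:
  replicas: 2
  template:
    spec:
      repository: your-org/your-repo   # placeholder
---
apiVersion: actions.summerwind.dev/v1alpha1
kind: RunnerDeployment
metadata:
  name: example-runner-deployment-2
spec:
  replicas: 2
  template:
    spec:
      repository: your-org/your-repo   # placeholder
---
apiVersion: actions.summerwind.dev/v1alpha1
kind: HorizontalRunnerAutoscaler
metadata:
  name: example-runner-deployment-autoscaler
spec:
  scaleTargetRef:
    name: example-runner-deployment
  maxReplicas: 4
  # The metrics block is approximate; only maxReplicas, scaleUpThreshold and
  # scaleDownFactor are the exact values I used.
  metrics:
  - type: PercentageRunnersBusy
    scaleUpThreshold: '0.6'
    scaleDownFactor: '1'
```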
Then, a reconciliation was triggered right after the HorizontalRunnerAutoscaler was created, and the autoscaling controller created 2 new Runners under example-runner-deployment, even though there was no Runner with the Busy status.
So there were 4 Runners for example-runner-deployment and 2 Runners for example-runner-deployment-2 in the end.
What I expected to happen:
No new Runners are created for example-runner-deployment, since there is no Runner with the Busy status.
I read this part of the code, and I'm guessing that the autoscaling controller calculates the desired replicas for the scaleTargetRef including all the Runners existing in the same namespace.
In my case, the desired replicas are calculated as example-runner-deployment + example-runner-deployment-2 = 2 + 2 = 4, since the controller log says that computed_replicas_desired is 4.
The controller should scale out Runners by filtering the targets with labels, but it currently does not seem to manage Runners using label selectors the way a Kubernetes ReplicaSet does.
IMHO, we should first fix the replicaset controller to manage Runners with label selectors, and then fix the autoscaling controller to filter Runners by labels.
What do you think about it? If you think it's reasonable, I'm willing to implement the fix as described above.
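To make the idea concrete, here is a minimal sketch of the direction I have in mind, borrowing ReplicaSet-style selector semantics; the runner-deployment-name label key is hypothetical and not part of the current API:

```yaml
# Hypothetical sketch: each Runner created for a RunnerDeployment would carry
# a label identifying its owner (the label key is made up for illustration).
apiVersion: actions.summerwind.dev/v1alpha1
kind: Runner
metadata:
  name: example-runner-deployment-xxxxx                 # generated name
  labels:
    runner-deployment-name: example-runner-deployment   # hypothetical label
spec:
  repository: your-org/your-repo                        # placeholder
```

The autoscaling controller would then count only the Runners matching the target deployment's selector when computing computed_replicas_desired, instead of every Runner in the namespace, similar to how a ReplicaSet selects its Pods via spec.selector.matchLabels.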
Thank you!
Environment:
OS: Ubuntu 20.04
k8s cluster: v1.19.7
k8s bootstrap: kind v0.10.0
actions-runner-controller: v0.16.1
actions-runner: v2.276.1
Manifests: