I am testing autoscaler version 1.0.0, as I see it was recently released, and I am experiencing the following behavior:
![image](https://user-images.githubusercontent.com/31182104/87234870-e2e3f380-c3dd-11ea-93a8-7733e8718ca9.png)
![image](https://user-images.githubusercontent.com/31182104/87234988-67834180-c3df-11ea-8b2a-b78ab6af0fb0.png)
After scaling up, once the work has finished, scale down does not happen.
Looking at my queue in the AWS console, I see it has been empty, with no messages in flight, for more than 20 minutes.
Here is the monitoring status for the queue, where you can see that all message handling had finished before 22:00:
Looking at the auto-scaler logs, I see that the replicas are not scaled down because there are still received messages:
![image](https://user-images.githubusercontent.com/31182104/87234907-4ff78900-c3de-11ea-8b9c-29aefe1529a2.png)
These are two snapshots from the logs, taken more than 20 minutes apart (according to the documentation in the code, the cache should only live for 1 minute).
It looks like the msgsReceived cache is never refreshed.
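For illustration, here is a minimal sketch in Go (the language WPA is written in) of the TTL-refresh behavior I would expect from a one-minute cache. The names here (`receivedCache`, `entry`, etc.) are hypothetical and not taken from the WPA source; the point is that a lookup older than the TTL must report itself stale so the caller re-fetches from SQS/CloudWatch. The behavior above suggests that this expiry path is never taken:

```go
// Hypothetical sketch only -- not the actual WPA implementation.
// A TTL-bounded cache of received-message counts per queue: entries
// older than ttl must be treated as stale and re-fetched.
package cache

import (
	"sync"
	"time"
)

type entry struct {
	msgsReceived float64   // last observed NumberOfMessagesReceived
	fetchedAt    time.Time // when the value was fetched
}

type receivedCache struct {
	mu      sync.Mutex
	entries map[string]entry
	ttl     time.Duration // documented as 1 minute
}

func newReceivedCache(ttl time.Duration) *receivedCache {
	return &receivedCache{entries: make(map[string]entry), ttl: ttl}
}

// get returns the cached count and whether it is still fresh.
// A stale (or missing) entry must make the caller fetch a new value;
// if this check never fires, scale down can be blocked forever.
func (c *receivedCache) get(queueURI string) (float64, bool) {
	c.mu.Lock()
	defer c.mu.Unlock()
	e, ok := c.entries[queueURI]
	if !ok || time.Since(e.fetchedAt) > c.ttl {
		return 0, false // expired: refresh required
	}
	return e.msgsReceived, true
}

// set stores a freshly fetched count with the current timestamp.
func (c *receivedCache) set(queueURI string, count float64) {
	c.mu.Lock()
	defer c.mu.Unlock()
	c.entries[queueURI] = entry{msgsReceived: count, fetchedAt: time.Now()}
}
```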
Pod describe info:

```
Name:                 workerpodautoscaler-57fc6bf9d9-225db
Namespace:            kube-system
Priority:             1000
Priority Class Name:  infra-normal-priority
Node:                 ip-192-168-127-142.us-east-2.compute.internal/192.168.127.142
Start Time:           Sun, 12 Jul 2020 00:41:20 +0300
Labels:               app=workerpodautoscaler
                      pod-template-hash=57fc6bf9d9
Annotations:          kubernetes.io/psp: eks.privileged
Status:               Running
IP:                   192.168.126.38
Controlled By:        ReplicaSet/workerpodautoscaler-57fc6bf9d9
Containers:
  wpa:
    Container ID:  docker://4898ad92c38baed27d84a0f206ee60b85f0b149526142a2abfd956dccc676069
    Image:         practodev/workerpodautoscaler:v1.0.0
    Image ID:      docker-pullable://practodev/workerpodautoscaler@sha256:2bdcaa251e2a2654e73121721589ac5bb8536fbeebc2b7a356d24199ced84e73
    Port:          <none>
    Host Port:     <none>
    Command:
      /workerpodautoscaler
      run
      --resync-period=60
      --wpa-threads=10
      --aws-regions=us-east-2
      --sqs-short-poll-interval=20
      --sqs-long-poll-interval=20
      --wpa-default-max-disruption=0
    State:          Running
      Started:      Sun, 12 Jul 2020 00:41:22 +0300
    Ready:          True
    Restart Count:  0
    Limits:
      cpu:     100m
      memory:  100Mi
    Requests:
      cpu:     10m
      memory:  20Mi
    Environment Variables from:
      workerpodautoscaler-secret-env  Secret  Optional: false
    Environment:   <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from workerpodautoscaler-token-j8lvc (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             True
  ContainersReady   True
  PodScheduled      True
Volumes:
  workerpodautoscaler-token-j8lvc:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  workerpodautoscaler-token-j8lvc
    Optional:    false
QoS Class:       Burstable
Node-Selectors:  beta.kubernetes.io/os=linux
Tolerations:     :NoExecute
                 :NoSchedule
Events:
  Type    Reason     Age  From                                                    Message
  ----    ------     ---  ----                                                    -------
  Normal  Scheduled  45m  default-scheduler                                       Successfully assigned kube-system/workerpodautoscaler-57fc6bf9d9-225db to ip-192-168-127-142.us-east-2.compute.internal
  Normal  Pulling    45m  kubelet, ip-192-168-127-142.us-east-2.compute.internal  Pulling image "practodev/workerpodautoscaler:v1.0.0"
  Normal  Pulled     45m  kubelet, ip-192-168-127-142.us-east-2.compute.internal  Successfully pulled image "practodev/workerpodautoscaler:v1.0.0"
  Normal  Created    45m  kubelet, ip-192-168-127-142.us-east-2.compute.internal  Created container wpa
  Normal  Started    45m  kubelet, ip-192-168-127-142.us-east-2.compute.internal  Started container wpa
```
WPA resource (WorkerPodAutoScaler object):

```yaml
apiVersion: k8s.practo.dev/v1alpha1
kind: WorkerPodAutoScaler
metadata:
  creationTimestamp: "2020-01-28T14:59:16Z"
  generation: 5316
  name: processor-ip4m
  namespace: default
  resourceVersion: "52253623"
  selfLink: /apis/k8s.practo.dev/v1alpha1/namespaces/default/workerpodautoscalers/processor-ip4m
  uid: c111ba43-41de-11ea-b4d5-066ce59a32e8
spec:
  deploymentName: processor-ip4m
  maxDisruption: null
  maxReplicas: 80
  minReplicas: 1
  queueURI: **************
  secondsToProcessOneJob: 10
  targetMessagesPerWorker: 720
status:
  CurrentMessages: 0
  CurrentReplicas: 31
  DesiredReplicas: 31
```
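For context, here is a rough sketch of the target-based arithmetic this spec implies: desired replicas as the ceiling of queue messages over targetMessagesPerWorker, clamped to [minReplicas, maxReplicas]. This is my assumption about the general formula, not a copy of WPA's actual code. With CurrentMessages: 0 it would yield minReplicas (1), so DesiredReplicas staying at 31 is consistent with a stale msgsReceived value vetoing scale down:

```go
// Rough sketch of the assumed scaling arithmetic -- not WPA's actual code.
package scale

import "math"

// desiredReplicas clamps ceil(messages / targetPerWorker) into
// [min, max]. targetPerWorker is assumed > 0 (720 in the spec above).
func desiredReplicas(messages, targetPerWorker, min, max int32) int32 {
	d := int32(math.Ceil(float64(messages) / float64(targetPerWorker)))
	if d < min {
		d = min // e.g. messages = 0 -> minReplicas = 1
	}
	if d > max {
		d = max
	}
	return d
}
```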