The replicas of the deployment are incorrect when the related HPA is abnormal #4109
Comments
/assign
The root cause of this issue is the instability of HPA. The current implementation of #4072 relies heavily on HPA: if there is an exception with HPA, the number of replicas synchronized from the member cluster to the control plane becomes meaningless. And since HPA itself relies heavily on the stability of metrics-server, it becomes even more unstable. There are two failures that can occur with HPA:
Therefore, we are trying to introduce a new mechanism to avoid a strong dependency on HPA. Solution 2:
@XiShanYongYe-Chang @jwcesign @lxtywypc @RainbowMango Do you have any other solutions?
Let's look for more people's ideas.
Hmm... in fact, we chose solution 2 for our own implementation. We introduced some 'parsers' to tell what the replicas are for each kind of workload. We are also considering whether it is necessary to expand the …
Doesn't this require a new component?
Can you expand on what's relevant to the current issue? And we can start a new issue to talk about the rest.
We hard-coded some parsers in our own project.
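For illustration, here is a minimal Go sketch of what such a per-kind replica parser could look like. The `ReplicaParser` interface, the GVK-keyed registry, and the function names are all hypothetical, not Karmada's actual code; this only sketches the "hard-coded parsers" idea described above.

```go
// Hypothetical sketch of the "hard-coded parsers" idea; none of these names
// exist in Karmada itself.
package parsers

import (
	"fmt"

	"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
)

// ReplicaParser interprets replica-related fields from a workload's status.
type ReplicaParser interface {
	// Parse returns (replicas, readyReplicas) read from the object's status.
	Parse(obj *unstructured.Unstructured) (replicas, readyReplicas int64, err error)
}

// deploymentParser reads status.replicas and status.readyReplicas from a
// Deployment collected from a member cluster.
type deploymentParser struct{}

func (deploymentParser) Parse(obj *unstructured.Unstructured) (int64, int64, error) {
	replicas, _, err := unstructured.NestedInt64(obj.Object, "status", "replicas")
	if err != nil {
		return 0, 0, err
	}
	ready, _, err := unstructured.NestedInt64(obj.Object, "status", "readyReplicas")
	if err != nil {
		return 0, 0, err
	}
	return replicas, ready, nil
}

// registry maps "group/version/Kind" to the parser hard-coded for that kind.
var registry = map[string]ReplicaParser{
	"apps/v1/Deployment": deploymentParser{},
	// "apps/v1/StatefulSet": statefulSetParser{}, ...
}

// ParseReplicas looks up the parser registered for the given kind.
func ParseReplicas(gvk string, obj *unstructured.Unstructured) (int64, int64, error) {
	p, ok := registry[gvk]
	if !ok {
		return 0, 0, fmt.Errorf("no replica parser registered for %s", gvk)
	}
	return p.Parse(obj)
}
```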
I mean that if we could introduce a new hook point to interpret the actual replica-related info in each member cluster into the status of the Work, we could use this info directly in hpaReplicasSyncer. Like this:

apiVersion: work.karmada.io/v1alpha1
kind: Work
metadata:
  name: workload-example
  namespace: karmada-es-cluster1
spec:
  workload:
    # ...
status:
  manifestStatuses:
  - status:
      # ...
      replicas: 1      # new replica-related field, could be used in hpaReplicasSyncer
      readyReplicas: 1 # new replica-related field
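To make the proposal a bit more concrete, a rough Go sketch of how the Work API's ManifestStatus could carry these fields follows. The Replicas and ReadyReplicas fields are hypothetical and simply mirror the YAML above; existing fields such as the manifest identifier are omitted.

```go
// Rough sketch of the proposed Work API extension; Replicas and ReadyReplicas
// are not part of Karmada today.
package v1alpha1

import "k8s.io/apimachinery/pkg/runtime"

// ManifestStatus records the observed status of a manifest collected from a
// member cluster.
type ManifestStatus struct {
	// Status is the raw status of the manifest as observed in the member cluster.
	// +optional
	Status *runtime.RawExtension `json:"status,omitempty"`

	// Replicas is the replica count interpreted from the manifest's status,
	// e.g. via a per-kind parser or hook point (proposed, not implemented).
	// +optional
	Replicas *int32 `json:"replicas,omitempty"`

	// ReadyReplicas is the number of ready replicas interpreted from the
	// manifest's status (proposed, not implemented).
	// +optional
	ReadyReplicas *int32 `json:"readyReplicas,omitempty"`
}
```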
I think this is a good idea.
I get it. After that, we need to extend the Work API to record the info, and then it will be used by hpaReplicasSyncer. All this work seems dedicated to hpaReplicasSyncer.
Now it seems dedicated to hpaReplicasSyncer, but I believe the replica-related info could help do more in the future, especially in scheduling. Maybe we could invite others to share their thoughts.
/close |
@XiShanYongYe-Chang: Closing this issue. In response to this: /close
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.
What happened:
The hpaReplicasSyncer controller is enabled. When the HPA delivered to the member cluster is abnormal, its desiredReplicas is 0. In this case, the replicas synchronized to the control-plane deployment are incorrect.
What you expected to happen:
currentReplicas is expected to be used instead of desiredReplicas as the calculated value when the HPA is abnormal.
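A minimal sketch of the requested fallback, assuming the syncer reads the HPA status collected from the member cluster; the function names and the abnormality heuristic are assumptions, not Karmada's actual hpaReplicasSyncer implementation.

```go
// Minimal sketch of the requested fallback; this is not Karmada's actual
// hpaReplicasSyncer code, and the abnormality heuristic is only an assumption.
package hpasync

import (
	autoscalingv2 "k8s.io/api/autoscaling/v2"
	corev1 "k8s.io/api/core/v1"
)

// replicasToSync returns the replica count that should be written back to the
// control-plane Deployment for the given HPA observed in a member cluster.
func replicasToSync(hpa *autoscalingv2.HorizontalPodAutoscaler) int32 {
	if hpaAbnormal(hpa) {
		// The HPA cannot compute a meaningful desired scale, so keep the
		// currently observed scale instead of propagating 0 upward.
		return hpa.Status.CurrentReplicas
	}
	return hpa.Status.DesiredReplicas
}

// hpaAbnormal treats a desiredReplicas of 0 (which a working HPA should not
// report unless scale-to-zero is in use) or a ScalingActive=False condition
// (e.g. metrics-server unavailable) as a sign that the HPA is abnormal.
func hpaAbnormal(hpa *autoscalingv2.HorizontalPodAutoscaler) bool {
	if hpa.Status.DesiredReplicas == 0 {
		return true
	}
	for _, cond := range hpa.Status.Conditions {
		if cond.Type == autoscalingv2.ScalingActive && cond.Status == corev1.ConditionFalse {
			return true
		}
	}
	return false
}
```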
How to reproduce it (as minimally and precisely as possible):
1. The hpaReplicasSyncer controller is enabled. Deliver a Deployment and an HPA to member cluster A.
2. The HPA in cluster A is abnormal. In this case, the value of desiredReplicas of the HPA is 0.
3. On the control plane, the replicas of the deployment is 0, while it is expected to be 1.
Anything else we need to know?:
Environment:
Karmada version (kubectl-karmada version or karmadactl version):