search filter out not ready cluster #3010
Conversation
Codecov Report
@@            Coverage Diff             @@
##           master    #3010      +/-   ##
==========================================
- Coverage   38.66%   38.62%   -0.04%
==========================================
  Files         206      206
  Lines       18815    18818       +3
==========================================
- Hits         7274     7269       -5
- Misses      11114    11120       +6
- Partials      427      429       +2
Flags with carried forward coverage won't be shown.
/cc @ikaven1024 @liys87x
Thanks @yanfeng1992, it LGTM. But I found a serious problem: because the cluster ready status is updated periodically, before the control plane detects that a member cluster is down, the search component will still send requests to the down cluster and get an error.
Signed-off-by: huangyanfeng <[email protected]>
search filter out not ready cluster fix ut
Signed-off-by: huangyanfeng <[email protected]>
search filter out not ready cluster fix lint
Signed-off-by: huangyanfeng <[email protected]>
search filter out not ready cluster fix log and add comment
Signed-off-by: huangyanfeng <[email protected]>
search filter out not ready cluster fix comment
Signed-off-by: huangyanfeng <[email protected]>
Force-pushed from dd5232c to 04d16b4.
OK, I squashed the commits into one @ikaven1024
One thing I don't understand is why getting the cache of other clusters fails when one cluster fails.
When a client lists pods through the proxy, the request goes through karmada/pkg/search/proxy/store/multi_cluster_cache.go (lines 227 to 231 at 7e24783), where an error from any single member cluster fails the whole aggregated list; a sketch of that behavior follows.
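A minimal sketch of that failure mode, assuming a simplified multi-cluster cache (the `clusterCache` interface, `multiClusterCache` struct, and `listAll` method are illustrative names, not the actual Karmada code):

```go
package store

import (
	"context"
	"fmt"

	"k8s.io/apimachinery/pkg/runtime"
	"k8s.io/apimachinery/pkg/runtime/schema"
)

// clusterCache stands in for a per-member-cluster resource cache.
type clusterCache interface {
	List(ctx context.Context, gvr schema.GroupVersionResource) ([]runtime.Object, error)
}

// multiClusterCache aggregates per-cluster caches keyed by cluster name.
type multiClusterCache struct {
	caches map[string]clusterCache
}

// listAll illustrates why one down cluster breaks the whole request:
// the first per-cluster error aborts the loop, so the client gets an
// error instead of the items from the remaining healthy clusters.
func (c *multiClusterCache) listAll(ctx context.Context, gvr schema.GroupVersionResource) ([]runtime.Object, error) {
	var items []runtime.Object
	for name, cache := range c.caches {
		objs, err := cache.List(ctx, gvr)
		if err != nil {
			return nil, fmt.Errorf("list from cluster %q: %w", name, err)
		}
		items = append(items, objs...)
	}
	return items, nil
}
```

Filtering out not-ready clusters up front means this loop never reaches an unreachable member in the first place.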
Is there anything else I need to modify in this PR? @XiShanYongYe-Chang
I think it's okay.
/lgtm
Hi @yanfeng1992, can you help update the release note to:
We may need to cherry-pick this patch to the previous releases v1.2 to v1.4. Can you help do it?
I have updated it @XiShanYongYe-Chang
Thanks~
[APPROVALNOTIFIER] This PR is APPROVED
This pull-request has been approved by: XiShanYongYe-Chang
The full list of commands accepted by this bot can be found here. The pull request process is described here.
Needs approval from an approver in each of these files:
Approvers can indicate their approval by writing /approve in a comment.
Hi @yanfeng1992, can you help cherry-pick this patch to the previous releases? You can refer to https://karmada.io/docs/contributor/cherry-picks/
…010-upstream-release-1.3 Automated cherry pick of #3010: search filter out not ready cluster
…Ready-cluster search filter out not ready cluster
…010-upstream-release-1.2 Automated cherry pick of #3010: search filter out not ready cluster
…010-upstream-release-1.4 Automated cherry pick of #3010: search filter out not ready cluster
Signed-off-by: huangyanfeng <[email protected]>
What type of PR is this?
/kind bug
What this PR does / why we need it:
I have three clusters: member1, member2, and member3; member1's status is not ready.
I did the following steps:
Step 1: create a ResourceRegistry for all clusters.
Step 2: run kubectl get --raw /apis/search.karmada.io/v1alpha1/search/cache/api/v1/nodes --kubeconfig /etc/karmada/karmada-apiserver.config, but got nothing.
There is an error log in karmada-search.
I think the not-ready cluster should be filtered out; a sketch of that idea follows.
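A minimal sketch of the filtering idea, using the real Karmada Cluster API types but an illustrative readyClusters helper (a sketch of the approach, not the exact patch in this PR):

```go
package main

import (
	"fmt"

	"k8s.io/apimachinery/pkg/api/meta"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"

	clusterv1alpha1 "github.com/karmada-io/karmada/pkg/apis/cluster/v1alpha1"
)

// readyClusters drops member clusters whose Ready condition is not True,
// so the search proxy never fans a request out to a down cluster.
func readyClusters(clusters []clusterv1alpha1.Cluster) []clusterv1alpha1.Cluster {
	var ready []clusterv1alpha1.Cluster
	for _, c := range clusters {
		if meta.IsStatusConditionTrue(c.Status.Conditions, clusterv1alpha1.ClusterConditionReady) {
			ready = append(ready, c)
		}
	}
	return ready
}

func main() {
	// down models member1 from the repro steps above: its Ready condition is False.
	down := clusterv1alpha1.Cluster{}
	down.Name = "member1"
	down.Status.Conditions = []metav1.Condition{{Type: clusterv1alpha1.ClusterConditionReady, Status: metav1.ConditionFalse}}

	up := clusterv1alpha1.Cluster{}
	up.Name = "member2"
	up.Status.Conditions = []metav1.Condition{{Type: clusterv1alpha1.ClusterConditionReady, Status: metav1.ConditionTrue}}

	for _, c := range readyClusters([]clusterv1alpha1.Cluster{down, up}) {
		fmt.Println(c.Name) // prints only member2
	}
}
```

With a check like this applied where the proxy builds its set of target clusters, cached-resource requests would only fan out to Ready members, which is the behavior this PR's title describes.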
Which issue(s) this PR fixes:
Fixes #
Special notes for your reviewer:
Does this PR introduce a user-facing change?: