✨ Fix klusterlet-info command and update hub-info output. #453
Conversation
Signed-off-by: Rokibul Hasan <[email protected]>
@@ -59,6 +59,7 @@ const (
 	componentNameRegistrationController = "cluster-manager-registration-controller"
 	componentNameRegistrationWebhook    = "cluster-manager-registration-webhook"
+	componentNameWorkController         = "cluster-manager-work-controller"
we do not have a work controller component on the hub side.
If I enable the ManifestWorkReplicaSet feature, then the work controller gets deployed.
So I think we should print the work-controller only when the ManifestWorkReplicaSet feature is enabled.
BTW, the cluster-manager-addon-manager-controller is in the same situation: if the AddonManagement feature is enabled, we should print the addon-manager info.
Updated code to print each controller only when its feature is enabled
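For illustration, a minimal sketch of that gating (the componentName constants come from the diff above; the nil guard and the helper wiring are assumptions, not necessarily the PR's exact code):

// Sketch: build the hub component list, adding optional controllers
// only when their feature gate is enabled in the ClusterManager spec.
// Assumes operatorv1 = open-cluster-management.io/api/operator/v1.
func hubComponents(cm *operatorv1.ClusterManager) []string {
	components := []string{
		componentNameRegistrationController,
		componentNameRegistrationWebhook,
	}
	// WorkConfiguration is optional in the spec, so guard against nil
	// before reading its feature gates.
	if cm.Spec.WorkConfiguration != nil &&
		IsFeatureEnabled(cm.Spec.WorkConfiguration.FeatureGates, "ManifestWorkReplicaSet") {
		components = append(components, componentNameWorkController)
	}
	return components
}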
Could you add some tests or paste some test results? When I try to use the command to get info, I got:
╰─# oc get pod -n open-cluster-management
NAME READY STATUS RESTARTS AGE
managedcluster-import-controller-99968fd7-s6dk2 1/1 Running 0 17h
╰─# oc get pod -n open-cluster-management-hub
NAME READY STATUS RESTARTS AGE
cluster-manager-addon-manager-controller-664d75f6bf-kgb8b 1/1 Running 0 17h
cluster-manager-registration-controller-84c6489fd9-rxnld 1/1 Running 0 17h
cluster-manager-registration-webhook-69d78f8744-wt4hf 1/1 Running 0 17h
cluster-manager-work-webhook-69f55b7fb5-shsld 1/1 Running 0 17h
╰─# oc get clustermanager cluster-manager -oyaml
apiVersion: operator.open-cluster-management.io/v1
kind: ClusterManager
metadata:
annotations:
kubectl.kubernetes.io/last-applied-configuration: |
{"apiVersion":"operator.open-cluster-management.io/v1","kind":"ClusterManager","metadata":{"annotations":{},"name":"cluster-manager"},"spec":{"addOnManagerImagePullSpec":"quay.io/stolostron/addon-manager:main","deployOption":{"mode":"Default"},"placementImagePullSpec":"quay.io/stolostron/placement:main","registrationConfiguration":{"featureGates":[{"feature":"DefaultClusterSet","mode":"Enable"}]},"registrationImagePullSpec":"quay.io/stolostron/registration:main","workImagePullSpec":"quay.io/stolostron/work:main"}}
creationTimestamp: "2024-10-25T09:10:46Z"
finalizers:
- operator.open-cluster-management.io/cluster-manager-cleanup
generation: 1
name: cluster-manager
resourceVersion: "852"
uid: 963a4a8a-c94f-4cce-beb9-8d0afb415a82
spec:
addOnManagerImagePullSpec: quay.io/stolostron/addon-manager:main
deployOption:
mode: Default
placementImagePullSpec: quay.io/stolostron/placement:main
registrationConfiguration:
featureGates:
- feature: DefaultClusterSet
mode: Enable
registrationImagePullSpec: quay.io/stolostron/registration:main
workConfiguration:
workDriver: kube
workImagePullSpec: quay.io/stolostron/work:main
╰─# clusteradm get hub-info
Registration Operator:
Controller: (0/0) quay.io/open-cluster-management/registration-operator:latest
CustomResourceDefinition:
(installed) clustermanagers.operator.open-cluster-management.io [*v1]
Components:
panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0x0 pc=0x2dd9af3]
goroutine 1 [running]:
open-cluster-management.io/clusteradm/pkg/cmd/get/hubinfo.(*Options).printComponents(0xc000887000)
/home/go/src/open-cluster-management.io/clusteradm/pkg/cmd/get/hubinfo/exec.go:121 +0xb3
open-cluster-management.io/clusteradm/pkg/cmd/get/hubinfo.(*Options).run(0xc000887000)
/home/go/src/open-cluster-management.io/clusteradm/pkg/cmd/get/hubinfo/exec.go:76 +0x27
open-cluster-management.io/clusteradm/pkg/cmd/get/hubinfo.NewCmd.func2(0xc000906200?, {0x57a9ec0, 0x0, 0x0})
/home/go/src/open-cluster-management.io/clusteradm/pkg/cmd/get/hubinfo/cmd.go:39 +0x6f
github.com/spf13/cobra.(*Command).execute(0xc0008dcc08, {0x57a9ec0, 0x0, 0x0})
/home/go/src/open-cluster-management.io/clusteradm/vendor/github.com/spf13/cobra/command.go:985 +0xaca
github.com/spf13/cobra.(*Command).ExecuteC(0xc0008c4008)
/home/go/src/open-cluster-management.io/clusteradm/vendor/github.com/spf13/cobra/command.go:1117 +0x3ff
github.com/spf13/cobra.(*Command).Execute(...)
/home/go/src/open-cluster-management.io/clusteradm/vendor/github.com/spf13/cobra/command.go:1041
main.main()
/home/go/src/open-cluster-management.io/clusteradm/cmd/clusteradm/clusteradm.go:131 +0xc3c
I found that even when there is no addOnManagerConfiguration configured in clustermanager.spec, the addon-manager is enabled by default.
So maybe we need to refactor the IsFeatureEnabled func.
pkg/cmd/get/klusterletinfo/exec.go (Outdated)
@@ -57,8 +57,7 @@ const (
 	registrationOperatorNamespace = "open-cluster-management"
 	klusterletCRD                 = "klusterlets.operator.open-cluster-management.io"
 
-	componentNameRegistrationAgent = "klusterlet-registration-agent"
-	componentNameWorkAgent         = "klusterlet-work-agent"
+	componentNameKlusterletAgent   = "klusterlet-agent"
we have 2 deploy modes for the klusterlet:
- Default mode: there will be two components on the managed cluster, registration-agent and work-agent
- Singleton mode: there will be only one component klusterlet-agent
Here maybe we need to print the agent components based on the klusterlet deploy mode.
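A rough sketch of that mode-based selection, assuming the InstallMode constants and Klusterlet types from operator.open-cluster-management.io/v1 (the helper name is illustrative):

// Sketch: pick agent component names from the klusterlet deploy mode.
func agentComponents(k *operatorv1.Klusterlet) []string {
	switch k.Spec.DeployOption.Mode {
	case operatorv1.InstallModeSingleton, operatorv1.InstallModeSingletonHosted:
		// Singleton mode runs a single combined klusterlet-agent.
		return []string{componentNameKlusterletAgent}
	default:
		// Default (and Hosted) mode runs separate registration and work agents.
		return []string{componentNameRegistrationAgent, componentNameWorkAgent}
	}
}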
updated
Signed-off-by: Rokibul Hasan <[email protected]>
Force-pushed from 0aa627f to d5073ed
Force-pushed from d5073ed to ad43749
func IsFeatureEnabled(featureGates []operatorv1.FeatureGate, feature string) bool {
	for _, fg := range featureGates {
		if fg.Feature == feature && fg.Mode == operatorv1.FeatureGateModeTypeEnable {
			return true
		}
	}
	return false
}
This still does not fit the add-on manager case, because it is enabled by default. So there are two cases:
- enabled by default (e.g. addon-manager): as long as the feature gate flag is not explicitly set to false (including when the feature gate config is not set at all), we should output the info
- disabled by default (e.g. work-controller): only output the info when the feature gate flag is explicitly set to true
example: https://github.com/open-cluster-management-io/ocm/blob/865ae069b3e5eab72faf3c1bcd2eb52bb7c1b8c6/pkg/registration/spoke/spokeagent.go#L419
feature gate default values: https://github.com/open-cluster-management-io/api/blob/f6c65820279078afbe536d5a6012e0b3badde3c5/feature/feature.go#L90
/cc @qiujian16
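One way to capture both cases is to thread a per-feature default through the check; a sketch (the signature and default handling are assumptions, not the final implementation):

// Sketch: default-aware feature check. An explicit entry in the gate
// list overrides the default; with no entry, the default decides
// (true for addon-manager, false for work-controller).
func isFeatureEnabled(featureGates []operatorv1.FeatureGate, feature string, enabledByDefault bool) bool {
	for _, fg := range featureGates {
		if fg.Feature != feature {
			continue
		}
		return fg.Mode == operatorv1.FeatureGateModeTypeEnable
	}
	return enabledByDefault
}

The default itself could be looked up from the default feature-gate maps in the linked feature.go rather than hard-coded at each call site.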
Signed-off-by: Rokibul Hasan <[email protected]>
Force-pushed from 93e4866 to 5511482
@RokibulHasan7 Thanks!!! :)
/approve Thanks, this is great. Something on my mind as a future enhancement: we should also link all the related events in this command.
[APPROVALNOTIFIER] This PR is APPROVED. This pull-request has been approved by: qiujian16, RokibulHasan7
Merged commit e5a4b77 into open-cluster-management-io:main