✨ Fix klusterlet-info command and update hub-info output. #453

Merged
Changes from 1 commit
6 changes: 6 additions & 0 deletions pkg/cmd/get/hubinfo/exec.go
@@ -59,6 +59,7 @@ const (

 	componentNameRegistrationController = "cluster-manager-registration-controller"
 	componentNameRegistrationWebhook = "cluster-manager-registration-webhook"
+	componentNameWorkController = "cluster-manager-work-controller"
Member:
we do not have a work controller component on the hub side.

Member Author:
If I enable the ManifestWorkReplicaSet feature, then the work controller gets deployed.

Member:
So I think we should print the work-controller only when the ManifestWorkReplicaSet feature is enabled. BTW, the cluster-manager-addon-manager-controller is in the same situation: we should print the addon-manager info only when the AddonManagement feature is enabled.

Member Author:
Updated code to print each controller only when its feature is enabled
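
For illustration, a minimal sketch of what such a feature-gated check could look like in pkg/cmd/get/hubinfo/exec.go (this is not the PR's actual diff; the featureGateEnabled helper and the exact gate handling are assumptions based on the open-cluster-management.io/api/operator/v1 types):

// featureGateEnabled is a hypothetical helper: it reports whether a named
// feature gate is explicitly set to "Enable" in a gate list from the
// operator API (open-cluster-management.io/api/operator/v1).
func featureGateEnabled(gates []v1.FeatureGate, name string) bool {
	for _, g := range gates {
		if g.Feature == name && g.Mode == v1.FeatureGateModeTypeEnable {
			return true
		}
	}
	return false
}

func (o *Options) printWork(cmgr *v1.ClusterManager) error {
	o.printer.Write(printer.LEVEL_1, "Work:\n")
	// Print the work controller only when the ManifestWorkReplicaSet feature
	// gate is enabled in the ClusterManager's work configuration.
	if cmgr.Spec.WorkConfiguration != nil &&
		featureGateEnabled(cmgr.Spec.WorkConfiguration.FeatureGates, "ManifestWorkReplicaSet") {
		if err := printer.PrintComponentsDeploy(o.printer, o.kubeClient, cmgr.Status.RelatedResources, componentNameWorkController); err != nil {
			return err
		}
	}
	return printer.PrintComponentsDeploy(o.printer, o.kubeClient, cmgr.Status.RelatedResources, componentNameWorkWebhook)
}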

Member (@zhujian7, Oct 26, 2024):
Could you add some tests or paste some test results? When I try to use the command to get the info, I got:

╰─# oc get pod -n open-cluster-management                                                                                          2 ↵
NAME                                              READY   STATUS    RESTARTS   AGE
managedcluster-import-controller-99968fd7-s6dk2   1/1     Running   0          17h

╰─# oc get pod -n open-cluster-management-hub
NAME                                                        READY   STATUS    RESTARTS   AGE
cluster-manager-addon-manager-controller-664d75f6bf-kgb8b   1/1     Running   0          17h
cluster-manager-registration-controller-84c6489fd9-rxnld    1/1     Running   0          17h
cluster-manager-registration-webhook-69d78f8744-wt4hf       1/1     Running   0          17h
cluster-manager-work-webhook-69f55b7fb5-shsld               1/1     Running   0          17h

╰─# oc get clustermanager cluster-manager -oyaml
apiVersion: operator.open-cluster-management.io/v1
kind: ClusterManager
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"operator.open-cluster-management.io/v1","kind":"ClusterManager","metadata":{"annotations":{},"name":"cluster-manager"},"spec":{"addOnManagerImagePullSpec":"quay.io/stolostron/addon-manager:main","deployOption":{"mode":"Default"},"placementImagePullSpec":"quay.io/stolostron/placement:main","registrationConfiguration":{"featureGates":[{"feature":"DefaultClusterSet","mode":"Enable"}]},"registrationImagePullSpec":"quay.io/stolostron/registration:main","workImagePullSpec":"quay.io/stolostron/work:main"}}
  creationTimestamp: "2024-10-25T09:10:46Z"
  finalizers:
  - operator.open-cluster-management.io/cluster-manager-cleanup
  generation: 1
  name: cluster-manager
  resourceVersion: "852"
  uid: 963a4a8a-c94f-4cce-beb9-8d0afb415a82
spec:
  addOnManagerImagePullSpec: quay.io/stolostron/addon-manager:main
  deployOption:
    mode: Default
  placementImagePullSpec: quay.io/stolostron/placement:main
  registrationConfiguration:
    featureGates:
    - feature: DefaultClusterSet
      mode: Enable
  registrationImagePullSpec: quay.io/stolostron/registration:main
  workConfiguration:
    workDriver: kube
  workImagePullSpec: quay.io/stolostron/work:main

╰─# clusteradm get hub-info
Registration Operator:
  Controller:	(0/0) quay.io/open-cluster-management/registration-operator:latest
  CustomResourceDefinition:
    (installed) clustermanagers.operator.open-cluster-management.io [*v1]
Components:
panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0x0 pc=0x2dd9af3]

goroutine 1 [running]:
open-cluster-management.io/clusteradm/pkg/cmd/get/hubinfo.(*Options).printComponents(0xc000887000)
	/home/go/src/open-cluster-management.io/clusteradm/pkg/cmd/get/hubinfo/exec.go:121 +0xb3
open-cluster-management.io/clusteradm/pkg/cmd/get/hubinfo.(*Options).run(0xc000887000)
	/home/go/src/open-cluster-management.io/clusteradm/pkg/cmd/get/hubinfo/exec.go:76 +0x27
open-cluster-management.io/clusteradm/pkg/cmd/get/hubinfo.NewCmd.func2(0xc000906200?, {0x57a9ec0, 0x0, 0x0})
	/home/go/src/open-cluster-management.io/clusteradm/pkg/cmd/get/hubinfo/cmd.go:39 +0x6f
github.com/spf13/cobra.(*Command).execute(0xc0008dcc08, {0x57a9ec0, 0x0, 0x0})
	/home/go/src/open-cluster-management.io/clusteradm/vendor/github.com/spf13/cobra/command.go:985 +0xaca
github.com/spf13/cobra.(*Command).ExecuteC(0xc0008c4008)
	/home/go/src/open-cluster-management.io/clusteradm/vendor/github.com/spf13/cobra/command.go:1117 +0x3ff
github.com/spf13/cobra.(*Command).Execute(...)
	/home/go/src/open-cluster-management.io/clusteradm/vendor/github.com/spf13/cobra/command.go:1041
main.main()
	/home/go/src/open-cluster-management.io/clusteradm/cmd/clusteradm/clusteradm.go:131 +0xc3c

Member:
I found that even when there is no addOnManagerConfiguration configured in the clustermanager.spec, the addon-manager is enabled by default.
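
A hedged sketch of how that default could be handled (a hypothetical helper, not the PR's code; it assumes the AddOnManagerConfiguration field and FeatureGate types from the operator v1 API, and notes that the field is nil in the ClusterManager pasted above, so it must be guarded):

// addonManagerEnabled is a hypothetical helper: AddonManagement is enabled by
// default, so a nil AddOnManagerConfiguration must count as "enabled" rather
// than being dereferenced; only an explicit "Disable" gate turns it off.
func addonManagerEnabled(cmgr *v1.ClusterManager) bool {
	cfg := cmgr.Spec.AddOnManagerConfiguration
	if cfg == nil {
		return true // nothing configured: the default is enabled
	}
	for _, g := range cfg.FeatureGates {
		if g.Feature == "AddonManagement" && g.Mode == v1.FeatureGateModeTypeDisable {
			return false
		}
	}
	return true
}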

 	componentNameWorkWebhook = "cluster-manager-work-webhook"
 	componentNamePlacementController = "cluster-manager-placement-controller"
 )
@@ -141,6 +142,11 @@ func (o *Options) printRegistration(cmgr *v1.ClusterManager) error {

 func (o *Options) printWork(cmgr *v1.ClusterManager) error {
 	o.printer.Write(printer.LEVEL_1, "Work:\n")
+	err := printer.PrintComponentsDeploy(o.printer, o.kubeClient, cmgr.Status.RelatedResources, componentNameWorkController)
+	if err != nil {
+		return err
+	}
+
 	return printer.PrintComponentsDeploy(o.printer, o.kubeClient, cmgr.Status.RelatedResources, componentNameWorkWebhook)
 }

19 changes: 5 additions & 14 deletions pkg/cmd/get/klusterletinfo/exec.go
@@ -57,8 +57,7 @@ const (
 	registrationOperatorNamespace = "open-cluster-management"
 	klusterletCRD = "klusterlets.operator.open-cluster-management.io"

-	componentNameRegistrationAgent = "klusterlet-registration-agent"
-	componentNameWorkAgent = "klusterlet-work-agent"
+	componentNameKlusterletAgent = "klusterlet-agent"
Member:
We have two deploy modes for the klusterlet:

  • Default mode: there are two components on the managed cluster, the registration-agent and the work-agent
  • Singleton mode: there is only one component, the klusterlet-agent

Member:
Here maybe we need to print the agent components based on the klusterlet deploy mode.

Member Author:
updated
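
For reference, a rough sketch of printing by deploy mode (not necessarily what the updated commit does; it assumes the InstallMode constants from the operator v1 API and keeps the per-agent component names from the old code):

func (o *Options) printAgent(klet *v1.Klusterlet) error {
	// Singleton mode runs one combined klusterlet-agent deployment;
	// other modes run separate registration and work agents.
	if klet.Spec.DeployOption.Mode == v1.InstallModeSingleton {
		o.printer.Write(printer.LEVEL_1, "Agent:\n")
		return printer.PrintComponentsDeploy(o.printer, o.kubeClient, klet.Status.RelatedResources, componentNameKlusterletAgent)
	}
	o.printer.Write(printer.LEVEL_1, "Registration:\n")
	if err := printer.PrintComponentsDeploy(o.printer, o.kubeClient, klet.Status.RelatedResources, componentNameRegistrationAgent); err != nil {
		return err
	}
	o.printer.Write(printer.LEVEL_1, "Work:\n")
	return printer.PrintComponentsDeploy(o.printer, o.kubeClient, klet.Status.RelatedResources, componentNameWorkAgent)
}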

 )

 func (o *Options) run() error {
@@ -139,10 +138,7 @@ func (o *Options) printComponents(klet *v1.Klusterlet) error {

 	o.printer.Write(printer.LEVEL_0, "Components:\n")

-	if err := o.printRegistration(klet); err != nil {
-		return err
-	}
-	if err := o.printWork(klet); err != nil {
+	if err := o.printAgent(klet); err != nil {
 		return err
 	}
 	if err := o.printComponentsCRD(klet); err != nil {
@@ -151,14 +147,9 @@
 	return nil
 }

-func (o *Options) printRegistration(klet *v1.Klusterlet) error {
-	o.printer.Write(printer.LEVEL_1, "Registration:\n")
-	return printer.PrintComponentsDeploy(o.printer, o.kubeClient, klet.Status.RelatedResources, componentNameRegistrationAgent)
-}
-
-func (o *Options) printWork(klet *v1.Klusterlet) error {
-	o.printer.Write(printer.LEVEL_1, "Work:\n")
-	return printer.PrintComponentsDeploy(o.printer, o.kubeClient, klet.Status.RelatedResources, componentNameWorkAgent)
+func (o *Options) printAgent(klet *v1.Klusterlet) error {
+	o.printer.Write(printer.LEVEL_1, "Controller:\n")
+	return printer.PrintComponentsDeploy(o.printer, o.kubeClient, klet.Status.RelatedResources, componentNameKlusterletAgent)
 }

 func (o *Options) printComponentsCRD(klet *v1.Klusterlet) error {