✨ Fix klusterlet-info command and update hub-info output. #453

Merged
22 changes: 22 additions & 0 deletions pkg/cmd/get/hubinfo/exec.go
@@ -4,6 +4,8 @@ package hubinfo
import (
"context"
"fmt"
"open-cluster-management.io/api/feature"
"open-cluster-management.io/clusteradm/pkg/helpers/check"

"github.com/spf13/cobra"
"k8s.io/apiextensions-apiserver/pkg/client/clientset/clientset"
@@ -59,8 +61,10 @@ const (

componentNameRegistrationController = "cluster-manager-registration-controller"
componentNameRegistrationWebhook = "cluster-manager-registration-webhook"
componentNameWorkController = "cluster-manager-work-controller"
Member: We do not have a work controller component on the hub side.

Member (Author): If I enable the ManifestWorkReplicaSet feature, then the work controller gets deployed.

Member: So I think we should print the work-controller only when the ManifestWorkReplicaSet feature is enabled. By the way, the cluster-manager-addon-manager-controller is in the same situation: if the AddonManagement feature is enabled, we should print the addon-manager info.

Member (Author): Updated the code to print each controller only when its feature is enabled.

Member (@zhujian7, Oct 26, 2024): Could you add some tests or paste some test results? When I try to use the command to get info, I get:

╰─# oc get pod -n open-cluster-management
NAME                                              READY   STATUS    RESTARTS   AGE
managedcluster-import-controller-99968fd7-s6dk2   1/1     Running   0          17h

╰─# oc get pod -n open-cluster-management-hub
NAME                                                        READY   STATUS    RESTARTS   AGE
cluster-manager-addon-manager-controller-664d75f6bf-kgb8b   1/1     Running   0          17h
cluster-manager-registration-controller-84c6489fd9-rxnld    1/1     Running   0          17h
cluster-manager-registration-webhook-69d78f8744-wt4hf       1/1     Running   0          17h
cluster-manager-work-webhook-69f55b7fb5-shsld               1/1     Running   0          17h

╰─# oc get clustermanager cluster-manager -oyaml
apiVersion: operator.open-cluster-management.io/v1
kind: ClusterManager
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"operator.open-cluster-management.io/v1","kind":"ClusterManager","metadata":{"annotations":{},"name":"cluster-manager"},"spec":{"addOnManagerImagePullSpec":"quay.io/stolostron/addon-manager:main","deployOption":{"mode":"Default"},"placementImagePullSpec":"quay.io/stolostron/placement:main","registrationConfiguration":{"featureGates":[{"feature":"DefaultClusterSet","mode":"Enable"}]},"registrationImagePullSpec":"quay.io/stolostron/registration:main","workImagePullSpec":"quay.io/stolostron/work:main"}}
  creationTimestamp: "2024-10-25T09:10:46Z"
  finalizers:
  - operator.open-cluster-management.io/cluster-manager-cleanup
  generation: 1
  name: cluster-manager
  resourceVersion: "852"
  uid: 963a4a8a-c94f-4cce-beb9-8d0afb415a82
spec:
  addOnManagerImagePullSpec: quay.io/stolostron/addon-manager:main
  deployOption:
    mode: Default
  placementImagePullSpec: quay.io/stolostron/placement:main
  registrationConfiguration:
    featureGates:
    - feature: DefaultClusterSet
      mode: Enable
  registrationImagePullSpec: quay.io/stolostron/registration:main
  workConfiguration:
    workDriver: kube
  workImagePullSpec: quay.io/stolostron/work:main

╰─# clusteradm get hub-info
Registration Operator:
  Controller:	(0/0) quay.io/open-cluster-management/registration-operator:latest
  CustomResourceDefinition:
    (installed) clustermanagers.operator.open-cluster-management.io [*v1]
Components:
panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0x0 pc=0x2dd9af3]

goroutine 1 [running]:
open-cluster-management.io/clusteradm/pkg/cmd/get/hubinfo.(*Options).printComponents(0xc000887000)
	/home/go/src/open-cluster-management.io/clusteradm/pkg/cmd/get/hubinfo/exec.go:121 +0xb3
open-cluster-management.io/clusteradm/pkg/cmd/get/hubinfo.(*Options).run(0xc000887000)
	/home/go/src/open-cluster-management.io/clusteradm/pkg/cmd/get/hubinfo/exec.go:76 +0x27
open-cluster-management.io/clusteradm/pkg/cmd/get/hubinfo.NewCmd.func2(0xc000906200?, {0x57a9ec0, 0x0, 0x0})
	/home/go/src/open-cluster-management.io/clusteradm/pkg/cmd/get/hubinfo/cmd.go:39 +0x6f
github.com/spf13/cobra.(*Command).execute(0xc0008dcc08, {0x57a9ec0, 0x0, 0x0})
	/home/go/src/open-cluster-management.io/clusteradm/vendor/github.com/spf13/cobra/command.go:985 +0xaca
github.com/spf13/cobra.(*Command).ExecuteC(0xc0008c4008)
	/home/go/src/open-cluster-management.io/clusteradm/vendor/github.com/spf13/cobra/command.go:1117 +0x3ff
github.com/spf13/cobra.(*Command).Execute(...)
	/home/go/src/open-cluster-management.io/clusteradm/vendor/github.com/spf13/cobra/command.go:1041
main.main()
	/home/go/src/open-cluster-management.io/clusteradm/cmd/clusteradm/clusteradm.go:131 +0xc3c
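
A nil-pointer panic like the one above is typically caused by dereferencing an optional nested field of the spec without a guard; the final diff adds exactly such a `!= nil` check before reading `FeatureGates`. A minimal illustration of the failure mode and its guard, using hypothetical stand-in types rather than the real operatorv1 API:

```go
package main

import "fmt"

// Stand-in types for the sketch; the real code reads
// cmgr.Spec.WorkConfiguration.FeatureGates from operatorv1.
type WorkConfiguration struct{ FeatureGates []string }
type Spec struct{ WorkConfiguration *WorkConfiguration }

// hasGate guards the optional pointer before touching its fields;
// spec.WorkConfiguration.FeatureGates would panic when the pointer is nil.
func hasGate(spec Spec, feature string) bool {
	if spec.WorkConfiguration == nil {
		return false
	}
	for _, f := range spec.WorkConfiguration.FeatureGates {
		if f == feature {
			return true
		}
	}
	return false
}

func main() {
	// Config omitted entirely: the guard returns false instead of panicking.
	fmt.Println(hasGate(Spec{}, "ManifestWorkReplicaSet"))
	cfg := &WorkConfiguration{FeatureGates: []string{"ManifestWorkReplicaSet"}}
	fmt.Println(hasGate(Spec{WorkConfiguration: cfg}, "ManifestWorkReplicaSet"))
}
```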

Member: I found that even when there is no addOnManagerConfiguration configured in clustermanager.spec, the addon-manager is enabled by default.

componentNameWorkWebhook = "cluster-manager-work-webhook"
componentNamePlacementController = "cluster-manager-placement-controller"
componentNameAddOnManagerController = "cluster-manager-addon-manager-controller"
)

func (o *Options) run() error {
@@ -114,6 +118,10 @@ func (o *Options) printComponents() error {
}

o.printer.Write(printer.LEVEL_0, "Components:\n")

if err := o.printAddOnManager(cmgr); err != nil {
return err
}
if err := o.printRegistration(cmgr); err != nil {
return err
}
@@ -141,6 +149,12 @@ func (o *Options) printRegistration(cmgr *v1.ClusterManager) error {

func (o *Options) printWork(cmgr *v1.ClusterManager) error {
o.printer.Write(printer.LEVEL_1, "Work:\n")
if cmgr.Spec.WorkConfiguration != nil && check.IsFeatureEnabled(cmgr.Spec.WorkConfiguration.FeatureGates, string(feature.ManifestWorkReplicaSet)) {
err := printer.PrintComponentsDeploy(o.printer, o.kubeClient, cmgr.Status.RelatedResources, componentNameWorkController)
if err != nil {
return err
}
}
return printer.PrintComponentsDeploy(o.printer, o.kubeClient, cmgr.Status.RelatedResources, componentNameWorkWebhook)
}

@@ -149,6 +163,14 @@ func (o *Options) printPlacement(cmgr *v1.ClusterManager) error {
return printer.PrintComponentsDeploy(o.printer, o.kubeClient, cmgr.Status.RelatedResources, componentNamePlacementController)
}

func (o *Options) printAddOnManager(cmgr *v1.ClusterManager) error {
if cmgr.Spec.AddOnManagerConfiguration != nil && !check.IsFeatureEnabled(cmgr.Spec.AddOnManagerConfiguration.FeatureGates, string(feature.AddonManagement)) {
return nil
}
o.printer.Write(printer.LEVEL_1, "AddOn Manager:\n")
return printer.PrintComponentsDeploy(o.printer, o.kubeClient, cmgr.Status.RelatedResources, componentNameAddOnManagerController)
}

func (o *Options) printComponentsCRD(cmgr *v1.ClusterManager) error {
o.printer.Write(printer.LEVEL_1, "CustomResourceDefinition:\n")
return printer.PrintComponentsCRD(o.printer, o.crdClient, cmgr.Status.RelatedResources)
24 changes: 18 additions & 6 deletions pkg/cmd/get/klusterletinfo/exec.go
@@ -59,6 +59,7 @@ const (

componentNameRegistrationAgent = "klusterlet-registration-agent"
componentNameWorkAgent = "klusterlet-work-agent"
componentNameKlusterletAgent = "klusterlet-agent"
)

func (o *Options) run() error {
@@ -136,14 +137,20 @@ func (o *Options) printRegistrationOperator() error {
}

func (o *Options) printComponents(klet *v1.Klusterlet) error {

o.printer.Write(printer.LEVEL_0, "Components:\n")

if err := o.printRegistration(klet); err != nil {
return err
}
if err := o.printWork(klet); err != nil {
return err
mode := klet.Spec.DeployOption.Mode
if mode == v1.InstallModeSingleton || mode == v1.InstallModeSingletonHosted {
if err := o.printAgent(klet); err != nil {
return err
}
} else {
if err := o.printRegistration(klet); err != nil {
return err
}
if err := o.printWork(klet); err != nil {
return err
}
}
if err := o.printComponentsCRD(klet); err != nil {
return err
@@ -161,6 +168,11 @@ func (o *Options) printWork(klet *v1.Klusterlet) error {
return printer.PrintComponentsDeploy(o.printer, o.kubeClient, klet.Status.RelatedResources, componentNameWorkAgent)
}

func (o *Options) printAgent(klet *v1.Klusterlet) error {
o.printer.Write(printer.LEVEL_1, "Controller:\n")
return printer.PrintComponentsDeploy(o.printer, o.kubeClient, klet.Status.RelatedResources, componentNameKlusterletAgent)
}

func (o *Options) printComponentsCRD(klet *v1.Klusterlet) error {
o.printer.Write(printer.LEVEL_1, "CustomResourceDefinition:\n")
return printer.PrintComponentsCRD(o.printer, o.crdClient, klet.Status.RelatedResources)
10 changes: 9 additions & 1 deletion pkg/helpers/check/check.go
@@ -3,7 +3,6 @@ package check

import (
"fmt"

"k8s.io/apimachinery/pkg/api/errors"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
clusterclient "open-cluster-management.io/api/client/cluster/clientset/versioned"
@@ -79,3 +78,12 @@ func findResource(list *metav1.APIResourceList, resourceName string) bool {
}
return false
}

func IsFeatureEnabled(featureGates []operatorv1.FeatureGate, feature string) bool {
for _, fg := range featureGates {
if fg.Feature == feature && fg.Mode == operatorv1.FeatureGateModeTypeEnable {
return true
}
}
return false
}

Member (@zhujian7, Oct 28, 2024): This still does not fit the add-on manager case, because it is enabled by default. So there are two cases:

  • enabled by default (e.g. addon-manager): as long as the feature gate flag is not explicitly set to false (including when the feature gate config is not set at all), we should output the info
  • disabled by default (e.g. work-controller): only output the info when the feature gate flag is explicitly set to true

example: https://github.com/open-cluster-management-io/ocm/blob/865ae069b3e5eab72faf3c1bcd2eb52bb7c1b8c6/pkg/registration/spoke/spokeagent.go#L419
feature gate default values: https://github.com/open-cluster-management-io/api/blob/f6c65820279078afbe536d5a6012e0b3badde3c5/feature/feature.go#L90
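
A default-aware variant along the lines the reviewer describes might look like the sketch below. The types and the extra `enabledByDefault` parameter are assumptions for illustration; the real implementation would take `operatorv1.FeatureGate` values and look up per-feature defaults from open-cluster-management.io/api/feature:

```go
package main

import "fmt"

// Stand-in type for the sketch; the real code uses operatorv1.FeatureGate
// with operatorv1.FeatureGateModeTypeEnable / ...Disable.
type FeatureGate struct {
	Feature string
	Mode    string // "Enable" or "Disable"
}

// IsFeatureEnabled honors a per-feature default: a feature that is enabled
// by default (e.g. AddonManagement) stays on unless explicitly disabled,
// while a feature that is disabled by default (e.g. ManifestWorkReplicaSet)
// turns on only when explicitly enabled.
func IsFeatureEnabled(gates []FeatureGate, feature string, enabledByDefault bool) bool {
	for _, fg := range gates {
		if fg.Feature == feature {
			// Explicit setting wins over the default.
			return fg.Mode == "Enable"
		}
	}
	// Feature gate not set at all: fall back to the default.
	return enabledByDefault
}

func main() {
	gates := []FeatureGate{{Feature: "ManifestWorkReplicaSet", Mode: "Enable"}}
	fmt.Println(IsFeatureEnabled(gates, "ManifestWorkReplicaSet", false)) // explicitly enabled
	fmt.Println(IsFeatureEnabled(gates, "AddonManagement", true))         // unset, default on
	fmt.Println(IsFeatureEnabled(nil, "ManifestWorkReplicaSet", false))   // unset, default off
}
```

With this shape, the caller passes `true` for addon-manager and `false` for work-controller, covering both of the reviewer's cases with one function.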