
operator:1.1.0 got SIGSEGV panic (possibly because of no spec.ingress?) #3

Closed
sgielen opened this issue Jan 19, 2025 · 2 comments


sgielen commented Jan 19, 2025

This has been happening hourly ever since I first deployed the operator. Judging by the timestamps in the log and the number of restarts per hour, I think the panic occurs during the second hourly reconciliation of each container run. The image is ghcr.io/coroot/coroot-operator:1.1.0.

coroot-operator-86c86b8845-dvc5f        1/1     Running   44 (54m ago)   44h
2025-01-19T10:06:07Z	INFO	got app versions: map[coroot:1.7.2 coroot-cluster-agent:1.1.3 coroot-ee:1.7.2 coroot-node-agent:1.23.4]
2025-01-19T10:06:07Z	INFO	starting manager
2025-01-19T10:06:07Z	INFO	starting server	{"name": "health probe", "addr": "[::]:8081"}
2025-01-19T10:06:07Z	INFO	Starting EventSource	{"controller": "coroot", "controllerGroup": "coroot.com", "controllerKind": "Coroot", "source": "kind source: *v1.Coroot"}
2025-01-19T10:06:07Z	INFO	Starting EventSource	{"controller": "coroot", "controllerGroup": "coroot.com", "controllerKind": "Coroot", "source": "kind source: *v1.Deployment"}
2025-01-19T10:06:07Z	INFO	Starting EventSource	{"controller": "coroot", "controllerGroup": "coroot.com", "controllerKind": "Coroot", "source": "kind source: *v1.StatefulSet"}
2025-01-19T10:06:07Z	INFO	Starting EventSource	{"controller": "coroot", "controllerGroup": "coroot.com", "controllerKind": "Coroot", "source": "kind source: *v1.DaemonSet"}
2025-01-19T10:06:07Z	INFO	Starting EventSource	{"controller": "coroot", "controllerGroup": "coroot.com", "controllerKind": "Coroot", "source": "kind source: *v1.Service"}
2025-01-19T10:06:07Z	INFO	Starting EventSource	{"controller": "coroot", "controllerGroup": "coroot.com", "controllerKind": "Coroot", "source": "kind source: *v1.ServiceAccount"}
2025-01-19T10:06:07Z	INFO	Starting EventSource	{"controller": "coroot", "controllerGroup": "coroot.com", "controllerKind": "Coroot", "source": "kind source: *v1.ClusterRole"}
2025-01-19T10:06:07Z	INFO	Starting EventSource	{"controller": "coroot", "controllerGroup": "coroot.com", "controllerKind": "Coroot", "source": "kind source: *v1.ClusterRoleBinding"}
2025-01-19T10:06:07Z	INFO	Starting EventSource	{"controller": "coroot", "controllerGroup": "coroot.com", "controllerKind": "Coroot", "source": "kind source: *v1.PersistentVolumeClaim"}
2025-01-19T10:06:07Z	INFO	Starting EventSource	{"controller": "coroot", "controllerGroup": "coroot.com", "controllerKind": "Coroot", "source": "kind source: *v1.Secret"}
2025-01-19T10:06:07Z	INFO	Starting EventSource	{"controller": "coroot", "controllerGroup": "coroot.com", "controllerKind": "Coroot", "source": "kind source: *v1.Ingress"}
2025-01-19T10:06:07Z	INFO	Starting Controller	{"controller": "coroot", "controllerGroup": "coroot.com", "controllerKind": "Coroot"}
2025-01-19T10:06:08Z	INFO	Starting workers	{"controller": "coroot", "controllerGroup": "coroot.com", "controllerKind": "Coroot", "worker count": 1}
2025-01-19T11:06:08Z	INFO	got app versions: map[coroot:1.7.2 coroot-cluster-agent:1.1.3 coroot-ee:1.7.2 coroot-node-agent:1.23.4]
panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0x20 pc=0x1178a71]

goroutine 79 [running]:
golang.org/x/time/rate.(*Limiter).wait(0xc0001781e0, {0x0, 0x0}, 0x1, {0x31?, 0xc000a76eca?, 0x2a45440?}, 0x1b66538)
	/go/pkg/mod/golang.org/x/[email protected]/rate/rate.go:261 +0x1f1
golang.org/x/time/rate.(*Limiter).WaitN(0xc0001781e0, {0x0, 0x0}, 0x1)
	/go/pkg/mod/golang.org/x/[email protected]/rate/rate.go:246 +0x50
golang.org/x/time/rate.(*Limiter).Wait(...)
	/go/pkg/mod/golang.org/x/[email protected]/rate/rate.go:231
k8s.io/client-go/util/flowcontrol.(*tokenBucketRateLimiter).Wait(0x18c5320?, {0x0?, 0x0?})
	/go/pkg/mod/k8s.io/[email protected]/util/flowcontrol/throttle.go:131 +0x25
k8s.io/client-go/rest.(*Request).tryThrottleWithInfo(0xc000aafd40, {0x0, 0x0}, {0x0, 0x0})
	/go/pkg/mod/k8s.io/[email protected]/rest/request.go:617 +0xb4
k8s.io/client-go/rest.(*Request).tryThrottle(...)
	/go/pkg/mod/k8s.io/[email protected]/rest/request.go:645
k8s.io/client-go/rest.(*Request).request(0xc000aafd40, {0x0, 0x0}, 0xc0005a1a30)
	/go/pkg/mod/k8s.io/[email protected]/rest/request.go:1128 +0x215
k8s.io/client-go/rest.(*Request).Do(0xc000aafd40, {0x0, 0x0})
	/go/pkg/mod/k8s.io/[email protected]/rest/request.go:1202 +0xad
sigs.k8s.io/controller-runtime/pkg/client.(*typedClient).Delete(0xc000344ab0?, {0x0, 0x0}, {0x1d1afe8?, 0xc000e03ce0?}, {0x0, 0x0, 0xc0005a1bb8?})
	/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/client/typed_client.go:87 +0x35a
sigs.k8s.io/controller-runtime/pkg/client.(*client).Delete(0xc000344ab0?, {0x0?, 0x0?}, {0x1d1afe8?, 0xc000e03ce0?}, {0x0?, 0x0?, 0x0?})
	/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/client/client.go:304 +0x75
github.io/coroot/operator/controller.(*CorootReconciler).CreateOrUpdate(0xc000288140, {0x0, 0x0}, 0xc000934008, {0x1d1afe8, 0xc000e03ce0}, 0x1, 0xc000179d10)
	/workspace/controller/controller.go:172 +0x303
github.io/coroot/operator/controller.(*CorootReconciler).CreateOrUpdateIngress(0xc000288140, {0x0, 0x0}, 0xc000934008, 0xc000e03ce0, 0x1)
	/workspace/controller/controller.go:270 +0x14b
github.io/coroot/operator/controller.(*CorootReconciler).Reconcile(0xc000288140, {0x0, 0x0}, {{{0xc000112fd8, 0x6}, {0xc000113046, 0x6}}})
	/workspace/controller/controller.go:136 +0x17b9
github.io/coroot/operator/controller.NewCorootReconciler.func1()
	/workspace/controller/controller.go:58 +0x2d1
created by github.io/coroot/operator/controller.NewCorootReconciler in goroutine 1
	/workspace/controller/controller.go:51 +0x136
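
Looking at the trace, the context argument shows up as {0x0, 0x0} in every frame from Reconcile down to the rate limiter, which I read as a nil context.Context reaching client-go's client-side throttle. If that reading is right, I believe the same panic can be reproduced outside the operator with a tiny program (this is just my own sketch, not the operator's code):

```go
// Minimal sketch reproducing the same nil pointer dereference: calling the
// golang.org/x/time rate limiter with a nil context makes (*Limiter).wait
// panic as soon as it calls a method (ctx.Done()/ctx.Deadline()) on the
// nil interface.
package main

import "golang.org/x/time/rate"

func main() {
	lim := rate.NewLimiter(rate.Limit(10), 1)
	// Panics with "invalid memory address or nil pointer dereference",
	// matching the signature in the trace above.
	_ = lim.Wait(nil)
}
```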

This is the Coroot YAML I currently have in the cluster:

apiVersion: coroot.com/v1
kind: Coroot
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: <snip>
  creationTimestamp: "2025-01-17T15:24:14Z"
  generation: 2
  labels:
    argocd.argoproj.io/instance: coroot
  name: coroot
  namespace: coroot
  resourceVersion: "16082256271"
  uid: 9a23dc13-7043-4803-90a1-f7e3a7a0290e
spec:
  clickhouse:
    replicas: 1
    shards: 1
    storage:
      size: 30Gi
  enterpriseEdition:
    licenseKey: <snip>
  prometheus:
    storage:
      size: 10Gi
  storage:
    size: 10Gi

Some additional information, since I see the stack trace mentions ingresses: I have never had spec.ingress configured, and the namespace has never contained an Ingress. I intend to deploy an Ingress later on, but for now I'm troubleshooting with kubectl port-forward until I'm confident about the setup.
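
If the cause really is the missing spec.ingress, my guess (purely a guess — I haven't read the operator's code, and all names in the snippet below are made up) is that the "no ingress configured" branch deletes a previously created Ingress but ends up with a nil context on that path. The kind of guard I'd expect looks roughly like this:

```go
// Hypothetical sketch only — reconcileIngress, the ingressConfigured flag and
// the surrounding wiring are assumptions, not the operator's real API.
package sketch

import (
	"context"

	networkingv1 "k8s.io/api/networking/v1"
	"sigs.k8s.io/controller-runtime/pkg/client"
)

func reconcileIngress(ctx context.Context, c client.Client, ingressConfigured bool, ing *networkingv1.Ingress) error {
	if !ingressConfigured {
		// spec.ingress is absent: clean up any Ingress created earlier,
		// passing the real (non-nil) reconcile context so client-go's
		// throttle can safely wait on it.
		return client.IgnoreNotFound(c.Delete(ctx, ing))
	}
	// ...create or update the Ingress from the spec here...
	return nil
}
```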

sgielen changed the title from "operator:1.1.0 got SIGSEGV panic" to "operator:1.1.0 got SIGSEGV panic (possibly because of no spec.ingress?)" on Jan 19, 2025
@apetruhin (Member) commented:

@sgielen, thank you for the report. We'll try to fix it ASAP.

@apetruhin (Member) commented:

Fixed in operator:1.1.1
