Helm values used to deploy the operator:

serviceMonitor:
  enabled: true
  interval: 60s
  # -- The namespace where Prometheus expects to find service monitors
  # namespace: ~

excludeNamespaces: "openshift,kube-system,infra-trivy-system,openshift-*"
## ,infra-backup-projects

operator:
  replicas: 2
  scanJobsConcurrentLimit: 3
  # keep scanjob during x time before deleting it
  scanJobTTL: "10m"

trivy:
  command: image
  ignoreUnfixed: true
  # # -- httpProxy is the HTTP proxy used by Trivy to download the vulnerabilities database from GitHub.
  # httpProxy: ~
  # # -- httpsProxy is the HTTPS proxy used by Trivy to download the vulnerabilities database from GitHub.
  # httpsProxy: ~
  # # -- noProxy is a comma separated list of IPs and domain names that are not subject to proxy settings.
  # noProxy: ~

## in order to test fs option
trivyOperator:
  scanJobAutomountServiceAccountToken: true
  scanJobPodTemplateContainerSecurityContext:
    # For filesystem scanning, Trivy needs to run as the root user
    # https://aquasecurity.github.io/trivy-operator/v0.16.1/tutorials/private-registries/
    runAsUser: 0
    privileged: true
    allowPrivilegeEscalation: true
    readOnlyRootFilesystem: true
Trivy is not able to scan an image from a private registry, even though the image is already available on the cluster (OpenShift) node. The scan job fails with the following error:

{"level":"error","ts":"2024-03-08T10:45:16Z","logger":"reconciler.scan job","msg":"Scan job container","job":"infra-trivy-system/scan-vulnerabilityreport-79db7ff9bb","container":"backup","status.reason":"Error","status.message":
"2024-03-08T10:45:25.304Z FATAL image scan error: scan error: unable to initialize a scanner: unable to initialize an image scanner: 4 errors occurred:
  * docker error: unable to inspect the image (registry.redhat.io/openshift4/ose-cli:v4.11): Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
  * containerd error: containerd socket not found: /run/containerd/containerd.sock
  * podman error: unable to initialize Podman client: no podman socket found: stat podman/podman.sock: no such file or directory
  * remote error: GET https://registry.redhat.io/auth/realms/rhcc/protocol/redhat-docker-v2/auth?scope=repository%3Aopenshift4%2Fose-cli%3Apull&service=docker-registry: UNAUTHORIZED: Please login to the Red Hat Registry using your Customer Portal credentials. Further instructions can be found here: https://access.redhat.com/RegistryAuthentication"}
What steps did you take and what happened:
I deployed the operator with the Helm chart, using the values shown above; the scan jobs then fail with the error quoted above.
What did you expect to happen:
I enabled podman on the cluster nodes as documented here: https://aquasecurity.github.io/trivy/v0.49/docs/target/container_image/#podman
systemctl --user enable --now podman.socket
So the podman socket is available on the nodes at the following paths:
/run/podman/podman.sock
/run/user/1000/podman/podman.sock
But the Trivy scan jobs still fail; Trivy cannot find the socket:
podman error: unable to initialize Podman client: no podman socket found: stat podman/podman.sock: no such file or directory
It looks as if Trivy is looking for the socket directly under the root path (/podman/podman.sock) instead of under the runtime directory (/run/...). The stat podman/podman.sock error suggests XDG_RUNTIME_DIR is not set in the scan-job container, so the expected $XDG_RUNTIME_DIR/podman/podman.sock path degenerates to the bare podman/podman.sock seen in the error.
If there were an option to add extraEnvs to the scan jobs, the path could be fixed by setting XDG_RUNTIME_DIR, but no such option exists. And even if the path were fixed, the scan job would still need extra/custom volumeMounts to get the socket into the container.
I manually created a job from the operator's scan-job template, mounted podman.sock into it, and overrode the command with sleep. Inside that pod I then ran:
trivy image --slow 'registry.redhat.io/openshift4/ose-cli:v4.11' --scanners vuln --image-config-scanners secret --skip-db-update --cache-dir /tmp/trivy/.cache --quiet --list-all-pkgs --format json
This works fine as long as the pod is scheduled on a node where the image is available and visible in the output of podman images.
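Roughly, the manual debug job looked like the sketch below. This is a minimal reconstruction, not the exact manifest: the job name, Trivy image tag, XDG_RUNTIME_DIR value, and mount paths are assumptions for illustration.

apiVersion: batch/v1
kind: Job
metadata:
  name: manual-trivy-podman-test         # illustrative name
  namespace: infra-trivy-system
spec:
  backoffLimit: 0
  template:
    spec:
      restartPolicy: Never
      containers:
        - name: trivy
          image: ghcr.io/aquasecurity/trivy:0.49.1   # assumed scan-job image/tag
          # command overridden with sleep so trivy can be run by hand inside the pod
          command: ["sleep", "infinity"]
          env:
            - name: XDG_RUNTIME_DIR      # so trivy resolves $XDG_RUNTIME_DIR/podman/podman.sock
              value: /run
          securityContext:
            runAsUser: 0                 # matches scanJobPodTemplateContainerSecurityContext above
          volumeMounts:
            - name: podman-sock
              mountPath: /run/podman/podman.sock   # host socket path reported above
            - name: cache
              mountPath: /tmp/trivy/.cache         # matches --cache-dir in the command above
      volumes:
        - name: podman-sock
          hostPath:
            path: /run/podman/podman.sock
            type: Socket
        - name: cache
          emptyDir: {}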
After my manual test, I wanted to add the podman.sock volumeMount to the operator chart so that trivy-operator would mount it into the scan jobs automatically.
The currently available volumes/volumeMounts options only mount volumes into the trivy-operator pod itself, not into the scan-job pods.
So adding extraEnvs and extraVolumeMounts options for the scan jobs/pods would be nice!
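A hypothetical values snippet for what such options could look like. The key names (scanJobExtraEnvs, scanJobExtraVolumeMounts, scanJobExtraVolumes) are only suggestions and do not exist in the chart today; they are placed under trivyOperator: simply because the other scan-job settings live there.

trivyOperator:
  # suggested: env vars injected into every scan-job container
  scanJobExtraEnvs:
    - name: XDG_RUNTIME_DIR
      value: /run
  # suggested: extra mounts/volumes added to every scan-job pod
  scanJobExtraVolumeMounts:
    - name: podman-sock
      mountPath: /run/podman/podman.sock
      readOnly: true
  scanJobExtraVolumes:
    - name: podman-sock
      hostPath:
        path: /run/podman/podman.sock
        type: Socket

With something along these lines, the scan jobs could reach the node's podman socket without a manually crafted job.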
Environment:
trivy-operator version: 0.20.6
kubectl version: v1.26.9+ | OpenShift 4.13.x