Add the possibility to mount Custom volumes to Scan Jobs #1900

Closed
cdtzabra opened this issue Mar 8, 2024 · 2 comments · Fixed by #2020
Labels
kind/bug Categorizes issue or PR as related to a bug.

Comments

cdtzabra commented Mar 8, 2024

What steps did you take and what happened:

I deployed the operator with the Helm chart, using the following values:

  serviceMonitor:
    enabled: true
    interval: 60s
    # -- The namespace where Prometheus expects to find service monitors
    # namespace: ~

  excludeNamespaces: "openshift,kube-system,infra-trivy-system,openshift-*"
  # ,infra-backup-projects

  operator:
    replicas: 2
    scanJobsConcurrentLimit: 3
    # keep each scan job for this long before deleting it
    scanJobTTL: "10m"

  trivy:
    command: image
    ignoreUnfixed: true
    # # -- httpProxy is the HTTP proxy used by Trivy to download the vulnerabilities database from GitHub.
    # httpProxy: ~
    # # -- httpsProxy is the HTTPS proxy used by Trivy to download the vulnerabilities database from GitHub.
    # httpsProxy: ~
    # # -- noProxy is a comma separated list of IPs and domain names that are not subject to proxy settings.
    # noProxy: ~

  # in order to test the fs option
  trivyOperator:
    scanJobAutomountServiceAccountToken: true
    scanJobPodTemplateContainerSecurityContext:
      # For filesystem scanning, Trivy needs to run as the root user
      # https://aquasecurity.github.io/trivy-operator/v0.16.1/tutorials/private-registries/
      runAsUser: 0
      privileged: true
      allowPrivilegeEscalation: true
      readOnlyRootFilesystem: true

Trivy is not able to scan an image from a private registry, even if the image is already available on the cluster (OpenShift) node:

{"level":"error","ts":"2024-03-08T10:45:16Z","logger":"reconciler.scan job","msg":"Scan job container","job":"infra-trivy-system/scan-vulnerabilityreport-79db7ff9bb","container":"backup","status.reason":"Error","status.message":"2024-03-08T10:45:25.304Z\t\u001b[31mFATAL\u001b[0m\t
image scan error: scan error: unable to initialize a scanner: unable to initialize an image scanner: 4 errors occurred:\n\t* docker error: unable to inspect the image (registry.redhat.io/openshift4/ose-cli:v4.11):
 Cannot connect to the Docker daemon at unix:///var/run/docker.sock.
 Is the docker daemon running?\n\t* 
containerd error: containerd socket not found: /run/containerd/containerd.sock\n\t*
 podman error: unable to initialize Podman client:
 no podman socket found: stat podman/podman.sock: no such file or directory\n\t* 
remote error: GET https://registry.redhat.io/auth/realms/rhcc/protocol/redhat-docker-v2/auth?scope=repository%3Aopenshift4%2Fose-cli%3Apull&service=docker-registry: UNAUTHORIZED: 
Please login to the Red Hat Registry using your Customer Portal credentials. Further instructions can be found here: https://access.redhat.com/RegistryAuthentication\n\n\n

What did you expect to happen:

I enabled Podman on the cluster nodes as documented here: https://aquasecurity.github.io/trivy/v0.49/docs/target/container_image/#podman

systemctl --user enable --now podman.socket

So the Podman socket is available on the nodes at the following paths:

  • /run/podman/podman.sock
  • /run/user/1000/podman/podman.sock
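As a sanity check on a node, the socket can be pinged through Podman's Docker-compatible API (a minimal sketch; the /_ping compat endpoint and the exact socket paths are assumptions based on Podman's service documentation):

  # Should print "OK" if the root-level socket is live
  curl --unix-socket /run/podman/podman.sock http://localhost/_ping
  # The user-level socket (enabled with systemctl --user) lives under the user's runtime dir
  curl --unix-socket /run/user/1000/podman/podman.sock http://localhost/_ping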

But the Trivy scan jobs are still failing; they cannot find it: podman error: unable to initialize Podman client: no podman socket found: stat podman/podman.sock: no such file or directory

It looks like Trivy is looking for the socket relative to the root path / (i.e. /podman/podman.sock) instead of the appropriate socket folder under /run/....
If there were an option to add extraEnvs, the path could be fixed with XDG_RUNTIME_DIR, but there is not. And even if the path were fixed, the extra/custom volumeMounts would still be needed.
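For illustration only, if the scan-job template accepted extra environment variables, redirecting the socket lookup could look like the fragment below (a hypothetical sketch: this env entry is not an existing chart option, and it assumes Trivy resolves the socket as $XDG_RUNTIME_DIR/podman/podman.sock):

      # Hypothetical scan-job container fragment -- not an existing chart key
      env:
      - name: XDG_RUNTIME_DIR
        value: /run   # Trivy would then stat /run/podman/podman.sock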

I manually created a Job from the operator's job template, in which I mounted the podman.sock and overrode the command with sleep, and then manually ran:

trivy image --slow 'registry.redhat.io/openshift4/ose-cli:v4.11' --scanners vuln --image-config-scanners secret --skip-db-update --cache-dir /tmp/trivy/.cache --quiet --list-all-pkgs --format json

It works fine if the pod is scheduled on a node where the image is available and visible in the output of podman images. The relevant fragments of the manual Job spec:

      containers:
      - command:
        - /bin/sh
        args:
        - -c
        - sleep 900
        volumeMounts:
        - name: sock
          mountPath: /run/podman/podman.sock
        # mountPath must be absolute; the socket is mounted a second time
        # where Trivy actually stats it (its working directory is /)
        - name: sock
          mountPath: /podman/podman.sock
        - name: tmp
          mountPath: /tmp
        - name: scanresult
          mountPath: /tmp/scan

      # add podman.sock to the pod volumes
      volumes:
      - name: sock
        hostPath:
          path: /run/podman/podman.sock
      - name: tmp
        emptyDir: {}
      - name: scanresult
        emptyDir: {}
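For reference, the manual check then amounts to exec-ing into the sleeping pod and rerunning the same Trivy command by hand (the pod name is a placeholder):

  kubectl -n infra-trivy-system exec -it <scan-job-pod> -- \
    trivy image --slow 'registry.redhat.io/openshift4/ose-cli:v4.11' \
      --scanners vuln --image-config-scanners secret --skip-db-update \
      --cache-dir /tmp/trivy/.cache --quiet --list-all-pkgs --format json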

After my manual test, I wanted to add the podman.sock volumeMount to the operator chart so that it could be mounted by trivy-operator in the scan jobs automatically.

The currently available options (volumes, volumeMounts) only mount them in the trivy-operator pod, not in the scan job pods.

So adding extraEnvs and extraVolumeMounts for the scan jobs/pods would be nice! A sketch of what that could look like is below.
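A minimal sketch of the requested values, assuming key names that do not exist in the chart today (all three keys are illustrative, not the chart's actual API):

  # Hypothetical values.yaml keys -- names are illustrative only
  trivyOperator:
    scanJobExtraEnvs:
    - name: XDG_RUNTIME_DIR
      value: /run
    scanJobExtraVolumes:
    - name: sock
      hostPath:
        path: /run/podman/podman.sock
    scanJobExtraVolumeMounts:
    - name: sock
      mountPath: /run/podman/podman.sock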

Environment:

  • Trivy-Operator version (use trivy-operator version): 0.20.6
  • Kubernetes version (use kubectl version): v1.26.9+ | openshift 4.13.x
  • OS (macOS 10.15, Windows 10, Ubuntu 19.10 etc):
cdtzabra added the kind/bug label on Mar 8, 2024
chen-keinan (Contributor) commented

@cdtzabra contributions are welcome

cdtzabra (Author) commented

Thanks!
