
Assistance with NVIDIA Quadro P4000 Integration in Glances (Docker on TrueNAS Scale) #3096

Open
yeeahnick opened this issue Jan 27, 2025 · 8 comments

Comments


yeeahnick commented Jan 27, 2025

Hello,

I'm encountering an issue where my NVIDIA Quadro P4000 is not being detected by Glances. I'm using the docker-compose (latest-full) configuration and have enabled NVIDIA GPU support in the application settings while building the app in TrueNAS. This configuration sets the NVIDIA_VISIBLE_DEVICES and NVIDIA_DRIVER_CAPABILITIES variables.
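For reference, my understanding is that the GPU-related part of this setup boils down to something like the following docker run equivalent (the values are illustrative; the actual configuration is generated by TrueNAS):

docker run --rm \
  -e NVIDIA_VISIBLE_DEVICES=all \
  -e NVIDIA_DRIVER_CAPABILITIES=compute,utility \
  --gpus all \
  nicolargo/glances:latest-full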

With these settings, I can see the NVIDIA driver listed under the file system pane in Glances, but the GPU does not appear when I access the endpoint:
http://IP:61208/api/4/gpu.

Interestingly, when I navigate to http://IP:61208/api/4/full, I can see several NVIDIA-related entries.

To ensure the GPU is properly assigned in the Docker Compose configuration, I ran the following command in the TrueNAS shell:

midclt call -job app.update glances-custom '{"values": {"resources": {"gpus": {"use_all_gpus": false, "nvidia_gpu_selection": {"PCI_SLOT": {"use_gpu": true, "uuid": "GPU-95943d54-8d67-b91e-00cb-ca3662cfd863"}}}}}}'
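(For anyone trying to reproduce: GPU UUIDs like the one above can be listed on the TrueNAS host with nvidia-smi, e.g.:)

nvidia-smi -L
# GPU 0: Quadro P4000 (UUID: GPU-xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx)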

Despite this, the GPU still doesn’t show up in the /gpu endpoint.

Does anyone have suggestions or insights on what might be missing or misconfigured? Any help would be greatly appreciated!

Thank you!

@yeeahnick yeeahnick changed the title Assistance with NVIDIA Quadro P4000 Integration in Glances (Docker on TrueNAS) Assistance with NVIDIA Quadro P4000 Integration in Glances (Docker on TrueNAS Scale) Jan 27, 2025
@nicolargo (Owner)

Hi @yeeahnick

Can you copy/paste the result of a curl on http://ip:61208/api/4/full?
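If the full output is too big to paste, something like this (adjust the IP) is enough to extract the GPU-related entries:

curl -s http://ip:61208/api/4/full | python3 -m json.tool > full.json
grep -i -n nvidia full.json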

Thanks.

@yeeahnick (Author)

Hi @nicolargo

Thanks for the quick response.

Unfortunately, the curl of /full no longer shows the NVIDIA GPU (same thing under the file system pane in Glances). There was a TrueNAS Scale update (24.10.2) yesterday that included NVIDIA fixes, which I guess made it worse for Glances. To be clear, my GPU is working in other Docker containers running on the same system.

But I can give more information.

When I run "ls /dev | grep nvidia" in the shell of Glances I see the following:

nvidia-caps
nvidia-modeset
nvidia-uvm
nvidia-uvm-tools
nvidia0
nvidiactl

When I run nvidia-smi, nothing is found (this works in other containers on the same system).

When I run "env" in the shell of Glances I see that the NVIDIA capabilities and devices are enabled. (environment variables)

When I run "glances | grep -i runtime" in the shell of Glances it just hangs.

I will fiddle with it again tonight to see if I can get the NVIDIA entries to show up again in the curl of /full.

Let me know if I need to provide anything else.

Cheers!

@nicolargo (Owner)

In the shell of Glances, can you run the following command:

glances -V

It will display the path to the glances.log file.

then run:

glances -d --stdout gpu --stop-after 3

And copy paste:

  • the glances.log file (relevant lines)
  • output of the command
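For the log, something like this should be enough to pull out the relevant lines (use the path printed by glances -V):

grep -i -E 'gpu|nvml|nvidia' /path/to/glances.log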

Thanks !


XSvirusSAFE commented Jan 30, 2025

Having the same issue here. Hope the info within the screenshot can help. (screenshot attached)


yeeahnick commented Jan 30, 2025

@nicolargo

(screenshots attached)


yeeahnick commented Jan 30, 2025

> Having the same issue here. Hope the info within the screenshot can help.

You can run this "cat /tmp/glances-root.log" in the Glances shell to view the log file.


kbirger commented Jan 31, 2025

Exact same results here. I have also noticed that inside the container nvidia-smi reports "not found". Bizarre, because it's there:

/app # which nvidia-smi
/usr/bin/nvidia-smi
/app # ls -l /usr/bin/nvidia-smi
-rwxr-xr-x    1 root     root       1068640 Jan 30 04:54 /usr/bin/nvidia-smi
/app # stat /usr/bin/nvidia-smi
  File: /usr/bin/nvidia-smi
  Size: 1068640         Blocks: 2088       IO Block: 4096   regular file
Device: fc06h/64518d    Inode: 9710205     Links: 1
Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
Access: 2025-01-30 04:55:14.164896111 +0000
Modify: 2025-01-30 04:54:48.715960042 +0000
Change: 2025-01-30 04:54:48.715960042 +0000
/app # nvidia-smi
sh: nvidia-smi: not found
/app # /usr/bin/nvidia-smi
sh: /usr/bin/nvidia-smi: not found

/app # id
uid=0(root) gid=0(root) groups=0(root),1(bin),2(daemon),3(sys),4(adm),6(disk),10(wheel),11(floppy),20(dialout),26(tape),27(video)
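My guess (not verified): nvidia-smi is a glibc-linked binary mounted into what looks like a musl/BusyBox image, so the "not found" may come from the shell failing to find the binary's ELF interpreter rather than the binary itself. A quick way to check, if these tools are in the image:

/app # ldd /usr/bin/nvidia-smi               # lists the interpreter and any missing shared libraries
/app # ls -l /lib64/ld-linux-x86-64.so.2     # loader a glibc-built nvidia-smi would expect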

From the root log:

2025-01-31 01:57:40,193 -- DEBUG -- NVML Shared Library (libnvidia-ml.so.1) not Found, Nvidia GPU plugin is disabled

However, I've got other containers on the system that are using the GPU no problem.
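A quick way to check whether the library from that log line is mounted into the Glances container at all (an empty result would be consistent with the NVML error above):

/app # find / -name 'libnvidia-ml.so*' 2>/dev/null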

Please let me know if you want to see any other parts of the log.

@yeeahnick (Author)

Same with nvidia-smi.

(screenshot attached)
