Underutilization of GPU #37
Comments
Very interesting! Multi-GPU support is definitely on our list, possibly for release 0.3. As for the low GPU usage, that is strange; we will have to run some tests to see what happens here.
Same issue here on the NER model:

```
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 410.79       Driver Version: 410.79       CUDA Version: 10.0     |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|===============================+======================+======================|
|   0  Tesla V100-SXM2...  Off  | 00000000:00:1E.0 Off |                    0 |
| N/A   37C    P0    38W / 300W |   2353MiB / 16130MiB |     16%      Default |
+-------------------------------+----------------------+----------------------+
```

Used the NER model.
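A single `nvidia-smi` snapshot only shows one instant; to see how utilization actually varies over a training run, it can be polled periodically. A minimal sketch, assuming `nvidia-smi` is on `PATH` (the helper names `parse_utilization` and `current_utilization` are hypothetical, not part of the library):

```python
import subprocess

def parse_utilization(csv_text: str) -> list[int]:
    # Parse the output of:
    #   nvidia-smi --query-gpu=utilization.gpu --format=csv,noheader,nounits
    # which is one integer (percent) per visible GPU, one per line.
    return [int(line.strip()) for line in csv_text.splitlines() if line.strip()]

def current_utilization() -> list[int]:
    # Query the current per-GPU utilization; raises if nvidia-smi is missing.
    out = subprocess.run(
        ["nvidia-smi", "--query-gpu=utilization.gpu",
         "--format=csv,noheader,nounits"],
        capture_output=True, text=True, check=True,
    ).stdout
    return parse_utilization(out)
```

Calling `current_utilization()` in a loop (e.g. once per second from a second terminal) gives a rough utilization trace alongside training.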
Just to chime in, I'm experiencing the same issue with the BiLSTM+CRF model: GPU utilisation typically hovers around 30%, occasionally jumping up to 70 or 80%.
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
A lot of improvements were added over time, so GPU usage should be much higher now.
When running an NER experiment (run_ner.py), I found that the model does not fully utilize my GPU. I have two Nvidia Titan Xp GPUs on my machine, but only one of them is used, with utilization varying between 10% and 60%.
Training on the same data with NeuroNLP2 keeps both GPUs above 90% utilization.
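Until multi-GPU support ships, a PyTorch model can be wrapped in `torch.nn.DataParallel` by hand, which splits each batch across all visible GPUs. A minimal sketch; the `nn.Linear` here is only a hypothetical stand-in for the actual NER model:

```python
import torch
import torch.nn as nn

# Hypothetical stand-in for the real model (a BiLSTM+CRF in practice).
model = nn.Linear(128, 16)

if torch.cuda.is_available():
    model = model.cuda()
    # With more than one visible GPU, DataParallel replicates the model
    # and scatters each batch across devices along dimension 0.
    if torch.cuda.device_count() > 1:
        model = nn.DataParallel(model)

batch = torch.randn(32, 128)
if torch.cuda.is_available():
    batch = batch.cuda()
output = model(batch)  # shape: (32, 16)
```

Note that `DataParallel` only helps if the batch is large enough to keep each replica busy; small batches are one common cause of the low utilization reported here.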