RTX2080 GPU is only being used at 30% - whilst a single CPU core is maxed out #1080
What task are you trying to perform? Work is ongoing on this; a few others are coming.
Hi, is NER named entity recognition? I'm not doing that. What is LM? Once trained, I run a script (./predict "Some text") and get back a list of labels and their scores. I have created my corpus in FastText format and eliminated labels that occur infrequently. My training code is:
LM -> language model. I have not yet worked on classification optimization, but what I can tell you is that the best way to reduce inference time is to increase GPU use by spotting and removing slow operations wherever possible. Parallelization may be an answer for some parts of the process, but so far I have had better results by vectorizing and avoiding memory copies. If you are curious, you can check what slows things down using a profiler like cProfile or the PyTorch profiler (easy to use, and it produces an easy-to-read report).
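As a minimal sketch of the cProfile suggestion above: wrap the training loop in a profiler and print the slowest calls by cumulative time. The `train_step` and `embed_batch` functions here are made-up stand-ins for the real flair forward pass, just to show the profiling pattern.

```python
import cProfile
import io
import pstats

def embed_batch(sentences):
    # Hypothetical placeholder for the real embedding step; in practice
    # this would be the model's forward pass over a mini-batch.
    return [sum(len(w) for w in s.split()) for s in sentences]

def train_step(batch):
    # Hypothetical stand-in for one training iteration.
    return embed_batch(batch)

profiler = cProfile.Profile()
profiler.enable()
for _ in range(100):
    train_step(["some text to classify"] * 32)
profiler.disable()

# Sort by cumulative time and print the top 5 entries; slow Python-side
# operations (tokenization, memory copies) show up at the top.
stream = io.StringIO()
pstats.Stats(profiler, stream=stream).sort_stats("cumulative").print_stats(5)
report = stream.getvalue()
print(report)
```

Functions that dominate cumulative time in the report are the first candidates for vectorizing or moving off the CPU.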
Thanks, I'll take a look at that. Also, for this kind of classification task, should I be using word embeddings or document embeddings?
Your code looks good, i.e. using document embeddings that stack word embeddings.
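To illustrate the idea of a document embedding built on top of word embeddings: one simple scheme is to mean-pool each token's word vector into a single fixed-size document vector, which is then fed to the classifier. The toy vectors below are made up for illustration; real ones would come from pretrained embeddings such as GloVe.

```python
# Toy pretrained word vectors (values are made up for illustration).
word_vectors = {
    "gpu": [0.5, 1.0],
    "slow": [0.25, 0.0],
    "training": [0.75, 0.5],
}

def document_embedding(tokens):
    """Mean-pool word embeddings into one fixed-size document vector."""
    dim = len(next(iter(word_vectors.values())))
    vecs = [word_vectors[t] for t in tokens if t in word_vectors]
    if not vecs:  # no known tokens: fall back to a zero vector
        return [0.0] * dim
    return [sum(v[i] for v in vecs) / len(vecs) for i in range(dim)]

doc_vec = document_embedding(["gpu", "training"])
```

Pooling is the simplest variant; flair's RNN-based document embeddings instead run the stacked word embeddings through a recurrent layer, but the input/output shape idea is the same: many word vectors in, one document vector out.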
I'm running a training model right now using the latest code, and it only appears to be using 3-10% of my GPU. [training log and GPU monitor output omitted] A colleague ran some lower-level torch code on the same machine yesterday and was getting 99% GPU utilisation.
I have worked on [...]
ok thanks - looking forward to it. |
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions. |
I am trying to train a model and I find that the GPU (RTX 2080) is only being used at approx. 30%, whilst one of my CPU cores (9900K @ 5GHz) appears to be maxed out.
I'm trying to train a corpus of roughly 100k records tagged with approx 100 keywords.
Is it expected that the GPU is not fully utilised?
Can it not use more than 1 cpu core?
What other info should I provide?