
Multi-GPU or how to choose a particular GPU #12

Open
mcastrorennes opened this issue Oct 9, 2020 · 1 comment

@mcastrorennes

Hi,

I am confused about the use of the GPUs. The Neural_Network class uses the variable gpu_number (version 0.34), but if GPU 0 is busy, I get an error. The latest version of MIScnn uses the variable multi_gpu (boolean).

Could you explain to me how to set the use of a particular GPU?

Thanks

@muellerdo
Member

muellerdo commented Oct 9, 2020

Hey @mcastrorennes,

The old parameter gpu_number of the Neural_Network class defines the number of GPUs which will be used in parallel for multi-GPU usage. So a single model training uses multiple GPUs in order to speed up the process.

The Neural_Network class used the Keras 'multi_gpu_model' function, in which you can specify how many GPUs you want to use. This function is now deprecated and was replaced with the MirroredStrategy from TensorFlow, which works more like a boolean on/off switch. Therefore, I replaced the Neural_Network class parameter gpu_number (int, by default 1) with the new variable multi_gpu (boolean, by default False); see the sketch after the references below.

References:
https://www.tensorflow.org/api_docs/python/tf/keras/utils/multi_gpu_model
https://www.tensorflow.org/api_docs/python/tf/distribute/MirroredStrategy
frankkramer-lab/MIScnn#44
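
For illustration, here is a minimal sketch of how such a boolean switch maps onto MirroredStrategy in plain Keras/TensorFlow. This is not actual MIScnn code; build_model is a hypothetical toy model used only to keep the example self-contained:

import tensorflow as tf

def build_model():
    # Hypothetical toy Keras model, only for illustration
    return tf.keras.Sequential([
        tf.keras.layers.Dense(32, activation="relu", input_shape=(16,)),
        tf.keras.layers.Dense(1),
    ])

multi_gpu = True  # corresponds to the new boolean Neural_Network parameter

if multi_gpu:
    # MirroredStrategy replicates the model on all GPUs visible to TensorFlow
    strategy = tf.distribute.MirroredStrategy()
    with strategy.scope():
        model = build_model()
        model.compile(optimizer="adam", loss="mse")
else:
    # Single-device training without a distribution strategy
    model = build_model()
    model.compile(optimizer="adam", loss="mse")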

Specify a single or multiple particular GPUs:

Could you explain to me how to set the use of a particular GPU?

If you have multiple GPUs in your cluster, let's say 4 GPUs, but the first two GPUs (0 & 1) are already in use by your colleague, then you have to tell your system to use only the last 2 GPUs (2 & 3).

How can we do this:

-> Via Bash:

# Define the environment variable CUDA_VISIBLE_DEVICES with a single GPU id
export CUDA_VISIBLE_DEVICES=2

# Or for multiple GPU ids
export CUDA_VISIBLE_DEVICES=2,3

-> Via Python and environment variables at the start of the script:

import os
# Single GPU ID 2
os.environ["CUDA_VISIBLE_DEVICES"] = "2"

# Multiple GPUs (2 & 3)
os.environ["CUDA_VISIBLE_DEVICES"] = "2,3"

Be aware that the Python variant requires that the IDs are provided as a string, and that the environment variable should ideally be set before TensorFlow is imported, otherwise it may have no effect.

-> Via Python with TensorFlow:
Check out this guide:
https://www.tensorflow.org/guide/gpu#using_a_single_gpu_on_a_multi-gpu_system
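
For example, a small sketch along the lines of that guide, assuming at least three GPUs are visible, which restricts TensorFlow to GPU 2 from inside Python:

import tensorflow as tf

gpus = tf.config.list_physical_devices("GPU")
if gpus:
    try:
        # Restrict TensorFlow to only use the GPU with id 2
        # (assumes at least three physical GPUs are visible)
        tf.config.set_visible_devices(gpus[2], "GPU")
        logical_gpus = tf.config.list_logical_devices("GPU")
        print(len(gpus), "physical GPUs,", len(logical_gpus), "logical GPU visible")
    except RuntimeError as e:
        # Visible devices must be set before the GPUs have been initialized
        print(e)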

Hope that I was able to help you.

Cheers,
Dominik

@muellerdo muellerdo self-assigned this Oct 9, 2020
@muellerdo muellerdo added the question Further information is requested label Oct 9, 2020