Trainer(accelerator="tpu") should raise exception if TPU not found #12047
Labels: accelerator: tpu (Tensor Processing Unit), accelerator: cuda (Compute Unified Device Architecture GPU), bug (Something isn't working)
🐛 Bug
In an environment without a TPU, Trainer(accelerator="tpu") should raise an exception, just like Trainer(tpu_cores=8) does.

To Reproduce

Expected behavior

See title.
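The expected fail-fast behavior can be sketched as a standalone availability check. This is a hypothetical illustration, not Lightning's actual implementation: the names tpu_available and select_accelerator are invented here, and the exception class only mirrors the name Lightning uses for misconfiguration errors.

```python
class MisconfigurationException(Exception):
    """Raised when the requested hardware is unavailable."""


def tpu_available() -> bool:
    # A real check would probe the TPU runtime; here we just test
    # whether the torch_xla package can be imported at all.
    try:
        import torch_xla  # noqa: F401
        return True
    except ImportError:
        return False


def select_accelerator(accelerator: str) -> str:
    # Fail fast at construction time instead of at fit() time.
    if accelerator == "tpu" and not tpu_available():
        raise MisconfigurationException(
            "You requested accelerator='tpu', but no TPU is available."
        )
    return accelerator
```

On a machine without torch_xla installed, select_accelerator("tpu") raises immediately, which is the behavior the issue asks Trainer(accelerator="tpu") to adopt.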
Environment

How you installed PyTorch (conda, pip, source):
Output of torch.__config__.show():

Additional context
Trainer(accelerator="gpu", devices=8)
expectedly raises an exception if no GPUs are found.

cc @kaushikb11 @rohitgr7 @justusschock @awaelchli @akihironitta