
Commit

[Fix] Ensure we set the default device before initializing deepspeed (#6460)

* Ensure we set the default device before initializing deepspeed

* Add CHANGELOG.md

* Update pytorch_lightning/plugins/training_type/deepspeed.py

Co-authored-by: Kaushik B <[email protected]>

Co-authored-by: Kaushik B <[email protected]>
SeanNaren and kaushikb11 authored Mar 10, 2021
1 parent 7d4e74c commit 1c013b4
Showing 2 changed files with 5 additions and 0 deletions.
CHANGELOG.md (3 additions, 0 deletions)

@@ -122,6 +122,9 @@ The format is based on [Keep a Changelog](http://keepachangelog.com/en/1.0.0/).
- Fixed logger creating directory structure too early in DDP ([#6380](https://github.com/PyTorchLightning/pytorch-lightning/pull/6380))


+- Fixed DeepSpeed additional memory use on rank 0 when default device not set early enough ([#6460](https://github.com/PyTorchLightning/pytorch-lightning/pull/6460))
+
+
- Fixed LightningModule `all_gather` on cpu tensors ([#6416](https://github.com/PyTorchLightning/pytorch-lightning/pull/6416))


pytorch_lightning/plugins/training_type/deepspeed.py (2 additions, 0 deletions)

@@ -231,6 +231,8 @@ def _init_scheduler_optimizer(self):
        return optimizer, scheduler, optimizer_frequencies

    def _initialize_deepspeed_train(self, model):
+       if self.on_gpu:
+           torch.cuda.set_device(self.root_device)
        optimizer, lightning_scheduler, optimizer_frequencies = None, None, None
        if "optimizer" not in self.config:
            rank_zero_info(
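For context, here is a minimal standalone sketch (not part of this commit) of the pattern the fix follows: each rank pins its own default CUDA device before DeepSpeed allocates anything, so implicit allocations made during initialization no longer all land on GPU 0. The `LOCAL_RANK` lookup, the `ds_config` dict, and the bare `deepspeed.initialize(..., config=...)` call are illustrative assumptions, not the plugin's actual code path.

```python
import os

import torch
import deepspeed


def init_engine(model: torch.nn.Module, ds_config: dict):
    """Initialize a DeepSpeed engine with this rank's GPU pinned first."""
    local_rank = int(os.environ.get("LOCAL_RANK", "0"))

    # The crux of the fix: set the default CUDA device *before* DeepSpeed
    # initializes. Otherwise, CUDA context creation and implicit .cuda()
    # calls during initialization can target GPU 0 on every rank, inflating
    # rank 0's memory use.
    torch.cuda.set_device(local_rank)

    engine, optimizer, _, _ = deepspeed.initialize(
        model=model,
        model_parameters=model.parameters(),
        config=ds_config,
    )
    return engine, optimizer
```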
