
Replace mentions of .type_as() in our docs #14554

Closed
awaelchli opened this issue Sep 6, 2022 · 5 comments · Fixed by #15027
awaelchli (Contributor) commented Sep 6, 2022

📚 Documentation

As a follow-up to #2585, we should consider removing mentions of the .type_as() syntax from our docs and replacing them with best practices for device placement and type conversion.
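For context, the pattern being replaced looks roughly like this (an illustrative sketch, not an exact excerpt from the docs; variable names are made up):

import torch

# Device-agnostic pattern built on type_as: create a tensor, then match
# the tensor type of an existing tensor x (equivalent to .type(x.type()))
x = torch.randn(2, 3)             # assume x already lives on the right device
z = torch.zeros(2, 3).type_as(x)

# One best-practice alternative: specify device and dtype at creation time
z = torch.zeros(2, 3, device=x.device, dtype=x.dtype)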


If you enjoy Lightning, check out our other projects! ⚡

  • Metrics: Machine learning metrics for distributed, scalable PyTorch applications.

  • Lite: Enables pure PyTorch users to scale their existing code on any kind of device while retaining full control over their own loops and optimization logic.

  • Flash: The fastest way to get a Lightning baseline! A collection of tasks for fast prototyping, baselining, fine-tuning, and solving problems with deep learning.

  • Bolts: Pretrained SOTA Deep Learning models, callbacks, and more for research and production with PyTorch Lightning and PyTorch.

  • Lightning Transformers: Flexible interface for high-performance research using SOTA Transformers leveraging PyTorch Lightning, Transformers, and Hydra.

cc @Borda @rohitgr7 @Felonious-Spellfire

awaelchli added the docs (Documentation related) label Sep 6, 2022
awaelchli added the good first issue (Good for newcomers) label Sep 6, 2022
nsarang (Contributor) commented Sep 6, 2022

Tensor.to can be used as a safe replacement for type_as. It also accepts another tensor as input.

From the documentation:

Tensor.to(other, non_blocking=False, copy=False) → Tensor
Returns a Tensor with same torch.dtype and torch.device as the Tensor other. When non_blocking, tries to convert asynchronously with respect to the host if possible, e.g., converting a CPU Tensor with pinned memory to a CUDA Tensor. When copy is set, a new Tensor is created even when the Tensor already matches the desired conversion.
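In code, the replacement looks like this (a minimal sketch; the shapes and names are made up):

import torch

x = torch.randn(4, 4, dtype=torch.float16)  # stand-in for a tensor with the target dtype/device
y = torch.zeros(4, 4)

y_old = y.type_as(x)  # old pattern: equivalent to y.type(x.type())
y_new = y.to(x)       # replacement: matches both torch.dtype and torch.device of x

assert y_new.dtype == x.dtype and y_new.device == x.device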

yipliu commented Sep 6, 2022

I often use the following approach:

# Inside a LightningModule, self.device is known
torch.zeros(B, S, device=self.device)
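Expanded into a minimal LightningModule sketch (the class name, shapes, and loss are made up for illustration):

import torch
import pytorch_lightning as pl

class MaskedModel(pl.LightningModule):  # hypothetical example module
    def __init__(self):
        super().__init__()
        self.layer = torch.nn.Linear(8, 1)

    def training_step(self, batch, batch_idx):
        B, S = batch.shape[:2]
        # self.device tracks wherever the Trainer has moved the module,
        # so new tensors are created on the right device directly
        mask = torch.ones(B, S, device=self.device)
        return self.layer(batch * mask.unsqueeze(-1)).mean()

    def configure_optimizers(self):
        return torch.optim.SGD(self.parameters(), lr=0.1)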

amrutharajashekar (Contributor) commented
Hey @awaelchli, can I work on this issue? I would like to contribute.

awaelchli (Contributor, Author) commented
@amrutha1098 Yes, that would be greatly appreciated! And please let me know if you need any further guidance regarding this issue or the contribution process.

amrutharajashekar (Contributor) commented Oct 7, 2022

Hi @awaelchli, I have submitted the PR with the changes. Please let me know if any changes are required.
Thanks for the help!
