Proposed refactor
Motivation
#12983 attempted to fix an issue where the evaluation step methods in the DDPStrategy would call the LightningModule.x_step methods directly instead of going through the LightningDoublePrecisionModule.x_step methods. This introduced a typing issue where we can't guarantee that self.model implements the x_step methods.
Pitch
Treat #12983 as a temporary fix and refactor away the need to have a precision wrapper.
All the precision wrapper does is process the input arguments.
This input processing could be done directly in the strategy.
Before
After
Additional context
If you enjoy Lightning, check out our other projects! ⚡
Metrics: Machine learning metrics for distributed, scalable PyTorch applications.
Lite: Enables pure PyTorch users to scale their existing code on any kind of device while retaining full control over their own loops and optimization logic.
Flash: The fastest way to get a Lightning baseline! A collection of tasks for fast prototyping, baselining, fine-tuning, and solving problems with deep learning.
Bolts: Pretrained SOTA Deep Learning models, callbacks, and more for research and production with PyTorch Lightning and PyTorch.
Lightning Transformers: Flexible interface for high-performance research using SOTA Transformers leveraging PyTorch Lightning, Transformers, and Hydra.
cc @justusschock @awaelchli @rohitgr7 @carmocca @akihironitta