docs: update pytorch_lightning imports #16864

Merged: 13 commits, Feb 27, 2023
2 changes: 1 addition & 1 deletion .github/workflows/ci-dockers-pytorch.yml
@@ -230,7 +230,7 @@ jobs:
timeout-minutes: 55

build-docs:
# if: github.event.pull_request.draft == false # fixme
if: github.event.pull_request.draft == false
runs-on: ubuntu-20.04
steps:
- uses: actions/checkout@v3
18 changes: 2 additions & 16 deletions .github/workflows/docs-checks.yml
@@ -65,12 +65,6 @@ jobs:
key: docs-test-${{ matrix.pkg-name }}-${{ hashFiles('requirements/${{ matrix.pkg-name }}/*.txt') }}
restore-keys: docs-test-${{ matrix.pkg-name }}-

- name: Install LAI package
# This is needed as App docs is heavily using/referring to lightning package
if: ${{ matrix.pkg-name == 'lightning' }}
run: |
pip install -e . -U -v -f pypi -f ${TORCH_URL}

- name: Adjust docs refs
if: ${{ matrix.pkg-name == 'lightning' }}
run: |
@@ -85,8 +79,6 @@ jobs:
python -c "n = '${{ matrix.pkg-name }}' ; print('REQ_DIR=' + {'lightning': 'app'}.get(n, n))" >> $GITHUB_ENV

- name: Install this package
env:
PACKAGE_NAME: ${{ matrix.pkg-name }}
run: |
pip install -e .[extra,cloud,ui] -U -r requirements/${{ env.REQ_DIR }}/docs.txt -f pypi -f ${TORCH_URL}
pip list
@@ -138,8 +130,6 @@ jobs:
python -c "n = '${{ matrix.pkg-name }}' ; print('REQ_DIR=' + {'lightning': 'app'}.get(n, n))" >> $GITHUB_ENV

- name: Install package & dependencies
env:
PACKAGE_NAME: ${{ matrix.pkg-name }}
run: |
pip --version
pip install -e . -U -r requirements/${{ env.REQ_DIR }}/docs.txt -f pypi -f ${TORCH_URL}
@@ -148,19 +138,15 @@

- name: Make Documentation
working-directory: ./docs/${{ env.DOCS_DIR }}
run: |
make html --debug --jobs $(nproc) SPHINXOPTS="-W --keep-going"
run: make html --debug --jobs $(nproc) SPHINXOPTS="-W --keep-going"

- name: Check External Links in Sphinx Documentation (Optional)
working-directory: ./docs/${{ env.DOCS_DIR }}
run: |
make linkcheck
run: make linkcheck
continue-on-error: true

- name: Upload built docs
uses: actions/upload-artifact@v3
with:
name: docs-${{ matrix.pkg-name }}-${{ github.sha }}
path: docs/build/html/
# Use always() to always run this step to publish test results when there are test failures
if: success()
2 changes: 1 addition & 1 deletion docs/source-pytorch/accelerators/accelerator_prepare.rst
@@ -50,7 +50,7 @@ This will make your code scale to any arbitrary number of GPUs or TPUs with Ligh
z = torch.Tensor(2, 3)
z = z.to(x)

The :class:`~pytorch_lightning.core.module.LightningModule` knows what device it is on. You can access the reference via ``self.device``.
The :class:`~lightning.pytorch.core.module.LightningModule` knows what device it is on. You can access the reference via ``self.device``.
Sometimes it is necessary to store tensors as module attributes. However, if they are not parameters they will
remain on the CPU even if the module gets moved to a new device. To prevent that and remain device agnostic,
register the tensor as a buffer in your module's ``__init__`` method with :meth:`~torch.nn.Module.register_buffer`.
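
A minimal sketch of this pattern (not part of the diff; the class and buffer names are illustrative):

.. code-block:: python

    import torch
    from lightning.pytorch import LightningModule


    class LitModel(LightningModule):
        def __init__(self):
            super().__init__()
            # registered as a buffer, so it follows the module to whatever device Lightning uses
            self.register_buffer("sigma", torch.eye(3))

        def forward(self, x):
            # self.sigma is already on the same device as the module, no manual .to() needed
            return x @ self.sigma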
6 changes: 3 additions & 3 deletions docs/source-pytorch/accelerators/gpu_intermediate.rst
@@ -228,9 +228,9 @@ DDP can also be used with 1 GPU, but there's no reason to do so other than debug

Implement Your Own Distributed (DDP) training
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
If you need your own way to init PyTorch DDP you can override :meth:`pytorch_lightning.strategies.ddp.DDPStrategy.setup_distributed`.
If you need your own way to init PyTorch DDP you can override :meth:`lightning.pytorch.strategies.ddp.DDPStrategy.setup_distributed`.

If you also need to use your own DDP implementation, override :meth:`pytorch_lightning.strategies.ddp.DDPStrategy.configure_ddp`.
If you also need to use your own DDP implementation, override :meth:`lightning.pytorch.strategies.ddp.DDPStrategy.configure_ddp`.
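
As an illustrative sketch only (not part of the diff; the subclass name and pass-through overrides are placeholders), overriding either hook looks roughly like this:

.. code-block:: python

    from lightning.pytorch import Trainer
    from lightning.pytorch.strategies import DDPStrategy


    class MyDDPStrategy(DDPStrategy):
        def setup_distributed(self):
            # custom process-group initialization could go here
            super().setup_distributed()

        def configure_ddp(self):
            # wrap the model with a custom DDP implementation here
            super().configure_ddp()


    trainer = Trainer(accelerator="gpu", devices=2, strategy=MyDDPStrategy())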

----------

@@ -279,7 +279,7 @@ Lightning allows explicitly specifying the backend via the `process_group_backen

.. code-block:: python

from pytorch_lightning.strategies import DDPStrategy
from lightning.pytorch.strategies import DDPStrategy

# Explicitly specify the process group backend if you choose to
ddp = DDPStrategy(process_group_backend="nccl")
2 changes: 1 addition & 1 deletion docs/source-pytorch/accelerators/hpu_basic.rst
@@ -46,7 +46,7 @@ To enable PyTorch Lightning to utilize the HPU accelerator, simply provide ``acc


The ``devices>1`` parameter with HPUs enables the Habana accelerator for distributed training.
It uses :class:`~pytorch_lightning.strategies.hpu_parallel.HPUParallelStrategy` internally which is based on DDP
It uses :class:`~lightning.pytorch.strategies.hpu_parallel.HPUParallelStrategy` internally which is based on DDP
strategy with the addition of Habana's collective communication library (HCCL) to support scale-up within a node and
scale-out across multiple nodes.
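
A minimal sketch of what this looks like in user code (not part of the diff; the device count is illustrative):

.. code-block:: python

    from lightning.pytorch import Trainer

    model = MyLightningModule()
    # devices > 1 selects HPUParallelStrategy (DDP with HCCL) under the hood
    trainer = Trainer(accelerator="hpu", devices=8)
    trainer.fit(model)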

14 changes: 7 additions & 7 deletions docs/source-pytorch/accelerators/hpu_intermediate.rst
@@ -23,21 +23,21 @@ By default, HPU training will use 32-bit precision. To enable mixed precision, s
Customize Mixed Precision
-------------------------

Internally, :class:`~pytorch_lightning.plugins.precision.hpu.HPUPrecisionPlugin` uses the Habana Mixed Precision (HMP) package to enable mixed precision training.
Internally, :class:`~lightning.pytorch.plugins.precision.hpu.HPUPrecisionPlugin` uses the Habana Mixed Precision (HMP) package to enable mixed precision training.

You can execute the ops in FP32 or BF16 precision. The HMP package modifies the Python operators to add the appropriate cast operations for the arguments before execution.
The default settings make it easy to enable mixed precision training with minimal code.

In addition to the default settings in HMP, users also have the option of overriding these defaults and providing their
BF16 and FP32 operator lists by passing them as parameters to :class:`~pytorch_lightning.plugins.precision.hpu.HPUPrecisionPlugin`.
BF16 and FP32 operator lists by passing them as parameters to :class:`~lightning.pytorch.plugins.precision.hpu.HPUPrecisionPlugin`.

The below snippet shows an example model using MNIST with a single Habana Gaudi device and making use of HMP by overriding the default parameters.
This enables advanced users to provide their own BF16 and FP32 operator list instead of using the HMP defaults.

.. code-block:: python

import pytorch_lightning as pl
from pytorch_lightning.plugins import HPUPrecisionPlugin
import lightning.pytorch as pl
from lightning.pytorch.plugins import HPUPrecisionPlugin

# Initialize a trainer with HPU accelerator for HPU strategy for single device,
# with mixed precision using overridden HMP settings
@@ -72,7 +72,7 @@ For more details, please refer to `PyTorch Mixed Precision Training on Gaudi <ht
Enabling DeviceStatsMonitor with HPUs
----------------------------------------

:class:`~pytorch_lightning.callbacks.device_stats_monitor.DeviceStatsMonitor` is a callback that automatically monitors and logs device stats during the training stage.
:class:`~lightning.pytorch.callbacks.device_stats_monitor.DeviceStatsMonitor` is a callback that automatically monitors and logs device stats during the training stage.
This callback can be passed for training with HPUs. It returns a map of the following metrics with their values in bytes of type uint64:

- **Limit**: amount of total memory on HPU device.
@@ -90,8 +90,8 @@ The below snippet shows how DeviceStatsMonitor can be enabled.

.. code-block:: python

from pytorch_lightning import Trainer
from pytorch_lightning.callbacks import DeviceStatsMonitor
from lightning.pytorch import Trainer
from lightning.pytorch.callbacks import DeviceStatsMonitor

device_stats = DeviceStatsMonitor()
trainer = Trainer(accelerator="hpu", callbacks=[device_stats])
12 changes: 6 additions & 6 deletions docs/source-pytorch/accelerators/ipu_advanced.rst
@@ -19,8 +19,8 @@ IPUs provide further optimizations to speed up training. By using the ``IPUStrat

.. code-block:: python

import pytorch_lightning as pl
from pytorch_lightning.strategies import IPUStrategy
import lightning.pytorch as pl
from lightning.pytorch.strategies import IPUStrategy

model = MyLightningModule()
trainer = pl.Trainer(accelerator="ipu", devices=8, strategy=IPUStrategy(device_iterations=32))
@@ -31,8 +31,8 @@ Note that by default we return the last device iteration loss. You can override
.. code-block:: python

import poptorch
import pytorch_lightning as pl
from pytorch_lightning.strategies import IPUStrategy
import lightning.pytorch as pl
from lightning.pytorch.strategies import IPUStrategy

model = MyLightningModule()
inference_opts = poptorch.Options()
@@ -71,7 +71,7 @@ Below is an example using the block annotation in a LightningModule.

.. code-block:: python

import pytorch_lightning as pl
import lightning.pytorch as pl
import poptorch


@@ -104,7 +104,7 @@ You can also use the block context manager within the forward function, or any o

.. code-block:: python

import pytorch_lightning as pl
import lightning.pytorch as pl
import poptorch


4 changes: 2 additions & 2 deletions docs/source-pytorch/accelerators/ipu_basic.rst
@@ -70,5 +70,5 @@ Please see the `MNIST example <https://github.com/Lightning-AI/lightning/blob/ma
* Since the step functions are traced, branching logic or any form of primitive values are traced into constants. Be mindful as this could lead to errors in your custom code.
* Clipping gradients is not supported.
* It is not possible to use :class:`torch.utils.data.BatchSampler` in your dataloaders if you are using multiple IPUs.
* IPUs handle the data transfer to the device on the host, hence the hooks :meth:`~pytorch_lightning.core.hooks.ModelHooks.transfer_batch_to_device` and
:meth:`~pytorch_lightning.core.hooks.ModelHooks.on_after_batch_transfer` do not apply here and if you have overridden any of them, an exception will be raised.
* IPUs handle the data transfer to the device on the host, hence the hooks :meth:`~lightning.pytorch.core.hooks.ModelHooks.transfer_batch_to_device` and
:meth:`~lightning.pytorch.core.hooks.ModelHooks.on_after_batch_transfer` do not apply here and if you have overridden any of them, an exception will be raised.
10 changes: 5 additions & 5 deletions docs/source-pytorch/accelerators/ipu_intermediate.rst
@@ -20,7 +20,7 @@ set the precision flag.

.. code-block:: python

import pytorch_lightning as pl
import lightning.pytorch as pl

model = MyLightningModule()
trainer = pl.Trainer(accelerator="ipu", devices=8, precision=16)
@@ -30,8 +30,8 @@ You can also use pure 16-bit training, where the weights are also in 16-bit prec

.. code-block:: python

import pytorch_lightning as pl
from pytorch_lightning.strategies import IPUStrategy
import lightning.pytorch as pl
from lightning.pytorch.strategies import IPUStrategy

model = MyLightningModule()
model = model.half()
@@ -53,8 +53,8 @@ Lightning supports dumping all reports to a directory to open using the tool.

.. code-block:: python

import pytorch_lightning as pl
from pytorch_lightning.strategies import IPUStrategy
import lightning.pytorch as pl
from lightning.pytorch.strategies import IPUStrategy

model = MyLightningModule()
trainer = pl.Trainer(accelerator="ipu", devices=8, strategy=IPUStrategy(autoreport_dir="report_dir/"))
4 changes: 2 additions & 2 deletions docs/source-pytorch/accelerators/tpu_advanced.rst
@@ -25,9 +25,9 @@ Example:

.. code-block:: python

from pytorch_lightning.core.module import LightningModule
from lightning.pytorch.core.module import LightningModule
from torch import nn
from pytorch_lightning.trainer.trainer import Trainer
from lightning.pytorch.trainer.trainer import Trainer


class WeightSharingModule(LightningModule):
4 changes: 2 additions & 2 deletions docs/source-pytorch/accelerators/tpu_faq.rst
@@ -18,7 +18,7 @@ XLA configuration is missing?
...
File "/home/kaushikbokka/pytorch-lightning/pytorch_lightning/utilities/device_parser.py", line 125, in parse_tpu_cores
raise MisconfigurationException('No TPU devices were found.')
pytorch_lightning.utilities.exceptions.MisconfigurationException: No TPU devices were found.
lightning.pytorch.utilities.exceptions.MisconfigurationException: No TPU devices were found.

This means the system is missing XLA configuration. You would need to set up XRT TPU device configuration.
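
As a rough, illustrative sketch only (not part of the diff; the exact address and port depend on your TPU setup and PyTorch/XLA version):

.. code-block:: python

    import os

    # point the PyTorch/XLA XRT runtime at the TPU worker before creating the Trainer
    os.environ["XRT_TPU_CONFIG"] = "tpu_worker;0;<tpu-ip-address>:8470"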

@@ -88,7 +88,7 @@ How to setup the debug mode for Training on TPUs?

.. code-block:: python

import pytorch_lightning as pl
import lightning.pytorch as pl

my_model = MyLightningModule()
trainer = pl.Trainer(accelerator="tpu", devices=8, strategy="xla_debug")
4 changes: 2 additions & 2 deletions docs/source-pytorch/accelerators/tpu_intermediate.rst
@@ -42,7 +42,7 @@ To use a full TPU pod skip to the TPU pod section.

.. code-block:: python

import pytorch_lightning as pl
import lightning.pytorch as pl

my_model = MyLightningModule()
trainer = pl.Trainer(accelerator="tpu", devices=8)
@@ -105,7 +105,7 @@ set the 16-bit flag.

.. code-block:: python

import pytorch_lightning as pl
import lightning.pytorch as pl

my_model = MyLightningModule()
trainer = pl.Trainer(accelerator="tpu", devices=8, precision=16)