Tensor parallel documentation #3359
base: main
Conversation
There are some changes that do not conform to Python style guidelines:
--- /home/runner/work/TensorRT/TensorRT/examples/distributed_inference/tensor_parallel_llama3.py 2025-01-17 00:49:54.427641+00:00
+++ /home/runner/work/TensorRT/TensorRT/examples/distributed_inference/tensor_parallel_llama3.py 2025-01-17 00:50:14.910448+00:00
@@ -64,11 +64,11 @@
device="cuda",
)
with torch.no_grad():
# The plan is
- #plan = {
+ # plan = {
# "attention": PrepareModuleInput(
# input_layouts=(Shard(1), None),
# desired_input_layouts=(Replicate(), None),
# ),
# "attention.wq": ColwiseParallel(),
@@ -82,22 +82,22 @@
# ),
# "feed_forward.w1": ColwiseParallel(),
# "feed_forward.w2": RowwiseParallel(output_layouts=Shard(1)),
# "feed_forward.w3": ColwiseParallel(),
# "ffn_norm": SequenceParallel(),
- #}
+ # }
model = ParallelTransformer(model_args, device_mesh)
-# %%
-# Model inference with Torch-TensorRT backend
-# -------------------------------------------
-# When we compile the distributed model using Torch-TensorRT backend, pytorch distributed libraries create the sharded model
-# on multiple GPUs and the communicator operations are used for proper communication. In the above,
-# `ColwiseParallel` and `RowwiseParallel` shard the attention layers in the column or row fashion.
-# `SequenceParallel` performs sharded computations of the normalization layer
-# `PrepareModuleInput` configures the model input with proper communication operations
+ # %%
+ # Model inference with Torch-TensorRT backend
+ # -------------------------------------------
+ # When we compile the distributed model using Torch-TensorRT backend, pytorch distributed libraries create the sharded model
+ # on multiple GPUs and the communicator operations are used for proper communication. In the above,
+ # `ColwiseParallel` and `RowwiseParallel` shard the attention layers in the column or row fashion.
+ # `SequenceParallel` performs sharded computations of the normalization layer
+ # `PrepareModuleInput` configures the model input with proper communication operations
torch.manual_seed(0)
inp = torch.randint(32000, (8, 256), device="cuda")
python_result = model(inp)
torch_tensorrt.runtime.set_multi_device_safe_mode(True)
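To make the commented-out plan in the hunk above concrete, here is a minimal sketch of how such a plan is applied with PyTorch's tensor-parallel API (assumptions: two GPUs, a toy nn.Sequential standing in for the real attention/feed-forward modules; nothing here is taken from the PR itself):

import torch.nn as nn
from torch.distributed.device_mesh import init_device_mesh
from torch.distributed.tensor.parallel import (
    ColwiseParallel,
    RowwiseParallel,
    parallelize_module,
)

# Run under torchrun (e.g. `torchrun --nproc_per_node=2 this_sketch.py`);
# init_device_mesh sets up the default process group if one is not running.
mesh = init_device_mesh("cuda", (2,))

# A toy two-layer block standing in for an attention or FFN module.
block = nn.Sequential(nn.Linear(128, 512), nn.ReLU(), nn.Linear(512, 128)).cuda()

# Megatron-style pairing: column-shard the first projection and row-shard the
# second, so cross-GPU communication happens once per block.
plan = {
    "0": ColwiseParallel(),
    "2": RowwiseParallel(),
}
block = parallelize_module(block, mesh, plan)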
Force-pushed from 0313372 to d511d80.
There are some changes that do not conform to Python style guidelines:
--- /home/runner/work/TensorRT/TensorRT/examples/distributed_inference/tensor_parallel_llama3.py 2025-01-17 00:57:39.378946+00:00
+++ /home/runner/work/TensorRT/TensorRT/examples/distributed_inference/tensor_parallel_llama3.py 2025-01-17 00:57:59.494626+00:00
@@ -64,11 +64,11 @@
device="cuda",
)
with torch.no_grad():
# The plan is
- #plan = {
+ # plan = {
# "attention": PrepareModuleInput(
# input_layouts=(Shard(1), None),
# desired_input_layouts=(Replicate(), None),
# ),
# "attention.wq": ColwiseParallel(),
@@ -82,23 +82,23 @@
# ),
# "feed_forward.w1": ColwiseParallel(),
# "feed_forward.w2": RowwiseParallel(output_layouts=Shard(1)),
# "feed_forward.w3": ColwiseParallel(),
# "ffn_norm": SequenceParallel(),
- #}
+ # }
model = ParallelTransformer(model_args, device_mesh)
-# %%
-# Model inference with Torch-TensorRT backend
-# -------------------------------------------
-# When we compile the distributed model using Torch-TensorRT backend, pytorch distributed libraries create the sharded model
-# on multiple GPUs and the communicator operations are used for proper communication. In the above,
-# `ColwiseParallel` and `RowwiseParallel` shard the attention layers in the column or row fashion.
-# `SequenceParallel` performs sharded computations of the normalization layer
-# `PrepareModuleInput` configures the model input with proper communication operations
-# The NCCL operations used in the distributed backend is handled by the TensorRT-LLM NCCL plugins, which causes no graph breaks now
+ # %%
+ # Model inference with Torch-TensorRT backend
+ # -------------------------------------------
+ # When we compile the distributed model using Torch-TensorRT backend, pytorch distributed libraries create the sharded model
+ # on multiple GPUs and the communicator operations are used for proper communication. In the above,
+ # `ColwiseParallel` and `RowwiseParallel` shard the attention layers in the column or row fashion.
+ # `SequenceParallel` performs sharded computations of the normalization layer
+ # `PrepareModuleInput` configures the model input with proper communication operations
+ # The NCCL operations used in the distributed backend is handled by the TensorRT-LLM NCCL plugins, which causes no graph breaks now
torch.manual_seed(0)
inp = torch.randint(32000, (8, 256), device="cuda")
python_result = model(inp)
torch_tensorrt.runtime.set_multi_device_safe_mode(True)
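For reference, set_multi_device_safe_mode (visible in the diff context above) also works as a context manager, so the device-safety checks can be scoped to the TensorRT call. A minimal sketch, assuming model and inp are defined as in the example:

import torch_tensorrt

# Scoped variant of the module-level call shown in the example above; safe
# mode verifies the active CUDA device around engine execution on
# multi-GPU hosts.
with torch_tensorrt.runtime.set_multi_device_safe_mode(True):
    trt_result = model(inp)  # `model` / `inp` as built earlier in the example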
Force-pushed from d511d80 to b67cabb.
There are some changes that do not conform to Python style guidelines:
--- /home/runner/work/TensorRT/TensorRT/docs/_downloads/0e30a6276601af7e5fc4d5166e2e3d37/torch_compile_advanced_usage.py 2025-02-10 09:41:12.729066+00:00
+++ /home/runner/work/TensorRT/TensorRT/docs/_downloads/0e30a6276601af7e5fc4d5166e2e3d37/torch_compile_advanced_usage.py 2025-02-10 09:41:34.023489+00:00
@@ -2,11 +2,12 @@
.. _torch_compile_advanced_usage:
Torch Compile Advanced Usage
======================================================
-This interactive script is intended as an overview of the process by which `torch_tensorrt.compile(..., ir="torch_compile", ...)` works, and how it integrates with the `torch.compile` API."""
+This interactive script is intended as an overview of the process by which `torch_tensorrt.compile(..., ir="torch_compile", ...)` works, and how it integrates with the `torch.compile` API.
+"""
# %%
# Imports and Model Definition
# ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
--- /home/runner/work/TensorRT/TensorRT/docs/_downloads/2a9ac10f2667047a7f398d1593b7ca33/torch_export_gpt2.py 2025-02-10 09:41:12.730066+00:00
+++ /home/runner/work/TensorRT/TensorRT/docs/_downloads/2a9ac10f2667047a7f398d1593b7ca33/torch_export_gpt2.py 2025-02-10 09:41:34.046868+00:00
@@ -2,11 +2,12 @@
.. _torch_export_gpt2:
Compiling GPT2 using the dynamo backend
==========================================================
-This script illustrates Torch-TensorRT workflow with dynamo backend on popular GPT2 model."""
+This script illustrates Torch-TensorRT workflow with dynamo backend on popular GPT2 model.
+"""
# %%
# Imports and Model Definition
# ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
import torch
--- /home/runner/work/TensorRT/TensorRT/docs/_downloads/418941399c146271a7b7728ba3059960/dynamo_compile_resnet_example.py 2025-02-10 09:41:12.730066+00:00
+++ /home/runner/work/TensorRT/TensorRT/docs/_downloads/418941399c146271a7b7728ba3059960/dynamo_compile_resnet_example.py 2025-02-10 09:41:34.084070+00:00
@@ -2,11 +2,12 @@
.. _dynamo_compile_resnet:
Compiling ResNet using the Torch-TensorRT Dyanmo Frontend
==========================================================
-This interactive script is intended as a sample of the `torch_tensorrt.dynamo.compile` workflow on a ResNet model."""
+This interactive script is intended as a sample of the `torch_tensorrt.dynamo.compile` workflow on a ResNet model.
+"""
# %%
# Imports and Model Definition
# ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
--- /home/runner/work/TensorRT/TensorRT/docs/_downloads/3d4d74f6636d986f33167154f6553961/torch_export_cudagraphs.py 2025-02-10 09:41:12.730066+00:00
+++ /home/runner/work/TensorRT/TensorRT/docs/_downloads/3d4d74f6636d986f33167154f6553961/torch_export_cudagraphs.py 2025-02-10 09:41:34.097536+00:00
@@ -2,11 +2,12 @@
.. _torch_export_cudagraphs:
Torch Export with Cudagraphs
======================================================
-This interactive script is intended as an overview of the process by which the Torch-TensorRT Cudagraphs integration can be used in the `ir="dynamo"` path. The functionality works similarly in the `torch.compile` path as well."""
+This interactive script is intended as an overview of the process by which the Torch-TensorRT Cudagraphs integration can be used in the `ir="dynamo"` path. The functionality works similarly in the `torch.compile` path as well.
+"""
# %%
# Imports and Model Definition
# ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
--- /home/runner/work/TensorRT/TensorRT/docs/_downloads/7b7004dc2ea6f839be532665e16e0426/torch_export_llama2.py 2025-02-10 09:41:12.733066+00:00
+++ /home/runner/work/TensorRT/TensorRT/docs/_downloads/7b7004dc2ea6f839be532665e16e0426/torch_export_llama2.py 2025-02-10 09:41:34.127679+00:00
@@ -2,11 +2,12 @@
.. _torch_export_llama2:
Compiling Llama2 using the dynamo backend
==========================================================
-This script illustrates Torch-TensorRT workflow with dynamo backend on popular Llama2 model."""
+This script illustrates Torch-TensorRT workflow with dynamo backend on popular Llama2 model.
+"""
# %%
# Imports and Model Definition
# ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
import torch
--- /home/runner/work/TensorRT/TensorRT/docs/_downloads/e1ef5a42560a98a132f56a79d0b66f79/dynamo_compile_advanced_usage.py 2025-02-10 09:41:12.735066+00:00
+++ /home/runner/work/TensorRT/TensorRT/docs/_downloads/e1ef5a42560a98a132f56a79d0b66f79/dynamo_compile_advanced_usage.py 2025-02-10 09:41:34.208054+00:00
@@ -2,11 +2,12 @@
.. _dynamo_compile_advanced_usage:
Dynamo Compile Advanced Usage
======================================================
-This interactive script is intended as an overview of the process by which `torch_tensorrt.dynamo.compile` works, and how it integrates with the new `torch.compile` API."""
+This interactive script is intended as an overview of the process by which `torch_tensorrt.dynamo.compile` works, and how it integrates with the new `torch.compile` API.
+"""
# %%
# Imports and Model Definition
# ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
--- /home/runner/work/TensorRT/TensorRT/docs/_downloads/dfa60e8f9850fd7761f3e7da81304d32/torch_compile_transformers_example.py 2025-02-10 09:41:12.735066+00:00
+++ /home/runner/work/TensorRT/TensorRT/docs/_downloads/dfa60e8f9850fd7761f3e7da81304d32/torch_compile_transformers_example.py 2025-02-10 09:41:34.212358+00:00
@@ -2,11 +2,12 @@
.. _torch_compile_transformer:
Compiling BERT using the `torch.compile` backend
==============================================================
-This interactive script is intended as a sample of the Torch-TensorRT workflow with `torch.compile` on a BERT model."""
+This interactive script is intended as a sample of the Torch-TensorRT workflow with `torch.compile` on a BERT model.
+"""
# %%
# Imports and Model Definition
# ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
--- /home/runner/work/TensorRT/TensorRT/docs/_downloads/d6e1bb6ec5f884994554d9d12e37a0f6/torch_compile_resnet_example.py 2025-02-10 09:41:12.735066+00:00
+++ /home/runner/work/TensorRT/TensorRT/docs/_downloads/d6e1bb6ec5f884994554d9d12e37a0f6/torch_compile_resnet_example.py 2025-02-10 09:41:34.225446+00:00
@@ -2,11 +2,12 @@
.. _torch_compile_resnet:
Compiling ResNet with dynamic shapes using the `torch.compile` backend
==========================================================
-This interactive script is intended as a sample of the Torch-TensorRT workflow with `torch.compile` on a ResNet model."""
+This interactive script is intended as a sample of the Torch-TensorRT workflow with `torch.compile` on a ResNet model.
+"""
# %%
# Imports and Model Definition
# ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
--- /home/runner/work/TensorRT/TensorRT/docs/_downloads/e550c5f53cc43e11aa6da8cfb79b54df/dynamo_compile_transformers_example.py 2025-02-10 09:41:12.735066+00:00
+++ /home/runner/work/TensorRT/TensorRT/docs/_downloads/e550c5f53cc43e11aa6da8cfb79b54df/dynamo_compile_transformers_example.py 2025-02-10 09:41:34.255472+00:00
@@ -2,11 +2,12 @@
.. _torch_compile_transformer:
Compiling a Transformer using torch.compile and TensorRT
==============================================================
-This interactive script is intended as a sample of the `torch_tensorrt.dynamo.compile` workflow on a transformer-based model."""
+This interactive script is intended as a sample of the `torch_tensorrt.dynamo.compile` workflow on a transformer-based model.
+"""
# %%
# Imports and Model Definition
# ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
--- /home/runner/work/TensorRT/TensorRT/docs/v1.4.0/_downloads/418941399c146271a7b7728ba3059960/dynamo_compile_resnet_example.py 2025-02-10 09:41:13.192070+00:00
+++ /home/runner/work/TensorRT/TensorRT/docs/v1.4.0/_downloads/418941399c146271a7b7728ba3059960/dynamo_compile_resnet_example.py 2025-02-10 09:41:34.266080+00:00
@@ -2,11 +2,12 @@
.. _dynamo_compile_resnet:
Compiling ResNet using the Torch-TensorRT Dyanmo Frontend
==========================================================
-This interactive script is intended as a sample of the `torch_tensorrt.dynamo.compile` workflow on a ResNet model."""
+This interactive script is intended as a sample of the `torch_tensorrt.dynamo.compile` workflow on a ResNet model.
+"""
# %%
# Imports and Model Definition
# ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
--- /home/runner/work/TensorRT/TensorRT/docs/v1.4.0/_downloads/e1ef5a42560a98a132f56a79d0b66f79/dynamo_compile_advanced_usage.py 2025-02-10 09:41:13.192070+00:00
+++ /home/runner/work/TensorRT/TensorRT/docs/v1.4.0/_downloads/e1ef5a42560a98a132f56a79d0b66f79/dynamo_compile_advanced_usage.py 2025-02-10 09:41:34.280080+00:00
@@ -2,11 +2,12 @@
.. _dynamo_compile_advanced_usage:
Dynamo Compile Advanced Usage
======================================================
-This interactive script is intended as an overview of the process by which `torch_tensorrt.dynamo.compile` works, and how it integrates with the new `torch.compile` API."""
+This interactive script is intended as an overview of the process by which `torch_tensorrt.dynamo.compile` works, and how it integrates with the new `torch.compile` API.
+"""
# %%
# Imports and Model Definition
# ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
--- /home/runner/work/TensorRT/TensorRT/docs/v1.4.0/_downloads/e550c5f53cc43e11aa6da8cfb79b54df/dynamo_compile_transformers_example.py 2025-02-10 09:41:13.193069+00:00
+++ /home/runner/work/TensorRT/TensorRT/docs/v1.4.0/_downloads/e550c5f53cc43e11aa6da8cfb79b54df/dynamo_compile_transformers_example.py 2025-02-10 09:41:34.307230+00:00
@@ -2,11 +2,12 @@
.. _torch_compile_transformer:
Compiling a Transformer using torch.compile and TensorRT
==============================================================
-This interactive script is intended as a sample of the `torch_tensorrt.dynamo.compile` workflow on a transformer-based model."""
+This interactive script is intended as a sample of the `torch_tensorrt.dynamo.compile` workflow on a transformer-based model.
+"""
# %%
# Imports and Model Definition
# ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
--- /home/runner/work/TensorRT/TensorRT/examples/distributed_inference/tensor_parallel_llama3.py 2025-02-10 09:41:13.222070+00:00
+++ /home/runner/work/TensorRT/TensorRT/examples/distributed_inference/tensor_parallel_llama3.py 2025-02-10 09:41:34.363270+00:00
@@ -4,11 +4,12 @@
.. _tensor_parallel_llama:
Torch distributed example for llama3-7B model
======================================================
-As model sizes are increasing, large models with billions of parameters are trained with many GPUs, where regular data parallel training is no longer possible. In this example, we illustrate the Llama3-7B model inference using Torch-TensorRT backend, split across multiple GPUs using a form of model parallelism called Tensor Parallelism. We make use of Pytorch Distributed Tensor Parallelism Module. Please refer to these tutorials- https://pytorch.org/tutorials/intermediate/TP_tutorial.html and https://lightning.ai/lightning-ai/studios/tensor-parallelism-supercharging-large-model-training-with-pytorch-lightning?section=featured"""
+As model sizes are increasing, large models with billions of parameters are trained with many GPUs, where regular data parallel training is no longer possible. In this example, we illustrate the Llama3-7B model inference using Torch-TensorRT backend, split across multiple GPUs using a form of model parallelism called Tensor Parallelism. We make use of Pytorch Distributed Tensor Parallelism Module. Please refer to these tutorials- https://pytorch.org/tutorials/intermediate/TP_tutorial.html and https://lightning.ai/lightning-ai/studios/tensor-parallelism-supercharging-large-model-training-with-pytorch-lightning?section=featured
+"""
# %%
# Imports and Model Definition
# ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
--- /home/runner/work/TensorRT/TensorRT/examples/dynamo/torch_compile_advanced_usage.py 2025-02-10 09:41:13.223070+00:00
+++ /home/runner/work/TensorRT/TensorRT/examples/dynamo/torch_compile_advanced_usage.py 2025-02-10 09:41:34.483222+00:00
@@ -2,11 +2,12 @@
.. _torch_compile_advanced_usage:
Torch Compile Advanced Usage
======================================================
-This interactive script is intended as an overview of the process by which `torch_tensorrt.compile(..., ir="torch_compile", ...)` works, and how it integrates with the `torch.compile` API."""
+This interactive script is intended as an overview of the process by which `torch_tensorrt.compile(..., ir="torch_compile", ...)` works, and how it integrates with the `torch.compile` API.
+"""
# %%
# Imports and Model Definition
# ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
--- /home/runner/work/TensorRT/TensorRT/examples/dynamo/torch_compile_resnet_example.py 2025-02-10 09:41:13.223070+00:00
+++ /home/runner/work/TensorRT/TensorRT/examples/dynamo/torch_compile_resnet_example.py 2025-02-10 09:41:34.500754+00:00
@@ -2,11 +2,12 @@
.. _torch_compile_resnet:
Compiling ResNet with dynamic shapes using the `torch.compile` backend
==========================================================
-This interactive script is intended as a sample of the Torch-TensorRT workflow with `torch.compile` on a ResNet model."""
+This interactive script is intended as a sample of the Torch-TensorRT workflow with `torch.compile` on a ResNet model.
+"""
# %%
# Imports and Model Definition
# ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
--- /home/runner/work/TensorRT/TensorRT/examples/dynamo/torch_compile_transformers_example.py 2025-02-10 09:41:13.223070+00:00
+++ /home/runner/work/TensorRT/TensorRT/examples/dynamo/torch_compile_transformers_example.py 2025-02-10 09:41:34.510913+00:00
@@ -2,11 +2,12 @@
.. _torch_compile_transformer:
Compiling BERT using the `torch.compile` backend
==============================================================
-This interactive script is intended as a sample of the Torch-TensorRT workflow with `torch.compile` on a BERT model."""
+This interactive script is intended as a sample of the Torch-TensorRT workflow with `torch.compile` on a BERT model.
+"""
# %%
# Imports and Model Definition
# ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
--- /home/runner/work/TensorRT/TensorRT/examples/dynamo/torch_export_gpt2.py 2025-02-10 09:41:13.223070+00:00
+++ /home/runner/work/TensorRT/TensorRT/examples/dynamo/torch_export_gpt2.py 2025-02-10 09:41:34.528922+00:00
@@ -2,11 +2,12 @@
.. _torch_export_gpt2:
Compiling GPT2 using the dynamo backend
==========================================================
-This script illustrates Torch-TensorRT workflow with dynamo backend on popular GPT2 model."""
+This script illustrates Torch-TensorRT workflow with dynamo backend on popular GPT2 model.
+"""
# %%
# Imports and Model Definition
# ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
import torch
--- /home/runner/work/TensorRT/TensorRT/examples/dynamo/torch_export_cudagraphs.py 2025-02-10 09:41:13.223070+00:00
+++ /home/runner/work/TensorRT/TensorRT/examples/dynamo/torch_export_cudagraphs.py 2025-02-10 09:41:34.535461+00:00
@@ -2,11 +2,12 @@
.. _torch_export_cudagraphs:
Torch Export with Cudagraphs
======================================================
-This interactive script is intended as an overview of the process by which the Torch-TensorRT Cudagraphs integration can be used in the `ir="dynamo"` path. The functionality works similarly in the `torch.compile` path as well."""
+This interactive script is intended as an overview of the process by which the Torch-TensorRT Cudagraphs integration can be used in the `ir="dynamo"` path. The functionality works similarly in the `torch.compile` path as well.
+"""
# %%
# Imports and Model Definition
# ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
--- /home/runner/work/TensorRT/TensorRT/examples/dynamo/torch_export_llama2.py 2025-02-10 09:41:13.223070+00:00
+++ /home/runner/work/TensorRT/TensorRT/examples/dynamo/torch_export_llama2.py 2025-02-10 09:41:34.553146+00:00
@@ -2,11 +2,12 @@
.. _torch_export_llama2:
Compiling Llama2 using the dynamo backend
==========================================================
-This script illustrates Torch-TensorRT workflow with dynamo backend on popular Llama2 model."""
+This script illustrates Torch-TensorRT workflow with dynamo backend on popular Llama2 model.
+"""
# %%
# Imports and Model Definition
# ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
import torch
--- /home/runner/work/TensorRT/TensorRT/py/torch_tensorrt/_Input.py 2025-02-10 09:41:13.232070+00:00
+++ /home/runner/work/TensorRT/TensorRT/py/torch_tensorrt/_Input.py 2025-02-10 09:41:35.043126+00:00
@@ -259,11 +259,11 @@
else:
return False
@staticmethod
def _parse_tensor_domain(
- domain: Optional[Tuple[float, float]]
+ domain: Optional[Tuple[float, float]],
) -> Tuple[float, float]:
"""
Produce a tuple of integers which specifies a tensor domain in the interval format: [lo, hi)
Args:
--- /home/runner/work/TensorRT/TensorRT/py/torch_tensorrt/dynamo/conversion/_TRTBuilderMonitor.py 2025-02-10 09:41:13.234070+00:00
+++ /home/runner/work/TensorRT/TensorRT/py/torch_tensorrt/dynamo/conversion/_TRTBuilderMonitor.py 2025-02-10 09:41:35.299338+00:00
@@ -51,17 +51,17 @@
def _redraw(self, *, blank_lines: int = 0) -> None:
if self._render:
def clear_line() -> None:
- print("\x1B[2K", end="")
+ print("\x1b[2K", end="")
def move_to_start_of_line() -> None:
- print("\x1B[0G", end="")
+ print("\x1b[0G", end="")
def move_cursor_up(lines: int) -> None:
- print("\x1B[{}A".format(lines), end="")
+ print("\x1b[{}A".format(lines), end="")
def progress_bar(steps: int, num_steps: int) -> str:
INNER_WIDTH = 10
completed_bar_chars = int(INNER_WIDTH * steps / float(num_steps))
return "[{}{}]".format(
--- /home/runner/work/TensorRT/TensorRT/py/torch_tensorrt/_enums.py 2025-02-10 09:41:13.232070+00:00
+++ /home/runner/work/TensorRT/TensorRT/py/torch_tensorrt/_enums.py 2025-02-10 09:41:35.449503+00:00
@@ -1198,11 +1198,11 @@
"Provided unsupported source type for EngineCapability conversion"
)
@classmethod
def try_from(
- c: Union[trt.EngineCapability, EngineCapability]
+ c: Union[trt.EngineCapability, EngineCapability],
) -> Optional[EngineCapability]:
"""Create a Torch-TensorRT engine capability enum from a TensorRT engine capability enum.
Takes a device type enum from tensorrt and create a ``torch_tensorrt.EngineCapability``.
If the source is not supported or the engine capability level is not supported in Torch-TensorRT,
--- /home/runner/work/TensorRT/TensorRT/py/torch_tensorrt/dynamo/conversion/impl/activation/ops.py 2025-02-10 09:41:13.235070+00:00
+++ /home/runner/work/TensorRT/TensorRT/py/torch_tensorrt/dynamo/conversion/impl/activation/ops.py 2025-02-10 09:41:35.713533+00:00
@@ -245,11 +245,11 @@
beta: float,
) -> TRTTensor:
operation_type = trt.ActivationType.HARD_SIGMOID
def hard_sigmoid_dyn_range_fn(
- dyn_range: Tuple[float, float]
+ dyn_range: Tuple[float, float],
) -> Tuple[float, float]:
def hard_sigmoid_fn(x: float) -> float:
return max(0, min(1, alpha * x + beta))
return hard_sigmoid_fn(dyn_range[0]), hard_sigmoid_fn(dyn_range[1])
@@ -308,11 +308,11 @@
alpha: float,
) -> TRTTensor:
operation_type = trt.ActivationType.THRESHOLDED_RELU
def thresholded_relu_dyn_range_fn(
- dyn_range: Tuple[float, float]
+ dyn_range: Tuple[float, float],
) -> Tuple[float, float]:
def thresholded_relu_fn(x: float) -> float:
return x if x > alpha else 0
return thresholded_relu_fn(dyn_range[0]), thresholded_relu_fn(dyn_range[1])
--- /home/runner/work/TensorRT/TensorRT/py/torch_tensorrt/dynamo/utils.py 2025-02-10 09:41:13.239070+00:00
+++ /home/runner/work/TensorRT/TensorRT/py/torch_tensorrt/dynamo/utils.py 2025-02-10 09:41:37.224552+00:00
@@ -463,11 +463,11 @@
else:
return torch.device(device)
def to_torch_tensorrt_device(
- device: Optional[Union[Device, torch.device, str]]
+ device: Optional[Union[Device, torch.device, str]],
) -> Device:
"""Cast a device-type to torch_tensorrt.Device
Returns the corresponding torch_tensorrt.Device
"""
--- /home/runner/work/TensorRT/TensorRT/py/torch_tensorrt/fx/test/converters/acc_op/test_where.py 2025-02-10 09:41:13.244070+00:00
+++ /home/runner/work/TensorRT/TensorRT/py/torch_tensorrt/fx/test/converters/acc_op/test_where.py 2025-02-10 09:41:38.157109+00:00
@@ -99,11 +99,11 @@
self.y = torch.ones(y_shape)
def forward(self, condition):
return torch.where(condition, self.x, self.y)
- inputs = [(torch.randn(condition_shape) > 0)]
+ inputs = [torch.randn(condition_shape) > 0]
self.run_test(
Where(x_shape, y_shape),
inputs,
expected_ops={acc_ops.where},
test_implicit_batch_dim=False,
--- /home/runner/work/TensorRT/TensorRT/py/torch_tensorrt/fx/tracer/acc_tracer/acc_tracer.py 2025-02-10 09:41:13.248070+00:00
+++ /home/runner/work/TensorRT/TensorRT/py/torch_tensorrt/fx/tracer/acc_tracer/acc_tracer.py 2025-02-10 09:41:39.430572+00:00
@@ -515,11 +515,11 @@
dim0 = cast(int, transpose_node.args[1])
dim1 = cast(int, transpose_node.args[2])
changed = False
def _calculate_dim(
- transpose_dim: Union[torch.fx.Node, int]
+ transpose_dim: Union[torch.fx.Node, int],
) -> Union[torch.fx.Node, int]:
nonlocal transpose_input_node
nonlocal changed
if isinstance(transpose_dim, torch.fx.Node):
# Transpose dim is sub node
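The hunks above all apply the same few formatter-style fixes. A toy illustration of the two most common ones (the function and names are made up, not from the PR):

def normalize(
    text: str,  # a lone parameter in a multi-line signature now gets a trailing comma
) -> str:
    """Lowercase and strip a string.

    The closing quotes of a multi-line docstring move to their own line.
    """
    return text.strip().lower()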
Force-pushed from b67cabb to 6394d79.
Force-pushed from bcaaea7 to d2f83de.
We have two options:
Option 1: Install TensorRT-LLM
Let's only recommend Option 2 at this point, with the fetching tool you are making.
Tensor parallel Llama3 tutorial illustrating the use of torch.distributed and NCCL ops
Description
Please include a summary of the change and which issue is fixed. Please also include relevant motivation and context. List any dependencies that are required for this change.
Fixes # (issue)
Type of change
Please delete options that are not relevant and/or add your own.
Checklist: