From e867ab99ad61ec05c3f347784f96456171512d8c Mon Sep 17 00:00:00 2001 From: Laikh Tewari Date: Fri, 24 May 2024 15:30:12 -0700 Subject: [PATCH 01/10] readme --- README.md | 349 +++++++++++++----------------------------------------- 1 file changed, 84 insertions(+), 265 deletions(-) diff --git a/README.md b/README.md index 3e74a79688..dd76c11ec6 100644 --- a/README.md +++ b/README.md @@ -1,311 +1,130 @@ -# Torch-TensorRT +
+ +Torch-TensorRT +=========================== +

Easily achieve the best inference performance for any PyTorch model on the NVIDIA platform.

[![Documentation](https://img.shields.io/badge/docs-master-brightgreen)](https://nvidia.github.io/Torch-TensorRT/) +[![pytorch](https://img.shields.io/badge/PyTorch-2.2-green)](https://www.python.org/downloads/release/python-31013/) +[![cuda](https://img.shields.io/badge/cuda-12.1-green)](https://developer.nvidia.com/cuda-downloads) +[![trt](https://img.shields.io/badge/TensorRT-8.6.1-green)](https://github.com/nvidia/tensorrt-llm) +[![license](https://img.shields.io/badge/license-BSD--3--Clause-blue)](./LICENSE) [![CircleCI](https://circleci.com/gh/pytorch/TensorRT.svg?style=svg)](https://app.circleci.com/pipelines/github/pytorch/TensorRT) -> Ahead of Time (AOT) compiling for PyTorch JIT and FX +--- +
-Torch-TensorRT is a compiler for PyTorch/TorchScript/FX, targeting NVIDIA GPUs via NVIDIA's TensorRT Deep Learning Optimizer and Runtime. Unlike PyTorch's Just-In-Time (JIT) compiler, Torch-TensorRT is an Ahead-of-Time (AOT) compiler, meaning that before you deploy your TorchScript code, you go through an explicit compile step to convert a standard TorchScript or FX program into an module targeting a TensorRT engine. Torch-TensorRT operates as a PyTorch extension and compiles modules that integrate into the JIT runtime seamlessly. After compilation using the optimized graph should feel no different than running a TorchScript module. You also have access to TensorRT's suite of configurations at compile time, so you are able to specify operating precision (FP32/FP16/INT8) and other settings for your module. +Torch-TensorRT brings the power of TensorRT to PyTorch. Accelerate inference latency by up to 5x compared to eager execution in just one line of code. +
-Resources: -- [Documentation](https://nvidia.github.io/Torch-TensorRT/) -- [FX path Documentation](https://github.com/pytorch/TensorRT/blob/master/docsrc/tutorials/getting_started_with_fx_path.rst) -- [Torch-TensorRT Explained in 2 minutes!](https://www.youtube.com/watch?v=TU5BMU6iYZ0&ab_channel=NVIDIADeveloper) -- [Comprehensive Discussion (GTC Event)](https://www.nvidia.com/en-us/on-demand/session/gtcfall21-a31107/) -- [Pre-built Docker Container](https://catalog.ngc.nvidia.com/orgs/nvidia/containers/pytorch). To use this container, make an NGC account and sign in to NVIDIA's registry with an API key. Refer to [this guide](https://docs.nvidia.com/ngc/ngc-catalog-user-guide/index.html#registering-activating-ngc-account) for the same. +## Installation +Stable versions of Torch-TensorRT are published on PyPI +```bash +pip install torch-tensorrt +``` -## NVIDIA NGC Container -Torch-TensorRT is distributed in the ready-to-run NVIDIA [NGC PyTorch Container](https://catalog.ngc.nvidia.com/orgs/nvidia/containers/pytorch) starting with 21.11. We recommend using this prebuilt container to experiment & develop with Torch-TensorRT; it has all dependencies with the proper versions as well as example notebooks included. +Nightly versions of Torch-TensorRT are published on the PyTorch package index +```bash +pip install --pre torch-tensorrt --index-url https://download.pytorch.org/whl/nightly/cu121 +``` -## Building a docker container for Torch-TensorRT +Torch-TensorRT is also distributed in the ready-to-run [NVIDIA NGC PyTorch Container](https://catalog.ngc.nvidia.com/orgs/nvidia/containers/pytorch) which has all dependencies with the proper versions and example notebooks included. -We provide a `Dockerfile` in `docker/` directory. It expects a PyTorch NGC container as a base but can easily be modified to build on top of any container that provides, PyTorch, CUDA, and TensorRT. The dependency libraries in the container can be found in the release notes. +For more advanced installation methods, please see [here](https://github.com/pytorch/tensorrt/INSTALLATION.md). -Please follow this instruction to build a Docker container. +## Quickstart -```bash -docker build --build-arg BASE= -f docker/Dockerfile -t torch_tensorrt:latest . -``` +### Option 1: torch.compile +You can use Torch-TensorRT anywhere you use `torch.compile`: -In the case of building on top of a custom base container, you first must determine the -version of the PyTorch C++ ABI. If your source of PyTorch is pytorch.org, likely this is the pre-cxx11-abi in which case you must modify `//docker/dist-build.sh` to not build the -C++11 ABI version of Torch-TensorRT. +```python +import torch +import torch_tensorrt -You can then build the container using the build command in the [docker README](docker/README.md#instructions) +model = MyModel().eval().cuda() # define your model here +x = [torch.randn((1, 3, 224, 224)).cuda()] # define a list of relevant inputs here -If you would like to build outside a docker container, please follow the section [Compiling Torch-TensorRT](#compiling-torch-tensorrt) +optimized_model = torch.compile(model, backend="tensorrt") +optimized_model(x) # compiled on first run -## Example Usage +optimized_model(x) # this will be fast! +``` -### C++ +### Option 2: Export +If you want to optimize your model ahead-of-time and/or deploy in a C++ environment, Torch-TensorRT provides an export-style workflow that serializes an optimized module. This module can be deployed in PyTorch or with libtorch (i.e. 
without a Python dependency).
-```c++
-#include "torch/script.h"
-#include "torch_tensorrt/torch_tensorrt.h"
+#### Step 1: Optimize + serialize
+```python
+import torch
+import torch_tensorrt
-...
-// Set input datatypes. Allowed options torch::{kFloat, kHalf, kChar, kInt32, kBool}
-// Size of input_dtypes should match number of inputs to the network.
-// If input_dtypes is not set, default precision follows traditional PyT / TRT rules
-auto input = torch_tensorrt::Input(dims, torch::kHalf);
-auto compile_settings = torch_tensorrt::ts::CompileSpec({input});
-// FP16 execution
-compile_settings.enabled_precisions = {torch::kHalf};
-// Compile module
-auto trt_mod = torch_tensorrt::ts::compile(ts_mod, compile_settings);
-// Run like normal
-auto results = trt_mod.forward({in_tensor});
-// Save module for later
-trt_mod.save("trt_torchscript_module.ts");
-...
-```
+model = MyModel().eval().cuda() # define your model here
-### Python
+inputs = [torch.randn((1, 3, 224, 224)).cuda()] # define a list of relevant inputs here
-```py
+trt_gm = torch_tensorrt.compile(model, ir="dynamo", inputs=inputs)
+torch_tensorrt.save(trt_gm, "trt.ep", inputs=inputs) # PyTorch only supports Python runtime for an ExportedProgram. For C++ deployment, use a TorchScript file
+torch_tensorrt.save(trt_gm, "trt.ts", output_format="torchscript", inputs=inputs)
+```
+#### Step 2: Deploy
+##### Deployment in PyTorch:
+```python
+import torch
 import torch_tensorrt
-...
-
-trt_ts_module = torch_tensorrt.compile(torch_script_module,
-    # If the inputs to the module are plain Tensors, specify them via the `inputs` argument:
-    inputs = [example_tensor, # Provide example tensor for input shape or...
-        torch_tensorrt.Input( # Specify input object with shape and dtype
-            min_shape=[1, 3, 224, 224],
-            opt_shape=[1, 3, 512, 512],
-            max_shape=[1, 3, 1024, 1024],
-            # For static size shape=[1, 3, 224, 224]
-            dtype=torch.half) # Datatype of input tensor. Allowed options torch.(float|half|int8|int32|bool)
-    ],
-
-    # For inputs containing tuples or lists of tensors, use the `input_signature` argument:
-    # Below, we have an input consisting of a Tuple of two Tensors (Tuple[Tensor, Tensor])
-    # input_signature = ( (torch_tensorrt.Input(shape=[1, 3, 224, 224], dtype=torch.half),
-    #                      torch_tensorrt.Input(shape=[1, 3, 224, 224], dtype=torch.half)), ),
-
-    enabled_precisions = {torch.half}, # Run with FP16
-)
-
-result = trt_ts_module(input_data) # run inference
-torch.jit.save(trt_ts_module, "trt_torchscript_module.ts") # save the TRT embedded Torchscript
+inputs = [torch.randn((1, 3, 224, 224)).cuda()] # your inputs go here
+
+# You can run this in a new python session!
+model = torch.export.load("trt.ep").module()
+# model = torch_tensorrt.load("trt.ep").module() # this also works
+model(*inputs)
 ```
-> Notes on running in lower precisions:
->
-> - Enabled lower precisions with compile_spec.enabled_precisions
-> - The module should be left in FP32 before compilation (FP16 can support half tensor models)
-> - Provided input tensors dtype should be the same as module before compilation, regardless of `enabled_precisions`.
This can be overrided by setting `Input::dtype` +##### Deployment in C++: +```cpp +#include "torch/script.h" +#include "torch_tensorrt/torch_tensorrt.h" + +auto trt_mod = torch::jit::load("trt.ts"); +auto input_tensor = [...]; // fill this with your inputs +auto results = trt_mod.forward({input_tensor}); +``` + +## Further resources +- [Up to 50% faster Stable Diffusion inference with one line of code](https://pytorch.org/TensorRT/tutorials/_rendered_examples/dynamo/torch_compile_stable_diffusion.html#sphx-glr-tutorials-rendered-examples-dynamo-torch-compile-stable-diffusion-py) +- [Optimize LLMs from Hugging Face with Torch-TensorRT]() \[coming soon\] +- [Run your model in FP8 with Torch-TensorRT]() \[coming soon\] +- [Tools to resolve graph breaks and boost performance]() \[coming soon\] +- [Tech Talk (GTC '23)](https://www.nvidia.com/en-us/on-demand/session/gtcspring23-s51714/) +- [Documentation](https://nvidia.github.io/Torch-TensorRT/) + ## Platform Support | Platform | Support | | ------------------- | ------------------------------------------------ | | Linux AMD64 / GPU | **Supported** | +| Windows / GPU | **Official support coming soon** | | Linux aarch64 / GPU | **Native Compilation Supported on JetPack-4.4+ (use v1.0.0 for the time being)** | | Linux aarch64 / DLA | **Native Compilation Supported on JetPack-4.4+ (use v1.0.0 for the time being)** | -| Windows / GPU | **Unofficial Support** | -| Linux ppc64le / GPU | - | -| NGC Containers | **Included in PyTorch NGC Containers 21.11+** | +| Linux ppc64le / GPU | Not supported | -> Torch-TensorRT will be included in NVIDIA NGC containers (https://ngc.nvidia.com/catalog/containers/nvidia:pytorch) starting in 21.11. - -> Note: Refer NVIDIA NGC container(https://ngc.nvidia.com/catalog/containers/nvidia:l4t-pytorch) for PyTorch libraries on JetPack. +> Note: Refer [NVIDIA L4T PyTorch NGC container](https://ngc.nvidia.com/catalog/containers/nvidia:l4t-pytorch) for PyTorch libraries on JetPack. ### Dependencies These are the following dependencies used to verify the testcases. Torch-TensorRT can work with other versions, but the tests are not guaranteed to pass. - Bazel 5.2.0 -- Libtorch 2.4.0.dev (latest nightly) (built with CUDA 12.1) +- Libtorch 2.3.0.dev (latest nightly) (built with CUDA 12.1) - CUDA 12.1 - TensorRT 10.0.1.6 -## Prebuilt Binaries and Wheel files - -Releases: https://github.com/pytorch/TensorRT/releases - -``` -pip install tensorrt torch-tensorrt -``` - -## Compiling Torch-TensorRT - -### Installing Dependencies - -#### 0. Install Bazel - -If you don't have bazel installed, the easiest way is to install bazelisk using the method of you choosing https://github.com/bazelbuild/bazelisk - -Otherwise you can use the following instructions to install binaries https://docs.bazel.build/versions/master/install.html - -Finally if you need to compile from source (e.g. aarch64 until bazel distributes binaries for the architecture) you can use these instructions - -```sh -export BAZEL_VERSION= -mkdir bazel -cd bazel -curl -fSsL -O https://github.com/bazelbuild/bazel/releases/download/$BAZEL_VERSION/bazel-$BAZEL_VERSION-dist.zip -unzip bazel-$BAZEL_VERSION-dist.zip -bash ./compile.sh -``` - -You need to start by having CUDA installed on the system, LibTorch will automatically be pulled for you by bazel, -then you have two options. - -#### 1. 
Building using TensorRT tarball distributions - -> This is recommended so as to build Torch-TensorRT hermetically and insures any bugs are not caused by version issues - -> Make sure when running Torch-TensorRT that these versions of the libraries are prioritized in your `$LD_LIBRARY_PATH` - -1. You need to download the tarball distributions of TensorRT from the NVIDIA website. - - https://developer.nvidia.com/tensorrt -2. Place these files in a directory (the directories `third_party/dist_dir/[x86_64-linux-gnu | aarch64-linux-gnu]` exist for this purpose) -3. Compile using: - -``` shell -bazel build //:libtorchtrt --compilation_mode opt --distdir third_party/dist_dir/[x86_64-linux-gnu | aarch64-linux-gnu] -``` - -#### 2. Building using locally installed TensorRT - -> If you find bugs and you compiled using this method please disclose you used this method in the issue -> (an `ldd` dump would be nice too) - -1. Install TensorRT and CUDA on the system before starting to compile. -2. In `WORKSPACE` comment out - -```py -# Downloaded distributions to use with --distdir -http_archive( - name = "tensorrt", - urls = ["",], - - build_file = "@//third_party/tensorrt/archive:BUILD", - sha256 = "", - strip_prefix = "TensorRT-" -) -``` - -and uncomment - -```py -# Locally installed dependencies -new_local_repository( - name = "tensorrt", - path = "/usr/", - build_file = "@//third_party/tensorrt/local:BUILD" -) -``` - -3. Compile using: - -``` shell -bazel build //:libtorchtrt --compilation_mode opt -``` - -### FX path (Python only) installation -If the user plans to try FX path (Python only) and would like to avoid bazel build. Please follow the steps below. -``` shell -cd py && python3 setup.py install --fx-only -``` - -### Debug build - -``` shell -bazel build //:libtorchtrt --compilation_mode=dbg -``` - -### Native compilation on NVIDIA Jetson AGX -We performed end to end testing on Jetson platform using Jetpack SDK 4.6. - -``` shell -bazel build //:libtorchtrt --platforms //toolchains:jetpack_4.6 -``` - -> Note: Please refer [installation](docs/tutorials/installation.html) instructions for Pre-requisites - -A tarball with the include files and library can then be found in bazel-bin - -### Running Torch-TensorRT on a JIT Graph - -> Make sure to add LibTorch to your LD_LIBRARY_PATH
-> `export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:$(pwd)/bazel-TensorRT/external/libtorch/lib` - -``` shell -bazel run //cpp/bin/torchtrtc -- $(realpath ) out.ts -``` - -## Compiling the Python Package - -To compile the python package for your local machine, just run `python3 setup.py install` in the `//py` directory. -To build wheel files for different python versions, first build the Dockerfile in ``//py`` then run the following -command - -``` -docker run -it -v$(pwd)/..:/workspace/Torch-TensorRT build_torch_tensorrt_wheel /bin/bash /workspace/Torch-TensorRT/py/build_whl.sh -``` - -Python compilation expects using the tarball based compilation strategy from above. - - -## Testing using Python backend - -Torch-TensorRT supports testing in Python using [nox](https://nox.thea.codes/en/stable) - -To install the nox using python-pip - -``` -python3 -m pip install --upgrade nox -``` - -To list supported nox sessions: - -``` -nox --session -l -``` - -Environment variables supported by nox - -``` -PYT_PATH - To use different PYTHONPATH than system installed Python packages -TOP_DIR - To set the root directory of the noxfile -USE_CXX11 - To use cxx11_abi (Defaults to 0) -USE_HOST_DEPS - To use host dependencies for tests (Defaults to 0) -``` - -Usage example - -``` -nox --session l0_api_tests -``` - -Supported Python versions: -``` -["3.7", "3.8", "3.9", "3.10"] -``` - -## How do I add support for a new op... - -### In Torch-TensorRT? - -Thanks for wanting to contribute! There are two main ways to handle supporting a new op. Either you can write a converter for the op from scratch and register it in the NodeConverterRegistry or if you can map the op to a set of ops that already have converters you can write a graph rewrite pass which will replace your new op with an equivalent subgraph of supported ops. It's preferred to use graph rewriting because then we do not need to maintain a large library of op converters. Also do look at the various op support trackers in the [issues](https://github.com/pytorch/TensorRT/issues) for information on the support status of various operators. - -### In my application? - -> The Node Converter Registry is not exposed in the top level API but in the internal headers shipped with the tarball. - -You can register a converter for your op using the `NodeConverterRegistry` inside your application. +## Deprecation Policy -## Structure of the repo +Deprecation is used to inform developers that some APIs and tools are no longer recommended for use. Beginning with version 2.3, Torch-TensorRT has the following deprecation policy: -| Component | Description | -| ------------------------ | ------------------------------------------------------------ | -| [**core**](core) | Main JIT ingest, lowering, conversion and runtime implementations | -| [**cpp**](cpp) | C++ API and CLI source | -| [**examples**](examples) | Example applications to show different features of Torch-TensorRT | -| [**py**](py) | Python API for Torch-TensorRT | -| [**tests**](tests) | Unit tests for Torch-TensorRT | +Deprecation notices are communicated in the Release Notes. Deprecated API functions will have a statement in the source documenting when they were deprecated. Deprecated methods and classes will issue deprecation warnings at runtime, if they are used. Torch-TensorRT provides a 6-month migration period after the deprecation. APIs and tools continue to work during the migration period. After the migration period ends, APIs and tools are removed in a manner consistent with semantic versioning. 
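+
+For example, a team that wants deprecated usage to fail fast during the migration period can escalate these runtime notices with Python's standard `warnings` filters. This is a minimal sketch using only the standard library; `deprecated_api` is a hypothetical placeholder for whichever call the release notes flag, not an actual Torch-TensorRT function.
+
+```python
+import warnings
+
+with warnings.catch_warnings():
+    # Escalate deprecation notices to exceptions so deprecated usage fails fast in CI
+    warnings.simplefilter("error", DeprecationWarning)
+    # deprecated_api(...)  # replace with the API flagged as deprecated in the release notes
+```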
## Contributing @@ -314,4 +133,4 @@ Take a look at the [CONTRIBUTING.md](CONTRIBUTING.md) ## License -The Torch-TensorRT license can be found in the LICENSE file. It is licensed with a BSD Style licence +The Torch-TensorRT license can be found in the [LICENSE](./LICENSE) file. It is licensed with a BSD Style licence From ebcb1a6871439bc406fabe0dcf4b550ae298da66 Mon Sep 17 00:00:00 2001 From: Laikh Tewari Date: Fri, 24 May 2024 15:33:01 -0700 Subject: [PATCH 02/10] added installation and contributing guides --- CONTRIBUTING.md | 60 +++++++++++++++++- INSTALLATION.md | 164 ++++++++++++++++++++++++++++++++++++++++++++++++ 2 files changed, 223 insertions(+), 1 deletion(-) create mode 100644 INSTALLATION.md diff --git a/CONTRIBUTING.md b/CONTRIBUTING.md index 543ed9309e..ba9ab32cf6 100644 --- a/CONTRIBUTING.md +++ b/CONTRIBUTING.md @@ -54,4 +54,62 @@ pip install pre-commit go install github.com/bazelbuild/buildtools/buildifier@latest ``` -Thanks in advance for your patience as we review your contributions; we do appreciate them! +## Testing using Python backend + +Torch-TensorRT supports testing in Python using [nox](https://nox.thea.codes/en/stable) + +To install the nox using python-pip + +``` +python3 -m pip install --upgrade nox +``` + +To list supported nox sessions: + +``` +nox --session -l +``` + +Environment variables supported by nox + +``` +PYT_PATH - To use different PYTHONPATH than system installed Python packages +TOP_DIR - To set the root directory of the noxfile +USE_CXX11 - To use cxx11_abi (Defaults to 0) +USE_HOST_DEPS - To use host dependencies for tests (Defaults to 0) +``` + +Usage example + +``` +nox --session l0_api_tests +``` + +Supported Python versions: +``` +["3.7", "3.8", "3.9", "3.10"] +``` + +## How do I add support for a new op... + +### In Torch-TensorRT? + +Thanks for wanting to contribute! There are two main ways to handle supporting a new op. Either you can write a converter for the op from scratch and register it in the NodeConverterRegistry or if you can map the op to a set of ops that already have converters you can write a graph rewrite pass which will replace your new op with an equivalent subgraph of supported ops. Its preferred to use graph rewriting because then we do not need to maintain a large library of op converters. Also do look at the various op support trackers in the [issues](https://github.com/pytorch/TensorRT/issues) for information on the support status of various operators. + +### In my application? + +> The Node Converter Registry is not exposed in the top level API but in the internal headers shipped with the tarball. + +You can register a converter for your op using the `NodeConverterRegistry` inside your application. + +## Structure of the repo + +| Component | Description | +| ------------------------ | ------------------------------------------------------------ | +| [**core**](core) | Main JIT ingest, lowering, conversion and runtime implementations | +| [**cpp**](cpp) | C++ API and CLI source | +| [**examples**](examples) | Example applications to show different features of Torch-TensorRT | +| [**py**](py) | Python API for Torch-TensorRT | +| [**tests**](tests) | Unit tests for Torch-TensorRT | + +Thanks in advance for your patience as we review your contributions; we do appreciate them! 
\ No newline at end of file diff --git a/INSTALLATION.md b/INSTALLATION.md new file mode 100644 index 0000000000..d322626738 --- /dev/null +++ b/INSTALLATION.md @@ -0,0 +1,164 @@ +## Pre-built wheels +Stable versions of Torch-TensorRT are published on PyPI +```bash +pip install torch-tensorrt +``` + +Nightly versions of Torch-TensorRT are published on the PyTorch package index +```bash +pip install --pre torch-tensorrt --index-url https://download.pytorch.org/whl/nightly/cu121 +``` + +Torch-TensorRT is also distributed in the ready-to-run [NVIDIA NGC PyTorch Container](https://catalog.ngc.nvidia.com/orgs/nvidia/containers/pytorch) which has all dependencies with the proper versions and example notebooks included. + +## Building a docker container for Torch-TensorRT + +We provide a `Dockerfile` in `docker/` directory. It expects a PyTorch NGC container as a base but can easily be modified to build on top of any container that provides, PyTorch, CUDA, cuDNN and TensorRT. The dependency libraries in the container can be found in the release notes. + +Please follow this instruction to build a Docker container. + +```bash +docker build --build-arg BASE= -f docker/Dockerfile -t torch_tensorrt:latest . +``` + +In the case of building on top of a custom base container, you first must determine the +version of the PyTorch C++ ABI. If your source of PyTorch is pytorch.org, likely this is the pre-cxx11-abi in which case you must modify `//docker/dist-build.sh` to not build the +C++11 ABI version of Torch-TensorRT. + +You can then build the container using the build command in the [docker README](docker/README.md#instructions) + +## Compiling Torch-TensorRT + +### Installing Dependencies + +#### 0. Install Bazel + +If you don't have bazel installed, the easiest way is to install bazelisk using the method of you choosing https://github.com/bazelbuild/bazelisk + +Otherwise you can use the following instructions to install binaries https://docs.bazel.build/versions/master/install.html + +Finally if you need to compile from source (e.g. aarch64 until bazel distributes binaries for the architecture) you can use these instructions + +```sh +export BAZEL_VERSION= +mkdir bazel +cd bazel +curl -fSsL -O https://github.com/bazelbuild/bazel/releases/download/$BAZEL_VERSION/bazel-$BAZEL_VERSION-dist.zip +unzip bazel-$BAZEL_VERSION-dist.zip +bash ./compile.sh +``` + +You need to start by having CUDA installed on the system, LibTorch will automatically be pulled for you by bazel, +then you have two options. + +#### 1. Building using cuDNN & TensorRT tarball distributions + +> This is recommended so as to build Torch-TensorRT hermetically and insures any bugs are not caused by version issues + +> Make sure when running Torch-TensorRT that these versions of the libraries are prioritized in your `$LD_LIBRARY_PATH` + +1. You need to download the tarball distributions of TensorRT and cuDNN from the NVIDIA website. + - https://developer.nvidia.com/cudnn + - https://developer.nvidia.com/tensorrt +2. Place these files in a directory (the directories `third_party/dist_dir/[x86_64-linux-gnu | aarch64-linux-gnu]` exist for this purpose) +3. Compile using: + +``` shell +bazel build //:libtorchtrt --compilation_mode opt --distdir third_party/dist_dir/[x86_64-linux-gnu | aarch64-linux-gnu] +``` + +#### 2. Building using locally installed cuDNN & TensorRT + +> If you find bugs and you compiled using this method please disclose you used this method in the issue +> (an `ldd` dump would be nice too) + +1. 
Install TensorRT, CUDA and cuDNN on the system before starting to compile. +2. In `WORKSPACE` comment out + +```py +# Downloaded distributions to use with --distdir +http_archive( + name = "cudnn", + urls = ["",], + + build_file = "@//third_party/cudnn/archive:BUILD", + sha256 = "", + strip_prefix = "cuda" +) + +http_archive( + name = "tensorrt", + urls = ["",], + + build_file = "@//third_party/tensorrt/archive:BUILD", + sha256 = "", + strip_prefix = "TensorRT-" +) +``` + +and uncomment + +```py +# Locally installed dependencies +new_local_repository( + name = "cudnn", + path = "/usr/", + build_file = "@//third_party/cudnn/local:BUILD" +) + +new_local_repository( + name = "tensorrt", + path = "/usr/", + build_file = "@//third_party/tensorrt/local:BUILD" +) +``` + +3. Compile using: + +``` shell +bazel build //:libtorchtrt --compilation_mode opt +``` + +### FX path (Python only) installation +If the user plans to try FX path (Python only) and would like to avoid bazel build. Please follow the steps below. +``` shell +cd py && python3 setup.py install --fx-only +``` + +### Debug build + +``` shell +bazel build //:libtorchtrt --compilation_mode=dbg +``` + +### Native compilation on NVIDIA Jetson AGX +We performed end to end testing on Jetson platform using Jetpack SDK 4.6. + +``` shell +bazel build //:libtorchtrt --platforms //toolchains:jetpack_4.6 +``` + +> Note: Please refer [installation](docs/tutorials/installation.html) instructions for Pre-requisites + +A tarball with the include files and library can then be found in bazel-bin + +### Running Torch-TensorRT on a JIT Graph + +> Make sure to add LibTorch to your LD_LIBRARY_PATH
+> `export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:$(pwd)/bazel-TensorRT/external/libtorch/lib` + +``` shell +bazel run //cpp/bin/torchtrtc -- $(realpath ) out.ts +``` + +## Compiling the Python Package + +To compile the python package for your local machine, just run `python3 setup.py install` in the `//py` directory. +To build wheel files for different python versions, first build the Dockerfile in ``//py`` then run the following +command + +``` +docker run -it -v$(pwd)/..:/workspace/Torch-TensorRT build_torch_tensorrt_wheel /bin/bash /workspace/Torch-TensorRT/py/build_whl.sh +``` + +Python compilation expects using the tarball based compilation strategy from above. From 7d7e2a280cc94553fe0efe6feea8a6a1b2bfbf3a Mon Sep 17 00:00:00 2001 From: Laikh Tewari Date: Fri, 24 May 2024 15:35:05 -0700 Subject: [PATCH 03/10] revert libtorch version in doc --- README.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/README.md b/README.md index dd76c11ec6..e723af3d82 100644 --- a/README.md +++ b/README.md @@ -116,7 +116,7 @@ auto results = trt_mod.forward({input_tensor}); These are the following dependencies used to verify the testcases. Torch-TensorRT can work with other versions, but the tests are not guaranteed to pass. - Bazel 5.2.0 -- Libtorch 2.3.0.dev (latest nightly) (built with CUDA 12.1) +- Libtorch 2.4.0.dev (latest nightly) (built with CUDA 12.1) - CUDA 12.1 - TensorRT 10.0.1.6 From 9f3c72b2c04bd6e925c2142eaa85892a9ba807ea Mon Sep 17 00:00:00 2001 From: Laikh Tewari Date: Wed, 29 May 2024 17:16:58 -0700 Subject: [PATCH 04/10] use docs version of installation instead --- README.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/README.md b/README.md index e723af3d82..6a57a72be7 100644 --- a/README.md +++ b/README.md @@ -30,7 +30,7 @@ pip install --pre torch-tensorrt --index-url https://download.pytorch.org/whl/ni Torch-TensorRT is also distributed in the ready-to-run [NVIDIA NGC PyTorch Container](https://catalog.ngc.nvidia.com/orgs/nvidia/containers/pytorch) which has all dependencies with the proper versions and example notebooks included. -For more advanced installation methods, please see [here](https://github.com/pytorch/tensorrt/INSTALLATION.md). 
+For more advanced installation methods, please see [here](https://pytorch.org/TensorRT/getting_started/installation.html) ## Quickstart From 62b1522fdceb582a3da71b7aff2a122e26a1818d Mon Sep 17 00:00:00 2001 From: Laikh Tewari Date: Wed, 29 May 2024 17:19:32 -0700 Subject: [PATCH 05/10] Official support for Windows dynamo only --- README.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/README.md b/README.md index 6a57a72be7..356b434d6e 100644 --- a/README.md +++ b/README.md @@ -104,7 +104,7 @@ auto results = trt_mod.forward({input_tensor}); | Platform | Support | | ------------------- | ------------------------------------------------ | | Linux AMD64 / GPU | **Supported** | -| Windows / GPU | **Official support coming soon** | +| Windows / GPU | **Supported (Dynamo only)** | | Linux aarch64 / GPU | **Native Compilation Supported on JetPack-4.4+ (use v1.0.0 for the time being)** | | Linux aarch64 / DLA | **Native Compilation Supported on JetPack-4.4+ (use v1.0.0 for the time being)** | | Linux ppc64le / GPU | Not supported | From 59f24c029c71d39a41c14a0937d2660a8a354b50 Mon Sep 17 00:00:00 2001 From: Laikh Tewari Date: Fri, 31 May 2024 17:52:58 -0700 Subject: [PATCH 06/10] removed installation.md --- INSTALLATION.md | 164 ------------------------------------------------ 1 file changed, 164 deletions(-) delete mode 100644 INSTALLATION.md diff --git a/INSTALLATION.md b/INSTALLATION.md deleted file mode 100644 index d322626738..0000000000 --- a/INSTALLATION.md +++ /dev/null @@ -1,164 +0,0 @@ -## Pre-built wheels -Stable versions of Torch-TensorRT are published on PyPI -```bash -pip install torch-tensorrt -``` - -Nightly versions of Torch-TensorRT are published on the PyTorch package index -```bash -pip install --pre torch-tensorrt --index-url https://download.pytorch.org/whl/nightly/cu121 -``` - -Torch-TensorRT is also distributed in the ready-to-run [NVIDIA NGC PyTorch Container](https://catalog.ngc.nvidia.com/orgs/nvidia/containers/pytorch) which has all dependencies with the proper versions and example notebooks included. - -## Building a docker container for Torch-TensorRT - -We provide a `Dockerfile` in `docker/` directory. It expects a PyTorch NGC container as a base but can easily be modified to build on top of any container that provides, PyTorch, CUDA, cuDNN and TensorRT. The dependency libraries in the container can be found in the release notes. - -Please follow this instruction to build a Docker container. - -```bash -docker build --build-arg BASE= -f docker/Dockerfile -t torch_tensorrt:latest . -``` - -In the case of building on top of a custom base container, you first must determine the -version of the PyTorch C++ ABI. If your source of PyTorch is pytorch.org, likely this is the pre-cxx11-abi in which case you must modify `//docker/dist-build.sh` to not build the -C++11 ABI version of Torch-TensorRT. - -You can then build the container using the build command in the [docker README](docker/README.md#instructions) - -## Compiling Torch-TensorRT - -### Installing Dependencies - -#### 0. Install Bazel - -If you don't have bazel installed, the easiest way is to install bazelisk using the method of you choosing https://github.com/bazelbuild/bazelisk - -Otherwise you can use the following instructions to install binaries https://docs.bazel.build/versions/master/install.html - -Finally if you need to compile from source (e.g. 
aarch64 until bazel distributes binaries for the architecture) you can use these instructions - -```sh -export BAZEL_VERSION= -mkdir bazel -cd bazel -curl -fSsL -O https://github.com/bazelbuild/bazel/releases/download/$BAZEL_VERSION/bazel-$BAZEL_VERSION-dist.zip -unzip bazel-$BAZEL_VERSION-dist.zip -bash ./compile.sh -``` - -You need to start by having CUDA installed on the system, LibTorch will automatically be pulled for you by bazel, -then you have two options. - -#### 1. Building using cuDNN & TensorRT tarball distributions - -> This is recommended so as to build Torch-TensorRT hermetically and insures any bugs are not caused by version issues - -> Make sure when running Torch-TensorRT that these versions of the libraries are prioritized in your `$LD_LIBRARY_PATH` - -1. You need to download the tarball distributions of TensorRT and cuDNN from the NVIDIA website. - - https://developer.nvidia.com/cudnn - - https://developer.nvidia.com/tensorrt -2. Place these files in a directory (the directories `third_party/dist_dir/[x86_64-linux-gnu | aarch64-linux-gnu]` exist for this purpose) -3. Compile using: - -``` shell -bazel build //:libtorchtrt --compilation_mode opt --distdir third_party/dist_dir/[x86_64-linux-gnu | aarch64-linux-gnu] -``` - -#### 2. Building using locally installed cuDNN & TensorRT - -> If you find bugs and you compiled using this method please disclose you used this method in the issue -> (an `ldd` dump would be nice too) - -1. Install TensorRT, CUDA and cuDNN on the system before starting to compile. -2. In `WORKSPACE` comment out - -```py -# Downloaded distributions to use with --distdir -http_archive( - name = "cudnn", - urls = ["",], - - build_file = "@//third_party/cudnn/archive:BUILD", - sha256 = "", - strip_prefix = "cuda" -) - -http_archive( - name = "tensorrt", - urls = ["",], - - build_file = "@//third_party/tensorrt/archive:BUILD", - sha256 = "", - strip_prefix = "TensorRT-" -) -``` - -and uncomment - -```py -# Locally installed dependencies -new_local_repository( - name = "cudnn", - path = "/usr/", - build_file = "@//third_party/cudnn/local:BUILD" -) - -new_local_repository( - name = "tensorrt", - path = "/usr/", - build_file = "@//third_party/tensorrt/local:BUILD" -) -``` - -3. Compile using: - -``` shell -bazel build //:libtorchtrt --compilation_mode opt -``` - -### FX path (Python only) installation -If the user plans to try FX path (Python only) and would like to avoid bazel build. Please follow the steps below. -``` shell -cd py && python3 setup.py install --fx-only -``` - -### Debug build - -``` shell -bazel build //:libtorchtrt --compilation_mode=dbg -``` - -### Native compilation on NVIDIA Jetson AGX -We performed end to end testing on Jetson platform using Jetpack SDK 4.6. - -``` shell -bazel build //:libtorchtrt --platforms //toolchains:jetpack_4.6 -``` - -> Note: Please refer [installation](docs/tutorials/installation.html) instructions for Pre-requisites - -A tarball with the include files and library can then be found in bazel-bin - -### Running Torch-TensorRT on a JIT Graph - -> Make sure to add LibTorch to your LD_LIBRARY_PATH
-> `export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:$(pwd)/bazel-TensorRT/external/libtorch/lib` - -``` shell -bazel run //cpp/bin/torchtrtc -- $(realpath ) out.ts -``` - -## Compiling the Python Package - -To compile the python package for your local machine, just run `python3 setup.py install` in the `//py` directory. -To build wheel files for different python versions, first build the Dockerfile in ``//py`` then run the following -command - -``` -docker run -it -v$(pwd)/..:/workspace/Torch-TensorRT build_torch_tensorrt_wheel /bin/bash /workspace/Torch-TensorRT/py/build_whl.sh -``` - -Python compilation expects using the tarball based compilation strategy from above. From 27b2cc9ba703461fd4780c240d596d8c5074d03f Mon Sep 17 00:00:00 2001 From: Laikh Tewari Date: Fri, 7 Jun 2024 07:53:43 -0700 Subject: [PATCH 07/10] change ci badges to gha --- README.md | 3 ++- 1 file changed, 2 insertions(+), 1 deletion(-) diff --git a/README.md b/README.md index 356b434d6e..58f7f20986 100644 --- a/README.md +++ b/README.md @@ -9,7 +9,8 @@ Torch-TensorRT [![cuda](https://img.shields.io/badge/cuda-12.1-green)](https://developer.nvidia.com/cuda-downloads) [![trt](https://img.shields.io/badge/TensorRT-8.6.1-green)](https://github.com/nvidia/tensorrt-llm) [![license](https://img.shields.io/badge/license-BSD--3--Clause-blue)](./LICENSE) -[![CircleCI](https://circleci.com/gh/pytorch/TensorRT.svg?style=svg)](https://app.circleci.com/pipelines/github/pytorch/TensorRT) +[![linux_tests](https://github.com/pytorch/TensorRT/actions/workflows/build-test-linux.yml/badge.svg)](https://github.com/pytorch/TensorRT/actions/workflows/build-test-linux.yml) +[![windows_tests](https://github.com/pytorch/TensorRT/actions/workflows/build-test-windows.yml/badge.svg)](https://github.com/pytorch/TensorRT/actions/workflows/build-test-windows.yml) ---
From 8a122f84fa8a30b9c1c6b5d5a916904d4cd73159 Mon Sep 17 00:00:00 2001 From: Laikh Tewari Date: Fri, 7 Jun 2024 08:01:09 -0700 Subject: [PATCH 08/10] bump badge dep versions --- README.md | 6 +++--- 1 file changed, 3 insertions(+), 3 deletions(-) diff --git a/README.md b/README.md index 58f7f20986..a633a5952e 100644 --- a/README.md +++ b/README.md @@ -5,9 +5,9 @@ Torch-TensorRT

Easily achieve the best inference performance for any PyTorch model on the NVIDIA platform.

[![Documentation](https://img.shields.io/badge/docs-master-brightgreen)](https://nvidia.github.io/Torch-TensorRT/) -[![pytorch](https://img.shields.io/badge/PyTorch-2.2-green)](https://www.python.org/downloads/release/python-31013/) -[![cuda](https://img.shields.io/badge/cuda-12.1-green)](https://developer.nvidia.com/cuda-downloads) -[![trt](https://img.shields.io/badge/TensorRT-8.6.1-green)](https://github.com/nvidia/tensorrt-llm) +[![pytorch](https://img.shields.io/badge/PyTorch-2.4-green)](https://www.python.org/downloads/release/python-31013/) +[![cuda](https://img.shields.io/badge/CUDA-12.1-green)](https://developer.nvidia.com/cuda-downloads) +[![trt](https://img.shields.io/badge/TensorRT-10.0.1-green)](https://github.com/nvidia/tensorrt-llm) [![license](https://img.shields.io/badge/license-BSD--3--Clause-blue)](./LICENSE) [![linux_tests](https://github.com/pytorch/TensorRT/actions/workflows/build-test-linux.yml/badge.svg)](https://github.com/pytorch/TensorRT/actions/workflows/build-test-linux.yml) [![windows_tests](https://github.com/pytorch/TensorRT/actions/workflows/build-test-windows.yml/badge.svg)](https://github.com/pytorch/TensorRT/actions/workflows/build-test-windows.yml) From 3e4b13f86f4a0bfa6adf3671a08f5a6c5ba1b806 Mon Sep 17 00:00:00 2001 From: Laikh Tewari Date: Tue, 11 Jun 2024 09:22:30 -0700 Subject: [PATCH 09/10] bump cuda badge --- README.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/README.md b/README.md index a633a5952e..ae0aae501f 100644 --- a/README.md +++ b/README.md @@ -6,7 +6,7 @@ Torch-TensorRT [![Documentation](https://img.shields.io/badge/docs-master-brightgreen)](https://nvidia.github.io/Torch-TensorRT/) [![pytorch](https://img.shields.io/badge/PyTorch-2.4-green)](https://www.python.org/downloads/release/python-31013/) -[![cuda](https://img.shields.io/badge/CUDA-12.1-green)](https://developer.nvidia.com/cuda-downloads) +[![cuda](https://img.shields.io/badge/CUDA-12.4-green)](https://developer.nvidia.com/cuda-downloads) [![trt](https://img.shields.io/badge/TensorRT-10.0.1-green)](https://github.com/nvidia/tensorrt-llm) [![license](https://img.shields.io/badge/license-BSD--3--Clause-blue)](./LICENSE) [![linux_tests](https://github.com/pytorch/TensorRT/actions/workflows/build-test-linux.yml/badge.svg)](https://github.com/pytorch/TensorRT/actions/workflows/build-test-linux.yml) From 43429bddb3170768e1173ad50edf6e04bcfb4b8d Mon Sep 17 00:00:00 2001 From: Laikh Tewari Date: Tue, 11 Jun 2024 09:24:39 -0700 Subject: [PATCH 10/10] nightly install + PyTorch dep to cuda 12.4 --- README.md | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/README.md b/README.md index ae0aae501f..0b5f83877f 100644 --- a/README.md +++ b/README.md @@ -26,7 +26,7 @@ pip install torch-tensorrt Nightly versions of Torch-TensorRT are published on the PyTorch package index ```bash -pip install --pre torch-tensorrt --index-url https://download.pytorch.org/whl/nightly/cu121 +pip install --pre torch-tensorrt --index-url https://download.pytorch.org/whl/nightly/cu124 ``` Torch-TensorRT is also distributed in the ready-to-run [NVIDIA NGC PyTorch Container](https://catalog.ngc.nvidia.com/orgs/nvidia/containers/pytorch) which has all dependencies with the proper versions and example notebooks included. @@ -117,7 +117,7 @@ auto results = trt_mod.forward({input_tensor}); These are the following dependencies used to verify the testcases. Torch-TensorRT can work with other versions, but the tests are not guaranteed to pass. 
- Bazel 5.2.0 -- Libtorch 2.4.0.dev (latest nightly) (built with CUDA 12.1) +- Libtorch 2.4.0.dev (latest nightly) (built with CUDA 12.4) - CUDA 12.1 - TensorRT 10.0.1.6