🐛 [Bug] Unable to Change DLA Local DRAM when using torch_tensorrt.compile
#2731
Comments
Hi - thanks for the report - to collect some more information, if you try specifying other custom DLA parameters such as …
Both …
Thanks for testing this out - I am looking into the issue and will follow up with any updates.
I added a fix in #2749, which is now reflected in TensorRT/py/torch_tensorrt/ts/_compiler.py (lines 116 to 137 at b9e6aa3).
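For context, the fix concerns how the TorchScript frontend threads these kwargs into the compile spec. Purely as an illustrative sketch (not the actual patch from #2749; the helper name is hypothetical), the general pattern looks like:

```python
# Illustrative only - not the code from #2749. Shows the general pattern of
# forwarding optional DLA memory-pool kwargs so they are not silently dropped.
DLA_KEYS = ("dla_sram_size", "dla_local_dram_size", "dla_global_dram_size")

def build_compile_spec(device, **kwargs):  # hypothetical helper
    spec = {"device": device}
    for key in DLA_KEYS:
        if key in kwargs:  # only override the backend default when supplied
            spec[key] = kwargs[key]
    return spec
```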
Can confirm the above fix works on my setup. Thanks for the help!
Bug Description
Hello, I am currently compiling my model to TensorRT on a Jetson AGX Orin dev kit, so I'd like to make use of the DLAs on the system. By default the DLA local DRAM is set to 1024 MiB, and I'm looking to increase this due to the size of some of the layers in my network. The network is a simple U-Net, but the first layer produces feature maps of dimension [32, 64, 592, 784], which is quite large and requires more DRAM to execute those layers on the DLA. In `torch_tensorrt.compile`, I set the kwarg `dla_local_dram_size` to a different value, e.g. 2 times the default, but when I run the script to build the engine, the DLA local DRAM is still the default 1024 MiB.

To Reproduce
Steps to reproduce the behavior:
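A minimal sketch of the kind of compile call described above (the model, input shape, and `ir` setting here are assumptions, not the reporter's actual network):

```python
import torch
import torch.nn as nn
import torch_tensorrt

class TinyNet(nn.Module):
    # Hypothetical stand-in for the reporter's U-Net.
    def __init__(self):
        super().__init__()
        self.conv = nn.Conv2d(3, 64, kernel_size=3, padding=1)

    def forward(self, x):
        return self.conv(x)

model = TinyNet().eval().cuda()

trt_mod = torch_tensorrt.compile(
    model,
    ir="ts",  # TorchScript frontend, where the DLA kwargs are accepted
    inputs=[torch_tensorrt.Input((1, 3, 592, 784), dtype=torch.half)],
    enabled_precisions={torch.half},
    device=torch_tensorrt.Device("dla:0", allow_gpu_fallback=True),
    # Request 2048 MiB instead of the 1024 MiB default; per this report,
    # the kwarg had no effect on the built engine.
    dla_local_dram_size=2 * (1 << 30),
)
```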
Expected behavior
Expected to build the TensorRT engine with a local DRAM of 2048 MiB, but instead got a local DRAM of 1024 MiB.
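One way to confirm which value actually reached TensorRT is to query the memory pool limit on a builder config directly. A sketch using the TensorRT Python API (assumes TensorRT 8.4+, where the DLA pools are exposed via `MemoryPoolType`), bypassing torch_tensorrt entirely:

```python
import tensorrt as trt

logger = trt.Logger(trt.Logger.INFO)
builder = trt.Builder(logger)
config = builder.create_builder_config()

# Request 2048 MiB for the DLA local DRAM pool, mirroring the kwarg above.
config.set_memory_pool_limit(trt.MemoryPoolType.DLA_LOCAL_DRAM, 2 * (1 << 30))

# Read the limit back in MiB; with the bug, the value set through
# torch_tensorrt never made it this far, so the pool stayed at 1024 MiB.
print(config.get_memory_pool_limit(trt.MemoryPoolType.DLA_LOCAL_DRAM) // (1 << 20))
```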
Environment
How you installed PyTorch (`conda`, `pip`, `libtorch`, source): pip