How to deploy maskRCNN pytorch model? #418
@achbogga you can use ONNX conversion of your PyTorch model and deploy that. However, it is not straightforward. You can check this model, which I believe was created using the maskrcnn-benchmark repository. From what I saw, TRTIS is working on integrating PyTorch models as well, but you have to write your model using TorchScript in order to be able to freeze it.
Using Python Flask is an easy solution.
@ligonzheng can you please elaborate?
https://medium.com/datadriveninvestor/deploy-your-pytorch-model-to-production-f69460192217
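For reference, here is a minimal sketch of the Flask approach from the post above. The route name, JSON input format, and the tiny stand-in Linear model are illustrative assumptions; a real deployment would load a traced Mask R-CNN (e.g. via torch.jit.load) and do image preprocessing instead.

```python
import torch
from flask import Flask, jsonify, request

app = Flask(__name__)

# Stand-in model so the sketch is self-contained; in practice you would
# load your trained Mask R-CNN here, e.g. model = torch.jit.load("model.pt").
model = torch.nn.Linear(4, 2)
model.eval()

@app.route("/predict", methods=["POST"])
def predict():
    # Expects a JSON body like {"input": [[1.0, 2.0, 3.0, 4.0]]}
    data = request.get_json()
    batch = torch.tensor(data["input"], dtype=torch.float32)
    with torch.no_grad():
        output = model(batch)
    return jsonify({"output": output.tolist()})

# To serve for real: app.run(host="0.0.0.0", port=5000)
```

Note that Flask's built-in server is single-threaded; a production setup would sit this behind a WSGI server (gunicorn, uWSGI), which is one reason a dedicated inference server like TRTIS exists.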
Regarding PyTorch support in the r19.07-py3 version: there is no official PyTorch example from NVIDIA showing how to write config.pbtxt for pytorch_libtorch models. Any official example would help. Also, when I tried with --strict-model-config=false, I got the following error while serving our Mask R-CNN model forked from the facebook maskrcnn-benchmark GitHub repo. Any help is appreciated. If anyone got the model to work via the ONNX route, I would appreciate an example code snippet for exporting the maskrcnn-benchmark model to ONNX, along with the corresponding config.pbtxt.
The PyTorch backend runs the C++ backend (LibTorch), which requires a TorchScript model. You can produce one by tracing your existing PyTorch model as shown here: https://pytorch.org/tutorials/advanced/cpp_export.html The autofill for PyTorch is very limited due to the lack of information stored in the model file. The docs describe the naming convention and other details for creating the config.pbtxt file for PyTorch models: https://docs.nvidia.com/deeplearning/sdk/tensorrt-inference-server-guide/docs/model_configuration.html#model-configuration
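To illustrate the tracing step, here is a minimal sketch. The tiny stand-in module and the file path are placeholders, not the Mask R-CNN from this thread; Mask R-CNN has data-dependent control flow, which is exactly why tracing it can fail.

```python
import torch

# A tiny stand-in network; substitute your own model here.
class Net(torch.nn.Module):
    def forward(self, x):
        return torch.relu(x) * 2.0

model = Net().eval()

# Trace with a float example input -- tracing with the wrong dtype is a
# common source of "Expected ... Float but got ... Long" errors.
example = torch.rand(1, 3, 224, 224)
traced = torch.jit.trace(model, example)

# Triton/TRTIS expects the file at <model_repository>/<model_name>/1/model.pt
traced.save("model.pt")

# Reload to verify the serialized module produces the same output.
reloaded = torch.jit.load("model.pt")
assert torch.allclose(traced(example), reloaded(example))
```

Tracing records the operations executed for this one example input, so any Python-level branching in the model is baked in; models with real control flow need torch.jit.script instead.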
@CoderHam An official example of a config.pbtxt for a PyTorch model should be provided by NVIDIA for the sake of all the noobs out there who could not understand the documentation properly. Can you guys provide at least one example? Please reopen this issue.
BTW, I have tried both the torch.jit.trace and torch.jit.script routes. Neither works for the Mask R-CNN models. I encountered the following errors, respectively:
1. RuntimeError: Expected object of scalar type Float but got scalar type Long for argument #2 'other'
2. UnsupportedNodeError: GeneratorExp aren't supported.
You need to trace the PyTorch model with a float tensor. Can you confirm you are doing that?
You can follow this template to create a similar config file for your model.
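For illustration, a minimal config.pbtxt sketch for a pytorch_libtorch model, assuming the Triton INPUT__n/OUTPUT__n naming convention for PyTorch backends. The model name, dims, and output shape below are placeholders and must match what your traced model actually consumes and produces:

```
name: "maskrcnn"
platform: "pytorch_libtorch"
max_batch_size: 0
input [
  {
    name: "INPUT__0"
    data_type: TYPE_FP32
    dims: [ 1, 3, 800, 1333 ]
  }
]
output [
  {
    name: "OUTPUT__0"
    data_type: TYPE_FP32
    dims: [ -1, 4 ]
  }
]
```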
Yes, I have tried tracing using a float tensor. Same result; it did not work.
Hello, I have tried tracing using a float tensor. How do I get the input name and output name?
Did anyone manage to solve this? I got the server running with a PyTorch model but got HTTP 400 when doing inference. Is there any way to print more logs to identify the error? I tried adding --log-verbose=5 but there were not enough logs to derive the error. Appreciate the help.
@okyspace Did you try tracing the model as advised here? @c464851257 PyTorch models do not have a concept of input names. The input names should follow the Triton convention as described here.
Thanks. I reviewed again and found the error was due to an incorrect output shape. Also, if you could point me to examples that deploy a pre-trained model like YOLO, that would be great. Thanks!
@deadeyegoodwin, how should one think about deploying PyTorch models, some of which might not yet be supported for fully automatic TensorRT conversion? For example, can you point me towards any example with an FPN-101-backbone Mask R-CNN PyTorch model?