- Now creates output tensors of the correct type to accept data
- There may still be a data race in the creation of the dataloader iterator
- Quantization and dynamic shapes currently do not play well together; a subsequent TRT release may address this
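The first fix above (output tensors of the correct type) amounts to allocating output buffers with the dtype the engine reports for each binding, instead of hardcoding float32. A minimal sketch of that idea, where `binding_dtypes` stands in for a hypothetical engine query (not the real TensorRT API):

```python
import numpy as np

# Hypothetical: dtype per output binding, as a real engine would report it.
binding_dtypes = {"output0": np.float16}

def make_output_buffer(name, shape):
    # Allocate the host buffer with the engine-reported dtype, not float32.
    return np.empty(shape, dtype=binding_dtypes[name])

buf = make_output_buffer("output0", (4,))
print(buf.dtype)  # float16
```

If the buffer were always float32, copying a half-precision engine output into it would silently reinterpret the bytes and produce garbage values.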
Signed-off-by: Naren Dasan <[email protected]>
…ical_xor (#41)
Summary:
Pull Request resolved: pytorch/fx2trt#41
fx2trt is a tool we use to create a TensorRT engine from a PyTorch model. The lowering is composed of: 1) start with the PyTorch model, 2) trace the model with the acc tracer into acc_ops, 3) use the TRT interpreter to create a TensorRT engine.
Here I:
1. Add corresponding acc ops
2. Add a converter for the acc op to acc_ops_converters.py.
3. Add a unit test for the converter in fbcode/deeplearning/trt/fx2trt_oss/test/converters/acc_op/test_logical_or/xor.py
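The three steps above follow fx2trt's converter-registry pattern: each acc op is mapped to a converter function that knows how to emit the equivalent TensorRT layer, and the interpreter dispatches graph nodes through that registry. A minimal sketch of the pattern, with illustrative names only (the real converter would call `network.add_elementwise` with `trt.ElementWiseOperation.XOR`; here the op is simulated in pure Python):

```python
# Hypothetical registry mapping acc op names to converter functions.
CONVERTERS = {}

def tensorrt_converter(acc_op_name):
    """Decorator that registers a converter for the given acc op."""
    def decorator(fn):
        CONVERTERS[acc_op_name] = fn
        return fn
    return decorator

@tensorrt_converter("acc_ops.logical_xor")
def convert_logical_xor(network, target, args, kwargs):
    # Real fx2trt would add a TensorRT elementwise-XOR layer to `network`;
    # we simulate the op elementwise for illustration.
    a, b = args
    return [x != y for x, y in zip(a, b)]

def interpret(op_name, *args):
    # The interpreter walks the traced graph and dispatches each node
    # to its registered converter.
    return CONVERTERS[op_name](None, op_name, args, {})

print(interpret("acc_ops.logical_xor",
                [True, True, False], [True, False, False]))
# [False, True, False]
```

The unit test in step 3 then checks that the converter's output matches the reference PyTorch op on the same inputs.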
Reviewed By: frank-wei
Differential Revision: D35237918
fbshipit-source-id: 82720b764f0c886749aafea84584cdcb5172d206
Right now, the same workflow that produces correct results for FP32 models gives bad results when used in FP16.
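One reason an FP32 workflow can silently break in FP16 is the much narrower numeric range: float16 overflows to infinity above roughly 65504, so values that are perfectly representable in float32 become inf after a blind downcast. A small illustration using NumPy (not the fx2trt code itself):

```python
import numpy as np

x = np.float32(70000.0)   # fine in float32
y = x.astype(np.float16)  # overflows: float16 max is ~65504

print(x)  # 70000.0
print(y)  # inf
```

This is why an FP16 path typically needs its own handling (casting inputs, allocating half-precision buffers, and sometimes keeping range-sensitive layers in FP32) rather than reusing the FP32 workflow unchanged.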