feat: dynamic shapes support for neg ops #2878
Conversation
py/torch_tensorrt/_Input.py
Outdated
    return torch.rand(self.shape[optimization_profile_field]).to(
        dtype=self.dtype.to(torch.dtype, use_default=True)
    )
if (
Can clean this up a bit
if self.dtype in [dtype.u8, dtype.i8, dtype.i32, dtype.i64]:
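The suggested cleanup is to dispatch on dtype family with a single membership test rather than chained comparisons. A minimal, torch-free sketch of that dispatch (the string dtype names and `make_example_values` are illustrative stand-ins, not the actual `_Input.py` code):

```python
import random

# Hypothetical stand-ins for the integer dtypes checked in the PR
# (dtype.u8, dtype.i8, dtype.i32, dtype.i64).
INTEGER_DTYPES = {"u8", "i8", "i32", "i64"}

def make_example_values(n, dtype):
    """Pick a generator per dtype family, mirroring the suggested cleanup:
    one set-membership test selects randint-style or rand-style sampling."""
    if dtype in INTEGER_DTYPES:
        # Like torch.randint: small nonzero integers.
        return [random.randint(0, 4) for _ in range(n)]
    # Like torch.rand: uniform floats in [0, 1).
    return [random.random() for _ in range(n)]
```

The point of the membership test is that adding another integer dtype later only touches the set, not the branch logic.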
LGTM. I think your PR for the where op also modifies _Input.py around the example_tensor call, so make sure the bool type change is integrated when you merge.
Overall LGTM, just some minor comments
Force-pushed from 07c9c2d to 436e99b
Force-pushed from 436e99b to 6d7f2fb
The change to random integer inputs caused a failure in the arange/cos/sin tests. I think it unmasked a latent issue, since previously all integer input values were zero.
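A quick torch-free illustration of why the old integer inputs were all zero: uniform floats in [0, 1) truncate to 0 when cast to an integer type, so any test exercised only zero-valued inputs.

```python
import random

# Mimics torch.rand: uniform floats in [0, 1).
floats = [random.random() for _ in range(8)]

# Casting a [0, 1) float to int truncates every value to 0, so
# integer-dtype example inputs generated this way are silently all zeros.
as_ints = [int(x) for x in floats]
assert all(v == 0 for v in as_ints)
```

Sampling with a randint-style generator instead produces nonzero integers and can expose bugs that all-zero inputs mask.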
LGTM
Description
Add test cases for dynamic shape inputs to the aten.neg op.
Replaced the output_dtypes argument of run_test_with_dynamic_shape() with check_dtype to handle integer-typed dynamic shape inputs in the test cases.
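To illustrate the intent of a check_dtype-style flag (a hedged sketch; `outputs_match_dtype` and the dict shape below are illustrative names, not the harness's actual API): instead of the caller supplying an explicit list of expected output dtypes, the test compares each output's dtype against a single reference taken from the input.

```python
def outputs_match_dtype(outputs, reference_dtype, check_dtype=True):
    """Return True when every output dtype matches the reference dtype.

    With check_dtype=False the comparison is skipped entirely,
    mirroring tests that opt out of dtype validation.
    """
    if not check_dtype:
        return True
    return all(o["dtype"] == reference_dtype for o in outputs)
```

This removes the need to hand-maintain per-test output_dtypes lists when the op (like neg) preserves its input dtype.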
Fixes # (issue)
Type of change
Checklist: