Remove addressed workaround in ResizeV2 #7606
Conversation
```diff
-elif interpolation == InterpolationMode.BILINEAR and image.device.type == "cpu":
+elif (
+    interpolation == InterpolationMode.BILINEAR
+    and image.is_cpu
```
Just a nit from me, I thought it was simpler than `image.device.type == "cpu"`, but I can put it back.
I can attest that this actually works, but

- I've never seen it used anywhere
- it is undocumented. This looks like a bug though, since `is_cuda` and `is_meta` are there.

Thus, I would prefer to leave it as is, but no strong opinion. How did you learn about it?
```
torchvision/transforms/v2/functional/_geometry.py:195: error: "Tensor" has no attribute "is_cpu"  [attr-defined]
        and image.is_cpu
            ^~~~~~~~~~~~
Found 1 error in 1 file (checked 236 source files)
Error: Process completed with exit code 1.
```
bwhahahaha. Anyway.

> How did you learn about it?

I was reminded that `is_cuda` exists, so I sort of guessed `is_cpu` should exist as well. It's used in the torch core code-base, but sparsely. I'll revert anyway to avoid fighting mypy.
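For context, a minimal check of the two spellings discussed in this thread (requires PyTorch; `is_cpu` does exist at runtime even though mypy's stubs did not know about it at the time):

```python
import torch

t = torch.zeros(2, 3)

# The two spellings are equivalent for deciding whether a tensor lives on CPU.
# `is_cpu` is terser, but it was undocumented and missing from mypy's Tensor
# stubs at the time of this PR, hence the revert.
print(t.device.type == "cpu")  # True
print(t.is_cpu)                # True
```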
I wonder whether this is actually what we want. We're going from this on main:

(screenshot)

To this on this PR:

(screenshot)

I find that a bit surprising because from offline discussions with @vfdev-5, I thought the output should be preserved as CL. Isn't that what …
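A small sketch for inspecting the memory-format property under discussion (illustrative, not torchvision's code; whether the output stays channels-last after resizing depends on the torch version, so nothing is asserted about the output layout):

```python
import torch
import torch.nn.functional as F

# Build a uint8 image and convert it to channels-last (CL) memory format.
img = torch.randint(0, 256, (1, 3, 32, 32), dtype=torch.uint8)
img_cl = img.to(memory_format=torch.channels_last)

out = F.interpolate(
    img_cl.to(torch.float32), size=(64, 64), mode="bilinear", antialias=True
)

# Check whether the CL layout survived the resize.
print(img_cl.is_contiguous(memory_format=torch.channels_last))  # True
print(out.is_contiguous(memory_format=torch.channels_last))
```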
Thanks Nicolas!
```python
# uint8 dtype support for bilinear mode is limited to cpu and
# according to our benchmarks non-AVX CPUs should prefer u8->f32->interpolate->u8 path
if "AVX2" in torch.backends.cpu.get_cpu_capability():
```
This could have been in the original `elif` already, or am I missing something?
Yes, there's nothing before or after that block, so it's logically the same.
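The fallback path named in the code comment (u8 -> f32 -> interpolate -> u8) can be sketched as follows; this is an illustrative reimplementation, not torchvision's exact code, and the helper name is made up:

```python
import torch
import torch.nn.functional as F

def resize_uint8_via_float(img: torch.Tensor, size: tuple) -> torch.Tensor:
    # Upcast to float32, resize with bilinear + antialias, then round back to uint8.
    out = F.interpolate(
        img.to(torch.float32), size=size, mode="bilinear", antialias=True
    )
    return out.round_().clamp_(0, 255).to(torch.uint8)

x = torch.randint(0, 256, (1, 3, 8, 8), dtype=torch.uint8)
y = resize_uint8_via_float(x, (16, 16))
print(y.shape, y.dtype)  # torch.Size([1, 3, 16, 16]) torch.uint8
```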
For visibility, here is what happens in the PR:
(screenshot)
LGTM, let's merge it and think of another way to avoid the memory-format change later.
Hey @NicolasHug! You merged this PR, but no labels were added. The list of valid labels is available at https://github.com/pytorch/vision/blob/main/.github/process_commit.py
Reviewed By: vmoens

Differential Revision: D46071408

fbshipit-source-id: 8216a893fc11741260c6c741bfa609cbe4a31a54
This PR removes a workaround which is not needed anymore: the original problem was already fixed in torch core in pytorch/pytorch#101136.

I can confirm that the same stress-tests from #7557 (review) are still passing properly (and that those tests were hitting the workaround code).

cc @vfdev-5