add sequence fill support for ElasticTransform #7141
Conversation
Only number is supported for torch Tensor.
Only int or str or tuple value is supported for PIL Image.
Drive-by since I was looking into the fill support. This seems to be a copy-paste error. Internally we just convert PIL images to tensors
vision/torchvision/transforms/functional.py, lines 1558 to 1562 in 59dc938:

    t_img = img
    if not isinstance(img, torch.Tensor):
        if not F_pil._is_pil_image(img):
            raise TypeError(f"img should be PIL Image or Tensor. Got {type(img)}")
        t_img = pil_to_tensor(img)
and then call the tensor kernel:
vision/torchvision/transforms/functional.py, lines 1573 to 1578 in 59dc938:

    output = F_t.elastic_transform(
        t_img,
        displacement,
        interpolation=interpolation.value,
        fill=fill,
    )
Meaning, there is no difference between the two input types.
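To make that concrete, here is a small sketch (not part of the PR; the shapes, dtype, seed, and fill value are my own assumptions) checking that a PIL image and the equivalent tensor go through the same kernel and should therefore come out identical:

```python
import torch
from torchvision import transforms
from torchvision.transforms.functional import pil_to_tensor, to_pil_image

img_t = torch.randint(0, 256, (3, 64, 64), dtype=torch.uint8)
img_pil = to_pil_image(img_t)

transform = transforms.ElasticTransform(alpha=50.0, fill=1)

torch.manual_seed(0)
out_t = transform(img_t)      # tensor in, tensor out

torch.manual_seed(0)
out_pil = transform(img_pil)  # PIL in, PIL out (converted to a tensor internally)

# same seed -> same displacement, same kernel -> results should match exactly
print(torch.equal(out_t, pil_to_tensor(out_pil)))  # expected: True
```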
Thanks Philip, some minor comments below. Should we add a non-regression test?
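For reference, such a non-regression test could look roughly like the sketch below (shapes and fill values are arbitrary assumptions, not the test actually added in this PR):

```python
import pytest
import torch
from torchvision import transforms
from torchvision.transforms.functional import to_pil_image


@pytest.mark.parametrize("fill", [0, 1.5, [0.0, 0.0, 0.0], (1, 2, 3)])
def test_elastic_transform_fill(fill):
    transform = transforms.ElasticTransform(alpha=50.0, fill=fill)
    img = torch.randint(0, 256, (3, 32, 32), dtype=torch.uint8)
    # both input types should accept the same fill values without raising
    transform(img)
    transform(to_pil_image(img))
```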
    if isinstance(fill, (int, float)):
        fill = [float(fill)]
    elif isinstance(fill, (list, tuple)):
        fill = [float(f) for f in fill]
Do we actually need to convert to float?
Unfortunately, we do, due to JIT 🥲 Removing the float conversion from L2110 gives us:
    import torch.jit
    from torchvision import transforms

    torch.jit.script(transforms.ElasticTransform(fill=[1]))

    [...]
    Expected a value of type 'Optional[List[float]]' for argument 'fill' but instead found type 'List[int]'.
    [...]
I know this is ugly AF and far from being Pythonic, but given that it is on v1 I really don't want to deal with this any more than I have to.
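The same constraint can be reproduced without torchvision; here is a minimal sketch (all names are made up) of TorchScript rejecting List[int] where Optional[List[float]] is expected:

```python
import torch
from typing import List, Optional


# hypothetical stand-in for a kernel whose fill argument is Optional[List[float]]
@torch.jit.script
def _kernel(fill: Optional[List[float]] = None) -> int:
    if fill is None:
        return 0
    return len(fill)


def _caller() -> int:
    fill = [1]            # TorchScript infers List[int] for this literal
    return _kernel(fill)  # there is no implicit List[int] -> List[float] conversion


try:
    torch.jit.script(_caller)
except RuntimeError as exc:
    print(exc)  # "Expected a value of type 'Optional[List[float]]' ... found type 'List[int]'"
```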
I just realized that we don't have any JIT tests for
    import torch.jit
    from torchvision import transforms

    torch.jit.script(transforms.ElasticTransform(fill=0.0))
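A scripting smoke test for this could be as small as the following sketch (the fill values are arbitrary assumptions):

```python
import pytest
import torch
from torchvision import transforms


@pytest.mark.parametrize("fill", [0, 0.0, [1], [1.0, 2.0, 3.0], (0, 0, 0)])
def test_elastic_transform_is_scriptable(fill):
    # scripting alone already exercises the Optional[List[float]] fill signature
    torch.jit.script(transforms.ElasticTransform(fill=fill))
```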
Thanks Philip, just nits, LGTM anyway
Reviewed By: vmoens
Differential Revision: D43116102
fbshipit-source-id: 4ea89bf9faf5612d4e168483d0f236e9d1f569ef
According to our docstring, we already have it:

vision/torchvision/transforms/transforms.py, lines 2058 to 2059 in 455eda6

However, what we actually do is:

vision/torchvision/transforms/transforms.py, lines 2107 to 2109 in 455eda6

which will fail for sequences.
cc @vfdev-5
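For illustration, a minimal sketch of the usage this PR enables (values and shapes are arbitrary assumptions):

```python
import torch
from torchvision import transforms

img = torch.randint(0, 256, (3, 64, 64), dtype=torch.uint8)

# a per-channel fill passed as a sequence; before this PR only a single number worked here
transform = transforms.ElasticTransform(alpha=50.0, fill=[255.0, 0.0, 0.0])
out = transform(img)
```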