Current way to use torchvision.prototype.transforms #7168
Comments
Correct. That happened in #7002. All development on […]
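Concretely, the change in #7002 is essentially a rename of the `features` namespace to `datapoints`. A minimal sketch of the before/after (constructor arguments omitted; the `features` import only works on versions from before the rename):

```python
# Before #7002 (as used in the blog post):
# from torchvision.prototype import features
# bbox = features.BoundingBox(...)

# After #7002:
from torchvision.prototype import datapoints
# bbox = datapoints.BoundingBox(...)
```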
Our idea was for the prototype datasets to just return the raw bytes so decoding can happen however the user likes. In #6944 we made a cut and separated datasets from transforms to focus on the latter. In that PR we also removed the decoding transforms that linked the two. Here is the relevant part from the state right before the PR was merged:
vision/torchvision/prototype/transforms/_type_conversion.py, lines 12 to 16 at commit 65769ab
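The embedded snippet is not reproduced here; roughly, the removed transform decoded the raw bytes into an image datapoint. A minimal sketch of that idea (not the exact removed code; the function name below is made up):

```python
import torch
from torchvision.io import decode_image
from torchvision.prototype import datapoints

def decode_encoded_image(encoded_bytes: torch.Tensor) -> datapoints.Image:
    # `encoded_bytes` is the 1D uint8 tensor of raw file bytes that an
    # EncodedImage holds. decode_image turns it into a (C, H, W) uint8 tensor,
    # which is then wrapped as an Image datapoint so the transforms dispatch on it.
    return datapoints.Image(decode_image(encoded_bytes))
```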
Substituting […]
Thanks, appreciate your response!
📚 The doc issue
I tried to run the end-to-end example in this recent blog post, but found that `torchvision.prototype.features` is now gone. What's the current way to run this? I attempted to simply pass the images, bboxes and labels with the following types: `torchvision.prototype.datasets.utils._encoded.EncodedImage`, `torchvision.prototype.datapoints._bounding_box.BoundingBox`, `torchvision.prototype.datapoints._label.Label`. However, this didn't seem to apply the transforms, as everything remained the same shape.

edit: I've found that `features` seems to have been renamed to `datapoints`. I tried applying this, but the `EncodedImage` in a COCO `sample['image']` seems to be 1D, and `prototype.transforms` requires 2D images. What's the proper way to get this as 2D so I can apply the transforms? Is there a decode method I'm missing?
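For concreteness, a minimal sketch of the shape mismatch (the tensors below are made-up stand-ins rather than the actual COCO sample):

```python
import torch
from torchvision.prototype import transforms

# Stand-in for what sample['image'] holds: the still-encoded file bytes,
# i.e. a 1D uint8 tensor, not a decoded (C, H, W) picture.
encoded_image = torch.randint(0, 256, (10_000,), dtype=torch.uint8)
print(encoded_image.ndim)  # 1, so spatial transforms have nothing to flip or resize

# What the transforms expect instead: an image with spatial dimensions.
decoded_like = torch.randint(0, 256, (3, 480, 640), dtype=torch.uint8)
print(transforms.Resize((224, 224))(decoded_like).shape)  # torch.Size([3, 224, 224])
```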
Suggest a potential alternative/fix

No response
cc @vfdev-5 @bjuncek @pmeier