
pytorch 2: TypedStorage is deprecated #5862

Closed
rijobro opened this issue Jan 17, 2023 · 2 comments · Fixed by #5863

@rijobro (Contributor)

rijobro commented Jan 17, 2023

Describe the bug

With pytorch 2, I'm getting lots of instances of this warning:

```
/home/rbrown/miniconda3/envs/py3.11/lib/python3.11/site-packages/torch/_tensor.py:1287: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage()
  ret = func(*args, **kwargs)
```

The only usage of `torch.storage()` is here, so I suspect that's the problem. @wyli, I see you added the code; can we do without it or modify it?

@rijobro (Contributor, Author)

rijobro commented Jan 17, 2023

Can I use `tensor.untyped_storage()` instead?

@rijobro changed the title from "python 3.11 TypedStorage is deprecated" to "pytorch 2: TypedStorage is deprecated" on Jan 17, 2023
@wyli (Contributor)

wyli commented Jan 17, 2023

I basically followed this method

https://github.com/pytorch/pytorch/blob/v1.13.1/torch/multiprocessing/reductions.py#L149-L150

and the expected behaviour is mainly tested here

```python
def test_multiprocessing(self, device=None, dtype=None):
    """multiprocessing sharing with 'device' and 'dtype'"""
    buf = io.BytesIO()
    t = MetaTensor([0, 0] if dtype in (torch.int32, torch.int64) else [0.0, 0.0], device=device, dtype=dtype)
    t.is_batch = True
    if t.is_cuda:
        with self.assertRaises(NotImplementedError):
            ForkingPickler(buf).dump(t)
        return
    ForkingPickler(buf).dump(t)
    obj = ForkingPickler.loads(buf.getvalue())
    self.assertIsInstance(obj, MetaTensor)
    assert_allclose(obj.as_tensor(), t)
    assert_allclose(obj.is_batch, True)
```

ForkingPickler can dump and load metatensors.

Feel free to update it for the latest API.
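The test above exercises a dump/loads round trip through `ForkingPickler` from the standard library's `multiprocessing.reduction`. As a minimal, torch-free illustration of that round trip (the `Point` class is a made-up stand-in, not MONAI code):

```python
import io
from multiprocessing.reduction import ForkingPickler


class Point:
    """Simple picklable stand-in for the object being shared."""

    def __init__(self, x, y):
        self.x, self.y = x, y


buf = io.BytesIO()
ForkingPickler(buf).dump(Point(1, 2))       # serialize into the buffer
obj = ForkingPickler.loads(buf.getvalue())  # deserialize a copy
print(obj.x, obj.y)  # -> 1 2
```

For real tensors, `ForkingPickler` additionally registers custom reducers so storage is shared rather than copied, which is why the storage API matters here.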

wyli pushed a commit that referenced this issue Jan 17, 2023
Fixes #5862.

### Description

If `untyped_storage()` is present (PyTorch 2), use it; otherwise fall back to
`storage()`.

### Types of changes
- [x] Non-breaking change (fix or new feature that would not break
existing functionality).
- [ ] Breaking change (fix or new feature that would cause existing
functionality to change).
- [ ] New tests added to cover the changes.
- [ ] Integration tests passed locally by running `./runtests.sh -f -u
--net --coverage`.
- [ ] Quick tests passed locally by running `./runtests.sh --quick
--unittests --disttests`.
- [ ] In-line docstrings updated.
- [ ] Documentation updated, tested `make html` command in the `docs/`
folder.

Signed-off-by: Richard Brown <[email protected]>