[don't merge] this PR is for dev code comparisons/discussions #4536

Closed · wants to merge 59 commits
7a204d0
Meta tensor load image (#4130)
wyli May 11, 2022
d6d01b1
enable cron tests
wyli May 4, 2022
cdec93d
fixes tests (#4229)
wyli May 5, 2022
ca64702
4246 integration meta (#4248)
wyli May 10, 2022
cd8baa4
Meta tensor channel (#4222)
rijobro May 11, 2022
e6d1645
update integration tests metatensor (#4262)
wyli May 12, 2022
509ac73
Merge branch 'dev' into feature/MetaTensor
wyli May 14, 2022
b20c025
meta_tensor orientation (#4278)
rijobro May 17, 2022
8d53447
Merge remote-tracking branch 'upstream/dev' into feature/MetaTensor
wyli May 19, 2022
1bd2c8c
`remove_extra_metadata` etc `transforms/utils.py` -> `data/utils.py` …
rijobro May 19, 2022
d648b6f
Merge remote-tracking branch 'MONAI/dev' into feature/MetaTensor
rijobro May 20, 2022
d4f88b8
inverse `Orientation` (#4305)
rijobro May 20, 2022
5301766
sync dev into feature branch (#4326)
wyli May 23, 2022
361cac3
Spacing MetaTensor (#4319)
rijobro May 24, 2022
49c69e0
Merge remote-tracking branch 'MONAI/dev' into feature/MetaTensor
rijobro May 24, 2022
9d738fa
fix
rijobro May 24, 2022
fc5c340
Merge branch 'dev' into feature/MetaTensor
wyli May 24, 2022
86863ac
Meta tensor spatial resample (#4332)
rijobro May 27, 2022
e2b1e6b
Merge branch 'dev' into feature/MetaTensor
rijobro May 27, 2022
e199c67
Merge remote-tracking branch 'upstream/dev' into feature/MetaTensor
wyli May 30, 2022
9320f7e
SaveImage to use MetaTensor (#4370)
rijobro May 30, 2022
7bdf1c6
Metatensor integration tests OK (#4407)
wyli Jun 1, 2022
1bd9f97
Merge branch 'dev' into feature/MetaTensor
wyli Jun 1, 2022
96af4a8
[pre-commit.ci] auto fixes from pre-commit.com hooks
pre-commit-ci[bot] Jun 1, 2022
5933464
[MONAI] code formatting
monai-bot Jun 1, 2022
72cc2b0
resolve conflicts
wyli Jun 1, 2022
8ed38fd
Merge branch 'metatensor-merging' into feature/MetaTensor
wyli Jun 1, 2022
a6f09fb
Merge branch 'dev' into metatensor-merging
wyli Jun 3, 2022
b568fd8
fixes solve
wyli Jun 3, 2022
c417b0f
resolve conflicts
wyli Jun 3, 2022
d067aab
resolves conflicts
wyli Jun 3, 2022
60a22ff
Merge branch 'dev' into metatensor-merging
wyli Jun 4, 2022
04e5a14
padding and cropping classes use MetaTensor (#4371)
rijobro Jun 6, 2022
4d7b0fb
Merge branch 'dev' into feature/MetaTensor
wyli Jun 9, 2022
86521d5
[pre-commit.ci] auto fixes from pre-commit.com hooks
pre-commit-ci[bot] Jun 9, 2022
60a9693
Merge branch 'dev' into merging-into-metatensor
wyli Jun 13, 2022
3583ea0
update Flip/Rotate/Resize spatial transforms support MetaTensor (#4429)
wyli Jun 14, 2022
36dc126
meta tensor basic profiling (#4223)
wyli Jun 14, 2022
46dc2ec
update integration tests metatensor (#4501)
wyli Jun 14, 2022
df51ef4
enhance copy in metatensor (#4506)
wyli Jun 15, 2022
03eacf1
Merge branch 'dev' into merging-metatensor
wyli Jun 16, 2022
e1edc99
more tests about the transforms with MetaTensor (#4521)
wyli Jun 17, 2022
6bcbde8
some util transforms to support MetaTensor (#4527)
wyli Jun 17, 2022
d56c8d9
Compatibility metatensor data convert (#4532)
wyli Jun 18, 2022
db9e49b
Rand affine fix (#4528)
rijobro Jun 20, 2022
8b9a27b
Merge remote-tracking branch 'upstream/dev' into sync-metatensor
wyli Jun 20, 2022
2338404
MetaTensor -- whats new/migration guide (#4543)
rijobro Jun 21, 2022
b9452b2
4530 optional SummaryWriter (#4546)
wyli Jun 21, 2022
cb94233
Merge remote-tracking branch 'upstream/dev' into feature/MetaTensor
wyli Jun 21, 2022
ac56784
Lambda fix metatensor (#4541)
rijobro Jun 21, 2022
b81693c
Merge branch 'dev' into merging-dev-into-metatensor
wyli Jun 21, 2022
995cab9
merging dev into metatensor
wyli Jun 21, 2022
17043d2
Merge branch 'dev' into merging-dev-into-metatensor
wyli Jun 22, 2022
b36c7b9
fixes test
wyli Jun 22, 2022
d659e05
Merge branch 'dev' into merging-dev-into-metatensor
wyli Jun 23, 2022
bfdd3bd
Merge branch 'dev' into merging-dev-into-metatensor
wyli Jun 27, 2022
79fbd34
Merge remote-tracking branch 'upstream/dev' into feature/MetaTensor
wyli Jun 29, 2022
2e15a91
fixes mypy
wyli Jun 29, 2022
9414800
Merge branch 'dev' into merging-dev-into-metatensor
wyli Jun 30, 2022
7 changes: 5 additions & 2 deletions .github/workflows/cron.yml
@@ -5,6 +5,9 @@ on:
# - cron: "0 2 * * *" # at 02:00 UTC
# Allows you to run this workflow manually from the Actions tab
workflow_dispatch:
push:
branches:
- feature/MetaTensor

jobs:
cron-gpu:
@@ -202,7 +205,7 @@ jobs:

cron-tutorial-notebooks:
if: github.repository == 'Project-MONAI/MONAI'
needs: cron-gpu # so that monai itself is verified first
# needs: cron-gpu # so that monai itself is verified first
container:
image: nvcr.io/nvidia/pytorch:22.06-py3 # testing with the latest pytorch base image
options: "--gpus all --ipc=host"
@@ -223,7 +226,7 @@
- name: Checkout tutorials and install their requirements
run: |
cd /opt
git clone --depth 1 --branch main --single-branch https://github.com/Project-MONAI/tutorials.git # latest commit of main branch
git clone --depth 1 --branch MetaTensor --single-branch https://github.com/Project-MONAI/tutorials.git # latest commit of main branch
cd tutorials
python -m pip install -r requirements.txt
- name: Run tutorial notebooks
29 changes: 29 additions & 0 deletions docs/source/transforms.rst
@@ -75,6 +75,11 @@ Crop and Pad
:members:
:special-members: __call__

`PadBase`
"""""""""
.. autoclass:: PadBase
:special-members: __call__

`Pad`
"""""
.. autoclass:: Pad
@@ -105,6 +110,18 @@ Crop and Pad
:members:
:special-members: __call__

`CropBase`
""""""""""
.. autoclass:: CropBase
:members:
:special-members: __call__

`ListCropBase`
""""""""""""""
.. autoclass:: ListCropBase
:members:
:special-members: __call__

`SpatialCrop`
"""""""""""""
.. image:: https://github.com/Project-MONAI/DocImages/raw/main/transforms/SpatialCrop.png
@@ -995,6 +1012,12 @@ Dictionary Transforms
Crop and Pad (Dict)
^^^^^^^^^^^^^^^^^^^

`PadBased`
""""""""""
.. autoclass:: PadBased
:members:
:special-members: __call__

`SpatialPadd`
"""""""""""""
.. image:: https://github.com/Project-MONAI/DocImages/raw/main/transforms/SpatialPadd.png
@@ -1019,6 +1042,12 @@ Crop and Pad (Dict)
:members:
:special-members: __call__

`CropBased`
"""""""""""
.. autoclass:: CropBased
:members:
:special-members: __call__

`SpatialCropd`
""""""""""""""
.. image:: https://github.com/Project-MONAI/DocImages/raw/main/transforms/SpatialCropd.png
1 change: 1 addition & 0 deletions docs/source/whatsnew.rst
@@ -6,6 +6,7 @@ What's New
.. toctree::
:maxdepth: 1

whatsnew_metatensor.md
whatsnew_0_9.md
whatsnew_0_8.md
whatsnew_0_7.md
80 changes: 80 additions & 0 deletions docs/source/whatsnew_metatensor.md
@@ -0,0 +1,80 @@
# What's new -- `MetaTensor`

- New class `MetaTensor`: stores the meta data, the image affine, and the stack of transforms that have been applied to an image.
- Meta data is now stored on the `MetaTensor` itself, rather than in an adjacent key of the data dictionary. This keeps the meta data closely associated with the image data, and allows array transforms (not just dictionary transforms) to read and update it.
- Previously, MONAI was fairly agnostic to the use of NumPy arrays and PyTorch tensors in its transforms. With the addition of `MetaTensor`, transforms largely use `torch.Tensor`, and input will be converted to `MetaTensor` by default.

## Manipulating `MetaTensor`

A `MetaTensor` can be created with, e.g., `img = MetaTensor(torch.ones((1, 4, 5, 6)))`, which uses the default meta data (empty) and the default affine transformation matrix (identity). Both can be changed via input arguments.

With the `MetaTensor` created, the extra information can be accessed as follows:

- Meta data: `img.meta`,
- Affine: `img.affine`, and
- Applied operations (normally the traced/invertible transforms): `img.applied_operations`.
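As a rough illustration of the three pieces of state described above, here is a toy stand-in class (this is not MONAI's implementation; `ToyMetaTensor` and its plain-list affine are hypothetical, kept dependency-free for clarity):

```python
# Toy stand-in (NOT MONAI's MetaTensor) illustrating the three attributes the
# text describes: meta dict, affine, and applied_operations. Defaults mirror
# the description above: empty meta, identity affine, empty transform stack.

class ToyMetaTensor:
    def __init__(self, data, affine=None, meta=None):
        self.data = data  # the image values themselves
        # default affine: 4x4 identity (homogeneous coordinates for 3D images)
        self.affine = affine if affine is not None else [
            [1.0 if r == c else 0.0 for c in range(4)] for r in range(4)
        ]
        self.meta = meta if meta is not None else {}  # default: empty meta
        self.applied_operations = []  # transform stack starts empty

img = ToyMetaTensor([[0.0] * 4])
print(img.meta)                # {}
print(img.affine[0])           # [1.0, 0.0, 0.0, 0.0]
print(img.applied_operations)  # []
```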

## Inverse array transforms

Previously, only dictionary transforms were invertible. Now, array transforms are, too!

```python
tr = Compose([LoadImage(), AddChannel(), Orientation(), Spacing()])
im = MetaTensor(...)
im_fwd = tr(im)
im_fwd_inv = tr.inverse(im_fwd)
print(im_fwd.applied_operations) # prints list of invertible transforms
print(im_fwd_inv.applied_operations)  # should be empty again
```

## Converting to and from `MetaTensor`

Users may want to keep using their own transforms, developed prior to these changes. In a chain of transforms, you may previously have had something like this:

```python
transforms = Compose([
LoadImaged(), AddChanneld(), MyOwnTransformd(), Spacingd(),
])
```

If `MyOwnTransformd` expects the old type of data structure, then the transform stack can be modified to this:

```python
transforms = Compose([
LoadImaged(), AddChanneld(), FromMetaTensord(),
MyOwnTransformd(), ToMetaTensord(), Spacingd(),
])
```

That is to say, you can use `FromMetaTensord` to convert e.g. `{"img": MetaTensor(...)}` to `{"img": torch.Tensor(...), "img_meta_dict": {...}}`, and `ToMetaTensord` will do the opposite.
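The data-layout change can be sketched with plain dictionaries (a minimal sketch; `split_meta` and `join_meta` are hypothetical helpers standing in for `FromMetaTensord`/`ToMetaTensord`, and a `(array, meta)` tuple stands in for a `MetaTensor`):

```python
# Hypothetical helpers illustrating the dictionary layout change described
# above: splitting a meta-carrying entry into a raw-array key plus an adjacent
# "<key>_meta_dict" key, and folding it back together.

def split_meta(data, key):
    """{'img': (array, meta)} -> {'img': array, 'img_meta_dict': meta}"""
    array, meta = data[key]
    return {key: array, f"{key}_meta_dict": meta}

def join_meta(data, key):
    """Inverse: fold '<key>_meta_dict' back alongside the array."""
    return {key: (data[key], data.pop(f"{key}_meta_dict"))}

d = {"img": ([1, 2, 3], {"filename": "ct.nii.gz"})}
old_style = split_meta(d, "img")
print(old_style)  # {'img': [1, 2, 3], 'img_meta_dict': {'filename': 'ct.nii.gz'}}
print(join_meta(dict(old_style), "img") == d)  # True
```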

## Batches of `MetaTensor`

The end user should not normally need to modify this logic; it is described here for interest.

We use a flag inside the meta data to determine whether a `MetaTensor` is in fact a batch of multiple images. This logic is contained in our `default_collate`:

```python
im1, im2 = MetaTensor(...), MetaTensor(...)
print(im1.meta.is_batch) # False
batch = default_collate([im1, im2])
print(batch.meta.is_batch) # True
```

Similar functionality can be seen with the `DataLoader`:
```python
ds = Dataset([im1, im2])
print(ds[0].meta.is_batch) # False
dl = DataLoader(ds, batch_size=2)
batch = next(iter(dl))
print(batch.meta.is_batch) # True
```

**We recommend using MONAI's `Dataset` where possible, as this uses the correct collation method and ensures that MONAI knows whether it is handling a batch of data or a single image.**
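The flag-setting idea behind collation can be sketched with toy objects (a hypothetical illustration only; `ToyMeta`, `ToyItem`, and `toy_collate` stand in for `MetaTensor` and MONAI's `default_collate`):

```python
# Toy illustration (NOT MONAI's default_collate) of the is_batch flag idea:
# collation stacks the items' payloads and marks the result's meta as a batch.

class ToyMeta(dict):
    @property
    def is_batch(self):
        return self.get("is_batch", False)

class ToyItem:
    def __init__(self, data, meta=None):
        self.data = data
        self.meta = ToyMeta(meta or {})

def toy_collate(items):
    batched = ToyItem([it.data for it in items])  # stack the payloads
    batched.meta["is_batch"] = True               # flag the result as a batch
    return batched

im1, im2 = ToyItem([1, 2]), ToyItem([3, 4])
print(im1.meta.is_batch)    # False
batch = toy_collate([im1, im2])
print(batch.meta.is_batch)  # True
print(batch.data)           # [[1, 2], [3, 4]]
```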

## Disabling `MetaTensor`

Disabling should ideally be a last resort, but if you are experiencing problems due to `MetaTensor`, you can call `set_track_meta(False)`.

Output will then be returned as `torch.Tensor` instead of `MetaTensor`. This won't necessarily match previous functionality, as the meta data will no longer be present, and so won't be used or stored. Further, more data will be converted from `numpy.ndarray` to `torch.Tensor`.
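The mechanism can be pictured as a module-level switch that transforms consult before building their output (a toy sketch: `set_track_meta` is the real MONAI API name, but `_TRACK_META` and `make_output` here are hypothetical stand-ins):

```python
# Hypothetical sketch of the track-meta switch: a module-level flag that
# transforms check to decide whether to return a meta-carrying result or a
# plain tensor-like value.

_TRACK_META = True

def set_track_meta(enabled: bool):
    global _TRACK_META
    _TRACK_META = enabled

def make_output(values):
    # return a (values, meta) pair when tracking, else the bare values
    return (values, {"applied_operations": []}) if _TRACK_META else values

print(make_output([1, 2]))  # ([1, 2], {'applied_operations': []})
set_track_meta(False)
print(make_output([1, 2]))  # [1, 2]
set_track_meta(True)        # restore the default
```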
5 changes: 4 additions & 1 deletion monai/apps/deepgrow/dataset.py
@@ -15,8 +15,9 @@

import numpy as np

from monai.transforms import AsChannelFirstd, Compose, LoadImaged, Orientationd, Spacingd
from monai.transforms import AsChannelFirstd, Compose, FromMetaTensord, LoadImaged, Orientationd, Spacingd, ToNumpyd
from monai.utils import GridSampleMode
from monai.utils.enums import PostFix


def create_dataset(
@@ -128,6 +129,8 @@ def _default_transforms(image_key, label_key, pixdim):
AsChannelFirstd(keys=keys),
Orientationd(keys=keys, axcodes="RAS"),
Spacingd(keys=keys, pixdim=pixdim, mode=mode),
FromMetaTensord(keys=keys),
ToNumpyd(keys=keys + [PostFix.meta(k) for k in keys]),
]
)

44 changes: 23 additions & 21 deletions monai/apps/detection/transforms/dictionary.py
@@ -395,7 +395,7 @@ def __init__(
self.zoomer = Zoom(zoom=zoom, keep_size=keep_size, **kwargs)
self.keep_size = keep_size

def __call__(self, data: Mapping[Hashable, NdarrayOrTensor]) -> Dict[Hashable, NdarrayOrTensor]:
def __call__(self, data: Mapping[Hashable, torch.Tensor]) -> Dict[Hashable, torch.Tensor]:
d = dict(data)

# zoom box
@@ -408,7 +408,7 @@ def __call__(self, data: Mapping[Hashable, NdarrayOrTensor]) -> Dict[Hashable, N
box_key,
extra_info={"zoom": self.zoomer.zoom, "src_spatial_size": src_spatial_size, "type": "box_key"},
)
d[box_key] = ZoomBox(zoom=self.zoomer.zoom, keep_size=self.keep_size)(
d[box_key] = ZoomBox(zoom=self.zoomer.zoom, keep_size=self.keep_size)( # type: ignore
d[box_key], src_spatial_size=src_spatial_size
)

@@ -431,7 +431,7 @@ def __call__(self, data: Mapping[Hashable, NdarrayOrTensor]) -> Dict[Hashable, N

return d

def inverse(self, data: Mapping[Hashable, NdarrayOrTensor]) -> Dict[Hashable, NdarrayOrTensor]:
def inverse(self, data: Mapping[Hashable, torch.Tensor]) -> Dict[Hashable, torch.Tensor]:
d = deepcopy(dict(data))

for key in self.key_iterator(d):
@@ -453,14 +453,15 @@ def inverse(self, data: Mapping[Hashable, NdarrayOrTensor]) -> Dict[Hashable, Nd
align_corners=None if align_corners == TraceKeys.NONE else align_corners,
)
# Size might be out by 1 voxel so pad
d[key] = SpatialPad(transform[TraceKeys.EXTRA_INFO]["original_shape"], mode="edge")(d[key])
orig_shape = transform[TraceKeys.EXTRA_INFO]["original_shape"]
d[key] = SpatialPad(orig_shape, mode="edge")(d[key]) # type: ignore

# zoom boxes
if key_type == "box_key":
zoom = np.array(transform[TraceKeys.EXTRA_INFO]["zoom"])
src_spatial_size = transform[TraceKeys.EXTRA_INFO]["src_spatial_size"]
box_inverse_transform = ZoomBox(zoom=(1 / zoom).tolist(), keep_size=self.zoomer.keep_size)
d[key] = box_inverse_transform(d[key], src_spatial_size=src_spatial_size)
d[key] = box_inverse_transform(d[key], src_spatial_size=src_spatial_size) # type: ignore

# Remove the applied transform
self.pop_transform(d, key)
@@ -544,7 +545,7 @@ def set_random_state(
self.rand_zoom.set_random_state(seed, state)
return self

def __call__(self, data: Mapping[Hashable, NdarrayOrTensor]) -> Dict[Hashable, NdarrayOrTensor]:
def __call__(self, data: Mapping[Hashable, torch.Tensor]) -> Dict[Hashable, torch.Tensor]:
d = dict(data)
first_key: Union[Hashable, List] = self.first_key(d)
if first_key == []:
@@ -567,7 +568,7 @@ def __call__(self, data: Mapping[Hashable, NdarrayOrTensor]) -> Dict[Hashable, N
box_key,
extra_info={"zoom": self.rand_zoom._zoom, "src_spatial_size": src_spatial_size, "type": "box_key"},
)
d[box_key] = ZoomBox(zoom=self.rand_zoom._zoom, keep_size=self.keep_size)(
d[box_key] = ZoomBox(zoom=self.rand_zoom._zoom, keep_size=self.keep_size)( # type: ignore
d[box_key], src_spatial_size=src_spatial_size
)

@@ -594,7 +595,7 @@ def __call__(self, data: Mapping[Hashable, NdarrayOrTensor]) -> Dict[Hashable, N

return d

def inverse(self, data: Mapping[Hashable, NdarrayOrTensor]) -> Dict[Hashable, NdarrayOrTensor]:
def inverse(self, data: Mapping[Hashable, torch.Tensor]) -> Dict[Hashable, torch.Tensor]:
d = deepcopy(dict(data))

for key in self.key_iterator(d):
@@ -616,15 +617,16 @@ def inverse(self, data: Mapping[Hashable, NdarrayOrTensor]) -> Dict[Hashable, Nd
align_corners=None if align_corners == TraceKeys.NONE else align_corners,
)
# Size might be out by 1 voxel so pad
d[key] = SpatialPad(transform[TraceKeys.EXTRA_INFO]["original_shape"], mode="edge")(d[key])
orig_shape = transform[TraceKeys.EXTRA_INFO]["original_shape"]
d[key] = SpatialPad(orig_shape, mode="edge")(d[key]) # type: ignore

# zoom boxes
if key_type == "box_key":
# Create inverse transform
zoom = np.array(transform[TraceKeys.EXTRA_INFO]["zoom"])
src_spatial_size = transform[TraceKeys.EXTRA_INFO]["src_spatial_size"]
box_inverse_transform = ZoomBox(zoom=(1.0 / zoom).tolist(), keep_size=self.rand_zoom.keep_size)
d[key] = box_inverse_transform(d[key], src_spatial_size=src_spatial_size)
d[key] = box_inverse_transform(d[key], src_spatial_size=src_spatial_size) # type: ignore

# Remove the applied transform
self.pop_transform(d, key)
@@ -665,7 +667,7 @@ def __call__(self, data: Mapping[Hashable, NdarrayOrTensor]) -> Dict[Hashable, N
d = dict(data)

for key in self.image_keys:
d[key] = self.flipper(d[key])
d[key] = self.flipper(d[key]) # type: ignore
self.push_transform(d, key, extra_info={"type": "image_key"})

for box_key, box_ref_image_key in zip(self.box_keys, self.box_ref_image_keys):
@@ -683,7 +685,7 @@ def inverse(self, data: Mapping[Hashable, NdarrayOrTensor]) -> Dict[Hashable, Nd

# flip image, copied from monai.transforms.spatial.dictionary.Flipd
if key_type == "image_key":
d[key] = self.flipper(d[key])
d[key] = self.flipper(d[key]) # type: ignore

# flip boxes
if key_type == "box_key":
@@ -741,7 +743,7 @@ def __call__(self, data: Mapping[Hashable, NdarrayOrTensor]) -> Dict[Hashable, N

for key in self.image_keys:
if self._do_transform:
d[key] = self.flipper(d[key], randomize=False)
d[key] = self.flipper(d[key], randomize=False) # type: ignore
self.push_transform(d, key, extra_info={"type": "image_key"})

for box_key, box_ref_image_key in zip(self.box_keys, self.box_ref_image_keys):
@@ -761,7 +763,7 @@ def inverse(self, data: Mapping[Hashable, NdarrayOrTensor]) -> Dict[Hashable, Nd
if transform[TraceKeys.DO_TRANSFORM]:
# flip image, copied from monai.transforms.spatial.dictionary.RandFlipd
if key_type == "image_key":
d[key] = self.flipper(d[key], randomize=False)
d[key] = self.flipper(d[key], randomize=False) # type: ignore

# flip boxes
if key_type == "box_key":
@@ -1204,7 +1206,7 @@ def __call__(self, data: Mapping[Hashable, NdarrayOrTensor]) -> List[Dict[Hashab
# crop images
cropper = SpatialCrop(roi_slices=crop_slices)
for image_key in self.image_keys:
results[i][image_key] = cropper(d[image_key])
results[i][image_key] = cropper(d[image_key]) # type: ignore

# crop boxes and labels
boxcropper = SpatialCropBox(roi_slices=crop_slices)
@@ -1269,7 +1271,7 @@ def __call__(self, data: Mapping[Hashable, NdarrayOrTensor]) -> Mapping[Hashable
self.push_transform(d, key, extra_info={"spatial_size": spatial_size, "type": "box_key"})

for key in self.image_keys:
d[key] = self.img_rotator(d[key])
d[key] = self.img_rotator(d[key]) # type: ignore
self.push_transform(d, key, extra_info={"type": "image_key"})
return d

@@ -1283,7 +1285,7 @@ def inverse(self, data: Mapping[Hashable, NdarrayOrTensor]) -> Dict[Hashable, Nd

if key_type == "image_key":
inverse_transform = Rotate90(num_times_to_rotate, self.img_rotator.spatial_axes)
d[key] = inverse_transform(d[key])
d[key] = inverse_transform(d[key]) # type: ignore
if key_type == "box_key":
spatial_size = transform[TraceKeys.EXTRA_INFO]["spatial_size"]
inverse_transform = RotateBox90(num_times_to_rotate, self.box_rotator.spatial_axes)
@@ -1327,7 +1329,7 @@ def __init__(
super().__init__(self.image_keys + self.box_keys, prob, max_k, spatial_axes, allow_missing_keys)
self.box_ref_image_keys = ensure_tuple_rep(box_ref_image_keys, len(self.box_keys))

def __call__(self, data: Mapping[Hashable, NdarrayOrTensor]) -> Mapping[Hashable, NdarrayOrTensor]:
def __call__(self, data: Mapping[Hashable, NdarrayOrTensor]) -> Mapping[Hashable, NdarrayOrTensor]: # type: ignore
self.randomize()
d = dict(data)

@@ -1355,11 +1357,11 @@ def __call__(self, data: Mapping[Hashable, NdarrayOrTensor]) -> Mapping[Hashable

for key in self.image_keys:
if self._do_transform:
d[key] = img_rotator(d[key])
d[key] = img_rotator(d[key]) # type: ignore
self.push_transform(d, key, extra_info={"rand_k": self._rand_k, "type": "image_key"})
return d

def inverse(self, data: Mapping[Hashable, NdarrayOrTensor]) -> Dict[Hashable, NdarrayOrTensor]:
def inverse(self, data: Mapping[Hashable, NdarrayOrTensor]) -> Dict[Hashable, NdarrayOrTensor]: # type: ignore
d = deepcopy(dict(data))
if self._rand_k % 4 == 0:
return d
@@ -1374,7 +1376,7 @@ def inverse(self, data: Mapping[Hashable, NdarrayOrTensor]) -> Dict[Hashable, Nd
# flip image, copied from monai.transforms.spatial.dictionary.RandFlipd
if key_type == "image_key":
inverse_transform = Rotate90(num_times_to_rotate, self.spatial_axes)
d[key] = inverse_transform(d[key])
d[key] = inverse_transform(d[key]) # type: ignore
if key_type == "box_key":
spatial_size = transform[TraceKeys.EXTRA_INFO]["spatial_size"]
inverse_transform = RotateBox90(num_times_to_rotate, self.spatial_axes)