After modifying the transforms in trainers/deepedit, the model cannot learn. #907
-
My version: Docker image projectmonai/monailabel:0.4.2
3D Slicer version: 5.0.3
Server execution: After I executed
Problem statement: Result:
If use:
Verify transforms:
Can someone help me with the problem? Thanks in advance.
-
Hi @Minxiangliu, let me try to reproduce the results. Is the above image the demo using CropForeground and SpatialPadd? The image looks normal. I will try to figure this out and reply later.
-
Hi @Minxiangliu, thanks for reporting this issue. After reproducing the app with your transformations, we found that NormalizeLabelsInDatasetd does not update the data meta information correctly, while the transformations that come after EnsureChannelFirstd handle the data as a MetaTensor, which is where the image meta information is stored. If NormalizeLabelsInDatasetd is applied at the beginning, the meta information is lost, resulting in incorrect transformations. For now, the solution is to move NormalizeLabelsInDatasetd after all the MONAI image transformations (as below).
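A minimal sketch of what the reordered pre-transform list could look like, assuming a fairly typical radiology DeepEdit setup. The import path, the surrounding transforms, and the label map are assumptions for illustration; keep whatever your app already defines and only move NormalizeLabelsInDatasetd to the end of the image transforms:

```python
from monai.transforms import (
    Compose,
    EnsureChannelFirstd,
    LoadImaged,
    NormalizeIntensityd,
    Orientationd,
    Resized,
)

# Import path may differ between MONAI Label versions; use the one your app already imports.
from monailabel.deepedit.transforms import NormalizeLabelsInDatasetd

# Hypothetical label map; replace with the labels declared in your app config.
labels = {"cancer": 1, "transfer": 2}

pre_transforms = Compose(
    [
        LoadImaged(keys=("image", "label")),
        EnsureChannelFirstd(keys=("image", "label")),
        Orientationd(keys=("image", "label"), axcodes="RAS"),
        NormalizeIntensityd(keys="image"),
        Resized(keys=("image", "label"), spatial_size=(128, 128, 128), mode=("area", "nearest")),
        # Moved to the end so the MetaTensor meta information produced by the
        # transforms above is not lost.
        NormalizeLabelsInDatasetd(keys="label", label_names=labels),
    ]
)
```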
We will continue to look into this issue and see whether NormalizeLabelsInDatasetd needs further modification to be compatible with MetaTensor. Thanks
-
Thanks for reporting this issue, @Minxiangliu. Changing the pre-transforms for the DeepEdit model is tricky because it involves click simulation (https://github.com/Project-MONAI/MONAILabel/blob/main/sample-apps/radiology/lib/trainers/deepedit.py#L114-L117). Cropping the image and label to the foreground deletes most of the background, which could affect performance; that is not always the case, but it should be considered. The NormalizeLabelsInDatasetd transform ensures that the label numbering is consistent across all training samples, especially for samples that don't contain every label declared in the config file. This transform shouldn't affect performance as long as all the images contain the labels you declared with the same numbering (i.e. cancer is always 1 and transfer is always 2). If you'd like more flexibility in the pre-transforms, I'd recommend the segmentation model instead. Here is an example: https://github.com/Project-MONAI/MONAILabel/blob/main/sample-apps/radiology/lib/trainers/localization_spine.py#L72-L84 Hope that helps,
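To illustrate the consistency point above, here is a small, self-contained sketch (plain NumPy, not the actual NormalizeLabelsInDatasetd implementation) of what "consistent label numbering" means: every class declared in the config is remapped to the same integer in every sample, even when a sample only contains some of the classes. The label_names dict and the values are hypothetical:

```python
import numpy as np

# Hypothetical config labels: the integer each class should always map to.
label_names = {"cancer": 1, "transfer": 2}

def normalize_labels(mask: np.ndarray, values_in_file: dict) -> np.ndarray:
    """Remap the raw values used in one label file to the config numbering.

    values_in_file maps class name -> integer actually stored in this file.
    Classes missing from the file are simply absent from the output.
    """
    out = np.zeros_like(mask)
    for name, target in label_names.items():
        if name in values_in_file:
            out[mask == values_in_file[name]] = target
    return out

# A file where "cancer" was stored as 3 and "transfer" as 5:
mask = np.array([0, 3, 5, 3])
print(normalize_labels(mask, {"cancer": 3, "transfer": 5}))  # -> [0 1 2 1]
```

If your data already uses the declared labels with the same numbering everywhere, this remapping changes nothing, which is why the transform shouldn't affect performance in that case.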
I've tested it using your app and data (with NormalizeLabelsInDatasetd moved after the other transforms, as described above), and it gives me the following results:
Result: {"total_epochs": 50, "total_iterat…