Test error in test_soft_clipping_one_sided_high #7616
cc @Lucas-rbnt
Hello, I think this is because `torch.logaddexp` and `numpy.logaddexp` give different results. One workaround would be, when the input is a NumPy array, to convert it to a tensor, apply `torch.logaddexp`, and then convert it back to NumPy. Not very elegant, but simple, and it should solve the problem.

A small test on this yields convincing results:

```python
from monai.transforms.utils_pytorch_numpy_unification import softplus

x = torch.randn(128, 128)
np.array_equal(softplus(x.numpy()), softplus(x).numpy())
# > False
```

while the modified version works as expected:

```python
import numpy as np
import torch


def softplus(x):
    """Stable softplus through `torch.logaddexp`, applied uniformly to NumPy arrays and tensors.

    Args:
        x: array/tensor.

    Returns:
        Softplus of the input.
    """
    if isinstance(x, np.ndarray):
        x = torch.from_numpy(x)
        return torch.logaddexp(torch.zeros_like(x), x).numpy()
    return torch.logaddexp(torch.zeros_like(x), x)


x = torch.randn(128, 128)
np.array_equal(softplus(x.numpy()), softplus(x).numpy())
# > True
```

Let me know if you think this is relevant, and sorry about that!
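A minimal sketch (an editor's illustration, not part of the original thread) of why the exact-equality check fails while a tolerance-based comparison passes: the two `logaddexp` implementations differ only at the level of floating-point rounding error.

```python
import numpy as np
import torch

x = torch.randn(128, 128)
# softplus(x) = logaddexp(0, x), computed once per library
a = np.logaddexp(np.zeros_like(x.numpy()), x.numpy())   # NumPy path
b = torch.logaddexp(torch.zeros_like(x), x).numpy()     # PyTorch path

# Bitwise equality may fail due to differing internal implementations...
print(np.array_equal(a, b))
# ...but the results agree to within float32 rounding error.
print(np.allclose(a, b, atol=1e-5))
```

This is why the fix routes both input types through a single implementation rather than trying to make the two libraries bitwise-identical.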
Hi @Lucas-rbnt, I believe we could consider adjusting this line.

I propose that we use
It's definitely a way of doing things, but it won't be enough on its own, because of the `__call__`:

```python
def __call__(self, img: NdarrayOrTensor) -> NdarrayOrTensor:
    """
    Apply the transform to `img`.
    """
    img = convert_to_tensor(img, track_meta=get_track_meta())
```

The softplus is then applied directly to a tensor and not a NumPy array, and we get back to the differences in results between `torch.logaddexp` and `numpy.logaddexp`.
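A self-contained sketch (editor's illustration; `convert_to_tensor` here is a simplified stand-in for MONAI's utility, and `ClipTransform` is a hypothetical name) of the point above: because `__call__` converts the input to a tensor up front, `softplus` never sees a NumPy array, so fixing only the NumPy branch would not change what the transform computes.

```python
import numpy as np
import torch


def convert_to_tensor(img):
    # Simplified stand-in for monai.utils.convert_to_tensor.
    return torch.as_tensor(img)


def softplus(x):
    if isinstance(x, np.ndarray):
        return np.logaddexp(np.zeros_like(x), x)  # never reached via the transform
    return torch.logaddexp(torch.zeros_like(x), x)


class ClipTransform:
    def __call__(self, img):
        img = convert_to_tensor(img)  # NumPy input becomes a tensor here
        return softplus(img)          # so the NumPy branch above is never taken


out = ClipTransform()(np.zeros((2, 2), dtype=np.float32))
print(type(out))  # the transform always produces a torch.Tensor
```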
Hi @Lucas-rbnt,

Absolutely, I understand your perspective. Since the introduction of

What do you think?
Yes, I agree, it remains coherent from my point of view! With

```python
self.assertEqual(ref, v.float(), atol=0.01, rtol=0.01)  # (l. 3461)
```

it seems to me that https://github.com/pytorch/pytorch/blob/793df52dc52f5f5f657744abfd7681eaba7a21f9/torch/testing/_comparison.py#L1183 gives the other default parameters depending on the type.

However, the default values for float32 are quite low compared to the error mentioned above, so I'm not sure.
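To make the tolerance discussion above concrete, here is a small sketch (editor's illustration) of `torch.testing.assert_close`, which picks per-dtype default tolerances when none are given and accepts explicit `atol`/`rtol` like the test line quoted above. The specific float32 defaults noted in the comment are taken from PyTorch's documentation and may change between versions.

```python
import torch

a = torch.tensor([1.0, 2.0], dtype=torch.float32)

# A difference of 1e-6 falls within float32 defaults (roughly rtol=1.3e-6, atol=1e-5),
# so this passes without specifying any tolerances.
torch.testing.assert_close(a, a + 1e-6)

# A difference of 5e-3 needs explicit, looser tolerances, as in the quoted test.
torch.testing.assert_close(a, a + 5e-3, atol=0.01, rtol=0.01)

print("both comparisons passed")
```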
Given that PyTorch already performs a comprehensive array of internal consistency tests to ensure alignment with NumPy, it could be redundant for us to independently verify this consistency within our codebase. Moreover, I would suggest we prioritize merging this fix, given that it has frequently led to test failures in other PRs.

Let's move this discussion onward to garner more opinions on this matter. cc @ericspod @atbenmurray @Nic-Ma
Fixes #7616

### Types of changes
<!--- Put an `x` in all the boxes that apply, and remove the not applicable items -->
- [x] Non-breaking change (fix or new feature that would not break existing functionality).
- [ ] Breaking change (fix or new feature that would cause existing functionality to change).
- [ ] New tests added to cover the changes.
- [ ] Integration tests passed locally by running `./runtests.sh -f -u --net --coverage`.
- [ ] Quick tests passed locally by running `./runtests.sh --quick --unittests --disttests`.
- [ ] In-line docstrings updated.
- [ ] Documentation updated, tested `make html` command in the `docs/` folder.

---------

Signed-off-by: YunLiu <[email protected]>
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>