Add utilities for vector magnitude and normalisation #243
Conversation
Codecov Report: All modified and coverable lines are covered by tests ✅

Additional details and impacted files:

```
@@           Coverage Diff           @@
##             main     #243   +/-   ##
=======================================
  Coverage   99.76%   99.77%
=======================================
  Files          14       14
  Lines         866      886      +20
=======================================
+ Hits          864      884      +20
  Misses          2        2
```

View full report in Codecov by Sentry.
Force-pushed from 3c078a3 to eeffcca (Compare)
@sfmig I'll give you a chance to comment on this before merging, because we've discussed this together. No rush.
Thanks for waiting for me @niksirbi and sorry for the late review!
The implementation looks good, my suggestions are mostly around clarifying the docstrings. I also have two additional points.
I like that we stick to one term to refer to the norm of a vector. However, I suggest using "norm" rather than "magnitude". The main reason for preferring "norm" is that it seems to be the preferred term in numpy, scipy, and MATLAB, which we have followed for the other vector utils (cart2pol, pol2cart). Many of our users will be familiar with these packages/languages, so it would make sense to keep the same nomenclature as much as possible.
But if we decide to go for "length" or "magnitude", I think we should pair it with "vector" (vector length, vector magnitude) in the docstrings. This is currently not very clear, because the input of the function is not a (2D, 3D) vector but a multi-dimensional array, which suggests we are computing the magnitude of an array.
Re computing unit vectors for vectors in polar coordinates: I don't see a good reason for restricting the function to Cartesian vectors only. As you say, the implementation is easy, and the operation is conceptually the same for polar vectors. I think the function should do as expected (i.e., return a vector with rho=1 and the same phi).
Happy to discuss anything further!
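Concretely, the polar case amounts to setting rho to 1 and leaving phi untouched. A minimal NumPy sketch (the per-row (rho, phi) column layout here is an assumption for illustration, not the library's actual data layout):

```python
import numpy as np

# Hypothetical polar vectors, one per row, stored as (rho, phi) pairs.
polar = np.array([[2.0, 0.5],
                  [3.5, -1.2]])

unit_polar = polar.copy()
unit_polar[:, 0] = 1.0  # rho becomes 1; phi (column 1) is preserved

print(unit_polar)
```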
Co-authored-by: sfmig <[email protected]>
Force-pushed from bd91bea to 5d8d42c (Compare)
Thanks for the thorough review @sfmig. I believe I've addressed all your points by basically adopting all your suggestions (see the edited part of the PR description). Re-requesting your review.
looks great, thanks! ✨
I had a bit of a think about these tests, but I believe they will be better addressed in a separate PR (draft here, still WIP) - will tag you when it's ready
Description
What is this PR
Why is this PR needed?
We often need to compute the magnitude (norm, length) of a vector, and it would be nice to have a convenient xarray-native way of doing so. Currently we can only do that by either:
- using cart2pol to convert to polar coordinates, and taking the rho value, or
- using np.linalg.norm.
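For reference, both current workarounds reduce to the same computation in plain NumPy (the array below is made up for illustration):

```python
import numpy as np

# Hypothetical (n_frames, 2) array of 2D displacement vectors.
vecs = np.array([[3.0, 4.0], [0.0, 2.0], [1.0, 1.0]])

# Workaround 1: the rho values that a cart2pol conversion would give us.
rho = np.sqrt((vecs ** 2).sum(axis=-1))

# Workaround 2: numpy's norm along the space axis.
norms = np.linalg.norm(vecs, axis=-1)

print(norms)  # e.g. 5.0 for the (3, 4) vector
```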
What does this PR do?
Adds two new functions to the utils/vector.py module:
- magnitude: this does the obvious thing. If an array in Cartesian space is passed, we return the Euclidean norm. If a user passes an array in polar coordinates, we just return the rho values.
- normalize: added for convenience. This function only works for arrays in Cartesian coordinates, and returns the same array divided by its magnitude. For (0, 0) values (exactly at the origin), the output is NaN (because of the zero division). If the input array is in polar coordinates, I raise an error. In principle, I could just make all rho values equal to 1, but I don't see much value in that (feel free to push back if you have a better idea on what to do in this case).

References
Closes #190.
Facilitates #147 , #239 , #228, #238 and others.
How has this PR been tested?
Unit tests have been added for both functions.
Is this a breaking change?
No.
Does this PR require an update to the documentation?
I updated a few places in the examples where np.linalg.norm was used; now they use magnitude. The API index should automatically pick up the new functions.
Checklist:
PS
@sfmig I'm aware this was on your to-do list, but I had to do this anyway now, because we could benefit from this feature in https://github.com/neuroinformatics-unit/In2Research-2024.
EDIT: 2024-08-14
Following @sfmig's review, the following changes were made:
- Renamed magnitude() to compute_norm() and updated the docstring accordingly.
- Renamed normalize() to convert_to_unit() and updated the docstring accordingly.
- Made convert_to_unit() also work in polar coordinates, where it makes all rho values 1 and preserves the phi angles.
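Under those final names, the Cartesian behaviour described above might be sketched roughly like this. This is a minimal, hypothetical re-implementation for illustration, not the actual movement code: the "space" dimension name, the use of xarray.apply_ufunc, and the Cartesian-only scope are all assumptions here (the real convert_to_unit also handles polar input).

```python
import numpy as np
import xarray as xr


def compute_norm(da: xr.DataArray) -> xr.DataArray:
    """Euclidean norm of vectors along the (assumed) "space" dimension."""
    return xr.apply_ufunc(
        np.linalg.norm, da, input_core_dims=[["space"]], kwargs={"axis": -1}
    )


def convert_to_unit(da: xr.DataArray) -> xr.DataArray:
    """Divide each vector by its norm; (0, 0) vectors become NaN."""
    return da / compute_norm(da)


# Two 2D vectors over time: (3, 4) and the zero vector.
da = xr.DataArray(
    [[3.0, 4.0], [0.0, 0.0]],
    dims=("time", "space"),
    coords={"space": ["x", "y"]},
)
print(compute_norm(da).values)
print(convert_to_unit(da).values)  # zero vector row comes out as NaN
```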