logsumexp handling of edge-cases #426
base: master
Conversation
Codecov Report
All modified and coverable lines are covered by tests ✅

@@            Coverage Diff             @@
##           master     #426      +/-   ##
==========================================
+ Coverage   97.23%   98.20%   +0.97%
==========================================
  Files          10       10
  Lines         217      223       +6
==========================================
+ Hits          211      219       +8
+ Misses          6        4       -2

☔ View full report in Codecov by Sentry.
I just had a look at the history: issue #368 motivated a change to how NaNs and infs are treated in #369, which in turn introduced a new issue, #407. That is also the issue that motivated this pull request, which should solve both. I've added tests and improved numerical stability for the in-place version.
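For context, the numerical-stability part presumably refers to the usual max-shift trick for logsumexp. The sketch below is my own illustration of that idea, not the code in this PR; the helper name `segment_logsumexp` and its signature are made up. Each segment's maximum is subtracted before exponentiating and added back in log space, so large magnitudes do not overflow, empty segments come out as -inf, and NaNs propagate.

```python
import torch

def segment_logsumexp(src: torch.Tensor, index: torch.Tensor, dim_size: int) -> torch.Tensor:
    """Illustrative segment-wise logsumexp using the max-shift trick.

    Values in ``src`` are grouped by ``index``; empty segments yield -inf
    and NaNs propagate, matching the semantics discussed in this PR.
    """
    # Per-segment maximum; segments that receive no values stay at -inf.
    seg_max = torch.full((dim_size,), float('-inf'), dtype=src.dtype, device=src.device)
    seg_max = seg_max.scatter_reduce(0, index, src, reduce='amax', include_self=True)

    # Use the maximum as the shift, but replace +/-inf maxima by 0 so the
    # subtraction below cannot produce inf - inf = nan.
    shift = seg_max.masked_fill(seg_max.isinf(), 0.0)
    shifted = src - shift.gather(0, index)

    # Sum the exponentials per segment, then return to log space.
    sum_exp = torch.zeros(dim_size, dtype=src.dtype, device=src.device)
    sum_exp = sum_exp.scatter_add(0, index, shifted.exp())
    return sum_exp.log() + shift

src = torch.tensor([0.5, float('-inf'), 2.0])
index = torch.tensor([0, 0, 1])
print(segment_logsumexp(src, index, dim_size=3))
# tensor([0.5000, 2.0000,   -inf])  -- the third segment is empty
```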
This pull request had no activity for 6 months. It will be closed in 2 weeks unless there is some new activity.
Hi @rusty1s, is there something I can do to help with this? :)
Hi! First of all, thanks for this library, which has been useful in a few projects.

I've run into some inconsistencies between torch_scatter.logsumexp and torch.logsumexp (a short reproduction of the reference behaviour follows below):

- Reducing over an empty group should return -inf, not 0. This is equivalent to having sum(..., start=0) in non-log space, and torch.logsumexp(torch.tensor([]), dim=0) returns -inf.
- NaN inputs should propagate to the output: torch.logsumexp(torch.tensor([torch.nan]), dim=0) returns nan.
- No eps should be added; torch.logsumexp also doesn't have this.

In the first commit, I've added test cases for empty lists, -inf, inf, and very large positive and negative numbers, plus a cartesian-product test across the inputs and the various floating-point data types.
In the second commit, I've changed the implementation to comply with the new tests.
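For reference, this is how plain torch.logsumexp behaves on these inputs. The first two calls are the ones quoted in the list above; the third is an extra example I added to show why an added eps would be visible in the result.

```python
import torch

print(torch.logsumexp(torch.tensor([]), dim=0))                    # tensor(-inf): empty input
print(torch.logsumexp(torch.tensor([torch.nan]), dim=0))           # tensor(nan): NaN propagates
print(torch.logsumexp(torch.tensor([0.0, float('-inf')]), dim=0))  # tensor(0.): exactly 0, no eps offset
```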
Dependency Management
On another note, I spent a few hours getting the torch/CUDA dependencies set up correctly to build from source.
I ended up using the following commands to install the dependencies, build from source, and run the tests:
Maybe having this or something similar in the docs (if it's not already there and I missed it) could make it easier for others to contribute.