I often work with timeseries data in log space, which can have a fairly small absolute variance. I believe the fix for issue #26 has introduced another bug: the random noise it adds (up to 0.1, given the distribution used) can be large relative to the data itself and is enough to introduce significant errors into the resulting peak detection.
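To illustrate the failure mode (a minimal sketch, not the library's actual code; the signal, noise amplitude, and the `scipy.signal.find_peaks` call are all my assumptions for demonstration):

```python
import numpy as np
from scipy.signal import find_peaks

rng = np.random.default_rng(0)

# Log-space timeseries with small absolute variance: the true peak
# rises only ~0.02 above the baseline.
t = np.linspace(0, 10, 200)
signal = np.log10(1.0 + 0.05 * np.exp(-(t - 5.0) ** 2))

# De-duplication noise of up to 0.1 dwarfs the peak itself, so
# spurious maxima appear and the real peak can be lost.
noisy = signal + rng.uniform(0, 0.1, size=signal.shape)

print(find_peaks(signal)[0])  # the single true peak
print(find_peaks(noisy)[0])   # many spurious peaks
```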
The ideal solution would probably be to use indexing instead of relying on unique values for de-duplication, but that might require some additional refactoring that I haven't fully scoped out.
A simple alternative solution could be to use the smallest available increment to de-duplicate the data instead of relying on random noise.
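Something along these lines might work (a rough sketch under my own assumptions, not library code; `deduplicate_minimal` is a hypothetical helper name):

```python
import numpy as np

def deduplicate_minimal(y):
    """Break ties between equal samples using the smallest representable
    float increment (np.nextafter) instead of random noise, so the
    perturbation stays negligible relative to the data."""
    y = np.asarray(y, dtype=float).copy()
    order = np.argsort(y, kind="stable")
    sorted_y = y[order]
    # Walk the sorted values and nudge each duplicate up by one ULP
    # past its predecessor, so every value becomes strictly unique.
    for i in range(1, len(sorted_y)):
        if sorted_y[i] <= sorted_y[i - 1]:
            sorted_y[i] = np.nextafter(sorted_y[i - 1], np.inf)
    y[order] = sorted_y
    return y
```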