Cut 0.0.125 with a few bugfixes #3362
Conversation
When we upgrade from LDK 0.0.123 or prior, we need to initialize `holder_commitment_point` with commitment point(s). In 1f7f3a3 we changed what we fetch on upgrade from both the current and next per-commitment-point (setting the value to `HolderCommitmentPoint::Available`) to only the current per-commitment-point (setting the value to `HolderCommitmentPoint::PendingNext`).

In `commitment_signed` handling, we expect the next per-commitment-point to be available (allowing us to `advance()` the `holder_commitment_point`), as it was included in the `revoke_and_ack` we most recently sent to our peer, so must've been available at that time.

Sadly, these two interact negatively with each other: on upgrade, assuming the channel is at a steady state and there are no pending updates, we'll not make the next per-commitment-point available, but if we receive a `commitment_signed` from our peer we'll assume it is. As a result, in debug mode we'll hit an assertion failure, and in production mode we'll force-close the channel.

Instead, here, we fix the upgrade logic to always upgrade directly to `HolderCommitmentPoint::Available`, making the next per-commitment-point available immediately. We also attempt to resolve the next per-commitment-point in `get_channel_reestablish`, allowing any channels which were upgraded to LDK 0.0.124 and are in this broken state to avoid the force-closure, as long as they don't receive a `commitment_signed` in the interim.
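For readers following along, here is a minimal Rust sketch of the upgrade-path idea. The `DummySigner` type, the fixed-size point arrays, and the exact variant fields are hypothetical stand-ins for illustration, not LDK's actual API.

```rust
// Sketch only: why upgrading straight to `Available` avoids the force-close.
#[allow(dead_code)]
#[derive(Debug)]
enum HolderCommitmentPoint {
    /// Only the current point is known; the next one still has to be fetched.
    PendingNext { transaction_number: u64, current: [u8; 33] },
    /// Both points are known, so `commitment_signed` handling can `advance()`.
    Available { transaction_number: u64, current: [u8; 33], next: [u8; 33] },
}

/// Hypothetical stand-in for the signer that derives per-commitment points.
struct DummySigner;
impl DummySigner {
    fn per_commitment_point(&self, idx: u64) -> [u8; 33] {
        // Real code derives this from the channel's secret material.
        let mut p = [0u8; 33];
        p[0] = (idx & 0xff) as u8;
        p
    }
}

/// On upgrade from a 0.0.123-or-earlier serialization, fetch *both* the current
/// and the next point so we land directly in `Available`. (Commitment
/// transaction numbers count down, so "next" is `transaction_number - 1`.)
fn upgrade_from_legacy(signer: &DummySigner, transaction_number: u64) -> HolderCommitmentPoint {
    HolderCommitmentPoint::Available {
        transaction_number,
        current: signer.per_commitment_point(transaction_number),
        next: signer.per_commitment_point(transaction_number - 1),
    }
}

fn main() {
    let point = upgrade_from_legacy(&DummySigner, 281474976710655);
    println!("{point:?}");
}
```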
If we manage to pull a `node_counter` from `removed_node_counters` for reuse, `add_channel_between_nodes` would `unwrap_or` with the `next_node_counter`-incremented value. This visually looks right, except the argument to `unwrap_or` is always evaluated, causing us to always increment `next_node_counter` even if we don't use it. This will result in the `node_counter`s always growing any time we add a new node to our graph, leading to somewhat larger memory usage when routing and a debug assertion failure in `test_node_counter_consistency`. The fix is trivial; this is what `unwrap_or_else` is for.
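The underlying Rust behavior, independent of the graph code: `unwrap_or` evaluates its argument eagerly, while `unwrap_or_else` only runs its closure when the `Option` is `None`. A self-contained illustration (the counters here are stand-ins, not the actual `NetworkGraph` fields):

```rust
fn main() {
    // Buggy pattern: the argument to `unwrap_or` is an expression that is
    // evaluated *before* the call, so the increment runs unconditionally,
    // even when a removed counter is reused.
    let mut next_node_counter = 0u32;
    let mut removed_node_counters: Vec<u32> = vec![7];
    let _counter = removed_node_counters.pop().unwrap_or({
        let c = next_node_counter;
        next_node_counter += 1; // runs even though we reuse `7`
        c
    });
    assert_eq!(next_node_counter, 1); // incremented despite the reuse

    // Fixed pattern: `unwrap_or_else` only runs the closure when there is
    // nothing to reuse, so `next_node_counter` is left alone here.
    let mut next_node_counter2 = 0u32;
    let mut removed2: Vec<u32> = vec![7];
    let _counter2 = removed2.pop().unwrap_or_else(|| {
        let c = next_node_counter2;
        next_node_counter2 += 1;
        c
    });
    assert_eq!(next_node_counter2, 0); // untouched
}
```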
Previously, the `ChainListenerSet` `Listen` implementation wouldn't forward to the listeners' `block_connected` implementations outside of tests. This would result in the default implementation of `Listen::block_connected` being used and the listeners' implementations never being called.
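A generic sketch of the pitfall, with illustrative types and signatures rather than `lightning-block-sync`'s real `Listen` trait: if a wrapper type only overrides one trait method, callers of the other method get the trait's default body and the wrapped listeners' own overrides are silently skipped.

```rust
trait Listen {
    fn filtered_block_connected(&self, height: u32);
    // Default implementation that callers get unless overridden.
    fn block_connected(&self, height: u32) {
        self.filtered_block_connected(height);
    }
}

struct Inner;
impl Listen for Inner {
    fn filtered_block_connected(&self, _height: u32) {}
    // The inner listener has its own behavior here...
    fn block_connected(&self, height: u32) {
        println!("inner saw block at height {height}");
        self.filtered_block_connected(height);
    }
}

struct ListenerSet<L: Listen>(Vec<L>);
impl<L: Listen> Listen for ListenerSet<L> {
    fn filtered_block_connected(&self, height: u32) {
        for l in &self.0 {
            l.filtered_block_connected(height);
        }
    }
    // ...so the set must forward `block_connected` explicitly; without this
    // method the trait default above is used and `Inner::block_connected`
    // never runs.
    fn block_connected(&self, height: u32) {
        for l in &self.0 {
            l.block_connected(height);
        }
    }
}

fn main() {
    let set = ListenerSet(vec![Inner, Inner]);
    set.block_connected(800_000); // prints twice with the forwarding impl
}
```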
When we're calculating the success probability for min-/max-bucket pairs and are looking at the 0th min-bucket, we only look at the highest max-bucket to decide the success probability. We ignore max-buckets which have a value below `BUCKET_FIXED_POINT_ONE` so that we only consider values which aren't substantially decayed. However, if all of our data is substantially decayed, this filter causes us to conclude that the highest max-bucket is bucket zero even though we really should then be looking at any bucket. We make this change here, looking at the highest non-zero max-bucket if no max-buckets have a value above `BUCKET_FIXED_POINT_ONE`.
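Roughly, the selection rule reads as below. This is a sketch with an illustrative `BUCKET_FIXED_POINT_ONE` value and a free-standing helper, not the scorer's actual code.

```rust
// Illustrative threshold below which a bucket is treated as substantially decayed.
const BUCKET_FIXED_POINT_ONE: u16 = 32;

/// Returns the index of the highest max-bucket to use as the upper liquidity
/// bound when examining the 0th min-bucket.
fn highest_relevant_max_bucket(max_buckets: &[u16; 32]) -> usize {
    // Prefer the highest bucket that is not substantially decayed...
    if let Some(idx) = max_buckets.iter().rposition(|&v| v >= BUCKET_FIXED_POINT_ONE) {
        return idx;
    }
    // ...but if *all* data has decayed below the threshold, fall back to the
    // highest non-zero bucket rather than defaulting to bucket zero.
    max_buckets.iter().rposition(|&v| v != 0).unwrap_or(0)
}

fn main() {
    let mut decayed = [0u16; 32];
    decayed[20] = 3; // well below BUCKET_FIXED_POINT_ONE
    assert_eq!(highest_relevant_max_bucket(&decayed), 20);
}
```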
Replaces #3361, but against a new branch for this release.
Force-pushed from b2c0d1a to 7a948c3
Force-pushed from 7a948c3 to 2028c78
```diff
@@ -11,6 +11,7 @@ edition = "2021"

 [package.metadata.docs.rs]
 all-features = true
+features = ["lightning/std"]
```
Mhh, does this actually work or do we also need to set it on RGS and the BP? Do we know of any way to validate this without running docs.rs in a container etc?
Uploaded the BP depending on 124 as https://docs.rs/crate/docs-rs-test-bluematt-delete-this/latest
Seems broken?
Yep, looks like it works.
"The requested version does not exist"?
Oh I guess when I yanked the crate it deleted it...it did work, though.
Mhh, previously I got another page that still looked broken (links not working etc). In any case, fingers crossed it will look unbroken on docs.rs.
Codecov Report
Attention: Patch coverage is

Additional details and impacted files

```
@@            Coverage Diff             @@
##           0.0.125    #3362      +/-  ##
===========================================
- Coverage    89.82%   89.63%    -0.20%
===========================================
  Files          126      126
  Lines       103854   102184     -1670
  Branches    103854   102184     -1670
===========================================
- Hits         93288    91590     -1698
- Misses        7842     7867       +25
- Partials      2724     2727        +3
```

☔ View full report in Codecov by Sentry.
CI is failing due to an MSRV violation by
I think it's fine? We don't plan on using this branch, so some failing CI (that is known to not be a bug) is fine.
Several important bugfixes and an 0.0.125 release. The most important bugfix follows, but see individual commits for more info.
When we upgrade from LDK 0.0.123 or prior, we need to initialize `holder_commitment_point` with commitment point(s). In 1f7f3a3 we changed what we fetch on upgrade from both the current and next per-commitment-point (setting the value to `HolderCommitmentPoint::Available`) to only the current per-commitment-point (setting the value to `HolderCommitmentPoint::PendingNext`).

In `commitment_signed` handling, we expect the next per-commitment-point to be available (allowing us to `advance()` the `holder_commitment_point`), as it was included in the `revoke_and_ack` we most recently sent to our peer, so must've been available at that time.

Sadly, these two interact negatively with each other: on upgrade, assuming the channel is at a steady state and there are no pending updates, we'll not make the next per-commitment-point available, but if we receive a `commitment_signed` from our peer we'll assume it is. As a result, in debug mode we'll hit an assertion failure, and in production mode we'll force-close the channel.

Instead, here, we fix the upgrade logic to always upgrade directly to `HolderCommitmentPoint::Available`, making the next per-commitment-point available immediately. We also attempt to resolve the next per-commitment-point in `get_channel_reestablish`, allowing any channels which were upgraded to LDK 0.0.124 and are in this broken state to avoid the force-closure, as long as they don't receive a `commitment_signed` in the interim.