Add `HolderCommitmentPoint` struct to track commitment points #3086
Conversation
Codecov Report

Attention: Patch coverage is […]

```
@@            Coverage Diff             @@
##             main     #3086       +/-  ##
==========================================
+ Coverage   89.90%   91.83%    +1.92%
==========================================
  Files         117      119        +2
  Lines       97105   113897    +16792
  Branches    97105   113897    +16792
==========================================
+ Hits        87303   104596    +17293
+ Misses       7243     6976      -267
+ Partials     2559     2325      -234
```

View full report in Codecov by Sentry.
Force-pushed from 95894e4 to 83c1fa9.
lightning/src/ln/channel.rs (outdated):

```rust
// @@ -1119,6 +1119,81 @@ pub(crate) struct ShutdownResult {
	pub(crate) channel_funding_txo: Option<OutPoint>,
}

#[derive(Debug, Copy, Clone)]
enum HolderCommitmentPoint {
	Uninitialized { transaction_number: u64 },
```
Why even have this variant given we always immediately call `request_next`? ISTM we could drop the panics below if we just elided this and requested in `new`?
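For reference, a minimal sketch of the alternative being floated here (the signer is abstracted as a `get_point` closure standing in for `ChannelSigner::get_per_commitment_point`; this is not LDK's actual code): construct straight into `PendingNext` by requesting the first point inside `new`, so no `Uninitialized` variant or panics are needed.

```rust
use bitcoin::secp256k1::PublicKey;

// Commitment numbers count down from 2^48 - 1, mirroring channel.rs.
const INITIAL_COMMITMENT_NUMBER: u64 = (1 << 48) - 1;

enum HolderCommitmentPoint {
	PendingNext { transaction_number: u64, current: PublicKey },
	Available { transaction_number: u64, current: PublicKey, next: PublicKey },
}

impl HolderCommitmentPoint {
	// Request the first point up front, so the struct is never observed
	// without a current point; with a synchronous signer this cannot fail.
	fn new(get_point: impl Fn(u64) -> PublicKey) -> Self {
		HolderCommitmentPoint::PendingNext {
			transaction_number: INITIAL_COMMITMENT_NUMBER,
			current: get_point(INITIAL_COMMITMENT_NUMBER),
		}
	}
}
```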
#3109 should hopefully provide more context for this now
Right... related question, how are y'all handling the `ChannelSigner::pubkeys` call? Are you just blocking for the initial signer setup there, or are you doing something more clever to preload the pubkeys (that could also apply to the initial commitment point)?
Oh woops, thought i responded to this - we basically preload pubkeys upon the creation of a node for all channels, so what we use for that probably can't directly help with the first commitment point. We could make it so that for async signing you need to pass the initial per commitment point into `ChannelManager::create_channel` or `accept_inbound_channel` to avoid doing the `Uninitialized` variant - and on our side we'd just have to wait for the response there, which isn't a ton different from how we're waiting already.
Force-pushed from 83c1fa9 to 65b3898.
squashed because this is still in its early phases. the major changes were: […]

Since this is a pretty big change, my goal is to make this as easy to review as possible by splitting it up into separate PRs - some of this is a bit hard […]
I almost wonder if we shouldn't introduce the machinery here with nothing fallible, so that we don't need any extra […]
Force-pushed from e22f940 to 9bc514e.
Dropped the […]
LGTM, please squash the fixups and let's land this!
Force-pushed from 9bc514e to e947e84.
squashed!
Nothing blocking!
lightning/src/ln/channel.rs (outdated):

```rust
if self.context.signer_pending_funding {
	// TODO: set signer_pending_channel_ready
	log_debug!(logger, "Can't produce channel_ready: the signer is pending funding.");
	return None;
}
```
Looks like this check exists already about 60 lines up?
oops just moved the log + TODO up there
sike, just removed the one above - i realized in the next PR we should only set `signer_pending_channel_ready` after we've passed all the other checks
sike x2 - that causes a bug, presumably because it makes us do this check after modifying state
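A toy illustration of the ordering issue described here (the struct and fields are hypothetical stand-ins, not LDK's actual code): any guard that can make us return early has to run before state is mutated, otherwise the early return leaves the channel half-updated.

```rust
// Hypothetical, simplified stand-in for the channel_ready logic above.
struct Channel {
	signer_pending_funding: bool,
	channel_ready_sent: bool,
}

impl Channel {
	fn get_channel_ready(&mut self) -> Option<&'static str> {
		// Run the fallible check first...
		if self.signer_pending_funding {
			// ...so returning None here leaves no state modified.
			return None;
		}
		// Only mutate state once every check has passed.
		self.channel_ready_sent = true;
		Some("channel_ready")
	}
}
```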
lightning/src/ln/channel.rs (outdated):

```rust
PendingNext { transaction_number: u64, current: PublicKey },
Available { transaction_number: u64, current: PublicKey, next: PublicKey },
```
IMO these could use some docs at some point, but feel free to hold off if it would make more sense to add them in following PRs.
oh true, i'll add some interim docs and probably expand them more in an upcoming PR
done
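For intuition, a hedged sketch of the state machine these two variants encode, extending the simplified enum from the earlier sketch (method names follow the discussion above; commitment numbers count down, so the "next" point belongs to `transaction_number - 1`): `Available` advances by promoting `next` to `current`, and `PendingNext` becomes `Available` once the signer produces the following point.

```rust
impl HolderCommitmentPoint {
	// Advance to the next commitment: the cached `next` point becomes
	// `current`, and we now owe ourselves a new `next`.
	fn advance(&mut self) {
		if let HolderCommitmentPoint::Available { transaction_number, next, .. } = *self {
			*self = HolderCommitmentPoint::PendingNext {
				transaction_number: transaction_number - 1,
				current: next,
			};
		}
	}

	// Ask the signer for the following point; with a synchronous signer
	// this always succeeds and we return to `Available`.
	fn request_next(&mut self, get_point: impl Fn(u64) -> PublicKey) {
		if let HolderCommitmentPoint::PendingNext { transaction_number, current } = *self {
			let next = get_point(transaction_number - 1);
			*self = HolderCommitmentPoint::Available { transaction_number, current, next };
		}
	}
}
```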
Force-pushed from e947e84 to adb038a.
Force-pushed from 317426a to 74297f4.
Looks like tests are failing: […]
This includes when building `TxCreationKeys`, as well as for `open_channel` and `accept_channel` messages. Note: this is only for places where we are retrieving the current per commitment point, which excludes `channel_reestablish`.
Force-pushed from 74297f4 to cf545b4.
looks to be this, should be fixed now
```rust
/// Our current commitment point is ready, we've cached our next point,
/// and we are not pending a new one.
```
nit: are "we've cached our next point" and "we are not pending a new one" the same thing? If so, I think one of the clauses could be removed since it sounds like separate things atm
```rust
// @@ -5417,7 +5417,10 @@ impl<SP: Deref> Channel<SP> where
fn get_last_revoke_and_ack(&self) -> msgs::RevokeAndACK {
	let next_per_commitment_point = self.context.holder_signer.as_ref()
		.get_per_commitment_point(self.context.holder_commitment_point.transaction_number(), &self.context.secp_ctx);
	debug_assert!(self.context.holder_commitment_point.transaction_number() <= INITIAL_COMMITMENT_NUMBER + 2);
```
Trying to understand the `+ 2`. The BOLTs seem to read like our transaction number will never be > `INITIAL_COMMITMENT_NUMBER`, let me know what I'm missing here!
ah oops, this should actually be `- 2`, to assert we have always advanced our commitment point twice before we ever call here
Ah that makes sense lol
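To make the countdown arithmetic concrete, a small self-contained check (assuming the usual `2^48 - 1` starting value): after advancing twice, the current number is at most `INITIAL_COMMITMENT_NUMBER - 2`, whereas `<= INITIAL_COMMITMENT_NUMBER + 2` holds for every reachable value and asserts nothing.

```rust
const INITIAL_COMMITMENT_NUMBER: u64 = (1 << 48) - 1;

fn main() {
	let mut transaction_number = INITIAL_COMMITMENT_NUMBER;
	// Advance once for the initial commitment, and once more before we
	// could ever need to re-send a revoke_and_ack...
	transaction_number -= 1;
	transaction_number -= 1;
	// ...so this is the bound the assertion should enforce:
	assert!(transaction_number <= INITIAL_COMMITMENT_NUMBER - 2);
	// The original `<= INITIAL_COMMITMENT_NUMBER + 2` is trivially true
	// for any reachable value, so it asserted nothing.
}
```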
```rust
// @@ -5417,7 +5417,10 @@ impl<SP: Deref> Channel<SP> where
fn get_last_revoke_and_ack(&self) -> msgs::RevokeAndACK {
	let next_per_commitment_point = self.context.holder_signer.as_ref()
		.get_per_commitment_point(self.context.holder_commitment_point.transaction_number(), &self.context.secp_ctx);
	debug_assert!(self.context.holder_commitment_point.transaction_number() <= INITIAL_COMMITMENT_NUMBER + 2);
	// TODO: handle non-available case when get_per_commitment_point becomes async
```
For my understanding, is this referring to when we implement this TODO to add the new `HolderCommitmentPoint` variant (which I think means that `current_point()` will become `Option`al)?
yep! technically it should never be unavailable by the time we get here, but figured i'd leave a note to think about it again when things change
It seems like `PendingNext` would also work here, so the assertion below still doesn't totally make sense to me, but it probably gets clearer in the follow-ups.
Merged f2237a7 into lightningdevkit:main.
This is the first of several upcoming PRs to complete async signing. This adds the `HolderCommitmentPoint` struct to consolidate our logic for getting commitment points, which will make things easier when this operation returns a result type in an upcoming PR. This refactor helps prepare for async signing, but still assumes the signer is synchronous.

I left a bunch of TODOs that should get removed in upcoming PRs - let me know if they're too much.
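A sketch of the consolidation the description refers to, continuing the simplified enum from the earlier sketches (accessor names mirror those quoted in the review thread, but this is not the PR's exact code): call sites read the cached point through one accessor instead of querying the signer inline, so only this spot changes once `get_per_commitment_point` can defer, at which point `current_point()` would plausibly return an `Option`.

```rust
impl HolderCommitmentPoint {
	fn transaction_number(&self) -> u64 {
		match self {
			HolderCommitmentPoint::PendingNext { transaction_number, .. }
			| HolderCommitmentPoint::Available { transaction_number, .. } => *transaction_number,
		}
	}

	// Today this is infallible; once the signer can defer, this would
	// plausibly become Option<PublicKey> and callers handle the pending case.
	fn current_point(&self) -> PublicKey {
		match self {
			HolderCommitmentPoint::PendingNext { current, .. }
			| HolderCommitmentPoint::Available { current, .. } => *current,
		}
	}
}
```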