Create SpendableOutputs events no matter the chain::Confirm order #970
Conversation
Force-pushed from 7206c03 to 8216af0
Codecov Report

@@            Coverage Diff             @@
##             main     #970      +/-   ##
==========================================
+ Coverage   90.63%   90.93%   +0.30%
==========================================
  Files          60       60
  Lines       30538    31896    +1358
==========================================
+ Hits        27678    29005    +1327
- Misses       2860     2891      +31

Continue to review full report at Codecov.
	) -> Vec<TransactionOutputs>
	where
		B::Target: BroadcasterInterface,
		F::Target: FeeEstimator,
		L::Target: Logger,
	{
-		let should_broadcast = self.would_broadcast_at_height(height, &logger);
+		let should_broadcast = self.would_broadcast_at_height(self.best_block.height(), logger);
Might make sense to update our interfaces (here and elsewhere) to take &BestBlock
where that is expected to avoid accidentally passing a confirmation height. Or even directly use it rather than passing it at all where that is possible.
Hmm, suppose we could; I thought renaming height to conf_height was pretty clear (though I could spell it out). For OnchainTx I could tweak it if you think it makes more sense (given it doesn't have its own best_block field)?
Seems like there are a few scenarios:
- it doesn't need to be passed at all because the method already can access it through self, so why bother passing it? (e.g., would_broadcast_at_height can simply be named should_broadcast and doesn't need to take a height, nor does get_broadcasted_holder_claims)
- only the current height is needed, so passing &BestBlock makes it explicit and prevents passing the wrong height (e.g., has_reached_confirmation_threshold)
- both heights are needed in OnchainTx (maybe leave as is since there is a calculation involved at the call site)
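The &BestBlock suggestion above can be sketched roughly like this (hypothetical, simplified types; not the actual rust-lightning signatures), where a newtype for the chain tip makes it hard to accidentally hand a confirmation height where the best-chain height is expected:

```rust
// Hypothetical sketch, not the real rust-lightning API: a newtype wrapping
// the best-chain height, so callers cannot confuse it with a conf height.
#[derive(Clone, Copy)]
pub struct BestBlock {
    height: u32,
}

impl BestBlock {
    pub fn new(height: u32) -> Self {
        BestBlock { height }
    }
    pub fn height(&self) -> u32 {
        self.height
    }
}

/// True once a transaction confirmed at `conf_height` has at least
/// `threshold` confirmations relative to the best chain tip.
pub fn has_reached_confirmation_threshold(
    best_block: &BestBlock,
    conf_height: u32,
    threshold: u32,
) -> bool {
    // A tx confirmed at height h with tip t has t - h + 1 confirmations.
    best_block.height() + 1 >= conf_height + threshold
}

fn main() {
    let tip = BestBlock::new(105);
    // Confirmed at 100, tip at 105 => 6 confirmations.
    println!("{}", has_reached_confirmation_threshold(&tip, 100, 6)); // prints "true"
}
```

The call site then reads has_reached_confirmation_threshold(&self.best_block, conf_height, ANTI_REORG_DELAY)-style, making the distinction explicit at the type level.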
I believe get_broadcasted_holder_claims needs the confirmation height, or at least appears to, given it passes it to PackageTx::build_package as height_original, though its API seems like a good opportunity for cleanup. Did the rest.
Hmmm... I see, but you're calling get_broadcasted_holder_claims with the best block height in block_confirmed, right?
Right, in that case it's being called with reference to a new commitment transaction being broadcast, not one which is confirmed (new_holder_commitment_transaction), so I think that's correct. Somewhat confusingly, it's also called with a 0-height in provide_payment_preimage in a similar context, which is "wrong", but AFAICT harmless.
Ultimately it doesn't really matter much for this case, I think, because it's passed through as height_original in PackageTemplate, which is only used to figure out when we can merge packages (which we can never do for the HolderHTLCOutputs that are used here), and when we can broadcast (for which we just use the CLTV, which is fine).
I'll add some comments and fix the 0 value version.
Force-pushed from 8216af0 to 642f5af
	/// Update state for new block(s)/transaction(s) confirmed. Note that the caller *must* update
	/// `self.best_block` before calling, and set `conf_height` to the height at which any new
Is this right? It isn't the case when called by transactions_confirmed as of now.
I don't think it matters, but it's probably simpler/nicer for users to actually set it there, so I did that.
Is this behavior required to fix the bug? Adding it there is equivalent to calling best_block_updated after each transactions_confirmed. So doesn't this mean calling best_block_updated is no longer required when confirming transactions at the same block? Seems it is less simple for users because it adds a conditional to Confirm's contract, albeit optional.
Is this behavior required to fix the bug?

No.

Adding it there is equivalent to calling best_block_updated after each transactions_confirmed. So doesn't this mean calling best_block_updated is no longer required when confirming transactions at the same block?

Yes, I believe so, at least in ChannelMonitor; dunno about ChannelManager.

Seems it is less simple for users because it adds a conditional to Confirm's contract, albeit optional.

I don't see why? It makes a call optional in our current implementation, yes, but we don't have to tell users that.
I don't see why? It makes a call optional in our current implementation, yes, but we don't have to tell users that.
It seems like surprising behavior. I called X expecting Y but instead Y + Z happened. I guess I don't see why not just drop the comment rather than altering the behavior if it isn't required to fix the bug. At the very least it should be a separate commit stating the reason for the change, IMHO.
It definitely feels "more correct" and should happen, IMO. I moved it to a new commit, let me know what you think of the commit docs and if it sufficiently justifies it.
Force-pushed from eaff313 to 9fcc9d2
Updated to address feedback.
Force-pushed from 9fcc9d2 to 540bcde
A lot of interactions to think about after a first parse.
@@ -2008,6 +2008,10 @@ impl<Signer: Sign> ChannelMonitorImpl<Signer> {
 			self.is_paying_spendable_output(&tx, height, &logger);
 		}

+		if height > self.best_block.height() {
Maybe you can add the commit message here: "This avoids complexity in block_confirmed by never having a tracked height set which is higher than our current best confirmed chain, potentially avoiding some bugs in the rather-complicated code"?
I put it in the doc comment of block_confirmed instead.
		if node_txn.len() == 1 {
			check_spends!(node_txn[0], chan_2.3);
		} else {
			assert_eq!(node_txn.len(), 0);
Honestly this test path, which covers TransactionsFirstSkippingBlocks, the connection style updated by this commit, doesn't make sense to me.
From what I remember, I don't understand why updating best_block in transactions_confirmed instead of best_block_updated doesn't generate a broadcast of B's commitment transaction; they both should fulfill should_broadcast.
I need to look deeper on this...
In the old case, a few blocks before we get here we remove the funding tx output from OnchainTx::claimable_outpoints (which was originally tracked via commitment_tx[0]'s confirmation). Because it's not there, we can re-add it and re-broadcast our own commitment transaction. In the new case, we skip ahead a few blocks and just call OnchainTx::update_claims_view once, which means only at the very end do we remove the funding tx output, after we've already given up broadcasting the commitment tx.
Because it's not there we can re-add it and re-broadcast our own commitment transaction.

This is the weirdness I didn't understand, and AFAICT it sounds like an issue with our already-existent update_claim_view logic, where we might have a "height" collision between maturity of OnchainEvent::Claim for commitment_tx[0] and hitting a should_broadcast()==true generating a new request for the same commitment.
Adding the following diff
diff --git a/lightning/src/chain/onchaintx.rs b/lightning/src/chain/onchaintx.rs
index 7e616ef2..4e1c1c5d 100644
--- a/lightning/src/chain/onchaintx.rs
+++ b/lightning/src/chain/onchaintx.rs
@@ -536,6 +536,7 @@ impl<ChannelSigner: Sign> OnchainTxHandler<ChannelSigner> {
// been aggregated in a single tx and claimed so atomically
if let Some(request) = self.pending_claim_requests.remove(&claim_request) {
for outpoint in request.outpoints() {
+ log_trace!(logger, "Removing claiming outpoint for {}", outpoint.txid);
self.claimable_outpoints.remove(&outpoint);
}
}
And obtaining the following log, which confirms that we might remove a tracked output only to re-add it just after as a new request? (see 24647a3792b08dbcd7c9516ae5118fce200b2a11c49cbc496b48da466b125e3e)
TRACE node 1 [lightning::chain::channelmonitor : lightning/src/chain/channelmonitor.rs, 1925] New best block 88fb36eb50c852280f8b85410462252dff993bbf50a3e752dec3f8a54b978c41 at height 206
INFO node 1 [lightning::chain::channelmonitor : lightning/src/chain/channelmonitor.rs, 2250] Force-closing channel due to outbound HTLC timeout, HTLC expiry is 81
TRACE node 1 [lightning::chain::channelmonitor : lightning/src/chain/channelmonitor.rs, 2093] Descriptor DynamicOutputP2WPKH 012b0c6223264225513eb7fd1661b3e04e6076f8c0868b48565ba85d9c3c24c4:2 marked for spending has got enough confirmations to be passed upstream
TRACE node 1 [lightning::chain::channelmonitor : lightning/src/chain/channelmonitor.rs, 2085] HTLC 75877bb41d393b5fb8455ce60ecd8dda001d06316496b14dfa7f895656eeca4a failure update has got enough confirmations to be passed upstream
TRACE node 1 [lightning::chain::channelmonitor : lightning/src/chain/channelmonitor.rs, 2093] Descriptor StaticOutput 4ee8b85ba72a1a1beb201438e8ec8903ce0acf2e48161e5e61c9e331d048fa70:0 marked for spending has got enough confirmations to be passed upstream
TRACE node 1 [lightning::chain::onchaintx : lightning/src/chain/onchaintx.rs, 385] Updating claims view at height 206 with 0 matched transactions and 2 claim requests
TRACE node 1 [lightning::chain::onchaintx : lightning/src/chain/onchaintx.rs, 394] Ignoring second claim for outpoint 24647a3792b08dbcd7c9516ae5118fce200b2a11c49cbc496b48da466b125e3e:0, already registered its claiming request
TRACE node 1 [lightning::chain::onchaintx : lightning/src/chain/onchaintx.rs, 394] Ignoring second claim for outpoint b00e0a6ecc3ad8958be2552cf1eefc75232be2c68ec4668e14f6c784d5d050e3:0, already registered its claiming request
TRACE node 1 [lightning::chain::onchaintx : lightning/src/chain/onchaintx.rs, 539] Removing claiming outpoint for 24647a3792b08dbcd7c9516ae5118fce200b2a11c49cbc496b48da466b125e3e
TRACE node 1 [lightning::chain::onchaintx : lightning/src/chain/onchaintx.rs, 539] Removing claiming outpoint for 012b0c6223264225513eb7fd1661b3e04e6076f8c0868b48565ba85d9c3c24c4
TRACE node 1 [lightning::chain::onchaintx : lightning/src/chain/onchaintx.rs, 563] Bumping 0 candidates
TRACE node 1 [lightning::chain::channelmonitor : lightning/src/chain/channelmonitor.rs, 1925] New best block 88fb36eb50c852280f8b85410462252dff993bbf50a3e752dec3f8a54b978c41 at height 206
TRACE node 1 [lightning::chain::onchaintx : lightning/src/chain/onchaintx.rs, 385] Updating claims view at height 206 with 0 matched transactions and 0 claim requests
TRACE node 1 [lightning::chain::onchaintx : lightning/src/chain/onchaintx.rs, 563] Bumping 0 candidates
TRACE node 1 [lightning::chain::channelmonitor : lightning/src/chain/channelmonitor.rs, 1963] Block 88fb36eb50c852280f8b85410462252dff993bbf50a3e752dec3f8a54b978c41 at height 206 connected with 0 txn matched
INFO node 1 [lightning::chain::channelmonitor : lightning/src/chain/channelmonitor.rs, 2250] Force-closing channel due to outbound HTLC timeout, HTLC expiry is 81
TRACE node 1 [lightning::chain::onchaintx : lightning/src/chain/onchaintx.rs, 385] Updating claims view at height 206 with 0 matched transactions and 2 claim requests
TRACE node 1 [lightning::chain::onchaintx : lightning/src/chain/onchaintx.rs, 413] Test if outpoint can be aggregated with expiration 206 against 218
TRACE node 1 [lightning::chain::onchaintx : lightning/src/chain/onchaintx.rs, 394] Ignoring second claim for outpoint b00e0a6ecc3ad8958be2552cf1eefc75232be2c68ec4668e14f6c784d5d050e3:0, already registered its claiming request
TRACE node 1 [lightning::chain::package : lightning/src/chain/package.rs, 615] Adding claiming input for outpoint 24647a3792b08dbcd7c9516ae5118fce200b2a11c49cbc496b48da466b125e3e:0
TRACE node 1 [lightning::chain::package : lightning/src/chain/package.rs, 616] Finalized transaction b00e0a6ecc3ad8958be2552cf1eefc75232be2c68ec4668e14f6c784d5d050e3 ready to broadcast
TRACE node 1 [lightning::chain::onchaintx : lightning/src/chain/onchaintx.rs, 444] Registering claiming request for 24647a3792b08dbcd7c9516ae5118fce200b2a11c49cbc496b48da466b125e3e:0
TRACE node 1 [lightning::chain::onchaintx : lightning/src/chain/onchaintx.rs, 448] Broadcasting onchain commitment tx with txid b00e0a6ecc3ad8958be2552cf1eefc75232be2c68ec4668e14f6c784d5d050e3
This is the weirdness I didn't understand and AFAICT it sounds an issue with our already-existent update_claim_view logic where we might have a "height" collision between maturity of OnchainEvent::Claim for commitment_tx[0] and hitting a should_broadcast()==true generating a new request for the same commitment.
It's not the height, it's the previous outpoint itself. The claimable_outpoints map is indexed by previous outpoint, and in this case we've already registered that we're tracking a spend of the same outpoint, so we don't add a new one. This seems correct to me, if race-y.
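As a rough illustration of the dedup behavior described here (hypothetical, simplified types; not the actual OnchainTxHandler code), indexing requests by the outpoint they spend means a second claim for the same outpoint becomes a no-op:

```rust
use std::collections::HashMap;

// Hypothetical sketch mirroring the dedup role of `claimable_outpoints`
// described above: track claims keyed by the previous outpoint being spent.
#[derive(Default)]
struct ClaimTracker {
    // Maps (txid, vout) of the spent outpoint -> id of the claim request.
    claimable_outpoints: HashMap<(String, u32), u64>,
}

impl ClaimTracker {
    /// Returns false (and ignores the request) if we already track a claim
    /// spending this outpoint -- "Ignoring second claim for outpoint ...".
    fn register_claim(&mut self, txid: &str, vout: u32, request_id: u64) -> bool {
        let key = (txid.to_string(), vout);
        if self.claimable_outpoints.contains_key(&key) {
            return false;
        }
        self.claimable_outpoints.insert(key, request_id);
        true
    }
}

fn main() {
    let mut tracker = ClaimTracker::default();
    assert!(tracker.register_claim("24647a37", 0, 1));
    // A second request spending the same outpoint is ignored.
    assert!(!tracker.register_claim("24647a37", 0, 2));
    println!("tracked claims: {}", tracker.claimable_outpoints.len()); // prints "tracked claims: 1"
}
```

The race discussed in the thread arises when the first entry is removed (claim matured) in one pass and a fresh request for the same outpoint is registered in a later pass at the same height.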
Hmmmm, I still think it's a weird height-race because, note in the log, channelmonitor is processing height 206 twice? The first time we remove the commitment's outpoint from OnchainTxHandler.claimable_outpoints, then at the second processing we add a new request for the already-confirmed commitment transaction in OnchainTxHandler.claimable_outpoints, as the redundant-claim replay protection at L393 doesn't play out?
Right, I totally agree that is a (very minor) bug, but this PR just removes that bug depending on the connection style, so I don't think it needs to get fixed here :).
@@ -2051,7 +2055,7 @@ impl<Signer: Sign> ChannelMonitorImpl<Signer> {
 			self.onchain_events_awaiting_threshold_conf.drain(..).collect::<Vec<_>>();
 		let mut onchain_events_reaching_threshold_conf = Vec::new();
 		for entry in onchain_events_awaiting_threshold_conf {
-			if entry.has_reached_confirmation_threshold(height) {
+			if entry.has_reached_confirmation_threshold(self.best_block.height()) {
I believe this is the updated code path solving the issue described in the commit. Maybe add the PR number and a one-liner mentioning we had a bug here.
There were two issues; this is one. TBH I don't recall exactly which line in update_claims_view solved the other one. In general we haven't historically updated our code with comments where there was a bug, at least as long as we have a test which will fail if we ever regress (which we do here). It feels weird to start now.
You're right, I don't see mentions of former bugs over a quick browse of channel.rs/channelmanager.rs, though for QA purposes what do you think of tracking the PR in do_test_htlc_on_chain_timeout's updated comment about connect style scenarios?
Hmm, I generally just git blame and then look at the commit(s)/check GitHub to see what PR it came from. We could add PR #s too, but I'm a bit hesitant to have the code explicitly reference the PR it was in. But if you feel strongly I can add it.
Well, I'm a huge fan of good post-mortem tracking, kind of in the long-term spirit of #573 :) Making a mental note to add it to the next agenda meeting as we all have differing opinions there.
@@ -552,7 +555,7 @@ impl<ChannelSigner: Sign> OnchainTxHandler<ChannelSigner> {
 		// Check if any pending claim request must be rescheduled
 		for (first_claim_txid, ref request) in self.pending_claim_requests.iter() {
 			if let Some(h) = request.timer() {
-				if height >= h {
+				if cur_height >= h {
Is this change opening up the possibility of a duplicated broadcast?
If you first call best_block_updated and bump best_block to N. Then you call transaction_confirmed and register a request at block N - 3, generating a first claim transaction. Then you hit the timer returning T >= H, generating a second claim transaction.
I don't think so, because we call get_height_timer with cur_height in generate_claim_tx, though I would be glad to have this potential interaction more highlighted.
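The reason the duplicate broadcast is avoided can be sketched with a hypothetical helper loosely in the spirit of get_height_timer (not the real rust-lightning implementation): because the next bump is computed from the current best height rather than the registration height, a request registered "in the past" cannot immediately fire again at the same height.

```rust
// Hypothetical sketch, not the actual rust-lightning logic: schedule the
// next fee-bump as a function of the current best height and the claim's
// absolute deadline, loosely in the spirit of `get_height_timer`.
fn next_bump_height(cur_height: u32, deadline: u32) -> u32 {
    let remaining = deadline.saturating_sub(cur_height);
    // Bump roughly halfway to the deadline, but always at least one block
    // out, so rescheduling at the current tip never triggers an immediate
    // duplicate broadcast at the same height.
    cur_height + std::cmp::max(remaining / 2, 1)
}

fn main() {
    // Deadline 10 blocks away: bump in 5 blocks.
    println!("{}", next_bump_height(100, 110)); // prints "105"
    // Deadline already passed: still pushed at least one block out.
    println!("{}", next_bump_height(100, 95)); // prints "101"
}
```

The halving schedule here is an illustrative assumption; the point is only that the timer is anchored to cur_height, not to the height at which the request was registered.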
Hmmmm, should we swap the ordering of things in update_claims_view and first bump RBF transactions, only processing the requests after that? I'm a bit hesitant to, given it will probably mean a ton of test changes.
See above comment about the height collision; I would favor this ordering swap to avoid re-broadcasting. I believe, post-anchor, that kind of defaulting internal API issue might force us to reserve a fee-bumping reserve for nothing?
Though it's not related to this PR, and we can keep thinking about it.
I'm a bit confused about that comment, but I'd think the solution is to track outputs which were on chain and spent and never re-spend them, instead of re-spending them after ANTI_REORG_DELAY.
Yeah, that would work too, to blacklist confirmed utxos, so I think it's more an issue of how duplicate block processing is interfering with update_claim_view()'s list of calls.
Oh, hmm, I guess I just misread your above comment. Let me try the ordering swap in a new PR? I don't really want to hold this/0.0.99 up to redo a ton of tests.
@@ -1357,10 +1357,13 @@ impl<Signer: Sign> ChannelMonitorImpl<Signer> {
 		// *we* sign a holder commitment transaction, not when e.g. a watchtower broadcasts one of our
 		// holder commitment transactions.
 		if self.broadcasted_holder_revokable_script.is_some() {
-			let (claim_reqs, _) = self.get_broadcasted_holder_claims(&self.current_holder_commitment_tx, 0);
+			// Assume that the broadcasted commitment transaction confirmed in the current best
+			// block. Even if not, its a reasonable metric for the bump criteria on the HTLC
I think this comment is a bit confusing; the bumping timer is only a function of the current height and the expiration timelock (see get_height_timer in package.rs), not of the parent transaction conf. So leveraging the best known height as a comparison point to refresh timers is the best we can do?
get_broadcasted_holder_claims sets the PackageTemplate::soonest_conf_deadline to the height given here, so it is the "expiration timelock".
Yes, in this case the parent conf height is equal to PackageTemplate::soonest_conf_deadline, which is equal to the "expiration timelock", but my point is that rather than relying on the parent conf height, which has the side-effect of artificially increasing the fee-bump frequency and thus bleeding feerate a bit, we should rely on the per-holder-claims timelocks themselves to fulfill soonest_conf_deadline.
Though I don't think this comment needs further action for now and it sounds related to the one just after.
I'm still a bit confused here, but I'll leave it for a followup.
@@ -1747,7 +1750,7 @@ impl<Signer: Sign> ChannelMonitorImpl<Signer> {
 			};
 			HolderHTLCOutput::build_accepted(payment_preimage, htlc.amount_msat)
 		};
-		let htlc_package = PackageTemplate::build_package(holder_tx.txid, transaction_output_index, PackageSolvingData::HolderHTLCOutput(htlc_output), height, false, height);
+		let htlc_package = PackageTemplate::build_package(holder_tx.txid, transaction_output_index, PackageSolvingData::HolderHTLCOutput(htlc_output), conf_height, false, conf_height);
I think this is buggy; why do we call build_package and initialize soonest_conf_deadline (i.e. the timelock expiration height) to the confirmation or current height? At least it's already what we do on master (c05347f), so this PR doesn't seem to change anything...
Yea, it's "wrong", but only in the sense that there is no "right" answer. We can't RBF-bump the commitment transactions anyway, so it's not broken, I don't think, and indeed it's the same on master. I'd ideally prefer to clean it up in a separate PR, though I agree it's a really poor API.
Well, IMHO the change is as simple as this:
diff --git a/lightning/src/chain/channelmonitor.rs b/lightning/src/chain/channelmonitor.rs
index b7f87883..9e6e64e4 100644
--- a/lightning/src/chain/channelmonitor.rs
+++ b/lightning/src/chain/channelmonitor.rs
@@ -1746,7 +1746,7 @@ impl<Signer: Sign> ChannelMonitorImpl<Signer> {
};
HolderHTLCOutput::build_accepted(payment_preimage, htlc.amount_msat)
};
- let htlc_package = PackageTemplate::build_package(holder_tx.txid, transaction_output_index, PackageSolvingData::HolderHTLCOutput(htlc_output), conf_height, false, conf_height);
+ let htlc_package = PackageTemplate::build_package(holder_tx.txid, transaction_output_index, PackageSolvingData::HolderHTLCOutput(htlc_output), htlc.cltv_expiry, false, conf_height);
claim_requests.push(htlc_package);
}
}
Though it doesn't break any test on top of this branch, so we likely need a test to demonstrate the fee-bumping policy change. Tracked as #982 and self-assigned. Note, it might duplicate htlc.cltv_expiry usage both at the package-level and at the output-level, but I don't think that's a redundancy, as in the first case we would use it for fee-bumping scheduling and in the second to generate the correct witnessScript?
Ah! Duh, indeed, I was somehow confusing the htlc package for the commitment tx package. I updated it but agree we need more tests here.
@@ -2043,6 +2046,9 @@ impl<Signer: Sign> ChannelMonitorImpl<Signer> {
 		self.pending_monitor_events.push(MonitorEvent::CommitmentTxBroadcasted(self.funding_info.0));
 		let commitment_tx = self.onchain_tx_handler.get_fully_signed_holder_tx(&self.funding_redeemscript);
 		self.holder_tx_signed = true;
+		// Because we're building a new commitment transaction, we should construct the package
This is confusing too; we never "build" a new commitment transaction in the backend. At least, we might regenerate one to ensure sanity, but we never build-and-sign a commitment transaction from scratch?
Hmm, semantics, but OK, I tweaked it.
Force-pushed from 6a0dcd8 to 9abba15
Force-pushed from 9abba15 to 419e9fa
Rebased after merge of the logging dependency. No changes/squashes were done.
Largely looks good, though my comprehension of the onchaintx.rs changes is lacking compared to @ariard's.
@@ -2004,6 +2004,10 @@ impl<Signer: Sign> ChannelMonitorImpl<Signer> {
 			self.is_paying_spendable_output(&tx, height, &logger);
 		}

+		if height > self.best_block.height() {
+			self.best_block = BestBlock::new(block_hash, height);
Should we log_trace here as we do in block_connected and best_block_updated? Regardless of which methods, should we log only when best_block is updated or (particularly for the other two) unconditionally when the methods are called?
Taking a step back, it feels like some of this logging may be redundant across other monitors and may not provide much value. Maybe the logging should be placed in ChainMonitor instead? Then again, logging when the best block is actually updated could be valuable.
Yea, I think there's a lot of room for improvement here, let's do it in a followup?
SGTM
Force-pushed from 419e9fa to 8e1da16
Notwithstanding the already-existent issues mentioned in comments, which we should address later, this PR sounds good to me.
Force-pushed from e271d98 to 5e51f84
Code Review ACK 5e51f84 Thanks for taking the latest suggestions!
ACK 5e51f84
This further DRYs up some functional_test code and increases coverage.
There are no visible effects of this, but it seems like good code hygiene to not call a disconnect function in a different file if no disconnect happened.
No matter the context, if we're told about a block which is guaranteed by our API semantics to be on the best chain, and it has a higher height than our current understanding of the best chain, we should update our understanding. This avoids complexity in `block_confirmed` by never having a height set which is *higher* than our current best chain, potentially avoiding some bugs in the rather-complicated code. It also requires a minor test tweak as we in some cases now no longer broadcast a conflicting transaction after the original has reached the ANTI_REORG_DELAY.
We had a user who pointed out that we weren't creating `SpendableOutputs` events when we should have been after they called `ChannelMonitor::best_block_updated` with a block well after a CSV locktime and then called `ChannelMonitor::transactions_confirmed` with the transaction which we should have been spending (with a block height/hash a ways in the past). This was due to `ChannelMonitor::transactions_confirmed` only calling `ChannelMonitor::block_confirmed` with the height at which the transactions were confirmed, resulting in all checks being done against that, not the current height. Further, in the same scenario, we also would not fail-back an HTLC where the HTLC-Timeout transaction was confirmed more than ANTI_REORG_DELAY blocks ago. To address this, we use the best block height for confirmation threshold checks in `ChannelMonitor::block_confirmed` and pass both the confirmation and current heights through to `OnchainTx::update_claims_view`, using each as appropriate. Fixes lightningdevkit#962.
Force-pushed from 5e51f84 to 412cc9f
Squashed with no diff, will take after CI:
We had a user who pointed out that we weren't creating SpendableOutputs events when we should have been after they called ChannelMonitor::best_block_updated with a block well after a CSV locktime and then called ChannelMonitor::transactions_confirmed with the transaction which we should have been spending (with a block height/hash a ways in the past).

This was due to ChannelMonitor::transactions_confirmed only calling ChannelMonitor::block_confirmed with the height at which the transactions were confirmed, resulting in all checks being done against that, not the current height.

Further, in the same scenario, we also would not fail-back an HTLC where the HTLC-Timeout transaction was confirmed more than ANTI_REORG_DELAY blocks ago.

To address this, we use the best block height for confirmation threshold checks in ChannelMonitor::block_confirmed and pass both the confirmation and current heights through to OnchainTx::update_claims_view, using each as appropriate.

Note to reviewers: please carefully check the update_claims_view part of the changeset here, there may very well be some unintended double-use of the height which wasn't properly separated (though I don't see any immediately).

Fixes #962.