
Hide internals of ChainMonitor behind getter #1112

Merged: 7 commits merged into lightningdevkit:main from 2021-10-mon-refactors on Oct 14, 2021

Conversation

TheBlueMatt (Collaborator)

These are a few of the cleanups/pre-refactors from #1108 split out to make #1108 a bit smaller. They mostly move a few things around and change the API of ChainMonitor to no longer expose its internal datastructures to the world.

@TheBlueMatt added this to the 0.0.102 milestone on Oct 8, 2021
@TheBlueMatt force-pushed the 2021-10-mon-refactors branch 5 times, most recently from 36d0dc7 to 3d95770, on October 8, 2021 20:43
@TheBlueMatt (Collaborator, Author)

Heh, the test changes here turned up a bug in ChainMonitor, which is now fixed in a new commit.

@TheBlueMatt force-pushed the 2021-10-mon-refactors branch from 3d95770 to 81ccf92 on October 8, 2021 20:44
@jkczyz (Contributor) left a comment:

Will continue to review but wanted to get out some comments about the code moves.

/// transaction and losing money. This is a risk because previous channel states
/// are toxic, so it's important that whatever channel state is persisted is
/// kept up-to-date.
pub trait Persist<ChannelSigner: Sign> {
Contributor:

Do we expect a watchtower implementation to persist ChannelMonitors independently of a ChainMonitor and reuse a Persist implementation to do so? That could be an argument for keeping this outside of chainmonitor.rs.

Collaborator (Author):

Chatted about it offline, but I don't think so - I'd think they'd use chain::Watch directly and never think about Persister, especially after #1108.
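
For context, the chain::Watch trait such users would implement looked roughly like the following at the time. This is a hedged sketch; exact signatures may differ from the released API, so see lightning::chain for the authoritative definition.

/// Sketch of the chain::Watch shape around this PR (signatures approximate).
/// A watchtower would implement this directly instead of wrapping a
/// ChainMonitor + Persist pair.
pub trait Watch<ChannelSigner: Sign> {
	/// Start watching a channel, persisting its initial ChannelMonitor.
	fn watch_channel(&self, funding_txo: OutPoint, monitor: ChannelMonitor<ChannelSigner>)
		-> Result<(), ChannelMonitorUpdateErr>;
	/// Apply (and persist) an update to an already-watched channel.
	fn update_channel(&self, funding_txo: OutPoint, update: ChannelMonitorUpdate)
		-> Result<(), ChannelMonitorUpdateErr>;
	/// Drain pending events (e.g. on-chain resolutions) for the
	/// ChannelManager to process.
	fn release_pending_monitor_events(&self) -> Vec<MonitorEvent>;
}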

Comment on lines +178 to +180
/// An error enum representing a failure to persist a channel monitor update.
#[derive(Clone, Copy, Debug, PartialEq)]
pub enum ChannelMonitorUpdateErr {
Contributor:

Hmm... it's a little strange that this error relates to persisting ChannelMonitor updates but lives in a different location than both Persist and ChannelMonitor. Not sure if there is a compelling case for moving this and Persist. At the very least, I'd think this should live next to Persist, wherever that ends up. chain::Watch is just forwarding these from a Persist implementation.

Collaborator (Author):

Hmm, it's more subtle - chain::Watch is what users will use if they're not using a ChainMonitor, I think. The error being passed through from a Persist implementation only applies to those who use ChainMonitor.

Contributor:

Good point. We may want to rename ChannelMonitorUpdateErr later given it overloads the "update" terminology. That is, it is returned by watch_channel, which doesn't involve applying a ChannelMonitorUpdate.

Collaborator (Author):

Yea, that'd make sense, I think.

@TheBlueMatt force-pushed the 2021-10-mon-refactors branch from 81ccf92 to 8fae31a on October 8, 2021 21:32
codecov bot commented Oct 8, 2021

Codecov Report

Merging #1112 (da498d7) into main (da498d7) will not change coverage.
The diff coverage is n/a.

❗ Current head da498d7 differs from pull request most recent head 1464671. Consider uploading reports for the commit 1464671 to get more accurate results

@@           Coverage Diff           @@
##             main    #1112   +/-   ##
=======================================
  Coverage   90.60%   90.60%           
=======================================
  Files          66       66           
  Lines       34474    34474           
=======================================
  Hits        31235    31235           
  Misses       3239     3239           


@TheBlueMatt force-pushed the 2021-10-mon-refactors branch 2 times, most recently from 3c5aafd to 8bb8758, on October 8, 2021 22:08
Comment on lines +95 to +97
struct MonitorHolder<ChannelSigner: Sign> {
monitor: ChannelMonitor<ChannelSigner>,
}
Contributor:

Why is MonitorHolder needed?

Collaborator (Author):

Sorry, it's used in #1108 to add additional data per monitor in the same hashmap.

impl<ChannelSigner: Sign> Deref for LockedChannelMonitor<'_, ChannelSigner> {
	type Target = ChannelMonitor<ChannelSigner>;
	fn deref(&self) -> &ChannelMonitor<ChannelSigner> {
		&self.lock.get(&self.funding_txo).expect("Checked at construction").monitor
	}
}
Contributor:

Is it possible to hold a reference to the ChannelMonitor instead of doing a look-up each time a LockedChannelMonitor is accessed?

Collaborator (Author):

I'd love to, but I can't seem to find a way to do it without making the borrow checker sad - "just" including a reference doesn't work because it'd be a self-referential reference (which makes Rust very, very sad), and doing it with a cached reference doesn't seem to work either without dropping Deref and using a custom function with a different lifetime declaration (if that).
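
A minimal, self-contained sketch of the two options, using stand-in types rather than the actual rust-lightning definitions:

use std::collections::HashMap;
use std::sync::RwLockReadGuard;

struct Monitor; // stand-in for ChannelMonitor

// What we'd like: hold the read guard *and* a reference into it. This
// does not compile -- `monitor` would borrow from `lock` in the same
// struct, a self-referential borrow rustc cannot express:
//
// struct LockedMonitor<'a> {
//     lock: RwLockReadGuard<'a, HashMap<u64, Monitor>>,
//     monitor: &'a Monitor, // error: borrows from `lock` above
// }

// What the PR does instead: store the lookup key and repeat the map
// lookup on each access, borrowing from `&self` only at the deref site.
struct LockedMonitor<'a> {
	lock: RwLockReadGuard<'a, HashMap<u64, Monitor>>,
	key: u64,
}

impl std::ops::Deref for LockedMonitor<'_> {
	type Target = Monitor;
	fn deref(&self) -> &Monitor {
		// Presence of `key` was checked when this struct was constructed.
		self.lock.get(&self.key).expect("checked at construction")
	}
}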

@valentinewallace (Contributor) left a comment:

Seems to be 95% mechanical cascading changes/moves/test changes from my PoV. Let me know if there's anything specific to review, but this is looking pretty good!

@TheBlueMatt force-pushed the 2021-10-mon-refactors branch 2 times, most recently from cbc5e05 to c0d7887, on October 11, 2021 18:43
@TheBlueMatt (Collaborator, Author)

Note the check_commits failure here should resolve once squashed.

@valentinewallace (Contributor) left a comment:

Nice, seems like a strict improvement even without #1108! I'm ACK modulo these comments.

/// Gets the [`LockedChannelMonitor`] for a given funding outpoint, returning an `Err` if no
/// such [`ChannelMonitor`] is currently being monitored for.
///
/// Note that the result holds a mutex over our monitor state, and should not be held indefinitely.
Contributor:

Interesting -- are there ways in the bindings or in the native languages to mem::drop these objects? Might it be a limitation for future language bindings? 🤔

Collaborator (Author):

Yep! Even better, the Java mapping is "clever" enough to map things that start with "Lock" to something that acts exactly like a native lock in Java!

@TheBlueMatt force-pushed the 2021-10-mon-refactors branch from c0d7887 to 1a368a2 on October 12, 2021 02:26

/// Gets the current list of [`ChannelMonitor`]s being monitored for, by their funding
/// outpoint.
pub fn list_current_monitors(&self) -> Vec<OutPoint> {
Contributor:

How is this intended to be used? If monitors are added or removed after it is called, will that be a problem for the user?

Collaborator (Author):

I just kinda figured users may care about the list of channels being monitored, to fully replace the exposed HashMap we had before - I suppose users may want to walk their monitor list and re-persist them or something? Indeed, it's racy, but hopefully not too racy - at least users have a function called when a new channel is added (and we currently don't remove monitors, but we'd also call a persist function when we do). Would you prefer we return a LockedIterator of some form?

Contributor:

I don't have a strong preference, though the LockedIterator would indeed give a more consistent API.

Collaborator (Author):

Hmm, yea, slightly, though it's a bit awkward to use - we could deref as an iterator, but then it's all Rust-only; or we could deref as a Vec, but then it's hard to make sure the user doesn't drop the lock before using the Vec, putting us back where we started. On the flip side, I think users may be fine with it unlocked. E.g., if you're going to re-persist the world, you first set an "ok, also sending to host X" bit; then, when a new monitor comes in, it goes to host X even if that happens while you're walking the list of old monitors, so there wouldn't be a race. See the sketch below.
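
A sketch of that flag-then-walk pattern. All names here (Backup, on_persist, repersist_all, ChainMonitorHandle) are hypothetical stand-ins for illustration, not LDK API; only list_current_monitors mirrors the getter this PR adds.

use std::sync::atomic::{AtomicBool, Ordering};

// Hypothetical stand-ins so the sketch is self-contained.
#[derive(Clone, Copy)]
struct OutPoint;
struct ChainMonitorHandle;
impl ChainMonitorHandle {
	fn list_current_monitors(&self) -> Vec<OutPoint> { Vec::new() }
}

struct Backup {
	also_send_to_host_x: AtomicBool,
}

impl Backup {
	// Called from the Persist implementation whenever any monitor is
	// persisted, including monitors added while repersist_all runs.
	fn on_persist(&self, _funding_txo: OutPoint) {
		if self.also_send_to_host_x.load(Ordering::Acquire) {
			// ...also ship this monitor to host X...
		}
	}

	// "Re-persist the world": set the flag *before* snapshotting the
	// (unlocked) monitor list. A monitor added mid-walk may be missed
	// by the walk, but on_persist already forwards it, so no race.
	fn repersist_all(&self, chain_monitor: &ChainMonitorHandle) {
		self.also_send_to_host_x.store(true, Ordering::Release);
		for funding_txo in chain_monitor.list_current_monitors() {
			self.on_persist(funding_txo);
		}
	}
}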

/// Gets the [`LockedChannelMonitor`] for a given funding outpoint, returning an `Err` if no
/// such [`ChannelMonitor`] is currently being monitored for.
///
/// Note that the result holds a mutex over our monitor set, and should not be held indefinitely.
Contributor:

Would this be worth reflecting in a more overt manner? Maybe a method name like get_locked_monitor() or something indicative of the mutex?

Collaborator (Author):

Hmm, the struct is called LockedChannelMonitor, though I suppose the method name could be too. Note that it's not actually super duper critical - it's only a read lock, and we can do everything around updating monitors even while the read lock is held, only not add new monitors.
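
As a usage sketch: the intended pattern is to scope the returned guard tightly. This fragment assumes chain_monitor, funding_txo, and logger are in scope; the method names follow this PR's API but are hedged, not authoritative.

// Fragment: hold the LockedChannelMonitor (and with it the read lock
// over the monitor set) only for as long as needed.
{
	let monitor = chain_monitor.get_monitor(funding_txo)
		.expect("channel is being monitored");
	// `monitor` derefs to the underlying ChannelMonitor.
	let _txn = monitor.get_latest_holder_commitment_txn(&logger);
} // guard dropped here; adding new monitors is unblocked again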

/// restore the channel to an operational state.
///
/// Note that a given ChannelManager will *never* re-generate a given ChannelMonitorUpdate. If
/// you return a TemporaryFailure you must ensure that it is written to disk safely before
Contributor:

If I return a failure, I must ensure that the update is written to disk, correct? The phrasing is a tad ambiguous.

Collaborator (Author):

If it's alright with you, I'll leave this for #1106, which rewrites a ton of the text here.

@TheBlueMatt force-pushed the 2021-10-mon-refactors branch 3 times, most recently from 9a57918 to 40d4387, on October 13, 2021 20:21
@TheBlueMatt (Collaborator, Author)

Squashed without diff:

$ git diff-tree -U1 9a57918c 40d43872
$

@valentinewallace (Contributor) left a comment:

LGTM after squash :)

@TheBlueMatt force-pushed the 2021-10-mon-refactors branch from ddea65c to 4b66bea on October 13, 2021 22:03
@TheBlueMatt (Collaborator, Author)

Squashed without diff, will land after CI. cc @arik-so since you gave this one review pass.

$ git diff-tree -U1 ddea65c1 4b66bea0
$

@arik-so (Contributor) left a comment:

Still looks good post-squash.

Exposing a `RwLock<HashMap<>>` directly was always a bit strange,
and in upcoming changes we'd like to change the internal
datastructure in `ChainMonitor`.

Further, the use of `RwLock` and `HashMap` meant we weren't able
to expose the ChannelMonitors themselves to users in bindings,
leaving a bindings/rust API gap.

Thus, we take this opportunity to expose ChannelMonitors directly
via a wrapper, hiding the internals of `ChainMonitor` behind
getters. We also update tests to use the new API.

test_simple_monitor_permanent_update_fail and
test_simple_monitor_temporary_update_fail both have a mode where
they use either chain::Watch or persister to return errors.

As we won't be doing any returns directly from the chain::Watch
wrapper in a coming commit, the chain::Watch-return form of the
test will no longer make sense.

Previously, if a Persister returned a TemporaryFailure error when
we tried to persist a new channel, the ChainMonitor wouldn't track
the new ChannelMonitor at all, generating a PermanentFailure later
when updating was restored.

This fixes that by correctly storing the ChannelMonitor on
TemporaryFailures, allowing later update restoration to happen
normally.

This is (indirectly) tested in the next commit where we use
Persister to return all monitor-update errors.
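
The corrected flow, as a hedged sketch rather than the literal diff; signatures and field names are approximate, with MonitorHolder as shown earlier in this thread.

// Sketch of the fixed ChainMonitor::watch_channel flow: on
// TemporaryFailure the monitor is now tracked anyway, so a later
// channel_monitor_updated() call can resume it. Only PermanentFailure
// skips insertion.
fn watch_channel(&self, funding_txo: OutPoint, monitor: ChannelMonitor<ChannelSigner>)
	-> Result<(), ChannelMonitorUpdateErr>
{
	let persist_res = self.persister.persist_new_channel(funding_txo, &monitor);
	if let Err(ChannelMonitorUpdateErr::PermanentFailure) = persist_res {
		return persist_res;
	}
	// Ok *or* TemporaryFailure: track the monitor. The pre-fix code
	// skipped this insert on any persistence error.
	self.monitors.write().unwrap().insert(funding_txo, MonitorHolder { monitor });
	persist_res
}
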
As ChainMonitor will need to see those errors in a coming PR,
we need to return errors via Persister so that our ChainMonitor
chain::Watch implementation sees them.
@TheBlueMatt force-pushed the 2021-10-mon-refactors branch from 4b66bea to 1464671 on October 14, 2021 00:20
@TheBlueMatt (Collaborator, Author)

Rebased on main; the range-diff is pretty trivial:

$ git range-diff 6582aaebae7a79a95f4429567c196c4881e39796..4b66bea071e6315f855aecf9c09ea3523e2d2d7b  da498d7974bb1e8de5ccec090a1819e63f882350..1464671ae846169ea3fa509330bbac49c6e4c396

1:  02badf143 = 1:  0dfb24e66 Move `Persist` trait to chainmonitor as that's the only reference
2:  b3c54e6d1 ! 2:  6a7c48b60 Move ChannelMonitorUpdateErr to chain as it is a chain::Watch val
    @@ lightning/src/ln/chanmon_update_fail_tests.rs: use bitcoin::blockdata::block::{B
     +use chain::{ChannelMonitorUpdateErr, Listen, Watch};
      use ln::{PaymentPreimage, PaymentHash};
      use ln::channelmanager::{ChannelManager, ChannelManagerReadArgs, RAACommitmentOrder, PaymentSendFailure};
    - use ln::features::{InitFeatures, InvoiceFeatures};
    + use ln::features::InitFeatures;
     
      ## lightning/src/ln/channelmanager.rs ##
     @@ lightning/src/ln/channelmanager.rs: use bitcoin::secp256k1::ecdh::SharedSecret;
3:  0355a21bd ! 3:  79541b11e Make `ChainMonitor::monitors` private and expose monitor via getter
    @@ lightning/src/ln/functional_tests.rs: fn test_force_close_fail_back() {
     -          let mut monitors = nodes[2].chain_monitor.chain_monitor.monitors.read().unwrap();
     -          monitors.get(&OutPoint{ txid: Txid::from_slice(&payment_event.commitment_msg.channel_id[..]).unwrap(), index: 0 }).unwrap()
     +          get_monitor!(nodes[2], payment_event.commitment_msg.channel_id)
    -                   .provide_payment_preimage(&our_payment_hash, &our_payment_preimage, &node_cfgs[2].tx_broadcaster, &node_cfgs[2].fee_estimator, &&logger);
    +                   .provide_payment_preimage(&our_payment_hash, &our_payment_preimage, &node_cfgs[2].tx_broadcaster, &node_cfgs[2].fee_estimator, &node_cfgs[2].logger);
        }
        mine_transaction(&nodes[2], &tx);
     @@ lightning/src/ln/functional_tests.rs: fn test_funding_peer_disconnect() {
4:  482c13a36 ! 4:  49dbabff2 Simplify channelmonitor tests which use chain::Watch and Persister
    @@ lightning/src/ln/chanmon_update_fail_tests.rs: use io;
        let mut chanmon_cfgs = create_chanmon_cfgs(2);
        let node_cfgs = create_node_cfgs(2, &chanmon_cfgs);
     @@ lightning/src/ln/chanmon_update_fail_tests.rs: fn do_test_simple_monitor_permanent_update_fail(persister_fail: bool) {
    +   create_announced_chan_between_nodes(&nodes, 0, 1, InitFeatures::known(), InitFeatures::known());
      
    -   let (_, payment_hash_1, payment_secret_1) = get_payment_preimage_hash!(&nodes[1]);
    - 
    +   let (route, payment_hash_1, _, payment_secret_1) = get_route_and_payment_hash!(&nodes[0], nodes[1], 1000000);
    +-
     -  match persister_fail {
     -          true => chanmon_cfgs[0].persister.set_update_ret(Err(ChannelMonitorUpdateErr::PermanentFailure)),
     -          false => *nodes[0].chain_monitor.update_ret.lock().unwrap() = Some(Err(ChannelMonitorUpdateErr::PermanentFailure))
     -  }
    -+  *nodes[0].chain_monitor.update_ret.lock().unwrap() = Some(Err(ChannelMonitorUpdateErr::PermanentFailure));
    -   let net_graph_msg_handler = &nodes[0].net_graph_msg_handler;
    -   let route = get_route(&nodes[0].node.get_our_node_id(), &net_graph_msg_handler.network_graph, &nodes[1].node.get_our_node_id(), Some(InvoiceFeatures::known()), None, &Vec::new(), 1000000, TEST_FINAL_CLTV, &logger).unwrap();
    ++  chanmon_cfgs[0].persister.set_update_ret(Err(ChannelMonitorUpdateErr::PermanentFailure));
        unwrap_send_err!(nodes[0].node.send_payment(&route, payment_hash_1, &Some(payment_secret_1)), true, APIError::ChannelUnavailable {..}, {});
    +   check_added_monitors!(nodes[0], 2);
    + 
     @@ lightning/src/ln/chanmon_update_fail_tests.rs: fn test_monitor_and_persister_update_fail() {
        assert_eq!(events.len(), 1);
      }
    @@ lightning/src/ln/chanmon_update_fail_tests.rs: fn test_monitor_and_persister_upd
        let mut chanmon_cfgs = create_chanmon_cfgs(2);
     @@ lightning/src/ln/chanmon_update_fail_tests.rs: fn do_test_simple_monitor_temporary_update_fail(disconnect: bool, persister_fail
      
    -   let (payment_preimage_1, payment_hash_1, payment_secret_1) = get_payment_preimage_hash!(&nodes[1]);
    +   let (route, payment_hash_1, payment_preimage_1, payment_secret_1) = get_route_and_payment_hash!(&nodes[0], nodes[1], 1000000);
      
     -  match persister_fail {
     -          true => chanmon_cfgs[0].persister.set_update_ret(Err(ChannelMonitorUpdateErr::TemporaryFailure)),
    @@ lightning/src/ln/chanmon_update_fail_tests.rs: fn do_test_simple_monitor_tempora
     +  chanmon_cfgs[0].persister.set_update_ret(Err(ChannelMonitorUpdateErr::TemporaryFailure));
      
        {
    -           let net_graph_msg_handler = &nodes[0].net_graph_msg_handler;
    +           unwrap_send_err!(nodes[0].node.send_payment(&route, payment_hash_1, &Some(payment_secret_1)), false, APIError::MonitorUpdateFailed, {});
     @@ lightning/src/ln/chanmon_update_fail_tests.rs: fn do_test_simple_monitor_temporary_update_fail(disconnect: bool, persister_fail
                reconnect_nodes(&nodes[0], &nodes[1], (true, true), (0, 0), (0, 0), (0, 0), (0, 0), (0, 0), (false, false));
        }
    @@ lightning/src/ln/chanmon_update_fail_tests.rs: fn do_test_simple_monitor_tempora
        check_added_monitors!(nodes[0], 0);
     @@ lightning/src/ln/chanmon_update_fail_tests.rs: fn do_test_simple_monitor_temporary_update_fail(disconnect: bool, persister_fail
        // Now set it to failed again...
    -   let (_, payment_hash_2, payment_secret_2) = get_payment_preimage_hash!(&nodes[1]);
    +   let (route, payment_hash_2, _, payment_secret_2) = get_route_and_payment_hash!(&nodes[0], nodes[1], 1000000);
        {
     -          match persister_fail {
     -                  true => chanmon_cfgs[0].persister.set_update_ret(Err(ChannelMonitorUpdateErr::TemporaryFailure)),
     -                  false => *nodes[0].chain_monitor.update_ret.lock().unwrap() = Some(Err(ChannelMonitorUpdateErr::TemporaryFailure))
     -          }
     +          chanmon_cfgs[0].persister.set_update_ret(Err(ChannelMonitorUpdateErr::TemporaryFailure));
    -           let net_graph_msg_handler = &nodes[0].net_graph_msg_handler;
    -           let route = get_route(&nodes[0].node.get_our_node_id(), &net_graph_msg_handler.network_graph, &nodes[1].node.get_our_node_id(), Some(InvoiceFeatures::known()), None, &Vec::new(), 1000000, TEST_FINAL_CLTV, &logger).unwrap();
                unwrap_send_err!(nodes[0].node.send_payment(&route, payment_hash_2, &Some(payment_secret_2)), false, APIError::MonitorUpdateFailed, {});
    +           check_added_monitors!(nodes[0], 1);
    +   }
     @@ lightning/src/ln/chanmon_update_fail_tests.rs: fn do_test_simple_monitor_temporary_update_fail(disconnect: bool, persister_fail
      
      #[test]
5:  d9022894c = 5:  1b6a7c131 Handle Persister returning TemporaryFailure for new channels
6:  1889c9c23 ! 6:  c396dc6ee Use Persister to return errors in tests not chain::Watch
    @@ lightning/src/ln/chanmon_update_fail_tests.rs: use sync::{Arc, Mutex};
        let node_cfgs = create_node_cfgs(2, &chanmon_cfgs);
        let node_chanmgrs = create_node_chanmgrs(2, &node_cfgs, &[None, None]);
        let mut nodes = create_network(2, &node_cfgs, &node_chanmgrs);
    -@@ lightning/src/ln/chanmon_update_fail_tests.rs: fn test_simple_monitor_permanent_update_fail() {
    - 
    -   let (_, payment_hash_1, payment_secret_1) = get_payment_preimage_hash!(&nodes[1]);
    - 
    --  *nodes[0].chain_monitor.update_ret.lock().unwrap() = Some(Err(ChannelMonitorUpdateErr::PermanentFailure));
    -+  chanmon_cfgs[0].persister.set_update_ret(Err(ChannelMonitorUpdateErr::PermanentFailure));
    -   let net_graph_msg_handler = &nodes[0].net_graph_msg_handler;
    -   let route = get_route(&nodes[0].node.get_our_node_id(), &net_graph_msg_handler.network_graph, &nodes[1].node.get_our_node_id(), Some(InvoiceFeatures::known()), None, &Vec::new(), 1000000, TEST_FINAL_CLTV, &logger).unwrap();
    -   unwrap_send_err!(nodes[0].node.send_payment(&route, payment_hash_1, &Some(payment_secret_1)), true, APIError::ChannelUnavailable {..}, {});
     @@ lightning/src/ln/chanmon_update_fail_tests.rs: fn test_monitor_and_persister_update_fail() {
      fn do_test_simple_monitor_temporary_update_fail(disconnect: bool) {
        // Test that we can recover from a simple temporary monitor update failure optionally with
    @@ lightning/src/ln/chanmon_update_fail_tests.rs: fn test_monitor_and_persister_upd
        let mut nodes = create_network(2, &node_cfgs, &node_chanmgrs);
     @@ lightning/src/ln/chanmon_update_fail_tests.rs: fn do_test_monitor_temporary_update_fail(disconnect_count: usize) {
        // Now try to send a second payment which will fail to send
    -   let (payment_preimage_2, payment_hash_2, payment_secret_2) = get_payment_preimage_hash!(nodes[1]);
    +   let (route, payment_hash_2, payment_preimage_2, payment_secret_2) = get_route_and_payment_hash!(nodes[0], nodes[1], 1000000);
        {
     -          *nodes[0].chain_monitor.update_ret.lock().unwrap() = Some(Err(ChannelMonitorUpdateErr::TemporaryFailure));
     +          chanmon_cfgs[0].persister.set_update_ret(Err(ChannelMonitorUpdateErr::TemporaryFailure));
    -           let net_graph_msg_handler = &nodes[0].net_graph_msg_handler;
    -           let route = get_route(&nodes[0].node.get_our_node_id(), &net_graph_msg_handler.network_graph, &nodes[1].node.get_our_node_id(), Some(InvoiceFeatures::known()), None, &Vec::new(), 1000000, TEST_FINAL_CLTV, &logger).unwrap();
                unwrap_send_err!(nodes[0].node.send_payment(&route, payment_hash_2, &Some(payment_secret_2)), false, APIError::MonitorUpdateFailed, {});
    +           check_added_monitors!(nodes[0], 1);
    +   }
     @@ lightning/src/ln/chanmon_update_fail_tests.rs: fn do_test_monitor_temporary_update_fail(disconnect_count: usize) {
        }
      
7:  4b66bea07 = 7:  1464671ae Use Persister to return errors in fuzzers not chain::Watch
$

@arik-so (Contributor) left a comment:

Re-ACKing post-squash

@jkczyz (Contributor) left a comment:

ACK 1464671

@TheBlueMatt merged commit dda86a0 into lightningdevkit:main on Oct 14, 2021