Let gossipd automatically announce the node when a local channel_announcement is queued #413
Conversation
Ok, added @rustyrussell's changes from #430, and skipped the two that were causing a conflict. I'll fix the merge conflict with
This is used to identify our own announcements. Signed-off-by: Christian Decker <[email protected]>
We are still generating only char* style aliases, but the field is defined to be unicode, which doesn't mix too well with char. Signed-off-by: Christian Decker <[email protected]>
First step towards `gossipd` managing the `node_announcement`. Signed-off-by: Christian Decker <[email protected]>
This will later be used to determine whether or not we should announce ourselves as a node. Signed-off-by: Christian Decker <[email protected]>
Signed-off-by: Christian Decker <[email protected]>
Signed-off-by: Christian Decker <[email protected]>
Signed-off-by: Christian Decker <[email protected]>
Rather than using the destructor, hook up the cmd so we can close it. Peers are allocated off ld, so they are only destroyed explicitly. Signed-off-by: Rusty Russell <[email protected]>
As demonstrated in the test at the end of this series, openingd dying spontaneously causes the conn to be freed which causes the subd to be destroyed, which fails the peer, which hits the db. Signed-off-by: Rusty Russell <[email protected]>
On unmarshal, we stop unmarshaling on a 0 (ADDR_TYPE_PADDING) type. So we should also stop marshaling in that case. Signed-off-by: Rusty Russell <[email protected]>
We can open other channels, if we want. Signed-off-by: Rusty Russell <[email protected]>
Signed-off-by: Rusty Russell <[email protected]>
We should not disconnect from a peer just because it fails opening; we should return it to gossipd, and give a meaningful error. Closes: ElementsProject#401 Signed-off-by: Rusty Russell <[email protected]>
We'll want this for the next change, where gossipd migrates remote peers back to local ones. Signed-off-by: Rusty Russell <[email protected]>
…elds. We should also go through and use consistent nomenclature on functions which are used with a local peer ("lpeer_xxx"?) and those with a remote peer ("rpeer_xxx"?) but this is minimal. Signed-off-by: Rusty Russell <[email protected]>
Signed-off-by: Rusty Russell <[email protected]>
Otherwise we always say it died because we killed it, so we don't get the exit status. Signed-off-by: Rusty Russell <[email protected]>
Signed-off-by: Rusty Russell <[email protected]>
All peers come from gossipd, and maintain an fd to talk to it. Sometimes we hand the peer back, but to avoid a race, we always recreated it. The race was that a daemon closed the gossip_fd, which made gossipd forget the peer, then master handed the peer back to gossipd. We stop the race by never closing the gossipfd, but hand it back to gossipd for closing. Now gossipd has to accept two fds, but the handling of peers is far clearer. Signed-off-by: Rusty Russell <[email protected]>
If the peer is moved from remote to local, this may be lost; it's more secure to increment after we've sent the broadcast.
When gossipd sends a message, have a gossip_index. When it gets back a peer, the current gossip_index is included, so it can know exactly where it's up to. Most of this is mechanical plumbing through openingd, channeld and closingd, even though openingd and closingd don't (currently) read gossip, so their gossip_index will be unchanged. Signed-off-by: Rusty Russell <[email protected]>
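The gossip_index plumbing described above can be sketched as a toy model (all names here are invented for illustration, not the real gossipd code): every broadcast message gets a monotonically increasing index, and when a peer is handed back, the current index travels with it, so gossip resumes exactly where it left off instead of replaying or skipping messages.

```python
class GossipBroadcast:
    """Toy stand-in for gossipd's broadcast queue."""
    def __init__(self):
        self.index = 0
        self.messages = []          # (index, payload) pairs

    def queue(self, payload):
        # Increment only once the message is actually queued for broadcast.
        self.index += 1
        self.messages.append((self.index, payload))

    def since(self, index):
        """Messages a returning peer has not yet seen."""
        return [p for i, p in self.messages if i > index]


broadcast = GossipBroadcast()
broadcast.queue(b"chan_ann_1")
broadcast.queue(b"node_ann_1")

# The peer is handed to another daemon along with the current index...
peer_gossip_index = broadcast.index

# ...gossip continues while the peer is away...
broadcast.queue(b"chan_ann_2")

# ...and on hand-back only the missed message needs to be delivered.
print(broadcast.since(peer_gossip_index))
```

Incrementing the index after queuing (rather than before) mirrors the point made above: bumping it too early risks losing track of a message if the peer moves from remote to local in between.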
It was relying on the message order instead of waiting for the desired state. Signed-off-by: Christian Decker <[email protected]>
Signed-off-by: Christian Decker <[email protected]>
Force-pushed from 8674880 to c787e51.
Rebased and should be ready for review. I'll also walk through Rusty's changes and review them.
Now using assertRaisesRegex instead of try-except and added restart to nodes. Signed-off-by: Christian Decker <[email protected]>
Checked @rustyrussell: LGTM (but can't approve since I opened the PR 😄 )
lightningd/peer_control.c (outdated)
@@ -183,10 +193,10 @@ void peer_fail_permanent(struct peer *peer, const u8 *msg TAKES)
	 * zero (ie. all bytes zero), in which case it refers to all
	 * channels. */
	static const struct channel_id all_channels;
	const char *why = tal_strndup(NULL, (char *)msg, tal_len(msg));
This threw me off a bit, but in combination with the TAKES above and a take() on the caller side, this is simply a conditional tal_steal and type cast, or am I missing something?
Yes, strndup will take() if it needs to, which ahem.. takes care of it. :)
lightningd/peer_control.c (outdated)
	return 0;
}

/* FIXME: Should we save addr in peer, or should gossipd remember it? */
I'd say just send an address hint to gossipd
tests/test_lightningd.py (outdated)
        try:
            l1.rpc.fundchannel(l2.info['id'], 100000)
        except ValueError as verr:
            str(verr).index('to_self_delay {} larger than {}'
These expected exceptions can be implemented using assertRaises or assertRaisesRegex. They're a bit shorter and will ensure failure if the expected exception is not raised.
I added a commit to the end of the PR to address this.
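The difference between the two patterns can be sketched as follows (the fundchannel stub below is invented for this example, not the real RPC): assertRaisesRegex both checks the message and fails loudly if no exception is raised, whereas the bare try/except plus str.index pattern passes silently when nothing is raised at all.

```python
import unittest

def fundchannel(amount):
    """Stand-in that always rejects, like a peer with a huge to_self_delay."""
    raise ValueError("to_self_delay 100000 larger than 14")

# A bare TestCase instance gives us access to the assert* helpers.
tc = unittest.TestCase()

# Old pattern: if fundchannel() stops raising, this block does nothing
# and the test silently "passes".
try:
    fundchannel(100000)
except ValueError as verr:
    str(verr).index('to_self_delay')

# New pattern: fails if nothing is raised or if the message differs.
with tc.assertRaisesRegex(ValueError, r'to_self_delay \d+ larger than \d+'):
    fundchannel(100000)

print("ok")  # reached only because the expected exception was raised
```

The context-manager form also keeps the assertion next to the call that should fail, which is harder to get wrong than pairing a try block with a later self.fail().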
tests/test_lightningd.py (outdated)
        self.fail("huge locktime ignored?")

        # We don't have enough left to cover fees if we try to spend it all.
        try:
Doesn't this implicitly assume an ordering of the internal checks, i.e., first we check the funding amounts and only then we check the locktimes? After all we didn't change the locktime blocks, but now expect a different exception. Spinning up a third node, with compatible locktimes to l1, and then connecting to l1 but with the wrong funding amount gets rid of this assumption.
The commit I added to the end of the PR now changes the max-locktime and restarts l2, which we'd be doing in the next step anyway.
Yep, much cleaner. Thanks!
 * @dc: the daemon_conn to clean up.
 *
 * This is used by gossipd when a peer is handed back, and we no longer
 * want to deal with it via daemon_conn. @dc must not be used after this!
Could return NULL like tal_free, so calls would look like this: dc = daemon_conn_clear(dc), to signal that the dc has been taken care of. The only call for daemon_conn_clear is directly followed by a tal_free on the daemon_conn anyway.
Hmm, yes, but as you say it's only got one caller, so maybe overkill.
Thanks! Nice work!
This closes #320. I'm spinning this one out of the changes that I'm testing for #284 since the bigger PR has a flaky unit test when running under valgrind.