Let gossipd automatically announce the node when a local channel_announcement is queued #413

Merged · 25 commits · Dec 17, 2017

Conversation

@cdecker (Member) commented Dec 8, 2017

This closes #320. I'm spinning this one out of the changes that I'm testing for #284 since the bigger PR has a flaky unit test when running under valgrind.

@cdecker (Member, Author) commented Dec 12, 2017

Ok, added @rustyrussell's changes from #430, and skipped the two that were causing a conflict. I'll fix the merge conflict with master tomorrow so we can get this merged :-)

cdecker and others added 21 commits December 13, 2017 12:33
This is used to identify our own announcements.

Signed-off-by: Christian Decker <[email protected]>
We are still generating only char* style aliases, but the field is
defined to be unicode, which doesn't mix too well with char.

Signed-off-by: Christian Decker <[email protected]>
First step towards `gossipd` managing the `node_announcement`.

Signed-off-by: Christian Decker <[email protected]>
This will later be used to determine whether or not we should announce
ourselves as a node.

Signed-off-by: Christian Decker <[email protected]>
Rather than using the destructor, hook up the cmd so we can close it.
Peers are allocated off ld, so they are only destroyed explicitly.

Signed-off-by: Rusty Russell <[email protected]>
As demonstrated in the test at the end of this series, openingd dying
spontaneously causes the conn to be freed which causes the subd to be
destroyed, which fails the peer, which hits the db.

Signed-off-by: Rusty Russell <[email protected]>
On unmarshal, we stop unmarshaling on a 0 (ADDR_TYPE_PADDING) type.  So
we should also stop marshaling in that case.

Signed-off-by: Rusty Russell <[email protected]>
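
For illustration, a minimal sketch of the symmetry this commit describes, borrowing c-lightning's wire-helper names (struct wireaddr, ADDR_TYPE_PADDING, towire_wireaddr); the loop itself is illustrative, not the committed code:

    #include <ccan/short_types/short_types.h>
    #include <ccan/tal/tal.h>
    #include <common/wireaddr.h>  /* struct wireaddr, towire_wireaddr */

    /* Illustrative only: marshal must mirror unmarshal's stop rule. */
    static void towire_addresses(u8 **pptr, const struct wireaddr *addrs)
    {
            for (size_t i = 0; i < tal_count(addrs); i++) {
                    /* Unmarshal stops at type 0 (ADDR_TYPE_PADDING),
                     * so stop emitting here as well. */
                    if (addrs[i].type == ADDR_TYPE_PADDING)
                            break;
                    towire_wireaddr(pptr, &addrs[i]);
            }
    }
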
We can open other channels, if we want.

Signed-off-by: Rusty Russell <[email protected]>
We should not disconnect from a peer just because it fails opening; we
should return it to gossipd, and give a meaningful error.

Closes: ElementsProject#401
Signed-off-by: Rusty Russell <[email protected]>
We'll want this for the next change, where gossipd migrates remote peers
back to local ones.

Signed-off-by: Rusty Russell <[email protected]>
…elds.

We should also go through and use consistent nomenclature on functions which
are used with a local peer ("lpeer_xxx"?) and those with a remote peer
("rpeer_xxx"?) but this is minimal.

Signed-off-by: Rusty Russell <[email protected]>
Otherwise we always say it died because we killed it, so we don't get
the exit status.

Signed-off-by: Rusty Russell <[email protected]>
All peers come from gossipd, and maintain an fd to talk to it.  Sometimes
we hand the peer back, but to avoid a race, we always recreate it.

The race was that a daemon closed the gossip_fd, which made gossipd
forget the peer, then master handed the peer back to gossipd.  We stop
the race by never closing the gossipfd, but hand it back to gossipd
for closing.

Now gossipd has to accept two fds, but the handling of peers is far
clearer.

Signed-off-by: Rusty Russell <[email protected]>
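
A hedged sketch of the handback described here, using ccan/fdpass; the helper name and fd parameters are hypothetical:

    #include <ccan/fdpass/fdpass.h>

    /* Hypothetical helper: master returns a peer to gossipd.  The
     * gossip fd is never closed by the other daemons; it travels back
     * so gossipd itself closes it, avoiding the forget-the-peer race. */
    static void hand_back_peer(int gossipd_fd, int peer_fd, int gossip_fd)
    {
            fdpass_send(gossipd_fd, peer_fd);
            fdpass_send(gossipd_fd, gossip_fd); /* gossipd closes this one */
    }
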
If the peer is moved from remote to local, this may be lost; it's more
secure to increment after we've sent the broadcast.
rustyrussell and others added 3 commits December 13, 2017 12:36
When gossipd sends a message, have a gossip_index.  When it gets back a
peer, the current gossip_index is included, so it can know exactly where
it's up to.

Most of this is mechanical plumbing through openingd, channeld and closingd,
even though openingd and closingd don't (currently) read gossip, so their
gossip_index will be unchanged.

Signed-off-by: Rusty Russell <[email protected]>
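
A rough sketch of the counter idea (struct and function names are illustrative; the real change threads the index through the daemons' wire messages):

    #include <ccan/short_types/short_types.h>

    struct gossip_state {
            u64 gossip_index;       /* bumped after each broadcast sent */
    };

    /* Per an earlier commit in this series, increment only after the
     * broadcast has actually been sent.  When a peer is handed back,
     * its current index travels with it, so gossipd knows exactly
     * where that peer is up to. */
    static u64 broadcast_sent(struct gossip_state *gs)
    {
            return ++gs->gossip_index;
    }
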
It was relying on the message order instead of waiting for the desired
state.

Signed-off-by: Christian Decker <[email protected]>
@cdecker force-pushed the gossip-node-announce branch from 8674880 to c787e51 on December 13, 2017 11:36
@cdecker (Member, Author) commented Dec 13, 2017

Rebased and should be ready for review. I'll also walk through Rusty's changes and review them.

Now using assertRaisesRegex instead of try-except and added restart to
nodes.

Signed-off-by: Christian Decker <[email protected]>
@cdecker (Member, Author) left a comment

Checked @rustyrussell: LGTM (but can't approve since I opened the PR 😄 )

@@ -183,10 +193,10 @@ void peer_fail_permanent(struct peer *peer, const u8 *msg TAKES)
	 * zero (ie. all bytes zero), in which case it refers to all
	 * channels. */
	static const struct channel_id all_channels;
	const char *why = tal_strndup(NULL, (char *)msg, tal_len(msg));
@cdecker (Member, Author):
This threw me off a bit, but in combination with the TAKES above and a take() on the caller side, this is simply a conditional tal_steal and type cast, or am I missing something?

@rustyrussell (Contributor):

Yes, strndup will take() if it needs to, which ahem.. takes care of it. :)
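
For reference, a sketch of the pattern under discussion (fail_reason is a hypothetical name): a TAKES parameter lets callers pass take(msg), and per the reply above, tal_strndup() checks taken() internally and consumes the buffer instead of copying it:

    #include <ccan/short_types/short_types.h>
    #include <ccan/take/take.h>
    #include <ccan/tal/str/str.h>

    /* Hypothetical: msg is TAKES, so take(msg) on the caller side
     * hands ownership to tal_strndup(). */
    static const char *fail_reason(const u8 *msg)
    {
            /* If taken(msg), tal_strndup() frees or reuses msg rather
             * than copying: effectively a conditional steal plus cast. */
            return tal_strndup(NULL, (const char *)msg, tal_len(msg));
    }
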

	return 0;
}

/* FIXME: Should we save addr in peer, or should gossipd remember it? */
@cdecker (Member, Author):

I'd say just send an address hint to gossipd

        try:
            l1.rpc.fundchannel(l2.info['id'], 100000)
        except ValueError as verr:
            str(verr).index('to_self_delay {} larger than {}'
@cdecker (Member, Author):

These expected exceptions can be implemented using assertRaises or assertRaisesRegex. They're a bit shorter and make the test fail if the expected exception is not raised.

@cdecker (Member, Author):

I added a commit to the end of the PR to address this.

            self.fail("huge locktime ignored?")

        # We don't have enough left to cover fees if we try to spend it all.
        try:
@cdecker (Member, Author):

Doesn't this implicitly assume an ordering of the internal checks, i.e., that we first check the funding amounts and only then the locktimes? After all, we didn't change the locktime blocks, but now expect a different exception. Spinning up a third node with locktimes compatible with l1, and then connecting to l1 with the wrong funding amount, would get rid of this assumption.

@cdecker (Member, Author):

The commit I added to the end of the PR now changes the max-locktime and restarts l2, which we'd be doing in the next step anyway.

@rustyrussell (Contributor):

Yep, much cleaner. Thanks!

* @dc: the daemon_conn to clean up.
*
* This is used by gossipd when a peer is handed back, and we no longer
* want to deal with it via daemon_conn. @dc must not be used after this!
@cdecker (Member, Author):

Could return NULL like tal_free so calls would look like this:

    dc = daemon_conn_clear(dc)

to signal that the dc has been taken care of. The only call to daemon_conn_clear is directly followed by a tal_free on the daemon_conn anyway.
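
A sketch of the suggested convention, mirroring tal_free() (a hypothetical variant; the PR's daemon_conn_clear doesn't return a value):

    #include <stddef.h>

    struct daemon_conn;     /* opaque here; defined in gossipd */

    /* Hypothetical: always return NULL so the caller can clear its
     * pointer in the same statement, exactly like tal_free(). */
    static struct daemon_conn *daemon_conn_clear(struct daemon_conn *dc)
    {
            (void)dc;       /* sketch: real cleanup omitted */
            return NULL;
    }

    /* Caller side would then read: dc = daemon_conn_clear(dc); */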

@rustyrussell (Contributor):

Hmm, yes, but as you say it's only got one caller, so maybe overkill.

@rustyrussell (Contributor) commented:

Thanks! Nice work!

@cdecker deleted the gossip-node-announce branch on January 28, 2018.
Successfully merging this pull request may close these issues:

Have gossipd generate node_announcement automatically