
Enhanced weave status #1248

Merged
merged 3 commits into master from issues/1025-enhanced-weave-status on Aug 11, 2015

Conversation

@awh (Contributor) commented Jul 30, 2015

Ready for an initial review pass; I will add further commits to update the documentation once the output format haggling has subsided 😄

vagrant@host1:~$ weave status
          Service: router
             Name: 0e:e9:62:f9:fa:98
         NickName: host1
       Encryption: false
    PeerDiscovery: true
            Peers: 3
             MACs: 8
    UnicastRoutes: 3
  BroadcastRoutes: 3
      DirectPeers: 3
     Reconnecting: 1
    Unestablished: 0

          Service: ipam
            Range: [10.32.0.0-10.48.0.0)
    DefaultSubnet: 10.32.0.0/12

          Service: dns
           Domain: weave.local.
          Address: 0.0.0.0:53
              TTL: 1
          Entries: 1

          Service: proxy
          Address: tcp://127.0.0.1:12375
vagrant@host1:~$ weave status router macs
12:78:0f:ca:a9:e0 -> 0e:e9:62:f9:fa:98(host1) (2015-07-30 16:42:19.040002029 +0000 UTC)
32:4c:33:fa:36:42 -> 0e:e9:62:f9:fa:98(host1) (2015-07-30 16:41:56.067822166 +0000 UTC)
0e:e9:62:f9:fa:98 -> 0e:e9:62:f9:fa:98(host1) (2015-07-30 16:41:56.59108157 +0000 UTC)
72:e2:58:b3:7e:1a -> 0e:e9:62:f9:fa:98(host1) (2015-07-30 16:41:56.739184865 +0000 UTC)
fe:3b:a3:e7:b3:b3 -> 6e:e8:7a:66:49:f6(host2) (2015-07-30 16:41:57.379373879 +0000 UTC)
6e:e8:7a:66:49:f6 -> 6e:e8:7a:66:49:f6(host2) (2015-07-30 16:41:57.393960063 +0000 UTC)
12:8d:01:78:5b:93 -> 5a:df:63:d0:f4:04(host3) (2015-07-30 16:41:58.72572806 +0000 UTC)
5a:df:63:d0:f4:04 -> 5a:df:63:d0:f4:04(host3) (2015-07-30 16:41:58.921160369 +0000 UTC)
vagrant@host1:~$ weave status router peers
0e:e9:62:f9:fa:98(host1) (4) (UID 2516028838751749872)
   -> 6e:e8:7a:66:49:f6(host2) [192.168.48.12:33252]
   -> 5a:df:63:d0:f4:04(host3) [192.168.48.13:33205]
6e:e8:7a:66:49:f6(host2) (4) (UID 5844599409302017521)
   -> 0e:e9:62:f9:fa:98(host1) [192.168.48.11:6783]
   -> 5a:df:63:d0:f4:04(host3) [192.168.48.13:44931]
5a:df:63:d0:f4:04(host3) (4) (UID 12249497294994793864)
   -> 0e:e9:62:f9:fa:98(host1) [192.168.48.11:6783]
   -> 6e:e8:7a:66:49:f6(host2) [192.168.48.12:6783]
vagrant@host1:~$ weave status router routes
  unicast 0e:e9:62:f9:fa:98 -> 00:00:00:00:00:00
  unicast 6e:e8:7a:66:49:f6 -> 6e:e8:7a:66:49:f6
  unicast 5a:df:63:d0:f4:04 -> 5a:df:63:d0:f4:04
broadcast 0e:e9:62:f9:fa:98 -> [6e:e8:7a:66:49:f6 5a:df:63:d0:f4:04]
broadcast 6e:e8:7a:66:49:f6 -> []
broadcast 5a:df:63:d0:f4:04 -> []
vagrant@host1:~$ weave status router directpeers
192.168.48.12
192.168.48.13
1.1.1.1
vagrant@host1:~$ weave status router reconnects
1.1.1.1:6783 true 2015-07-30 16:41:58.746738587 +0000 UTC 2s
vagrant@host1:~$ weave status dns entries
415b0cf212548728fe760378a5ef5df08039e217400e51cbfcb11b3c12dcb55c 10.32.0.1 foo.weave.local.
vagrant@host1:~$ weave json
{
    "Router": {
        "Encryption": false,
        "PeerDiscovery": true,
        "Name": "0e:e9:62:f9:fa:98",
        "NickName": "host1",
        "Interface": {
            "Index": 54,
            "MTU": 65535,
            "Name": "ethwe",
            "HardwareAddr": "Mkwz+jZC",
            "Flags": 19
        },
        "MACs": [
            {
                "Mac": "12:78:0f:ca:a9:e0",
                "Name": "0e:e9:62:f9:fa:98",
                "NickName": "host1",
                "LastSeen": "2015-07-30T16:42:19.040002029Z"
            },
            {
                "Mac": "32:4c:33:fa:36:42",
                "Name": "0e:e9:62:f9:fa:98",
                "NickName": "host1",
                "LastSeen": "2015-07-30T16:41:56.067822166Z"
            },
            {
                "Mac": "0e:e9:62:f9:fa:98",
                "Name": "0e:e9:62:f9:fa:98",
                "NickName": "host1",
                "LastSeen": "2015-07-30T16:41:56.59108157Z"
            },
            {
                "Mac": "72:e2:58:b3:7e:1a",
                "Name": "0e:e9:62:f9:fa:98",
                "NickName": "host1",
                "LastSeen": "2015-07-30T16:41:56.739184865Z"
            },
            {
                "Mac": "fe:3b:a3:e7:b3:b3",
                "Name": "6e:e8:7a:66:49:f6",
                "NickName": "host2",
                "LastSeen": "2015-07-30T16:41:57.379373879Z"
            },
            {
                "Mac": "6e:e8:7a:66:49:f6",
                "Name": "6e:e8:7a:66:49:f6",
                "NickName": "host2",
                "LastSeen": "2015-07-30T16:41:57.393960063Z"
            },
            {
                "Mac": "12:8d:01:78:5b:93",
                "Name": "5a:df:63:d0:f4:04",
                "NickName": "host3",
                "LastSeen": "2015-07-30T16:41:58.72572806Z"
            },
            {
                "Mac": "5a:df:63:d0:f4:04",
                "Name": "5a:df:63:d0:f4:04",
                "NickName": "host3",
                "LastSeen": "2015-07-30T16:41:58.921160369Z"
            }
        ],
        "Peers": [
            {
                "Name": "0e:e9:62:f9:fa:98",
                "NickName": "host1",
                "UID": 2516028838751749872,
                "Version": 4,
                "Connections": [
                    {
                        "Name": "6e:e8:7a:66:49:f6",
                        "NickName": "host2",
                        "TCPAddr": "192.168.48.12:33252",
                        "Outbound": false,
                        "Established": true
                    },
                    {
                        "Name": "5a:df:63:d0:f4:04",
                        "NickName": "host3",
                        "TCPAddr": "192.168.48.13:33205",
                        "Outbound": false,
                        "Established": true
                    }
                ]
            },
            {
                "Name": "6e:e8:7a:66:49:f6",
                "NickName": "host2",
                "UID": 5844599409302017521,
                "Version": 4,
                "Connections": [
                    {
                        "Name": "0e:e9:62:f9:fa:98",
                        "NickName": "host1",
                        "TCPAddr": "192.168.48.11:6783",
                        "Outbound": true,
                        "Established": true
                    },
                    {
                        "Name": "5a:df:63:d0:f4:04",
                        "NickName": "host3",
                        "TCPAddr": "192.168.48.13:44931",
                        "Outbound": false,
                        "Established": true
                    }
                ]
            },
            {
                "Name": "5a:df:63:d0:f4:04",
                "NickName": "host3",
                "UID": 12249497294994793864,
                "Version": 4,
                "Connections": [
                    {
                        "Name": "0e:e9:62:f9:fa:98",
                        "NickName": "host1",
                        "TCPAddr": "192.168.48.11:6783",
                        "Outbound": true,
                        "Established": true
                    },
                    {
                        "Name": "6e:e8:7a:66:49:f6",
                        "NickName": "host2",
                        "TCPAddr": "192.168.48.12:6783",
                        "Outbound": true,
                        "Established": true
                    }
                ]
            }
        ],
        "UnicastRoutes": [
            {
                "Dest": "0e:e9:62:f9:fa:98",
                "Via": "00:00:00:00:00:00"
            },
            {
                "Dest": "6e:e8:7a:66:49:f6",
                "Via": "6e:e8:7a:66:49:f6"
            },
            {
                "Dest": "5a:df:63:d0:f4:04",
                "Via": "5a:df:63:d0:f4:04"
            }
        ],
        "BroadcastRoutes": [
            {
                "Source": "0e:e9:62:f9:fa:98",
                "Via": [
                    "6e:e8:7a:66:49:f6",
                    "5a:df:63:d0:f4:04"
                ]
            },
            {
                "Source": "6e:e8:7a:66:49:f6",
                "Via": null
            },
            {
                "Source": "5a:df:63:d0:f4:04",
                "Via": null
            }
        ],
        "ConnectionMaker": {
            "DirectPeers": [
                "192.168.48.12",
                "192.168.48.13",
                "1.1.1.1"
            ],
            "Reconnects": [
                {
                    "Address": "1.1.1.1:6783",
                    "Attempting": true,
                    "TryAfter": "2015-07-30T16:41:58.746738587Z",
                    "TryInterval": "2s"
                }
            ]
        }
    },
    "IPAM": {
        "Paxos": null,
        "Range": "[10.32.0.0-10.48.0.0)",
        "DefaultSubnet": "10.32.0.0/12"
    },
    "DNS": {
        "Domain": "weave.local.",
        "Address": "0.0.0.0:53",
        "TTL": 1,
        "Entries": [
            {
                "ContainerID": "415b0cf212548728fe760378a5ef5df08039e217400e51cbfcb11b3c12dcb55c",
                "Address": "10.32.0.1",
                "Hostname": "foo.weave.local."
            }
        ]
    }
}

Fixes #1025, fixes #1141, fixes #1027, fixes #908.

@tomwilkie (Contributor)

Woot looks nice. IPAM used to have stats about number of free and allocated addresses - can we have them back please?

@tomwilkie (Contributor)

For DNS, can you shorten the container ID to 12 characters (I think that's what Docker does) and include the peer ID / nickname? Don't shorten it in the JSON.

Also a bunch of tests depend on the format of status (150_connect_forget_2_test.sh, 500_weave_multi_cidr_test.sh) as did the weave launch-proxy script.

@rade (Member) commented Jul 30, 2015

 Encryption: false
 PeerDiscovery: true

Personally I'd prefer this to say "on/off", or "enabled/disabled".

vagrant@host1:~$ weave status router reconnects
1.1.1.1:6783 true 2015-07-30 16:41:58.746738587 +0000 UTC 2s

The "true" and "2s" are really rather obscure. I had to look at the JSON to figure out what these mean. Also what happened to the failure reason in the above? Comes from target.lastError in the old code.

fixes #908

how?

One thing I worry about is that now users need to run a whole bunch of commands to debug issues. What are we going to ask users to send us? The JSON? urgh.

I wonder whether we should have a weave status router connections that contains all the pertinent information about connectivity from this peer, i.e. all the connections it has to other peers and their status, pending connections, and direct peers; in other words, a subset of the output of peers combined with directpeers and reconnects.

I also wonder whether we should ditch the 'macs' and 'routes' sub commands. I cannot think of any situation where their output has been helpful in debugging a user problem.

weave json

That's not a good command name.

Finally, I'd be tempted to ditch the command categories, i.e. router, ipam, dns, proxy.

@rade (Member) commented Jul 30, 2015

Shouldn't some code from ipam get removed? After all, it outputs some status info right now, the code of which presumably is now dormant. IPAM also has a whole bunch of String methods with zero coverage that I suspect actually have zero call sites, though it's hard to be sure due to format string magic.

@awh (Contributor, Author) commented Jul 31, 2015

@tomwilkie

Woot looks nice.

Thanks 😄

IPAM used to have stats about number of free and allocated addresses - can we have them back please?

Good catch - will do. The intent of this PR is to achieve rough parity with what we have now, whilst leaving in place an extensible framework for other things to hook into - for example, we have #1185 coming up, and @bboreham wanted to add more detailed output from IPAM (e.g. an exhaustive list of allocations), to name a couple of things.

For DNS, can you shorten the container ID to 12 characters (I think that's what Docker does) and include the peer ID / nickname? Don't shorten it in the JSON.

Will do.

Also a bunch of tests depend on the format of status

As with the documentation, I'll update the tests once the output templates have stabilised. If I recall correctly, you favour making the tests examine the JSON - I could look into doing that; however, I think there is an argument to be made that this reduces the coverage of the tests. Thoughts @rade?

@rade

Personally I'd prefer this to say "on/off", or "enabled/disabled".

That is easily arranged.

The "true" and "2s" are really rather obscure. I had to look at the JSON to figure out what these mean. Also what happened to the failure reason in the above? Comes from target.lastError in the old code.

I'll rework that to be a bit more descriptive.

fixes #908

how?

vagrant@host1:~$ weave status
...
    Unestablished: 0
...

This addresses the specific case you cited, but if you intended for us to do more - a historic profile of heartbeat failures perhaps - I'll remove the reference to #908 here and the work can be done in a separate PR.

One thing I worry about is that now users need to run a whole bunch of commands to debug issues. What are we going to ask users to send us? The JSON? urgh.

These are two separate issues - IMO it's not unreasonable to ask for a JSON dump as part of an issue submission, but I violently agree that we don't want users to have to resort to looking at it for interactive debugging - that just means we didn't get the plain text templates right.

I wonder whether we should have a weave status router connections that contains all the pertinent information about connectivity from this peer, i.e. all the connections it has to other peers and their status, pending connections, and direct peers; in other words, a subset of the output of peers combined with directpeers and reconnects.

The template approach makes this trivial to iterate on - I'll bash something up for review.
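
For readers unfamiliar with the approach, here is a minimal sketch of what template-driven status output could look like, using Go's text/template; the struct and field names are illustrative only, not the actual weave types.

package main

import (
	"os"
	"text/template"
)

// routerStatus is a hypothetical stand-in for the real status structure.
type routerStatus struct {
	Name, NickName            string
	Encryption, PeerDiscovery string
	Peers                     int
}

// statusTmpl mimics the labelled layout shown earlier in this thread;
// iterating on the output is just a matter of editing this string.
const statusTmpl = `       Service: router
          Name: {{.Name}}
      NickName: {{.NickName}}
    Encryption: {{.Encryption}}
 PeerDiscovery: {{.PeerDiscovery}}
         Peers: {{.Peers}}
`

func main() {
	t := template.Must(template.New("status").Parse(statusTmpl))
	t.Execute(os.Stdout, routerStatus{
		Name: "0e:e9:62:f9:fa:98", NickName: "host1",
		Encryption: "disabled", PeerDiscovery: "enabled", Peers: 3,
	})
}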

I also wonder whether we should ditch the 'macs' and 'routes' sub commands. I cannot think of any situation where their output has been helpful in debugging a user problem.

Unless you're advocating removing it from the JSON too (which I would disagree with - as I mention above, and contrary to your 'urgh', I think the JSON dump will become the lingua franca for reporting issues), we save only a few lines of template by removing these.

Finally, I'd be tempted to ditch the command categories, i.e. router, ipam, dns, proxy.

We can make a decision on this once we've settled on the final set of outputs - the weave script simply synthesises a URL, so it's just a matter of updating the path -> template bindings in prog/weaver/http.go...
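
For illustration, a hedged sketch of what such path -> template bindings might look like using net/http and text/template; the paths, template bodies, and status fields here are assumptions, not the contents of prog/weaver/http.go.

package main

import (
	"net/http"
	"text/template"
)

// status is a placeholder for whatever the router exposes to the HTTP layer.
type status struct{ Peers, Connections int }

// bindings maps URL paths to the templates used to render them.
var bindings = map[string]*template.Template{
	"/status":             template.Must(template.New("s").Parse("Peers: {{.Peers}}\n")),
	"/status/connections": template.Must(template.New("c").Parse("Connections: {{.Connections}}\n")),
}

func main() {
	current := status{Peers: 3, Connections: 2}
	for path, tmpl := range bindings {
		t := tmpl // capture the loop variable for the closure
		http.HandleFunc(path, func(w http.ResponseWriter, r *http.Request) {
			t.Execute(w, current)
		})
	}
	http.ListenAndServe("127.0.0.1:6784", nil) // port chosen arbitrarily for this sketch
}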

weave json

That's not a good command name.

It's a placeholder calculated to engender discussion 😄 I had called it weave inspect originally, as it produces JSON output and we had discussed following up with a --format style option; @squaremo was concerned that it would be confusing to users, though, as docker inspect takes a mandatory container ID. I had also considered weave status --json, but then one might reasonably extrapolate that things like weave status router macs --json etc. would produce JSON, which is not the case. Suggestions welcome! Relatedly, our weave status is now a lot like docker info...

Shouldn't some code from ipam get removed?

Yes! Will address comments by pushing additional commits, then do a squash at the end when we're all happy.

@awh (Contributor, Author) commented Jul 31, 2015

@tomwilkie

IPAM used to have stats about number of free and allocated addresses

I have a vague memory of seeing such stats, but they're not in current master:

vagrant@host1:~$ weave status
weave router git-1701bc3e92c5
Our name is 2e:a3:66:44:f5:5c(host1)
Encryption off
Peer discovery on
Sniffing traffic on &{100 65535 ethwe 82:b5:99:2d:08:e3 up|broadcast|multicast}
MACs:
76:01:58:f4:58:ce -> 2e:a3:66:44:f5:5c(host1) (2015-07-31 10:37:30.137417737 +0000 UTC)
72:08:ad:57:66:db -> 36:4b:f4:14:8d:a0(host2) (2015-07-31 10:37:30.853239177 +0000 UTC)
2e:a3:66:44:f5:5c -> 2e:a3:66:44:f5:5c(host1) (2015-07-31 10:37:30.945247518 +0000 UTC)
36:4b:f4:14:8d:a0 -> 36:4b:f4:14:8d:a0(host2) (2015-07-31 10:37:31.185363147 +0000 UTC)
de:5f:f2:63:7e:5a -> ea:b3:af:c1:3d:89(host3) (2015-07-31 10:37:31.405821993 +0000 UTC)
ea:b3:af:c1:3d:89 -> ea:b3:af:c1:3d:89(host3) (2015-07-31 10:37:31.721202011 +0000 UTC)
8e:db:54:0d:5c:71 -> 2e:a3:66:44:f5:5c(host1) (2015-07-31 10:40:20.319090013 +0000 UTC)
82:b5:99:2d:08:e3 -> 2e:a3:66:44:f5:5c(host1) (2015-07-31 10:37:29.568493343 +0000 UTC)
Peers:
ea:b3:af:c1:3d:89(host3) (v4) (UID 7619796738551607122)
   -> 2e:a3:66:44:f5:5c(host1) [192.168.48.11:6783]
   -> 36:4b:f4:14:8d:a0(host2) [192.168.48.12:6783]
2e:a3:66:44:f5:5c(host1) (v4) (UID 9889040557843448893)
   -> ea:b3:af:c1:3d:89(host3) [192.168.48.13:45285]
   -> 36:4b:f4:14:8d:a0(host2) [192.168.48.12:45484]
36:4b:f4:14:8d:a0(host2) (v4) (UID 13471084242190272070)
   -> 2e:a3:66:44:f5:5c(host1) [192.168.48.11:6783]
   -> ea:b3:af:c1:3d:89(host3) [192.168.48.13:57053]
Routes:
unicast:
36:4b:f4:14:8d:a0 -> 36:4b:f4:14:8d:a0
ea:b3:af:c1:3d:89 -> ea:b3:af:c1:3d:89
2e:a3:66:44:f5:5c -> 00:00:00:00:00:00
broadcast:
2e:a3:66:44:f5:5c -> [36:4b:f4:14:8d:a0 ea:b3:af:c1:3d:89]
36:4b:f4:14:8d:a0 -> []
ea:b3:af:c1:3d:89 -> []
Direct Peers: 192.168.48.13 192.168.48.12
Reconnects:

Allocator range [10.32.0.0-10.48.0.0)
Owned Ranges:
  10.32.0.0 -> 2e:a3:66:44:f5:5c (host1) (v1)
  10.40.0.0 -> ea:b3:af:c1:3d:89 (host3) (v0)
Allocator default subnet: 10.32.0.0/12

WeaveDNS (2e:a3:66:44:f5:5c)
  listening on port 53, for domain weave.local.
  response ttl 1

0f091f1a0e5f: foo.weave.local. [10.32.0.1]



weave proxy is running

Any clue where they went? In the meantime I'll add the owned ranges back in...

@rade (Member) commented Jul 31, 2015

Any clue where they went?

I think they disappeared when we introduced multi-subnet support in IPAM. There simply isn't enough info held in IPAM to produce sensible figures. Notably, IPAM itself does not know about subnets; it's all done in the UI/API layer.

@awh (Contributor, Author) commented Jul 31, 2015

vagrant@host1:~$ weave status dns entries
1e:98:ba:18:84:8e 4d505c808f4b one.weave.local. 10.32.0.1
d2:ae:c9:32:27:a3 e5ec7472356b one.weave.local. 10.40.0.0
f2:53:30:5e:0b:dd 5d47a2898b85 one.weave.local. 10.36.0.2
1e:98:ba:18:84:8e 1d1c3600a8d3 three.weave.local. 10.32.0.3
d2:ae:c9:32:27:a3 8feb8f8fb8bd three.weave.local. 10.40.0.2
f2:53:30:5e:0b:dd dba632d9887c three.weave.local. 10.36.0.4
1e:98:ba:18:84:8e b578d151c74b two.weave.local. 10.32.0.2
d2:ae:c9:32:27:a3 3f7680196300 two.weave.local. 10.40.0.1
f2:53:30:5e:0b:dd 7037006ccaa7 two.weave.local. 10.36.0.3
vagrant@host1:~$ weave status router reconnects
     Address: 192.168.48.16:6783
   LastError: dial tcp4 192.168.48.16:6783: no route to host
 TryInterval: 22.78125s
 TryingSince: 2015-07-31 13:59:55.422775542 +0000 UTC

     Address: 192.168.48.14:6783
   LastError: dial tcp4 192.168.48.14:6783: no route to host
 TryInterval: 22.78125s
     NextTry: 2015-07-31 13:59:59.155947088 +0000 UTC

     Address: 192.168.48.15:6783
   LastError: dial tcp4 192.168.48.15:6783: no route to host
 TryInterval: 22.78125s
     NextTry: 2015-07-31 14:00:07.962673128 +0000 UTC

@rade (Member) commented Jul 31, 2015

re macs & routes...

Unless you're advocating removing it from the JSON too (which I would disagree with - as I mention above, and contrary to your 'urgh', I think the JSON dump will become the lingua franca for reporting issues), we save only a few lines of template by removing these.

The JSON should contain everything. I don't think the text version needs to. I would go so far as to say that the text version should only present information that we have a very strong reason to believe is useful to a good chunk of users.

@awh (Contributor, Author) commented Jul 31, 2015

vagrant@host1:~$ weave status ipam entries
2e:7a:e9:aa:9f:cd 10.32.0.0 5
12:c3:63:6c:5c:b5 10.36.0.2 3
f6:06:6d:be:50:2e 10.40.0.0 3

@awh (Contributor, Author) commented Jul 31, 2015

vagrant@host1:~$ weave status router connections
 DirectPeers: 192.168.48.12
              192.168.48.13
              192.168.48.14
              192.168.48.15
              192.168.48.16

     Address: 192.168.48.12:44455
        Name: be:b2:77:c9:a7:d2
    NickName: host2
       State: established

     Address: 192.168.48.13:37854
        Name: b6:cd:58:2a:b1:9c
    NickName: host3
       State: established

     Address: 192.168.48.14:6783
   LastError: dial tcp4 192.168.48.14:6783: no route to host
 TryInterval: 51.2578125s
     NextTry: 2015-07-31 16:16:03.081358134 +0000 UTC

     Address: 192.168.48.15:6783
   LastError: dial tcp4 192.168.48.15:6783: no route to host
 TryInterval: 51.2578125s
     NextTry: 2015-07-31 16:16:00.847285808 +0000 UTC

     Address: 192.168.48.16:6783
   LastError: dial tcp4 192.168.48.16:6783: no route to host
 TryInterval: 34.171875s
     NextTry: 2015-07-31 16:15:50.493288252 +0000 UTC

@rade (Member) commented Jul 31, 2015

weave status ipam entries

isn't that just a subset of the info one gets from weave ps?

weave status router connections

Nice, and nearly there. I don't think we need DirectPeers; instead, the connection/reconnect list should indicate when the address is a direct peer. I believe (you'd better check) that any address in directPeers will always be found either in the local peer's connection list or in the CM's target list.

I think there is an argument to be made that (making tests examine JSON status) reduces the coverage of the tests

We should generally avoid checking status output in tests. In the few places where that is the best option, I suspect we'd ultimately want to extract the info we need via a --format template. I don't care what we do in the meantime. Re coverage... weave status and weave json (or whatever they end up being called) should have their own set of tests. Nothing fancy; just aiming to get coverage.
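
To illustrate the kind of --format extraction being suggested (the flag and the report fields below are hypothetical - no such option exists in this PR), a test could apply a user-supplied Go template to the report structure and compare a single field:

package main

import (
	"flag"
	"os"
	"text/template"
)

// report is a tiny stand-in for the structure behind weave report.
type report struct {
	DNS struct{ Domain string }
}

func main() {
	// Hypothetical flag: apply a Go template to the report and print the result.
	format := flag.String("format", "{{.DNS.Domain}}", "template applied to the report")
	flag.Parse()

	var r report
	r.DNS.Domain = "weave.local."

	t := template.Must(template.New("format").Parse(*format))
	t.Execute(os.Stdout, r) // prints "weave.local." with the default template
}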

Re command names... status, info and report all make good names. Agree that inspect is bad for the reasons cited. I suggest weave report for the JSON, especially if that is what we want people to send us for troubleshooting. As for whether we stick to status or switch to info, I don't care that much.

@awh (Contributor, Author) commented Aug 3, 2015

isn't that just a subset of the info one gets from weave ps?

That's actually peer name/owned range/version from the IPAM ring. I'll adjust the template to use the more verbose labelled style, as there shouldn't be that many entries...

Instead the connection/reconnect list should indicate when the address is a direct peer

Brilliant - the invariant you mentioned sounds correct from memory, but I will check.

ACK on your remaining points. More commits incoming.

@awh (Contributor, Author) commented Aug 3, 2015

vagrant@host1:~$ weave status ipam entries
       Peer: 2a:4a:5c:8b:8e:41
 OwnedRange: 10.32.0.0
    Version: 0

       Peer: ca:b4:3e:0d:76:35
 OwnedRange: 10.40.0.0
    Version: 5

       Peer: 7a:93:20:a5:4c:09
 OwnedRange: 10.44.0.1
    Version: 0

       Peer: ca:b4:3e:0d:76:35
 OwnedRange: 10.47.255.255
    Version: 0

@awh force-pushed the issues/1025-enhanced-weave-status branch from c213671 to af6f664 on August 3, 2015 16:35
@rade (Member) commented Aug 4, 2015

re weave status ipam entries.... If what is shown are ranges, why aren't they displayed as...ranges? Does the range extend to the next entry? I think this is all quite hard to interpret. And frankly I doubt any user would ever want to look at it. I suggest ditching it.

@awh force-pushed the issues/1025-enhanced-weave-status branch from 1f6864c to def1e5e on August 4, 2015 11:22
@awh (Contributor, Author) commented Aug 4, 2015

Current state of play:

vagrant@host1:~$ weave status
          Service: router
             Port: 6783
             Name: 16:6a:f6:7c:d3:a3
         NickName: host1
       Encryption: disabled
    PeerDiscovery: enabled
            Peers: 3
             MACs: 16
    UnicastRoutes: 3
  BroadcastRoutes: 3
      DirectPeers: 2
     Reconnecting: 1
    Unestablished: 0

          Service: ipam
            Range: [10.32.0.0-10.48.0.0)
    DefaultSubnet: 10.32.0.0/12
          Entries: 3

          Service: dns
           Domain: weave.local.
          Address: 0.0.0.0:53
              TTL: 1
          Entries: 9

          Service: proxy
          Address: tcp://127.0.0.1:12375
vagrant@host1:~$ weave status peers
de:2d:da:df:3d:a0(host3) (4) (UID 14086907666721500540)
   -> 16:6a:f6:7c:d3:a3(host1) [192.168.48.11:55304]
   -> 2e:c8:00:aa:5e:bc(host2) [192.168.48.12:6783]
16:6a:f6:7c:d3:a3(host1) (6) (UID 14872807027590051473)
   -> de:2d:da:df:3d:a0(host3) [192.168.48.13:6783]
   -> 2e:c8:00:aa:5e:bc(host2) [192.168.48.12:37442]
2e:c8:00:aa:5e:bc(host2) (4) (UID 4582109497457910668)
   -> 16:6a:f6:7c:d3:a3(host1) [192.168.48.11:6783]
   -> de:2d:da:df:3d:a0(host3) [192.168.48.13:36514]
vagrant@host1:~$ weave status connections
     Address: 192.168.48.12:37442
        Type: direct
        Name: 2e:c8:00:aa:5e:bc
    NickName: host2
   Direction: inbound
       State: established

     Address: 192.168.48.13:6783
        Type: discovered
        Name: de:2d:da:df:3d:a0
    NickName: host3
   Direction: outbound
       State: established

     Address: 192.168.48.14:6783
        Type: direct
   LastError: dial tcp4 192.168.48.14:6783: no route to host
 TryInterval: 15.1875s
     NextTry: 2015-08-04 11:25:35.278219483 +0000 UTC
vagrant@host1:~$ weave status dns
16:6a:f6:7c:d3:a3 6f77ee896aa4 one.weave.local. 10.32.0.1
2e:c8:00:aa:5e:bc 9055e72e5662 one.weave.local. 10.40.0.0
de:2d:da:df:3d:a0 5061ea90126d one.weave.local. 10.36.0.2
16:6a:f6:7c:d3:a3 34f93519d499 three.weave.local. 10.32.0.3
2e:c8:00:aa:5e:bc 655f75310681 three.weave.local. 10.40.0.2
de:2d:da:df:3d:a0 17d444241c35 three.weave.local. 10.36.0.4
16:6a:f6:7c:d3:a3 4da48135970e two.weave.local. 10.32.0.2
2e:c8:00:aa:5e:bc 5c64c06ddc45 two.weave.local. 10.40.0.1
de:2d:da:df:3d:a0 3fbcb4d4e46c two.weave.local. 10.36.0.3
vagrant@host1:~$ weave report
{
    "Router": {
        "Encryption": false,
        "PeerDiscovery": true,
        "Name": "16:6a:f6:7c:d3:a3",
        "NickName": "host1",
        "Port": 6783,
        "Interface": {
            "Index": 2388,
            "MTU": 65535,
            "Name": "ethwe",
            "HardwareAddr": "0jSXfpG7",
            "Flags": 19
        },
        "MACs": [
            {
                "Mac": "2e:c8:00:aa:5e:bc",
                "Name": "2e:c8:00:aa:5e:bc",
                "NickName": "host2",
                "LastSeen": "2015-08-04T11:24:53.337657503Z"
            },
            {
                "Mac": "e6:45:a7:92:17:a3",
                "Name": "de:2d:da:df:3d:a0",
                "NickName": "host3",
                "LastSeen": "2015-08-04T11:24:54.187930367Z"
            },
            {
                "Mac": "86:e8:53:4a:42:65",
                "Name": "de:2d:da:df:3d:a0",
                "NickName": "host3",
                "LastSeen": "2015-08-04T11:24:56.518438483Z"
            },
            {
                "Mac": "e2:0a:6f:51:96:50",
                "Name": "de:2d:da:df:3d:a0",
                "NickName": "host3",
                "LastSeen": "2015-08-04T11:24:57.193494841Z"
            },
            {
                "Mac": "36:8e:f2:06:9b:ab",
                "Name": "16:6a:f6:7c:d3:a3",
                "NickName": "host1",
                "LastSeen": "2015-08-04T11:24:53.157198479Z"
            },
            {
                "Mac": "6a:ff:14:e8:ee:65",
                "Name": "2e:c8:00:aa:5e:bc",
                "NickName": "host2",
                "LastSeen": "2015-08-04T11:24:53.292978969Z"
            },
            {
                "Mac": "7a:bc:59:75:e1:1f",
                "Name": "16:6a:f6:7c:d3:a3",
                "NickName": "host1",
                "LastSeen": "2015-08-04T11:24:54.857414753Z"
            },
            {
                "Mac": "12:7b:e4:4b:1f:6b",
                "Name": "16:6a:f6:7c:d3:a3",
                "NickName": "host1",
                "LastSeen": "2015-08-04T11:24:55.196195167Z"
            },
            {
                "Mac": "16:4d:f1:4e:c9:19",
                "Name": "2e:c8:00:aa:5e:bc",
                "NickName": "host2",
                "LastSeen": "2015-08-04T11:24:55.854292448Z"
            },
            {
                "Mac": "8e:f1:74:19:55:e5",
                "Name": "de:2d:da:df:3d:a0",
                "NickName": "host3",
                "LastSeen": "2015-08-04T11:24:56.846981633Z"
            },
            {
                "Mac": "16:6a:f6:7c:d3:a3",
                "Name": "16:6a:f6:7c:d3:a3",
                "NickName": "host1",
                "LastSeen": "2015-08-04T11:24:52.941241947Z"
            },
            {
                "Mac": "a6:70:cd:8f:30:fa",
                "Name": "16:6a:f6:7c:d3:a3",
                "NickName": "host1",
                "LastSeen": "2015-08-04T11:24:54.533247559Z"
            },
            {
                "Mac": "02:63:ae:c2:c2:e6",
                "Name": "2e:c8:00:aa:5e:bc",
                "NickName": "host2",
                "LastSeen": "2015-08-04T11:24:55.523818358Z"
            },
            {
                "Mac": "d6:6d:b4:db:5c:f9",
                "Name": "2e:c8:00:aa:5e:bc",
                "NickName": "host2",
                "LastSeen": "2015-08-04T11:24:56.181313274Z"
            },
            {
                "Mac": "d2:34:97:7e:91:bb",
                "Name": "16:6a:f6:7c:d3:a3",
                "NickName": "host1",
                "LastSeen": "2015-08-04T11:24:52.351916459Z"
            },
            {
                "Mac": "de:2d:da:df:3d:a0",
                "Name": "de:2d:da:df:3d:a0",
                "NickName": "host3",
                "LastSeen": "2015-08-04T11:24:54.093621307Z"
            }
        ],
        "Peers": [
            {
                "Name": "16:6a:f6:7c:d3:a3",
                "NickName": "host1",
                "UID": 14872807027590051473,
                "Version": 6,
                "Connections": [
                    {
                        "Name": "2e:c8:00:aa:5e:bc",
                        "NickName": "host2",
                        "Address": "192.168.48.12:37442",
                        "Outbound": false,
                        "Established": true
                    },
                    {
                        "Name": "de:2d:da:df:3d:a0",
                        "NickName": "host3",
                        "Address": "192.168.48.13:6783",
                        "Outbound": true,
                        "Established": true
                    }
                ]
            },
            {
                "Name": "2e:c8:00:aa:5e:bc",
                "NickName": "host2",
                "UID": 4582109497457910668,
                "Version": 4,
                "Connections": [
                    {
                        "Name": "16:6a:f6:7c:d3:a3",
                        "NickName": "host1",
                        "Address": "192.168.48.11:6783",
                        "Outbound": true,
                        "Established": true
                    },
                    {
                        "Name": "de:2d:da:df:3d:a0",
                        "NickName": "host3",
                        "Address": "192.168.48.13:36514",
                        "Outbound": false,
                        "Established": true
                    }
                ]
            },
            {
                "Name": "de:2d:da:df:3d:a0",
                "NickName": "host3",
                "UID": 14086907666721500540,
                "Version": 4,
                "Connections": [
                    {
                        "Name": "16:6a:f6:7c:d3:a3",
                        "NickName": "host1",
                        "Address": "192.168.48.11:55304",
                        "Outbound": false,
                        "Established": true
                    },
                    {
                        "Name": "2e:c8:00:aa:5e:bc",
                        "NickName": "host2",
                        "Address": "192.168.48.12:6783",
                        "Outbound": true,
                        "Established": true
                    }
                ]
            }
        ],
        "UnicastRoutes": [
            {
                "Dest": "16:6a:f6:7c:d3:a3",
                "Via": "00:00:00:00:00:00"
            },
            {
                "Dest": "2e:c8:00:aa:5e:bc",
                "Via": "2e:c8:00:aa:5e:bc"
            },
            {
                "Dest": "de:2d:da:df:3d:a0",
                "Via": "de:2d:da:df:3d:a0"
            }
        ],
        "BroadcastRoutes": [
            {
                "Source": "16:6a:f6:7c:d3:a3",
                "Via": [
                    "2e:c8:00:aa:5e:bc",
                    "de:2d:da:df:3d:a0"
                ]
            },
            {
                "Source": "2e:c8:00:aa:5e:bc",
                "Via": null
            },
            {
                "Source": "de:2d:da:df:3d:a0",
                "Via": null
            }
        ],
        "ConnectionMaker": {
            "DirectPeers": [
                "192.168.48.12",
                "192.168.48.14"
            ],
            "Reconnects": [
                {
                    "Address": "192.168.48.14:6783",
                    "TryAfter": "2015-08-04T11:26:00.577588549Z",
                    "TryInterval": "22.78125s",
                    "LastError": "dial tcp4 192.168.48.14:6783: no route to host"
                }
            ]
        }
    },
    "IPAM": {
        "Paxos": null,
        "Range": "[10.32.0.0-10.48.0.0)",
        "DefaultSubnet": "10.32.0.0/12",
        "Entries": [
            {
                "Token": "10.32.0.0",
                "Peer": "16:6a:f6:7c:d3:a3",
                "Version": 5
            },
            {
                "Token": "10.36.0.2",
                "Peer": "de:2d:da:df:3d:a0",
                "Version": 3
            },
            {
                "Token": "10.40.0.0",
                "Peer": "2e:c8:00:aa:5e:bc",
                "Version": 3
            }
        ]
    },
    "DNS": {
        "Domain": "weave.local.",
        "Address": "0.0.0.0:53",
        "TTL": 1,
        "Entries": [
            {
                "Hostname": "one.weave.local.",
                "Origin": "16:6a:f6:7c:d3:a3",
                "ContainerID": "6f77ee896aa45dc94eb04ceb8bacddb24d2c23078516f859387064e440fe27a8",
                "Address": "10.32.0.1",
                "Version": 0
            },
            {
                "Hostname": "one.weave.local.",
                "Origin": "2e:c8:00:aa:5e:bc",
                "ContainerID": "9055e72e5662d5c1903bb77b44258fc37b47f22f80062c4bce2be5416c8907ec",
                "Address": "10.40.0.0",
                "Version": 0
            },
            {
                "Hostname": "one.weave.local.",
                "Origin": "de:2d:da:df:3d:a0",
                "ContainerID": "5061ea90126d442b74e452b0ea6c92d96df0bc394319af1d5beedecd6e44be4a",
                "Address": "10.36.0.2",
                "Version": 0
            },
            {
                "Hostname": "three.weave.local.",
                "Origin": "16:6a:f6:7c:d3:a3",
                "ContainerID": "34f93519d499331d465754c048285890a9da1b8e6cfae86cd0829300f4f8ef66",
                "Address": "10.32.0.3",
                "Version": 0
            },
            {
                "Hostname": "three.weave.local.",
                "Origin": "2e:c8:00:aa:5e:bc",
                "ContainerID": "655f75310681edb3a0f1c9b8ec6777f7c66972b290aeee5128d87abe1c85dc4e",
                "Address": "10.40.0.2",
                "Version": 0
            },
            {
                "Hostname": "three.weave.local.",
                "Origin": "de:2d:da:df:3d:a0",
                "ContainerID": "17d444241c35300095df27480a443f04454b6b76c32828cd73545a481a34dde1",
                "Address": "10.36.0.4",
                "Version": 0
            },
            {
                "Hostname": "two.weave.local.",
                "Origin": "16:6a:f6:7c:d3:a3",
                "ContainerID": "4da48135970e181e9084d21d59e5b7936051d9859edc0919b5deefa671521a90",
                "Address": "10.32.0.2",
                "Version": 0
            },
            {
                "Hostname": "two.weave.local.",
                "Origin": "2e:c8:00:aa:5e:bc",
                "ContainerID": "5c64c06ddc45e4d682631eeaadc0bf0402b843fb9fe73f49b4c8b73caa937eb4",
                "Address": "10.40.0.1",
                "Version": 0
            },
            {
                "Hostname": "two.weave.local.",
                "Origin": "de:2d:da:df:3d:a0",
                "ContainerID": "3fbcb4d4e46c350ffe8df3d78eb0ecd4d874c657e215f0773aa099c9917e5cd4",
                "Address": "10.36.0.3",
                "Version": 0
            }
        ]
    }
}

@awh force-pushed the issues/1025-enhanced-weave-status branch from def1e5e to f1cf50a on August 4, 2015 11:35
@tomwilkie (Contributor)

vagrant@host1:~$ weave status dns
16:6a:f6:7c:d3:a3 6f77ee896aa4 one.weave.local. 10.32.0.1
2e:c8:00:aa:5e:bc 9055e72e5662 one.weave.local. 10.40.0.0
de:2d:da:df:3d:a0 5061ea90126d one.weave.local. 10.36.0.2
16:6a:f6:7c:d3:a3 34f93519d499 three.weave.local. 10.32.0.3
2e:c8:00:aa:5e:bc 655f75310681 three.weave.local. 10.40.0.2
de:2d:da:df:3d:a0 17d444241c35 three.weave.local. 10.36.0.4
16:6a:f6:7c:d3:a3 4da48135970e two.weave.local. 10.32.0.2
2e:c8:00:aa:5e:bc 5c64c06ddc45 two.weave.local. 10.40.0.1
de:2d:da:df:3d:a0 3fbcb4d4e46c two.weave.local. 10.36.0.3

Can we have a header, please?
And can the order be hostname, IP, container ID, peer?
And can you pad out the hostnames / IPs so it all lines up?

@tomwilkie (Contributor)

Should we include tombstones in the DNS JSON?

@tomwilkie (Contributor)

Under IPAM, Entries: 3 is probably not particularly useful to users.
Also, IPAM used to say "waiting for consensus" - does it still do that appropriately?

@rade (Member) commented Aug 4, 2015

I'd love to make weave status connections tabular. In the logs we condense a lot of information into something like ->[192.168.48.11:6783|32:5d:5b:10:7a:05(host1)]. The -> could be reversed for 'inbound', and the peer would be omitted if it's an attempting or to-be-retried connection. Using the same format as in the logs is good UX, IMO.

->[192.168.48.12:37442|2e:c8:00:aa:5e:bc(host2)] direct     established
->[192.168.48.13:6783|de:2d:da:df:3d:a0(host3)]  discovered established
->[192.168.48.14:6783]                           direct     failed(dial tcp4 192.168.48.14:6783: no route to host), retry: 2015-08-04 11:25:35.278219483 +0000 UTC

(I've dropped the retry interval)
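
As an aside, column alignment for a table like that is easy to get with text/tabwriter; a minimal sketch, where the connection data is copied from the example above and everything else is illustrative:

package main

import (
	"fmt"
	"os"
	"text/tabwriter"
)

type conn struct{ endpoint, kind, state string }

func main() {
	conns := []conn{
		{"->[192.168.48.12:37442|2e:c8:00:aa:5e:bc(host2)]", "direct", "established"},
		{"->[192.168.48.13:6783|de:2d:da:df:3d:a0(host3)]", "discovered", "established"},
		{"->[192.168.48.14:6783]", "direct", "failed(dial tcp4 192.168.48.14:6783: no route to host)"},
	}
	// tabwriter pads each tab-separated column to a common width on Flush.
	w := tabwriter.NewWriter(os.Stdout, 0, 0, 1, ' ', 0)
	for _, c := range conns {
		fmt.Fprintf(w, "%s\t%s\t%s\n", c.endpoint, c.kind, c.state)
	}
	w.Flush()
}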

In weave status peers I'd drop the UID and version.

@rade (Member) commented Aug 4, 2015

In weave status, instead of

    Reconnecting:  1
    Unestablished: 0

have

    Connections: 2 established, 0 unestablished, 1 failed

@rade (Member) commented Aug 4, 2015

             MACs: 16
    UnicastRoutes: 3
  BroadcastRoutes: 3

I'd ditch those.

@rade (Member) commented Aug 4, 2015

I'd also ditch Port: 6783. It's the in-container port, so not actually the port weave listens on.

@rade (Member) commented Aug 4, 2015

And can the order be hostname, ip, container id, peer please?

Agreed. And I'd drop the weave.local.

@rade (Member) commented Aug 4, 2015

It would be nice if peers were consistently referenced by their name and nickname. We can do that in a follow-on issue.

@awh (Contributor, Author) commented Aug 4, 2015

vagrant@host1:~$ weave status
       Service: router
          Name: ca:d4:a9:75:80:3c
      NickName: host1
    Encryption: disabled
 PeerDiscovery: enabled
         Peers: 3
   DirectPeers: 2
   Connections: 6 established, 0 unestablished, 1 failed

       Service: ipam
         Range: [10.32.0.0-10.48.0.0)
 DefaultSubnet: 10.32.0.0/12

       Service: dns
        Domain: weave.local.
       Address: 0.0.0.0:53
           TTL: 1
       Entries: 9

          Service: proxy
          Address: tcp://127.0.0.1:12375
vagrant@host1:~$ weave status peers
ca:d4:a9:75:80:3c(host1)
   -> be:83:d3:82:30:8a(host2) [192.168.48.12:36988]
   -> 1a:6f:ea:cf:a2:cd(host3) [192.168.48.13:48616]
be:83:d3:82:30:8a(host2)
   -> ca:d4:a9:75:80:3c(host1) [192.168.48.11:6783]
   -> 1a:6f:ea:cf:a2:cd(host3) [192.168.48.13:54229]
1a:6f:ea:cf:a2:cd(host3)
   -> be:83:d3:82:30:8a(host2) [192.168.48.12:6783]
   -> ca:d4:a9:75:80:3c(host1) [192.168.48.11:6783]
vagrant@host1:~$ weave status dns
one          10.36.0.2       1df9a3b20f5a 1a:6f:ea:cf:a2:cd
one          10.32.0.1       234cb9962382 be:83:d3:82:30:8a
one          10.40.0.0       5b76d251c28b ca:d4:a9:75:80:3c
three        10.36.0.4       e1011b6e8723 1a:6f:ea:cf:a2:cd
three        10.32.0.3       49c51c4d5328 be:83:d3:82:30:8a
three        10.40.0.2       54b132ac07eb ca:d4:a9:75:80:3c
two          10.36.0.3       1e92497688b2 1a:6f:ea:cf:a2:cd
two          10.32.0.2       b43642366c2d be:83:d3:82:30:8a
two          10.40.0.1       a325d63ebaf4 ca:d4:a9:75:80:3c

@awh (Contributor, Author) commented Aug 4, 2015

IPAM used to say "waiting for consensus" - does it still do that appropriately?

       Service: ipam
     Consensus: false
        Quorum: 6
    KnownNodes: 3
         Range: [10.32.0.0-10.48.0.0)
 DefaultSubnet: 10.32.0.0/12

New tabular connections:

vagrant@host1:~$ weave status connections
<- 192.168.48.12:48265   host2        direct     established
-> 192.168.48.13:6783    host3        discovered established
-> 192.168.48.15:6783                 direct     failed(dial tcp4 192.168.48.15:6783: no route to host)

@rade assigned awh on Aug 8, 2015
@awh (Contributor, Author) commented Aug 10, 2015

Regarding the version in weave status and weave report - given the existence of weave version and the presence of the version in weave report for diagnostic dumps, do we really need it in weave status? If so, it's in the wrong place IMO - the same version applies to the ipam and dns services, so it either needs pulling out separately so that it's clear it applies to all services, or alternatively replicating it under each service in case you wanted to allow them to vary independently in the future. The same applies to the JSON.

@awh (Contributor, Author) commented Aug 10, 2015

I'm not convinced that listing the target peers in weave status is a good idea - the entire point of the original issue was to remove arbitrarily long output. It's going to look awful if I have more than five peers (e.g. any production-sized deployment of weave).

@awh (Contributor, Author) commented Aug 10, 2015

weave status targets? (I'm glad you ditched 'Direct Peers', BTW - I've never been fond of it; I don't think it's a good name.)

@awh (Contributor, Author) commented Aug 10, 2015

Using the same column for remote peer name/failure reason 👍

@awh (Contributor, Author) commented Aug 10, 2015

Protocol: weave 1..2

How about using the same interval notation that we do for IPAM range?

@awh assigned rade and unassigned awh on Aug 10, 2015
@rade (Member) commented Aug 10, 2015

Protocol: weave 1..2
How about using the same interval notation that we do for IPAM range?

I considered that, but 1-2 can easily be misinterpreted as version "one dash two".

@rade assigned awh and unassigned rade on Aug 10, 2015
@rade (Member) commented Aug 10, 2015

@awh ready for another review; I'll do some squashing once you are happy.

@awh (Contributor, Author) commented Aug 11, 2015

Am ready to merge once you've done your squashing.

@awh assigned rade and unassigned awh on Aug 11, 2015
@rade force-pushed the issues/1025-enhanced-weave-status branch from ea0db16 to 1a4d0c5 on August 11, 2015 11:07
@rade force-pushed the issues/1025-enhanced-weave-status branch from 1a4d0c5 to fc9a1c8 on August 11, 2015 11:17
@rade assigned awh and unassigned rade on Aug 11, 2015
awh added a commit that referenced this pull request on Aug 11, 2015
@awh merged commit f3ad81d into master on Aug 11, 2015
@rade modified the milestone: 1.1.0 on Aug 12, 2015
@rade deleted the issues/1025-enhanced-weave-status branch on August 19, 2015 20:33
rade added a commit that referenced this pull request Aug 27, 2015