Expose node information to allow us to build proper tooling to visualize our network. #1099
Comments
@holisticode chunk IDs that are in the local datastore should NOT have any security implications, because anyone is free to do whatever they want with their node, or am I missing something?
@holisticode it is my node, so I am free to do whatever I want with it. People who care about privacy should be uploading/syncing encrypted content. Even if we don't implement this, it is trivial to run an iterator on LevelDB and print out all chunks, and therefore review all non-encrypted chunks, so we have to assume that users would do / have already done that.
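For illustration, a minimal sketch of such an iterator dump using goleveldb; the database path is hypothetical, and in a real datastore the keys are index-encoded rather than raw chunk hashes:

```go
package main

import (
	"fmt"
	"log"

	"github.com/syndtr/goleveldb/leveldb"
)

func main() {
	// hypothetical path to a node's local chunk store
	db, err := leveldb.OpenFile("/path/to/chunkdb", nil)
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()

	// walk the whole key space and print every key in hex
	iter := db.NewIterator(nil, nil)
	for iter.Next() {
		fmt.Printf("%x\n", iter.Key())
	}
	iter.Release()
	if err := iter.Error(); err != nil {
		log.Fatal(err)
	}
}
```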
@nonsense Yes, but the chunks could also be queried from the outside, like with an API: "What chunks do you have?"
@holisticode yeah, we discussed that this will be an …
I agree with @holisticode that this should be implemented over JSON-RPC, not HTTP. Regarding the points @skylenet mentioned:
- This should already be available through JSON-RPC. Do a …
- … will expose this as the kademlia known peers in the address book.
I'm assuming that there's no other way to do this apart from running a full database scan, checking each key to see whether it is a chunk hash index key and, if it is, assuming it is a chunk in the database. I really think we should maintain several database connections when using LevelDB as a backend (one for each index, basically, with a separate data folder for each of them).
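A rough sketch of that layout, with hypothetical index names and a hypothetical base directory; each index gets its own LevelDB instance and data folder:

```go
import (
	"path/filepath"

	"github.com/syndtr/goleveldb/leveldb"
)

// openIndexStores opens one LevelDB instance per index, each in its own
// data folder under baseDir. The index names here are illustrative only.
func openIndexStores(baseDir string, indexNames []string) (map[string]*leveldb.DB, error) {
	stores := make(map[string]*leveldb.DB, len(indexNames))
	for _, name := range indexNames {
		db, err := leveldb.OpenFile(filepath.Join(baseDir, name), nil)
		if err != nil {
			return nil, err
		}
		stores[name] = db
	}
	return stores, nil
}
```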
@justelad Yes, iterating is needed in every request to get all chunks. But iteration does not have to cover all indexes (for either the current or the new localstore); we only need to iterate over the retrieval index. Also, iteration does not lock the database: every iterator takes a snapshot on which it iterates, allowing other operations to commit changes to the log. I have measured that it takes around 0.7s to iterate over 1,000,000 keys on my laptop with a pretty good SSD. This is an expensive operation, with 100% CPU utilization. But I doubt that we would need to provide all chunk keys in one response. Paging is a must for these lists. Having a response with a few million items, even if they are just 32 bytes long, is not efficient for either the server or the client.
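A sketch of such a paged listing over a single index using goleveldb; the key prefix standing in for the retrieval index is hypothetical, not the actual localstore encoding. NewIterator reads from an implicit snapshot, which is why concurrent writes are not blocked:

```go
import (
	"github.com/syndtr/goleveldb/leveldb"
	"github.com/syndtr/goleveldb/leveldb/util"
)

// chunkKeysPage returns up to limit keys from one index, resuming
// strictly after cursor (the last key of the previous page).
// An empty cursor starts from the beginning of the index.
func chunkKeysPage(db *leveldb.DB, prefix, cursor []byte, limit int) ([][]byte, error) {
	rng := util.BytesPrefix(prefix)
	if len(cursor) > 0 {
		// cursor+0x00 is the smallest key strictly greater than cursor,
		// so iteration resumes just past the previous page
		rng.Start = append(append([]byte(nil), cursor...), 0x00)
	}
	// the iterator operates on a snapshot; writes proceed concurrently
	iter := db.NewIterator(rng, nil)
	defer iter.Release()

	keys := make([][]byte, 0, limit)
	for len(keys) < limit && iter.Next() {
		// copy: the slice returned by Key() is only valid until Next()
		keys = append(keys, append([]byte(nil), iter.Key()...))
	}
	return keys, iter.Error()
}
```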
@skylenet for peers, we already have `admin_peers`.
We could get the node ID with `admin_nodeInfo`.
This should be mostly enough to construct a network snapshot, once we query all nodes from a given deployment. cc @gluk256 |
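A sketch of a snapshot collector along these lines, using the go-ethereum RPC client; the endpoint URLs are placeholders, and the result types are kept loose since only a few fields would be needed:

```go
package main

import (
	"fmt"
	"log"

	"github.com/ethereum/go-ethereum/rpc"
)

// nodeSnapshot holds the per-node information for the network graph.
type nodeSnapshot struct {
	Info  map[string]interface{}
	Peers []map[string]interface{}
}

func snapshot(endpoint string) (*nodeSnapshot, error) {
	client, err := rpc.Dial(endpoint)
	if err != nil {
		return nil, err
	}
	defer client.Close()

	var s nodeSnapshot
	// node identity: enode URL, ID, listening address, ...
	if err := client.Call(&s.Info, "admin_nodeInfo"); err != nil {
		return nil, err
	}
	// currently connected peers
	if err := client.Call(&s.Peers, "admin_peers"); err != nil {
		return nil, err
	}
	return &s, nil
}

func main() {
	// placeholder endpoints for all nodes of a deployment
	for _, ep := range []string{"http://node-0:8545", "http://node-1:8545"} {
		s, err := snapshot(ep)
		if err != nil {
			log.Printf("%s: %v", ep, err)
			continue
		}
		fmt.Printf("%s: %d peers\n", ep, len(s.Peers))
	}
}
```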
@justelad getting the peers from …
I suggest as a next step we add an API to the hive. Adding an API to the hive for …
After a conversation on the standup of 29/01/2019, the decision has been taken to actually implement a … A complete …
A way to query …
Closed by ethereum/go-ethereum#18972.

Two possible solutions for …
Create an admin API that exposes the following information about individual swarm nodes: …

The API could be JSON-RPC or HTTP.

This would open up the possibility of creating tooling that could iterate over all our nodes (in private deployments) and gather this information. Something similar has already been mentioned in the roundtable, e.g. a "chunk explorer".
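For the JSON-RPC flavour, a sketch of how such an admin service could be registered, following the rpc.API convention used across go-ethereum; the namespace, service type, and method set are hypothetical:

```go
import "github.com/ethereum/go-ethereum/rpc"

// Inspector is a hypothetical admin service exposing node internals.
type Inspector struct {
	// references to the hive/kademlia and the local store would live here
}

// Peers returns the currently connected peers (stubbed out here).
func (i *Inspector) Peers() []string { return nil }

// ChunkKeys returns one page of locally stored chunk keys (stubbed out here).
func (i *Inspector) ChunkKeys(cursor string, limit int) []string { return nil }

// APIs registers the inspector under a hypothetical "inspect" namespace.
func APIs(i *Inspector) []rpc.API {
	return []rpc.API{{
		Namespace: "inspect",
		Version:   "1.0",
		Service:   i,
		Public:    false,
	}}
}
```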