# Bitswap Architecture
This is a loose sketch of a potential bitswap architecture. It's mostly based on the current architecture (plus ipfs/go-bitswap#398), but I've tried to simplify some things.
Really, this is mostly to help me understand how everything is (and could be) structured. Unfortunately, reality is complected, so I'm missing a lot of edge cases, including:

- Handling timeouts.
- Handling unresponsive peers.
- Refcounting connections.
- Freeing resources when peers disconnect.
## GetBlocks

A single "request" from the user, attached to a session.

**Channels**

- `inBlockCh` (blocks from the session)
- `outBlockCh` (blocks to the user)

**Sends**

- to Session via `requestCh`:
  - `want(cids, inBlockCh)`
  - `cancel(inBlockCh)`
- to the user via `outBlockCh`:
  - blocks

**Receives**

- from Session via `inBlockCh`:
  - `blocks(blocks)`
    - Blocks received on this channel should be buffered internally to avoid blocking the session (unless the channel buffers are sufficient).
- context (user)
- context (session)
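The GetBlocks plumbing above can be sketched in Go. This is a minimal illustration, not the real go-bitswap API: `request`, `getBlocks`, and the stand-in `Cid`/`Block` types are all hypothetical names. It shows the two channels (`inBlockCh` from the session, an output channel to the user) and the buffering that keeps the session from blocking.

```go
package main

import "fmt"

// Cid and Block stand in for the real go-cid and blocks types.
type Cid string
type Block struct{ Cid Cid }

// request is what GetBlocks sends to its Session: the wanted cids plus
// the channel the session should deliver matching blocks on.
type request struct {
	cids      []Cid
	inBlockCh chan Block // session -> GetBlocks
}

// getBlocks forwards a want to the session and relays blocks to the
// user until inBlockCh is closed. Buffering inBlockCh keeps the
// session from blocking on a slow consumer.
func getBlocks(requestCh chan<- request, cids []Cid) <-chan Block {
	in := make(chan Block, len(cids)) // buffer so the session never blocks
	out := make(chan Block)
	requestCh <- request{cids: cids, inBlockCh: in}
	go func() {
		defer close(out)
		for b := range in {
			out <- b
		}
	}()
	return out
}

func main() {
	requestCh := make(chan request, 1)
	out := getBlocks(requestCh, []Cid{"a", "b"})

	// A toy "session": echo a block back for each wanted cid.
	req := <-requestCh
	for _, c := range req.cids {
		req.inBlockCh <- Block{Cid: c}
	}
	close(req.inBlockCh)

	for b := range out {
		fmt.Println(b.Cid)
	}
}
```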
## Session

The session manages related wants/peers and makes all decisions about where and when to send wants.
**Fields**

- `id uint64` (session id)
- `wants map[cid]map[GetBlocks.inBlockCh]struct{}` (used when routing incoming blocks to GetBlocks calls)
- `cancelMap map[GetBlocks.inBlockCh]map[cid]struct{}` (used when canceling GetBlocks; not strictly necessary)

**Channels**

- `requestCh` (want/cancel)
- `blockInfoCh` (blocks, haves, don't-haves, etc.)
- `peerInfoCh` (peer availability changes)

**Receives**

- from GetBlocks via `requestCh`:
  - wants
  - cancels
- from BlockRouter via `blockInfoCh`:
  - `receive(block, have, donthave)`
- from PeerRouter via `peerInfoCh`:
  - `available(peer)`
  - `gone(peer)`
**Sends**

- to GetBlocks via `GetBlocks.inBlockCh`:
  - `blocks(blocks)`
- to BlockRouter via `BlockRouter.requestCh`:
  - `want(cids, Session.blockInfoCh)`
  - `cancel(cids, Session.blockInfoCh)`
  - `cancelAll(Session.blockInfoCh)` (cancels all requests by this session)
    - After sending this, the session must drain `blockInfoCh` until it's closed.
    - Note: the session must also receive on `blockInfoCh` while sending on this channel to prevent a deadlock.
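That deadlock note is the classic two-goroutine channel cycle: the router may be blocked sending the session a block at the same moment the session blocks sending the router a cancel. A minimal Go sketch of the send-while-draining pattern (toy types; the `cancelAll` request is modeled as simply sending the session's channel over `routerCh`, which is an assumption, not the real message shape):

```go
package main

import "fmt"

type blockInfo struct{ cid string }

// cancelAll delivers a cancel-all to the router while still receiving
// on blockInfoCh, then drains until the router closes the channel.
// Without the select, the router could be blocked sending us a block
// at the same moment we block sending it the cancel: a deadlock.
func cancelAll(routerCh chan<- chan blockInfo, blockInfoCh chan blockInfo) (drained int) {
	for {
		select {
		case routerCh <- blockInfoCh: // the cancel request, keyed by our channel
			// Cancel delivered; now drain until close.
			for range blockInfoCh {
				drained++
			}
			return drained
		case _, ok := <-blockInfoCh:
			if !ok {
				return drained
			}
			drained++ // discard in-flight info that raced the cancel
		}
	}
}

func main() {
	routerCh := make(chan chan blockInfo)
	blockInfoCh := make(chan blockInfo)

	// Toy router: sends one in-flight block, then honors the cancel.
	go func() {
		blockInfoCh <- blockInfo{cid: "inflight"}
		ch := <-routerCh
		close(ch)
	}()

	fmt.Println(cancelAll(routerCh, blockInfoCh))
}
```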
- to PeerRouter via `PeerRouter.requestCh`:
  - `interest(peers, Session.peerInfoCh)`
  - `cancel(peers, Session.peerInfoCh)`
  - `cancelAll(Session.peerInfoCh)`
    - After sending this, the session must drain `peerInfoCh` until it's closed.
    - Note: the session must also receive on `peerInfoCh` when sending these requests.
- to WantManager via `WantManager.requestCh`:
  - `send(Session.id, peers, wants, haves)`
  - `broadcast(cids, peers, Session.id)`
  - `cancel(cids, Session.id)`
  - `cancelAll(Session.id)`
  - The session can walk away from the WantManager once it has sent a cancel, as the WantManager never sends anything back.
## BlockRouter

Routes blocks (and information about blocks) to the appropriate services. All incoming blocks pass through this service.

change: This was the WantRequestManager (mostly).
**Fields**

- `wants map[cid]map[Session.blockInfoCh]struct{}` (used to route incoming information)
- `interest map[cid]map[Session.blockInfoCh]struct{}` (used to route incoming information)
- `cancelMap map[Session.blockInfoCh]map[cid]struct{}` (used for canceling; probably not necessary)

**Channels**

- `requestCh` (wants from sessions)

**Receives**

- from Session via `requestCh`:
  - `want(cids, Session.blockInfoCh)`
    - Checks for the block. If we have it, send it back. If we don't, register interest in the block.
  - `cancel(cids, Session.blockInfoCh)`
    - Unregisters interest in the block.
  - `cancelAll(Session.blockInfoCh)` (cancels all requests by this session)
    - Unregisters interest in all blocks associated with the session.
    - Closes `Session.blockInfoCh` when done to signal that the cancel is complete.
- from Network via `requestCh`:
  - `message(from, blocks, haves, donthaves)`
- from Bitswap via `requestCh`:
  - `blockPut(blocks)`

**Sends**

- to Session via `blockInfoCh`:
  - `message(from, blocks, haves, donthaves)`
- to Engine via `Engine.requestCh`:
  - `blocksAvailable(cids + block sizes)`
- to the blockstore:
  - Synchronously.
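The two routing maps above (`wants` for fan-out, `cancelMap` so `cancelAll` can find a session's registrations without scanning everything) can be sketched as follows. This is an illustrative reduction, not the real implementation; `cid` and `infoCh` are stand-in types.

```go
package main

import "fmt"

// Stand-ins for go-cid and the session's blockInfoCh.
type cid = string
type infoCh = chan cid

// router holds the BlockRouter's two maps from the sketch: wants routes
// incoming blocks to sessions; cancelMap lets cancelAll find every cid
// a given channel registered, without scanning the whole wants map.
type router struct {
	wants     map[cid]map[infoCh]struct{}
	cancelMap map[infoCh]map[cid]struct{}
}

func newRouter() *router {
	return &router{
		wants:     make(map[cid]map[infoCh]struct{}),
		cancelMap: make(map[infoCh]map[cid]struct{}),
	}
}

// want registers interest: deliver c to ch when a matching block arrives.
func (r *router) want(cs []cid, ch infoCh) {
	for _, c := range cs {
		if r.wants[c] == nil {
			r.wants[c] = make(map[infoCh]struct{})
		}
		r.wants[c][ch] = struct{}{}
		if r.cancelMap[ch] == nil {
			r.cancelMap[ch] = make(map[cid]struct{})
		}
		r.cancelMap[ch][c] = struct{}{}
	}
}

// route fans an incoming block out to every interested channel and
// returns how many sessions it reached.
func (r *router) route(c cid) int {
	n := 0
	for ch := range r.wants[c] {
		ch <- c
		n++
	}
	return n
}

// cancelAll drops every registration for ch, then closes ch to signal
// the session that the cancel is complete (as the sketch requires).
func (r *router) cancelAll(ch infoCh) {
	for c := range r.cancelMap[ch] {
		delete(r.wants[c], ch)
		if len(r.wants[c]) == 0 {
			delete(r.wants, c)
		}
	}
	delete(r.cancelMap, ch)
	close(ch)
}

func main() {
	r := newRouter()
	ch := make(infoCh, 4)
	r.want([]cid{"a", "b"}, ch)
	fmt.Println(r.route("a")) // delivered to 1 session
	r.cancelAll(ch)
	fmt.Println(r.route("a")) // delivered to 0 sessions
}
```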
## PeerRouter

Routes incoming peer information to interested sessions.

**Fields**

- `peerInterest map[peer]map[Session.peerInfoCh]struct{}` (used to route incoming information)
- `cancelMap map[Session.peerInfoCh]map[peer]struct{}` (used for canceling; probably not necessary)

**Channels**

- `requestCh` (inbound requests from the session and network)
**Receives**

- from Session via `requestCh`:
  - `interest(peers, Session.peerInfoCh)`
    - Registers session interest in those peers.
    - Checks to see if each peer is connected, notifying `peerInfoCh` when available.
  - `cancel(peers, Session.peerInfoCh)`
  - `cancelAll(Session.peerInfoCh)`
    - Unregisters interest in all peers associated with the session.
    - Closes `Session.peerInfoCh` when done to signal that the cancel is complete.
- from Network via `requestCh`:
  - `peerConnected(peer)`
  - `peerDisconnected(peer)`
**Sends**

- to Session via `peerInfoCh`:
  - `available(peer)`
  - `gone(peer)`
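The interesting detail in `interest` is the immediate notification when a peer is already connected, so sessions never miss a connection that happened before they registered. A toy sketch (illustrative names; events are plain strings standing in for the real `available`/`gone` messages):

```go
package main

import "fmt"

type peer = string
type peerCh = chan string // carries "available:<peer>" / "gone:<peer>" events

// peerRouter tracks which sessions care about which peers, plus the
// current connected set so interest() can answer immediately.
type peerRouter struct {
	interested map[peer]map[peerCh]struct{}
	connected  map[peer]bool
}

func newPeerRouter() *peerRouter {
	return &peerRouter{
		interested: make(map[peer]map[peerCh]struct{}),
		connected:  make(map[peer]bool),
	}
}

// interest registers ch for each peer; if a peer is already connected,
// the session is notified right away, per the sketch.
func (pr *peerRouter) interest(peers []peer, ch peerCh) {
	for _, p := range peers {
		if pr.interested[p] == nil {
			pr.interested[p] = make(map[peerCh]struct{})
		}
		pr.interested[p][ch] = struct{}{}
		if pr.connected[p] {
			ch <- "available:" + p
		}
	}
}

// peerConnected records the connection and fans the event out to every
// interested session.
func (pr *peerRouter) peerConnected(p peer) {
	pr.connected[p] = true
	for ch := range pr.interested[p] {
		ch <- "available:" + p
	}
}

func (pr *peerRouter) peerDisconnected(p peer) {
	pr.connected[p] = false
	for ch := range pr.interested[p] {
		ch <- "gone:" + p
	}
}

func main() {
	pr := newPeerRouter()
	pr.peerConnected("p1")
	ch := make(peerCh, 4)
	pr.interest([]peer{"p1", "p2"}, ch) // p1 already connected: instant notify
	pr.peerConnected("p2")
	pr.peerDisconnected("p1")
	close(ch)
	for ev := range ch {
		fmt.Println(ev)
	}
}
```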
## WantManager

Manages our outbound, per-peer wantlists (refcounting per session).

**Fields**

- `broadcastWants cidSessSet`
- `peerWants map[peer]*peerWant`
- `wantPeers map[cid]map[peer]struct{}`

**Channels**

- `requestCh` (inbound requests from sessions)

**Receives**

- from Session via `requestCh`:
  - `send(Session.id, peers, wants, haves)`
  - `broadcast(cids, peers, Session.id)`
  - `cancel(cids, Session.id)`
  - `cancelAll(Session.id)`
    - Cancels all wants related to the session.

**Sends**

- to PeerQueues:
  - Currently, this would just add to the peer queue synchronously, then signal it.
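The per-session refcounting can be shown in isolation: a want only goes on the wire the first time any session asks for it, and a CANCEL is only sent once no session still wants it. This sketch collapses the three fields above into a single map to show just that idea; the names are hypothetical.

```go
package main

import "fmt"

type cid = string
type sessionID = uint64

// wantManager refcounts wants per session: a want is only truly
// cancelled (and a CANCEL sent to peers) once no session holds it.
type wantManager struct {
	wants map[cid]map[sessionID]struct{}
}

func newWantManager() *wantManager {
	return &wantManager{wants: make(map[cid]map[sessionID]struct{})}
}

// send records that session s wants c; returns true if this is a new
// want that should actually go out on the wire.
func (wm *wantManager) send(s sessionID, c cid) bool {
	fresh := wm.wants[c] == nil
	if fresh {
		wm.wants[c] = make(map[sessionID]struct{})
	}
	wm.wants[c][s] = struct{}{}
	return fresh
}

// cancel drops session s's claim on c; returns true if the want is now
// unreferenced and a CANCEL should be sent to peers.
func (wm *wantManager) cancel(s sessionID, c cid) bool {
	delete(wm.wants[c], s)
	if len(wm.wants[c]) == 0 {
		delete(wm.wants, c)
		return true
	}
	return false
}

func main() {
	wm := newWantManager()
	fmt.Println(wm.send(1, "a"))   // true: first want, goes on the wire
	fmt.Println(wm.send(2, "a"))   // false: refcount bump only
	fmt.Println(wm.cancel(1, "a")) // false: session 2 still wants it
	fmt.Println(wm.cancel(2, "a")) // true: now send CANCEL
}
```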
## Engine

Tracks inbound wantlists and sends back blocks.

**Channels**

- `requestCh` (inbound requests from the network)

**Receives**

- from Network via `requestCh`:
  - `wantlist(from, wantlist, cancels)`
    - Synchronously checks to see if we have the requested blocks, enqueuing a task if we do.
    - Records that the peer wants the blocks.
- from BlockRouter via `requestCh`:
  - `blocksAvailable(cids + sizes)`
    - Checks to see if any peers want the blocks, enqueuing tasks as necessary.

**Sends**

- Writes to peer task queues, signaling workers.
- Writes to the network from workers.
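The `blocksAvailable` path is essentially a join between newly arrived blocks and the recorded inbound wantlists. A reduced sketch (illustrative types; cancels and sizes omitted for brevity):

```go
package main

import "fmt"

// task is a (peer, cid) pair queued for a worker to send.
type task struct{ peer, cid string }

// engine tracks inbound per-peer wantlists; blocksAvailable matches new
// blocks against them and enqueues send tasks.
type engine struct {
	wants map[string]map[string]struct{} // cid -> peers that want it
}

func newEngine() *engine {
	return &engine{wants: make(map[string]map[string]struct{})}
}

// wantlist records that a peer wants these cids.
func (e *engine) wantlist(from string, cids []string) {
	for _, c := range cids {
		if e.wants[c] == nil {
			e.wants[c] = make(map[string]struct{})
		}
		e.wants[c][from] = struct{}{}
	}
}

// blocksAvailable enqueues a task for every (peer, cid) match.
func (e *engine) blocksAvailable(cids []string) []task {
	var tasks []task
	for _, c := range cids {
		for p := range e.wants[c] {
			tasks = append(tasks, task{peer: p, cid: c})
		}
	}
	return tasks
}

func main() {
	e := newEngine()
	e.wantlist("p1", []string{"a", "b"})
	e.wantlist("p2", []string{"b"})
	fmt.Println(len(e.blocksAvailable([]string{"b"}))) // p1 and p2 both want "b"
}
```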
## Network

Service to interact with the network. This one is kind of funky as there's no central event loop.

**Receives** (from per-stream workers)

- Messages

**Sends** (from event handlers)

- to PeerRouter via `PeerRouter.requestCh`:
  - `peerConnected(peer)`
  - `peerDisconnected(peer)`
  - TODO: ref counting?

**Sends** (from each per-stream worker)

- to Engine via `Engine.requestCh`:
  - `wantlist(from, wantlist, cancels)`
- to BlockRouter via `BlockRouter.requestCh`:
  - `message(from, blocks, haves, donthaves)`
## Ledger

Tracks our debt ratio with our peers. Updated synchronously by both the Engine (when sending blocks) and the BlockRouter (when receiving blocks).

change: Currently, this is embedded in the Engine. It should be its own thing.

change: Currently, the ledger also tracks who wants what. Wantlists should be tracked in the engine.
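Since both the Engine and the BlockRouter update the ledger synchronously from different goroutines, it needs its own lock. A minimal sketch; the debt-ratio formula shown (`sent / (received + 1)`) is one go-bitswap has used, but treat the exact formula here as an assumption.

```go
package main

import (
	"fmt"
	"sync"
)

// ledger tracks per-peer byte totals. Both the Engine (on send) and the
// BlockRouter (on receive) update it synchronously, so a mutex guards it.
type ledger struct {
	mu        sync.Mutex
	sent, rcv map[string]uint64 // peer -> byte totals
}

func newLedger() *ledger {
	return &ledger{sent: make(map[string]uint64), rcv: make(map[string]uint64)}
}

// SentTo is called synchronously by the Engine when it sends blocks.
func (l *ledger) SentTo(peer string, n uint64) {
	l.mu.Lock()
	defer l.mu.Unlock()
	l.sent[peer] += n
}

// ReceivedFrom is called synchronously by the BlockRouter on inbound blocks.
func (l *ledger) ReceivedFrom(peer string, n uint64) {
	l.mu.Lock()
	defer l.mu.Unlock()
	l.rcv[peer] += n
}

// DebtRatio computes sent / (received + 1); the +1 avoids dividing by
// zero for peers we've never received from.
func (l *ledger) DebtRatio(peer string) float64 {
	l.mu.Lock()
	defer l.mu.Unlock()
	return float64(l.sent[peer]) / float64(l.rcv[peer]+1)
}

func main() {
	l := newLedger()
	l.SentTo("p", 300)
	l.ReceivedFrom("p", 99)
	fmt.Println(l.DebtRatio("p")) // 300 / (99+1) = 3
}
```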
If you have some time, want to take a stab at filling this out? Or modifying it to match bitswap as-is? It should make it easier to reason about the changes.