From 3957381eb241e3b830e296ef95eb16d49ae5e54d Mon Sep 17 00:00:00 2001 From: Orpheus Lummis Date: Tue, 28 Jun 2022 09:17:48 -0400 Subject: [PATCH] docs: Improve code documentation (#533) --- README.md | 26 +++++----- api/http/handler.go | 2 +- api/http/handlerfuncs_test.go | 2 +- api/http/http.go | 3 ++ api/http/router.go | 2 +- api/http/server.go | 4 +- cli/root.go | 3 ++ client/collection.go | 4 +- client/doc.go | 7 ++- client/dockey.go | 3 +- cmd/defradb/main.go | 1 + cmd/genclidocs/genclidocs.go | 3 ++ core/crdt/composite.go | 6 +-- core/crdt/doc.go | 32 ++++++------ core/crdt/lwwreg.go | 17 +++---- core/data.go | 27 +++++----- core/doc.go | 8 +-- core/key.go | 6 +-- core/node.go | 7 ++- datastore/blockstore.go | 3 +- datastore/dag.go | 5 +- datastore/doc.go | 14 ++++++ datastore/multi.go | 8 +-- datastore/store.go | 5 +- datastore/txn.go | 2 +- db/base/compare.go | 2 +- db/db.go | 15 +++--- db/fetcher/dag.go | 2 +- db/fetcher/versioned.go | 2 +- logging/logging.go | 13 +++++ merkle/clock/clock.go | 7 ++- merkle/clock/clock_test.go | 2 +- merkle/clock/doc.go | 2 +- merkle/clock/heads.go | 4 +- merkle/crdt/composite.go | 11 ++-- merkle/crdt/factory.go | 28 +++++------ merkle/crdt/lwwreg.go | 11 ++-- merkle/crdt/merklecrdt.go | 8 ++- net/doc.go | 42 ++++------------ node/node.go | 17 +++---- query/graphql/mapper/mapper.go | 22 ++++---- query/graphql/mapper/select.go | 2 +- query/graphql/mapper/targetable.go | 4 +- query/graphql/parser/doc.go | 15 +++--- query/graphql/parser/filter.go | 4 +- query/graphql/parser/mutation.go | 5 +- query/graphql/parser/query.go | 7 ++- query/graphql/parser/types/types.go | 3 ++ query/graphql/planner/commit.go | 5 +- query/graphql/planner/delete.go | 2 +- query/graphql/planner/doc.go | 75 ++++++++++++---------------- query/graphql/planner/update.go | 2 +- query/graphql/schema/descriptions.go | 2 +- query/graphql/schema/doc.go | 7 ++- query/graphql/schema/generate.go | 2 +- query/graphql/schema/manager.go | 5 +- query/graphql/schema/root.go | 14 +++--- query/graphql/schema/schema.go | 4 ++ tests/bench/README.md | 2 +- 59 files changed, 269 insertions(+), 279 deletions(-) create mode 100644 datastore/doc.go diff --git a/README.md b/README.md index 755322120e..da824fef2d 100644 --- a/README.md +++ b/README.md @@ -282,25 +282,25 @@ This only scratches the surface of the DefraDB Query Language, see below for the You can access the official DefraDB Query Language documentation online here: [https://hackmd.io/@source/BksQY6Qfw](https://hackmd.io/@source/BksQY6Qfw) -## Peer-to-Peer Data Syncronization +## Peer-to-Peer Data Synchronization DefraDB has a native P2P network builtin to each node, allowing them to exchange, synchronize, and replicate documents and commits. The P2P network uses a combination of server to server gRPC commands, gossip based pub-sub network, and a shared Distributed Hash Table, all powered by [LibP2P](https://libp2p.io/). -Unless specifying `--no-p2p` option when running `start` the default behaviour for a DefraDB node is to intialize the P2P network stack. +Unless specifying `--no-p2p` option when running `start` the default behaviour for a DefraDB node is to initialize the P2P network stack. When you start a node for the first time, DefraDB will auto generate a private key pair and store it in the `data` folder specified in the config or `--data` CLI option. Each node has a unique `Peer ID` generated based on the public key, which is printed to the console during startup. 
-You'll see a printed line: `Created LibP2P host with Peer ID XXX` where `XXX` is your nodes `Peer ID`. This is important to know if we want other nodes to connect to this node.
+You'll see a printed line: `Created LibP2P host with Peer ID XXX` where `XXX` is your node's `Peer ID`. This is important to know if we want other nodes to connect to this node.
 
 There are two types of relationships a given DefraDB node can establish with another peer, which is a pubsub peer or a replicator peer.
 
-Pubsub peers can be specified on the command line with `--peers` which accepts a comma seperated list of peer [MultiAddress](https://docs.libp2p.io/concepts/addressing/). Which take the form of `/ip4/IP_ADDRESS/tcp/PORT/p2p/PEER_ID`.
+Pubsub peers can be specified on the command line with `--peers`, which accepts a comma-separated list of peer [MultiAddress](https://docs.libp2p.io/concepts/addressing/) values of the form `/ip4/IP_ADDRESS/tcp/PORT/p2p/PEER_ID`.
 
 > If a node is listening on port 9000 with the IP address `192.168.1.12` and a Peer ID of `12D3KooWNXm3dmrwCYSxGoRUyZstaKYiHPdt8uZH5vgVaEJyzU8B` then the fully quantified multi address is `/ip4/192.168.1.12/tcp/9000/p2p/12D3KooWNXm3dmrwCYSxGoRUyZstaKYiHPdt8uZH5vgVaEJyzU8B`.
 
-Pubsub nodes *passively* synchronize data between nodes by broadcasting Document Commit updates over the pubsub channel with the document `DocKey` as the topic. This requires nodes to already be listening on this pubsub channel to recieve updates for. This is used when two nodes *already* have a shared document, and want to keep both their changes in sync with one another.
+Pubsub nodes *passively* synchronize data between nodes by broadcasting Document Commit updates over the pubsub channel with the document `DocKey` as the topic. This requires nodes to already be listening on this pubsub channel to receive updates. This is used when two nodes *already* have a shared document, and want to keep both their changes in sync with one another.
 
 Replicator nodes are specified using the CLI `rpc` command after a node has already started with `defradb rpc add-replicator `.
 
@@ -310,38 +310,38 @@ Replicator nodes *actively* push changes from the specific collection *to* the t
 
 ### PubSub Example
 
-Lets construct a simple example of two nodes (node1 & node2) connecting to one another over the pubsub network on the same machine.
+Let's construct a simple example of two nodes (node1 & node2) connecting to one another over the pubsub network on the same machine.
 
 On Node1 start a regular node with all the defaults:
 ```
 defradb start
 ```
 
-Make sure to get the `Peer ID` from the console output. Lets assume its `12D3KooWNXm3dmrwCYSxGoRUyZstaKYiHPdt8uZH5vgVaEJyzU8B`.
+Make sure to get the `Peer ID` from the console output. Let's assume it's `12D3KooWNXm3dmrwCYSxGoRUyZstaKYiHPdt8uZH5vgVaEJyzU8B`.
 
 One Node2 we need to change some of the default config options if we are running on the same machine.
 ```
 defradb start --data $HOME/.defradb/data-node2 --p2paddr /ip4/0.0.0.0/tcp/9172 --url localhost:9182 --peers /ip4/0.0.0.0/tcp/9171/p2p/12D3KooWNXm3dmrwCYSxGoRUyZstaKYiHPdt8uZH5vgVaEJyzU8B
 ```
-Lets break this down
+Let's break this down
 - `--data` specifies the data folder
 - `--p2paddr` is the multiaddress to listen on for the p2p network (default is port 9171)
 - `--url` is the HTTP address to listen on for the client HTTP and GraphQL API.
-- `--peers` is a comma-sperated list of peer multiaddresses. This will be our first node we started, with the default config options.
+- `--peers` is a comma-separated list of peer multiaddresses. This will be the first node we started, with the default config options.
 
-This will startup two nodes, connect to eachother, and establish the P2P gossib pubsub network.
+This will start up two nodes, connect them to each other, and establish the P2P gossip pubsub network.
 
 ### Replicator Example
 
-Lets construct a simple example of Node1 *replicating* to Node2.
+Let's construct a simple example of Node1 *replicating* to Node2.
 
-Node1 is the leader, lets startup the node **and** define a collection.
+Node1 is the leader, let's start up the node **and** define a collection.
 ```
 defradb start
 ```
 
-On Node2 lets startup a node
+On Node2 let's start up a node
 ```
 defradb start --data $HOME/.defradb/data-node2 --p2paddr /ip4/0.0.0.0/tcp/9172 --url localhost:9182
 ```
diff --git a/api/http/handler.go b/api/http/handler.go
index 59edf12310..c9879fcf40 100644
--- a/api/http/handler.go
+++ b/api/http/handler.go
@@ -78,7 +78,7 @@ func (h *handler) handle(f http.HandlerFunc) http.HandlerFunc {
 func getJSON(req *http.Request, v interface{}) error {
 	err := json.NewDecoder(req.Body).Decode(v)
 	if err != nil {
-		return errors.Wrap(err, "unmarshall error")
+		return errors.Wrap(err, "unmarshal error")
 	}
 	return nil
 }
diff --git a/api/http/handlerfuncs_test.go b/api/http/handlerfuncs_test.go
index d6d71e857b..1493894452 100644
--- a/api/http/handlerfuncs_test.go
+++ b/api/http/handlerfuncs_test.go
@@ -267,7 +267,7 @@ func TestExecGQLHandlerContentTypeJSONWithJSONError(t *testing.T) {
 	assert.Contains(t, errResponse.Errors[0].Extensions.Stack, "invalid character")
 	assert.Equal(t, http.StatusBadRequest, errResponse.Errors[0].Extensions.Status)
 	assert.Equal(t, "Bad Request", errResponse.Errors[0].Extensions.HTTPError)
-	assert.Equal(t, "unmarshall error: invalid character ':' after array element", errResponse.Errors[0].Message)
+	assert.Equal(t, "unmarshal error: invalid character ':' after array element", errResponse.Errors[0].Message)
 }
 
 func TestExecGQLHandlerContentTypeJSON(t *testing.T) {
diff --git a/api/http/http.go b/api/http/http.go
index cea5da641a..8430543e53 100644
--- a/api/http/http.go
+++ b/api/http/http.go
@@ -8,6 +8,9 @@
 // by the Apache License, Version 2.0, included in the file
 // licenses/APL.txt.
 
+/*
+Package http provides DefraDB's HTTP API, offering various capabilities.
+*/
 package http
 
 import "github.com/sourcenetwork/defradb/logging"
diff --git a/api/http/router.go b/api/http/router.go
index 328c4ecf3e..4b91f49773 100644
--- a/api/http/router.go
+++ b/api/http/router.go
@@ -61,7 +61,7 @@ func setRoutes(h *handler) *handler {
 	return h
 }
 
-// JoinPaths takes a base path and any number of additionnal paths
+// JoinPaths takes a base path and any number of additional paths
 // and combines them safely to form a full URL path.
 // The base must start with a http or https.
 func JoinPaths(base string, paths ...string) (*url.URL, error) {
diff --git a/api/http/server.go b/api/http/server.go
index a3e31ea493..f21d5eef63 100644
--- a/api/http/server.go
+++ b/api/http/server.go
@@ -16,7 +16,7 @@ import (
 	"github.com/sourcenetwork/defradb/client"
 )
 
-// The Server struct holds the Handler for the HTTP API
+// Server struct holds the Handler for the HTTP API.
 type Server struct {
 	options serverOptions
 	http.Server
@@ -26,7 +26,7 @@ type serverOptions struct {
 	allowedOrigins []string
 }
 
-// NewServer instantiated a new server with the given http.Handler.
+// NewServer instantiates a new server with the given http.Handler.
func NewServer(db client.DB, options ...func(*Server)) *Server { svr := &Server{} diff --git a/cli/root.go b/cli/root.go index 67acff8575..40a9e8b7f9 100644 --- a/cli/root.go +++ b/cli/root.go @@ -8,6 +8,9 @@ // by the Apache License, Version 2.0, included in the file // licenses/APL.txt. +/* +Package cli provides the command-line interface. +*/ package cli import ( diff --git a/client/collection.go b/client/collection.go index b4d9ad2c32..20597e9ea3 100644 --- a/client/collection.go +++ b/client/collection.go @@ -78,7 +78,7 @@ type Collection interface { // // Target can be a Filter statement, a single docKey, a single document, // an array of docKeys, or an array of documents. - // It is recommened to use the respective typed versions of Update + // It is recommended to use the respective typed versions of Update // (e.g. UpdateWithFilter or UpdateWithKey) over this function if you can. // // Returns an ErrInvalidUpdateTarget error if the target type is not supported. @@ -107,7 +107,7 @@ type Collection interface { // DeleteWith deletes a target document. // // Target can be a Filter statement, a single docKey, a single document, an array of docKeys, - // or an array of documents. It is recommened to use the respective typed versions of Delete + // or an array of documents. It is recommended to use the respective typed versions of Delete // (e.g. DeleteWithFilter or DeleteWithKey) over this function if you can. // This operation will hard-delete all state relating to the given DocKey. This includes data, block, and head storage. // diff --git a/client/doc.go b/client/doc.go index 587ab7e4d5..1409775a81 100644 --- a/client/doc.go +++ b/client/doc.go @@ -9,10 +9,9 @@ // licenses/APL.txt. /* -The client package provides public members for interacting with a Defra DB instance. +Package client provides public members for interacting with a Defra DB instance. -Only calls made via the `DB` and `Collection` interfaces interact with the underlying datastores. -Currently the only provided implementation of `DB` is found in the `defra/db` package and can be -instantiated via the `NewDB` function. +Only calls made via the `DB` and `Collection` interfaces interact with the underlying datastores. Currently the only +provided implementation of `DB` is found in the `defra/db` package and can be instantiated via the `NewDB` function. */ package client diff --git a/client/dockey.go b/client/dockey.go index cc2bbfa303..d044065d46 100644 --- a/client/dockey.go +++ b/client/dockey.go @@ -53,8 +53,7 @@ type DocKey struct { } // NewDocKeyV0 creates a new doc key identified by the root data CID, peer ID, and -// namespaced by the versionNS -// TODO: Parameterize namespace Version +// namespaced by the versionNS. func NewDocKeyV0(dataCID cid.Cid) DocKey { return DocKey{ version: v0, diff --git a/cmd/defradb/main.go b/cmd/defradb/main.go index 42d7d0b7a0..14ede0907f 100644 --- a/cmd/defradb/main.go +++ b/cmd/defradb/main.go @@ -8,6 +8,7 @@ // by the Apache License, Version 2.0, included in the file // licenses/APL.txt. +// defradb is a decentralized peer-to-peer, user-centric, privacy-focused document database. package main import "github.com/sourcenetwork/defradb/cli" diff --git a/cmd/genclidocs/genclidocs.go b/cmd/genclidocs/genclidocs.go index 003b84941d..dce985f6a3 100644 --- a/cmd/genclidocs/genclidocs.go +++ b/cmd/genclidocs/genclidocs.go @@ -8,6 +8,9 @@ // by the Apache License, Version 2.0, included in the file // licenses/APL.txt. 
+/*
+genclidocs is a tool to generate the command line interface documentation.
+*/
 package main
 
 import (
diff --git a/core/crdt/composite.go b/core/crdt/composite.go
index 0fadd71493..1e364882a8 100644
--- a/core/crdt/composite.go
+++ b/core/crdt/composite.go
@@ -37,12 +37,12 @@ type CompositeDAGDelta struct {
 	SubDAGs []core.DAGLink
 }
 
-// GetPriority gets the current priority for this delta
+// GetPriority gets the current priority for this delta.
 func (delta *CompositeDAGDelta) GetPriority() uint64 {
 	return delta.Priority
 }
 
-// SetPriority will set the priority for this delta
+// SetPriority will set the priority for this delta.
 func (delta *CompositeDAGDelta) SetPriority(prio uint64) {
 	delta.Priority = prio
 }
@@ -130,7 +130,7 @@ func (c CompositeDAG) Merge(ctx context.Context, delta core.Delta, id string) er
 
 // DeltaDecode is a typed helper to extract
 // a LWWRegDelta from a ipld.Node
-// for now lets do cbor (quick to implement)
+// for now let's do cbor (quick to implement)
 func (c CompositeDAG) DeltaDecode(node ipld.Node) (core.Delta, error) {
 	delta := &CompositeDAGDelta{}
 	pbNode, ok := node.(*dag.ProtoNode)
diff --git a/core/crdt/doc.go b/core/crdt/doc.go
index a5f8c9d31d..77dd718a87 100644
--- a/core/crdt/doc.go
+++ b/core/crdt/doc.go
@@ -8,24 +8,22 @@
 // by the Apache License, Version 2.0, included in the file
 // licenses/APL.txt.
 
-package crdt
-
-// Conflict-Free Replicated Data Types (CRDT)
-// are a data structure which can be replicated across multiple computers in a network,
-// where the replicas can be updated independently and concurrently without coordination
-// between the replicas and are able to deterministically converge to the same state.
+/*
+Package crdt implements a collection of CRDT types specifically to be used in DefraDB, using the Delta-State CRDT
+architecture to update and replicate state. It is based on the go Merkle-CRDT project.
 
-// This package implements a collection of CRDT types specifically to be used in DefraDB,
-// and use the Delta-State CRDT architecture to update and replicate state. It is based on
-// the go Merkle-CRDT project
+Conflict-Free Replicated Data Types (CRDTs) are data structures which can be replicated across multiple computers in a
+network, where the replicas can be updated independently and concurrently without coordination between the replicas and
+are able to deterministically converge to the same state.
 
-// The CRDTs shall satisfy the ReplicatedData interface which is a single merge function
-// which given two states of the same data type will merge into a single state.
+The CRDTs shall satisfy the ReplicatedData interface, which is a single merge function which, given two states of the
+same data type, will merge them into a single state.
 
-// Unless the explicitly enabling the entire state to be fully loaded into memory as an object,
-// all data will reside inside the BadgerDB datastore.
+Unless explicitly enabling the entire state to be fully loaded into memory as an object, all data will reside inside
+the BadgerDB datastore.
 
-// In general, each CRDT type will be implemented independent, and oblivious to its underlying
-// datastore, and to how it will be structured as Merkle-CRDT. Instead they will focus on their
-// core semantics and implementation and will be wrapped in handlers to ensure state persistence
-// to DBs, DAG creation, and replication to peers.
+In general, each CRDT type will be implemented independently, and oblivious to its underlying datastore, and to how it
+will be structured as Merkle-CRDT.
+Instead they will focus on their core semantics and implementation and will be wrapped in handlers to ensure state
+persistence to DBs, DAG creation, and replication to peers.
+*/
+package crdt
diff --git a/core/crdt/lwwreg.go b/core/crdt/lwwreg.go
index 2a5ddb9d41..bc86f4cf9e 100644
--- a/core/crdt/lwwreg.go
+++ b/core/crdt/lwwreg.go
@@ -43,18 +43,18 @@ type LWWRegDelta struct {
 	DocKey []byte
 }
 
-// GetPriority gets the current priority for this delta
+// GetPriority gets the current priority for this delta.
 func (delta *LWWRegDelta) GetPriority() uint64 {
 	return delta.Priority
 }
 
-// SetPriority will set the priority for this delta
+// SetPriority will set the priority for this delta.
 func (delta *LWWRegDelta) SetPriority(prio uint64) {
 	delta.Priority = prio
 }
 
-// Marshal encodes the delta using CBOR
-// for now lets do cbor (quick to implement)
+// Marshal encodes the delta using CBOR.
+// for now let's do cbor (quick to implement)
 func (delta *LWWRegDelta) Marshal() ([]byte, error) {
 	h := &codec.CborHandle{}
 	buf := bytes.NewBuffer(nil)
@@ -74,14 +74,13 @@ func (delta *LWWRegDelta) Value() interface{} {
 	return delta.Data
 }
 
-// LWWRegister Last-Writer-Wins Register
-// a simple CRDT type that allows set/get of an
-// arbitrary data type that ensures convergence
+// LWWRegister, Last-Writer-Wins Register, is a simple CRDT type that allows set/get
+// of an arbitrary data type that ensures convergence.
 type LWWRegister struct {
 	baseCRDT
 }
 
-// NewLWWRegister returns a new instance of the LWWReg with the given ID
+// NewLWWRegister returns a new instance of the LWWReg with the given ID.
 func NewLWWRegister(store datastore.DSReaderWriter, key core.DataStoreKey) LWWRegister {
 	return LWWRegister{
 		baseCRDT: newBaseCRDT(store, key),
@@ -171,7 +170,7 @@ func (reg LWWRegister) setValue(ctx context.Context, val []byte, priority uint64
 
 // DeltaDecode is a typed helper to extract
 // a LWWRegDelta from a ipld.Node
-// for now lets do cbor (quick to implement)
+// for now let's do cbor (quick to implement)
 func (reg LWWRegister) DeltaDecode(node ipld.Node) (core.Delta, error) {
 	delta := &LWWRegDelta{}
 	pbNode, ok := node.(*dag.ProtoNode)
diff --git a/core/data.go b/core/data.go
index 44d844be2d..2709e7b0ec 100644
--- a/core/data.go
+++ b/core/data.go
@@ -12,17 +12,17 @@ package core
 
 import "strings"
 
-// Span is a range of keys from [Start, End)
+// Span is a range of keys from [Start, End).
 type Span interface {
-	// Start returns the starting key of the Span
+	// Start returns the starting key of the Span.
 	Start() DataStoreKey
-	// End returns the ending key of the Span
+	// End returns the ending key of the Span.
 	End() DataStoreKey
-	// Contains returns true of the Span contains the provided Span's range
+	// Contains returns true if the Span contains the provided Span's range.
 	Contains(Span) bool
-	// Equal returns true if the provided Span is equal to the current
+	// Equal returns true if the provided Span is equal to the current.
 	Equal(Span) bool
-	// Compare returns -1 if the provided span is less, 0 if it is equal, and 1 if its greater
+	// Compare returns -1 if the provided span is less, 0 if it is equal, and 1 if it is greater.
 	Compare(Span) SpanComparisonResult
 }
 
@@ -39,22 +39,22 @@ func NewSpan(start, end DataStoreKey) Span {
 	}
 }
 
-// Start returns the starting key of the Span
+// Start returns the starting key of the Span.
 func (s span) Start() DataStoreKey {
 	return s.start
 }
 
-// End returns the ending key of the Span
+// End returns the ending key of the Span.
 func (s span) End() DataStoreKey {
 	return s.end
 }
 
-// Contains returns true of the Span contains the provided Span's range
+// Contains returns true if the Span contains the provided Span's range.
 func (s span) Contains(s2 Span) bool {
 	panic("not implemented") // TODO: Implement
 }
 
-// Equal returns true if the provided Span is equal to the current
+// Equal returns true if the provided Span is equal to the current.
 func (s span) Equal(s2 Span) bool {
 	panic("not implemented") // TODO: Implement
 }
@@ -153,10 +153,10 @@ func isAdjacent(this DataStoreKey, other DataStoreKey) bool {
 		this.ToString() == other.PrefixEnd().ToString())
 }
 
-// Spans is a collection of individual spans
+// Spans is a collection of individual spans.
 type Spans []Span
 
-// KeyValue is a KV store response containing the resulting core.Key and byte array value
+// KeyValue is a KV store response containing the resulting core.Key and byte array value.
 type KeyValue struct {
 	Key   DataStoreKey
 	Value []byte
@@ -237,8 +237,7 @@ func (spans Spans) MergeAscending() Spans {
 }
 
 // Removes any items from the collection (given index onwards) who's end key is smaller
-// than the given value. The returned collection will be a different instance to the given
-// and the given collection will not be mutated.
+// than the given value. The returned collection will be a different instance.
 func (spans Spans) removeBefore(startIndex int, end string) Spans {
 	indexOfLastMatchingItem := -1
 	for i := startIndex; i < len(spans); i++ {
diff --git a/core/doc.go b/core/doc.go
index 9dc5f590f4..0513c5d25d 100644
--- a/core/doc.go
+++ b/core/doc.go
@@ -8,6 +8,9 @@
 // by the Apache License, Version 2.0, included in the file
 // licenses/APL.txt.
 
+/*
+Package core provides commonly shared interfaces and building blocks.
+*/
 package core
 
 const DocKeyFieldIndex int = 0
@@ -78,7 +81,7 @@ type DocumentMapping struct {
 
 	// The set of fields available using this mapping.
 	//
-	// If a field-name is not in this collection, it esentially doesn't exist.
+	// If a field-name is not in this collection, it essentially doesn't exist.
 	// Collection should include fields that are not rendered to the consumer.
 	// Multiple fields may exist for any given name (for example if a property
 	// exists under different aliases/filters).
@@ -103,8 +106,7 @@ func NewDocumentMapping() *DocumentMapping {
 	}
 }
 
-// CloneWithoutRender deep copies the source mapping skipping over the
-// RenderKeys.
+// CloneWithoutRender deep copies the source mapping skipping over the RenderKeys.
 func (source *DocumentMapping) CloneWithoutRender() *DocumentMapping {
 	result := DocumentMapping{
 		IndexesByName: make(map[string][]int, len(source.IndexesByName)),
diff --git a/core/key.go b/core/key.go
index be9297483f..7900207033 100644
--- a/core/key.go
+++ b/core/key.go
@@ -127,7 +127,7 @@ func DataStoreKeyFromDocKey(dockey client.DocKey) DataStoreKey {
 }
 
 // Creates a new HeadStoreKey from a string as best as it can,
-// splitting the input using '/' as a field deliminater. It assumes
+// splitting the input using '/' as a field delimiter. It assumes
 // that the input string is in the following format:
 //
 // /[DocKey]/[FieldId]/[Cid]
@@ -153,7 +153,7 @@ func NewHeadStoreKey(key string) (HeadStoreKey, error) {
 }
 
 // Returns a formatted collection key for the system data store.
-// it assumes the name of the collection is non-empty.
+// It assumes the name of the collection is non-empty.
func NewCollectionKey(name string) CollectionKey { return CollectionKey{CollectionName: name} } @@ -163,7 +163,7 @@ func NewCollectionSchemaKey(schemaId string) CollectionSchemaKey { } // NewSchemaKey returns a formatted schema key for the system data store. -// it assumes the name of the schema is non-empty. +// It assumes the name of the schema is non-empty. func NewSchemaKey(name string) SchemaKey { return SchemaKey{SchemaName: name} } diff --git a/core/node.go b/core/node.go index 2a05c04d0f..6e9589ea04 100644 --- a/core/node.go +++ b/core/node.go @@ -17,16 +17,15 @@ import ( ipld "github.com/ipfs/go-ipld-format" ) -// NodeDeltaPair is a Node with its underlying delta -// already extracted. Used in a channel response for streaming +// NodeDeltaPair is a Node with its underlying delta already extracted. +// Used in a channel response for streaming. type NodeDeltaPair interface { GetNode() ipld.Node GetDelta() Delta Error() error } -// A NodeGetter extended from ipld.NodeGetter with delta related -// functions +// A NodeGetter extended from ipld.NodeGetter with delta-related functions. type NodeGetter interface { ipld.NodeGetter GetDelta(context.Context, cid.Cid) (ipld.Node, Delta, error) diff --git a/datastore/blockstore.go b/datastore/blockstore.go index bcd57c1983..ae5b66599a 100644 --- a/datastore/blockstore.go +++ b/datastore/blockstore.go @@ -41,8 +41,7 @@ import ( // respective substores don't need to optimize or worry about Batching/Txn. // Hence the simplified DSReaderWriter. -// ErrHashMismatch is an error returned when the hash of a block -// is different than expected. +// ErrHashMismatch is an error returned when the hash of a block is different than expected. var ErrHashMismatch = errors.New("block in storage has different hash than requested") // defradb/store.ErrNotFound => error diff --git a/datastore/dag.go b/datastore/dag.go index f0d6f4b8cb..c85197a528 100644 --- a/datastore/dag.go +++ b/datastore/dag.go @@ -14,7 +14,7 @@ import ( blockstore "github.com/ipfs/go-ipfs-blockstore" ) -// DAGStore is the interface to the underlying BlockStore and BlockService +// DAGStore is the interface to the underlying BlockStore and BlockService. type dagStore struct { blockstore.Blockstore // become a Blockstore store DSReaderWriter @@ -22,8 +22,7 @@ type dagStore struct { // bserv blockservice.BlockService } -// NewDAGStore creates a new DAGStore with the supplied -// Batching datastore +// NewDAGStore creates a new DAGStore with the supplied Batching datastore. func NewDAGStore(store DSReaderWriter) DAGStore { dstore := &dagStore{ Blockstore: NewBlockstore(store), diff --git a/datastore/doc.go b/datastore/doc.go new file mode 100644 index 0000000000..5a0e47054b --- /dev/null +++ b/datastore/doc.go @@ -0,0 +1,14 @@ +// Copyright 2022 Democratized Data Foundation +// +// Use of this software is governed by the Business Source License +// included in the file licenses/BSL.txt. +// +// As of the Change Date specified in that file, in accordance with +// the Business Source License, use of this software will be governed +// by the Apache License, Version 2.0, included in the file +// licenses/APL.txt. + +/* +Package datastore provides the various datastore-related facilities. 
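+
+As a hedged usage sketch (the rootstore value here is assumed; MultiStoreFrom and the
+accessors below are defined in this package):
+
+	ms := datastore.MultiStoreFrom(rootstore)
+	data := ms.Datastore()  // document data
+	heads := ms.Headstore() // MerkleClock heads
+	blocks := ms.DAGstore() // IPLD blocks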
+*/ +package datastore diff --git a/datastore/multi.go b/datastore/multi.go index bbefbfc263..74654b80c4 100644 --- a/datastore/multi.go +++ b/datastore/multi.go @@ -47,22 +47,22 @@ func MultiStoreFrom(rootstore DSReaderWriter) MultiStore { return ms } -// Datastore implements MultiStore +// Datastore implements MultiStore. func (ms multistore) Datastore() DSReaderWriter { return ms.data } -// Headstore implements MultiStore +// Headstore implements MultiStore. func (ms multistore) Headstore() DSReaderWriter { return ms.head } -// DAGstore implements MultiStore +// DAGstore implements MultiStore. func (ms multistore) DAGstore() DAGStore { return ms.dag } -// Rootstore implements MultiStore +// Rootstore implements MultiStore. func (ms multistore) Rootstore() DSReaderWriter { return ms.root } diff --git a/datastore/store.go b/datastore/store.go index 5615edcff4..42b9948aab 100644 --- a/datastore/store.go +++ b/datastore/store.go @@ -21,8 +21,7 @@ var ( log = logging.MustNewLogger("defradb.store") ) -// MultiStore is an interface wrapper around the 3 main types of stores needed for -// MerkleCRDTs +// MultiStore is an interface wrapper around the 3 main types of stores needed for MerkleCRDTs. type MultiStore interface { Rootstore() DSReaderWriter @@ -45,7 +44,7 @@ type MultiStore interface { } // DSReaderWriter simplifies the interface that is exposed by a -// DSReaderWriter into its subcomponents Reader and Writer. +// DSReaderWriter into its sub-components Reader and Writer. // Using this simplified interface means that both DSReaderWriter // and ds.Txn satisfy the interface. Due to go-datastore#113 and // go-datastore#114 ds.Txn no longer implements DSReaderWriter diff --git a/datastore/txn.go b/datastore/txn.go index 6f32358cab..7ec6306402 100644 --- a/datastore/txn.go +++ b/datastore/txn.go @@ -18,7 +18,7 @@ import ( "github.com/sourcenetwork/defradb/datastore/iterable" ) -// Txn is a common interface to the db.Txn struct +// Txn is a common interface to the db.Txn struct. type Txn interface { MultiStore diff --git a/db/base/compare.go b/db/base/compare.go index e35eca2d58..5df5f027f9 100644 --- a/db/base/compare.go +++ b/db/base/compare.go @@ -23,7 +23,7 @@ import ( // returns 1 if a > b. // // The only possible values for a and b is a concrete field type -// and they are always the same type as eachother. +// and they are always the same type as each other. // @todo: Handle list/slice/array fields func Compare(a, b interface{}) int { switch v := a.(type) { diff --git a/db/db.go b/db/db.go index 41a8db009a..bb70fc1aa9 100644 --- a/db/db.go +++ b/db/db.go @@ -8,6 +8,10 @@ // by the Apache License, Version 2.0, included in the file // licenses/APL.txt. +/* +Package db provides the implementation of the [client.DB] interface, collection operations, +and related components. +*/ package db import ( @@ -34,7 +38,7 @@ import ( var ( log = logging.MustNewLogger("defra.db") // ErrDocVerification occurs when a documents contents fail the verification during a Create() - // call against the supplied Document Key + // call against the supplied Document Key. ErrDocVerification = errors.New("The document verification failed") ErrOptionsEmpty = errors.New("Empty options configuration provided") @@ -74,7 +78,7 @@ func WithBroadcaster(bs corenet.Broadcaster) Option { } } -// NewDB creates a new instance of the DB using the given options +// NewDB creates a new instance of the DB using the given options. 
 func NewDB(ctx context.Context, rootstore ds.Batching, options ...Option) (client.DB, error) {
 	return newDB(ctx, rootstore, options...)
 }
@@ -132,7 +136,7 @@ func (db *db) Root() ds.Batching {
 	return db.rootstore
 }
 
-// Blockstore returns the internal DAG store which contains IPLD blocks
+// Blockstore returns the internal DAG store which contains IPLD blocks.
 func (db *db) Blockstore() blockstore.Blockstore {
 	return db.multistore.DAGstore()
 }
@@ -142,7 +146,7 @@ func (db *db) systemstore() datastore.DSReaderWriter {
 }
 
 // Initialize is called when a database is first run and creates all the db global meta data
-// like Collection ID counters
+// like Collection ID counters.
 func (db *db) initialize(ctx context.Context) error {
 	db.glock.Lock()
 	defer db.glock.Unlock()
@@ -197,8 +201,7 @@ func (db *db) GetRelationshipIdField(fieldName, targetType, thisType string) (st
 }
 
 // Close is called when we are shutting down the database.
-// This is the place for any last minute cleanup or releaseing
-// of resources (IE: Badger instance)
+// This is the place for any last minute cleanup or releasing of resources (e.g. the Badger instance).
 func (db *db) Close(ctx context.Context) {
 	log.Info(ctx, "Closing DefraDB process...")
 	err := db.rootstore.Close()
diff --git a/db/fetcher/dag.go b/db/fetcher/dag.go
index 189bc185f6..c01ab3d099 100644
--- a/db/fetcher/dag.go
+++ b/db/fetcher/dag.go
@@ -175,7 +175,7 @@ func (hh *heads) List() ([]cid.Cid, uint64, error) {
 	}
 	height, n := binary.Uvarint(r.Value)
 	if n <= 0 {
-		return nil, 0, errors.New("error decocding height")
+		return nil, 0, errors.New("error decoding height")
 	}
 	heads = append(heads, headCid)
 	if height > maxHeight {
diff --git a/db/fetcher/versioned.go b/db/fetcher/versioned.go
index 71fe382f26..321069c9ea 100644
--- a/db/fetcher/versioned.go
+++ b/db/fetcher/versioned.go
@@ -72,7 +72,7 @@ var (
 // Future optimizations:
 // - Incremental checkpoint/snapshotting
 // - Reverse traversal (starting from the current state, and working backwards)
-// - Create a effecient memory store for in-order traversal (BTree, etc)
+// - Create an efficient memory store for in-order traversal (BTree, etc)
 //
 // Note: Should we transition this state traversal into the CRDT objects themselves, and not
 // within a new fetcher?
diff --git a/logging/logging.go b/logging/logging.go
index 338f02d27a..7fff352c1f 100644
--- a/logging/logging.go
+++ b/logging/logging.go
@@ -16,11 +16,13 @@ import (
 
 var log = MustNewLogger("defra.logging")
 
+// KV is a key-value pair used to pass structured data to loggers.
 type KV struct {
 	key   string
 	value interface{}
 }
 
+// NewKV creates a new KV key-value pair.
 func NewKV(key string, value interface{}) KV {
 	return KV{
 		key:   key,
@@ -29,23 +31,34 @@ func NewKV(key string, value interface{}) KV {
 }
 
 type Logger interface {
+	// Debug logs a message at debug log level. Key-value pairs can be added.
 	Debug(ctx context.Context, message string, keyvals ...KV)
+	// Info logs a message at info log level. Key-value pairs can be added.
 	Info(ctx context.Context, message string, keyvals ...KV)
+	// Warn logs a message at warn log level. Key-value pairs can be added.
 	Warn(ctx context.Context, message string, keyvals ...KV)
+	// Error logs a message at error log level. Key-value pairs can be added.
 	Error(ctx context.Context, message string, keyvals ...KV)
+	// ErrorE logs a message and an error at error log level. Key-value pairs can be added.
 	ErrorE(ctx context.Context, message string, err error, keyvals ...KV)
+	// Fatal logs a message at fatal log level. Key-value pairs can be added.
 	Fatal(ctx context.Context, message string, keyvals ...KV)
+	// FatalE logs a message and an error at fatal log level. Key-value pairs can be added.
 	FatalE(ctx context.Context, message string, err error, keyvals ...KV)
+	// Flush flushes any buffered log entries.
 	Flush() error
+	// ApplyConfig updates the logger with a new config.
 	ApplyConfig(config Config)
 }
 
+// MustNewLogger creates and registers a new logger with the given name, and panics if there is an error.
 func MustNewLogger(name string) Logger {
 	logger := mustNewLogger(name)
 	register(name, logger)
 	return logger
 }
 
+// SetConfig updates all registered loggers with the given config.
 func SetConfig(newConfig Config) {
 	updatedConfig := setConfig(newConfig)
 	updateLoggers(updatedConfig)
diff --git a/merkle/clock/clock.go b/merkle/clock/clock.go
index 7afb07c9d5..01ed8d4f71 100644
--- a/merkle/clock/clock.go
+++ b/merkle/clock/clock.go
@@ -35,8 +35,7 @@ type MerkleClock struct {
 	crdt core.ReplicatedData
 }
 
-// NewMerkleClock returns a new merkle clock to read/write events (deltas) to
-// the clock
+// NewMerkleClock returns a new MerkleClock to read/write events (deltas) to the clock.
 func NewMerkleClock(
 	headstore datastore.DSReaderWriter,
 	dagstore datastore.DAGStore,
@@ -86,9 +85,9 @@ func (mc *MerkleClock) putBlock(
 
 // @todo Change AddDAGNode to AddDelta
 
-// AddDAGNode adds a new delta to the existing DAG for this MerkleClock
+// AddDAGNode adds a new delta to the existing DAG for this MerkleClock.
 // It checks the current heads, sets the delta priority in the merkle dag
-// adds it to the blockstore the runs ProcessNode
+// adds it to the blockstore, then runs ProcessNode.
 func (mc *MerkleClock) AddDAGNode(
 	ctx context.Context,
 	delta core.Delta,
diff --git a/merkle/clock/clock_test.go b/merkle/clock/clock_test.go
index d0d51736b8..06ce54149b 100644
--- a/merkle/clock/clock_test.go
+++ b/merkle/clock/clock_test.go
@@ -73,7 +73,7 @@ func TestMerkleClockPutBlock(t *testing.T) {
 	// tested as well here.
 }
 
-func TetMerkleClockPutBlockWithHeads(t *testing.T) {
+func TestMerkleClockPutBlockWithHeads(t *testing.T) {
 	ctx := context.Background()
 	clk := newTestMerkleClock()
 	delta := &crdt.LWWRegDelta{
diff --git a/merkle/clock/doc.go b/merkle/clock/doc.go
index c21629d730..314999ac77 100644
--- a/merkle/clock/doc.go
+++ b/merkle/clock/doc.go
@@ -13,7 +13,7 @@ package clock
 // CRDTs are composed of two structures, the payload and a clock. The
 // payload is the actual CRDT data which abides by the merge semantics.
 // The clock is a mechanism to provide a casual ordering of events, so
-// we can determine which event proceeded eachother and apply the
+// we can determine which events preceded others and apply the
 // various merge strategies.
 //
 // MerkleCRDTs are similar, they contain a CRDT payload, but instead
diff --git a/merkle/clock/heads.go b/merkle/clock/heads.go
index 07b10ed84a..b75103e4c0 100644
--- a/merkle/clock/heads.go
+++ b/merkle/clock/heads.go
@@ -93,7 +93,7 @@ func (hh *heads) Len(ctx context.Context) (int, error) {
 	return len(list), err
 }
 
-// Replace replaces a head with a new cid.
+// Replace replaces a head with a new CID.
func (hh *heads) Replace(ctx context.Context, h, c cid.Cid, height uint64) error { log.Info( ctx, @@ -172,7 +172,7 @@ func (hh *heads) List(ctx context.Context) ([]cid.Cid, uint64, error) { height, n := binary.Uvarint(r.Value) if n <= 0 { - return nil, 0, errors.New("error decocding height") + return nil, 0, errors.New("error decoding height") } heads = append(heads, headKey.Cid) if height > maxHeight { diff --git a/merkle/crdt/composite.go b/merkle/crdt/composite.go index f160e3fc12..abc15dba69 100644 --- a/merkle/crdt/composite.go +++ b/merkle/crdt/composite.go @@ -52,8 +52,7 @@ func init() { } } -// MerkleCompositeDAG is a MerkleCRDT implementation of the CompositeDAG -// using MerkleClocks +// MerkleCompositeDAG is a MerkleCRDT implementation of the CompositeDAG using MerkleClocks. type MerkleCompositeDAG struct { *baseMerkleCRDT // core.ReplicatedData @@ -61,7 +60,7 @@ type MerkleCompositeDAG struct { } // NewMerkleCompositeDAG creates a new instance (or loaded from DB) of a MerkleCRDT -// backed by a CompositeDAG CRDT +// backed by a CompositeDAG CRDT. func NewMerkleCompositeDAG( datastore datastore.DSReaderWriter, headstore datastore.DSReaderWriter, @@ -87,9 +86,7 @@ func NewMerkleCompositeDAG( } } -// Set sets the values of CompositeDAG. -// The value is always the object from the -// mutation operations. +// Set sets the values of CompositeDAG. The value is always the object from the mutation operations. func (m *MerkleCompositeDAG) Set( ctx context.Context, patch []byte, @@ -107,7 +104,7 @@ func (m *MerkleCompositeDAG) Set( return c, m.Broadcast(ctx, nd, delta) } -// Value is a no-op for a CompositeDAG +// Value is a no-op for a CompositeDAG. func (m *MerkleCompositeDAG) Value(ctx context.Context) ([]byte, error) { return m.reg.Value(ctx) } diff --git a/merkle/crdt/factory.go b/merkle/crdt/factory.go index 64d8dde68c..647bbb8e40 100644 --- a/merkle/crdt/factory.go +++ b/merkle/crdt/factory.go @@ -23,11 +23,11 @@ var ( ErrFactoryTypeNoExist = errors.New("No such factory for the given type exists") ) -// MerkleCRDTInitFn instantiates a MerkleCRDT with a given key +// MerkleCRDTInitFn instantiates a MerkleCRDT with a given key. type MerkleCRDTInitFn func(core.DataStoreKey) MerkleCRDT -// MerkleCRDTFactory instantiates a MerkleCRDTInitFn with a MultiStore -// returns a MerkleCRDTInitFn with all the necessary stores set +// MerkleCRDTFactory instantiates a MerkleCRDTInitFn with a MultiStore. +// Returns a MerkleCRDTInitFn with all the necessary stores set. type MerkleCRDTFactory func( mstore datastore.MultiStore, schemaID string, @@ -36,7 +36,7 @@ type MerkleCRDTFactory func( // Factory is a helper utility for instantiating new MerkleCRDTs. // It removes some of the overhead of having to coordinate all the various -// store parameters on every single new MerkleCRDT creation +// store parameters on every single new MerkleCRDT creation. type Factory struct { crdts map[client.CType]*MerkleCRDTFactory multistore datastore.MultiStore @@ -49,8 +49,8 @@ var ( DefaultFactory = NewFactory(nil) ) -// NewFactory returns a newly instanciated factory object with the assigned stores -// It may be called with all stores set to nil +// NewFactory returns a newly instantiated factory object with the assigned stores. +// It may be called with all stores set to nil. 
 func NewFactory(multistore datastore.MultiStore) *Factory {
 	return &Factory{
 		crdts:      make(map[client.CType]*MerkleCRDTFactory),
@@ -66,7 +66,7 @@ func (factory *Factory) Register(t client.CType, fn *MerkleCRDTFactory) error {
 }
 
 // Instance and execute the registered factory function for a given MerkleCRDT type
-// supplied with all the current stores (passed in as a datastore.MultiStore object)
+// supplied with all the current stores (passed in as a datastore.MultiStore object).
 func (factory Factory) Instance(
 	schemaID string,
 	bs corenet.Broadcaster,
@@ -107,24 +107,24 @@ func (factory Factory) getRegisteredFactory(t client.CType) (*MerkleCRDTFactory,
 	return fn, nil
 }
 
-// SetStores sets all the current stores on the Factory in one call
+// SetStores sets all the current stores on the Factory in one call.
 func (factory *Factory) SetStores(multistore datastore.MultiStore) error {
 	factory.multistore = multistore
 	return nil
 }
 
-// WithStores returns a new instance of the Factory with all the stores set
+// WithStores returns a new instance of the Factory with all the stores set.
 func (factory Factory) WithStores(multistore datastore.MultiStore) Factory {
 	factory.multistore = multistore
 	return factory
 }
 
-// Rootstore impements MultiStore
+// Rootstore implements MultiStore.
 func (factory Factory) Rootstore() datastore.DSReaderWriter {
 	return nil
 }
 
-// Data implements datastore.MultiStore and returns the current Datastore
+// Data implements datastore.MultiStore and returns the current Datastore.
 func (factory Factory) Datastore() datastore.DSReaderWriter {
 	if factory.multistore == nil {
 		return nil
@@ -132,7 +132,7 @@ func (factory Factory) Datastore() datastore.DSReaderWriter {
 	return factory.multistore.Datastore()
 }
 
-// Head implements datastore.MultiStore and returns the current Headstore
+// Head implements datastore.MultiStore and returns the current Headstore.
 func (factory Factory) Headstore() datastore.DSReaderWriter {
 	if factory.multistore == nil {
 		return nil
@@ -140,7 +140,7 @@ func (factory Factory) Headstore() datastore.DSReaderWriter {
 	return factory.multistore.Headstore()
 }
 
-// Head implements datastore.MultiStore and returns the current Headstore
+// Systemstore implements datastore.MultiStore and returns the current Systemstore.
 func (factory Factory) Systemstore() datastore.DSReaderWriter {
 	if factory.multistore == nil {
 		return nil
@@ -148,7 +148,7 @@ func (factory Factory) Systemstore() datastore.DSReaderWriter {
 	return factory.multistore.Systemstore()
 }
 
-// Dag implements datastore.MultiStore and returns the current Dagstore
+// DAGstore implements datastore.MultiStore and returns the current DAGstore.
 func (factory Factory) DAGstore() datastore.DAGStore {
 	if factory.multistore == nil {
 		return nil
diff --git a/merkle/crdt/lwwreg.go b/merkle/crdt/lwwreg.go
index c97931b408..8e9ce00128 100644
--- a/merkle/crdt/lwwreg.go
+++ b/merkle/crdt/lwwreg.go
@@ -48,8 +48,7 @@ func init() {
 	}
 }
 
-// MerkleLWWRegister is a MerkleCRDT implementation of the LWWRegister
-// using MerkleClocks
+// MerkleLWWRegister is a MerkleCRDT implementation of the LWWRegister using MerkleClocks.
 type MerkleLWWRegister struct {
 	*baseMerkleCRDT
 	// core.ReplicatedData
 
 	reg crdt.LWWRegister
 }
 
 // NewMerkleLWWRegister creates a new instance (or loaded from DB) of a MerkleCRDT
-// backed by a LWWRegister CRDT
+// backed by a LWWRegister CRDT.
 func NewMerkleLWWRegister(
 	datastore datastore.DSReaderWriter,
 	headstore datastore.DSReaderWriter,
@@ -78,7 +77,7 @@ func NewMerkleLWWRegister(
 	}
 }
 
-// Set the value of the register
+// Set sets the value of the register.
 func (mlwwreg *MerkleLWWRegister) Set(ctx context.Context, value []byte) (cid.Cid, error) {
 	// Set() call on underlying LWWRegister CRDT
 	// persist/publish delta
@@ -87,13 +86,13 @@ func (mlwwreg *MerkleLWWRegister) Set(ctx context.Context, value []byte) (cid.Ci
 	return c, err
 }
 
-// Value will retrieve the current value from the db
+// Value will retrieve the current value from the db.
 func (mlwwreg *MerkleLWWRegister) Value(ctx context.Context) ([]byte, error) {
 	return mlwwreg.reg.Value(ctx)
 }
 
 // Merge writes the provided delta to state using a supplied
-// merge semantic
+// merge semantic.
 func (mlwwreg *MerkleLWWRegister) Merge(ctx context.Context, other core.Delta, id string) error {
 	return mlwwreg.reg.Merge(ctx, other, id)
 }
diff --git a/merkle/crdt/merklecrdt.go b/merkle/crdt/merklecrdt.go
index 2db30a5296..9d82eaa3c1 100644
--- a/merkle/crdt/merklecrdt.go
+++ b/merkle/crdt/merklecrdt.go
@@ -40,10 +40,8 @@ var (
 	_ core.ReplicatedData = (*baseMerkleCRDT)(nil)
 )
 
-// The baseMerkleCRDT handles the merkle crdt overhead functions
-// that aren't CRDT specific like the mutations and state retrieval
-// functions. It handles creating and publishing the crdt DAG with
-// the help of the MerkleClock
+// baseMerkleCRDT handles the MerkleCRDT overhead functions that aren't CRDT specific like the mutations and state
+// retrieval functions. It handles creating and publishing the CRDT DAG with the help of the MerkleClock.
 type baseMerkleCRDT struct {
 	clock core.MerkleClock
 	crdt  core.ReplicatedData
@@ -71,7 +69,7 @@ func (base *baseMerkleCRDT) ID() string {
 	return base.crdt.ID()
 }
 
-// Publishes the delta to state
+// Publish publishes the delta to state.
 func (base *baseMerkleCRDT) Publish(
 	ctx context.Context,
 	delta core.Delta,
diff --git a/net/doc.go b/net/doc.go
index d890458278..22447e7dfd 100644
--- a/net/doc.go
+++ b/net/doc.go
@@ -11,42 +11,20 @@
 // limitations under the License.
 
 /*
-Package net provides p2p network functions for the core DefraDB
-instance.
+Package net provides p2p network functions for the core DefraDB instance.
 
-Notable design descision. All DocKeys (Documents) have their own
-respective PubSub topics.
+Notable design decision: all DocKeys (Documents) have their own respective PubSub topics.
 
-@todo: Needs review/scrutiny.
-
-Its structured as follows.
-
-We define a Peer object, which encapsulates an instanciated DB
-objects, libp2p host object, libp2p DAGService.
- - Peer is responsible for storing all network related meta-data,
- maintaining open connections, pubsub mechanics, etc.
-
- - Peer object also contains a Server instance
-
-type Peer struct {
-	config
+The Peer object encapsulates an instantiated DB object, a libp2p host object, and a libp2p DAGService.
+Peer is responsible for storing all network related meta-data, maintaining open connections, pubsub mechanics, etc.
+The Peer object also contains a Server instance.
 
-	DAGService
-	libp2pHost
+The Server object is responsible for all underlying gRPC-related functions as they relate to the pubsub network.
 
-	db client.DB
+Credit: Some of the base structure of this net package and its types is inspired/inherited from
+Textile Threads (github.com/textileio/go-threads).
As such, we are omitting copyright on this "net" package +and will release this folder under the Apache 2.0 license as per the header of each file. - context??? -} - -Server object is responsible for all underlying gRPC related -functions and as it relates to the pubsub network. - -Credit: Some of the base structure of this net package and its -types is inspired/inherited from Textile Threads -(github.com/textileio/go-threads). As such, we are omitting -copyright on this "net" package and will release this folder -under the Apache 2.0 license as per the header of each file. +@todo: Needs review/scrutiny. */ - package net diff --git a/node/node.go b/node/node.go index 27f52cce01..b96ee3f701 100644 --- a/node/node.go +++ b/node/node.go @@ -8,6 +8,12 @@ // by the Apache License, Version 2.0, included in the file // licenses/APL.txt. +/* +Package node is responsible for interfacing a given DefraDB instance with a networked peer instance +and GRPC server. + +Basically it combines db/DB, net/Peer, and net/Server into a single Node object. +*/ package node import ( @@ -32,15 +38,6 @@ import ( "github.com/sourcenetwork/defradb/net" ) -/* - -Package node is responsible for interfacing a given DefraDB instance with -a networked peer instance and GRPC server. - -Basically it combines db/DB, net/Peer, and net/Server into a single Node -object. -*/ - var ( log = logging.MustNewLogger("defra.node") ) @@ -58,7 +55,7 @@ type Node struct { ctx context.Context } -// NewNode creates a new network node instance of DefraDB, wired into Libp2p +// NewNode creates a new network node instance of DefraDB, wired into libp2p. func NewNode( ctx context.Context, db client.DB, diff --git a/query/graphql/mapper/mapper.go b/query/graphql/mapper/mapper.go index 6470a7d13f..b0ba5e4c1a 100644 --- a/query/graphql/mapper/mapper.go +++ b/query/graphql/mapper/mapper.go @@ -327,7 +327,7 @@ func appendUnderlyingAggregates( return aggregates } -// appendIfNotExists attemps to match the given name and targets against existing +// appendIfNotExists attempts to match the given name and targets against existing // aggregates, if a match is not found, it will append a new aggregate. func appendIfNotExists( name string, @@ -337,8 +337,7 @@ func appendIfNotExists( ) ([]*aggregateRequest, *aggregateRequest) { field, exists := tryGetMatchingAggregate(name, targets, aggregates) if exists { - // If a match is found, there is nothing to do so we return the aggregages slice - // unchanged. + // If a match is found, there is nothing to do so we return the aggregates slice unchanged. return aggregates, field } @@ -464,8 +463,7 @@ func getCollectionName( return parsed.Name, nil } -// getTopLevelInfo returns the collection description and maps the fields directly -// on the object. +// getTopLevelInfo returns the collection description and maps the fields directly on the object. func getTopLevelInfo( descriptionsRepo *DescriptionsRepo, parsed *parser.Select, @@ -542,7 +540,7 @@ func resolveInnerFilterDependencies( if propertyMapped { // Inner properties should be recursively checked here, however at the moment - // filters do not support quering any deeper anyway. + // filters do not support querying any deeper anyway. 
// https://github.com/sourcenetwork/defradb/issues/509 continue } @@ -693,8 +691,8 @@ func toFilterMap( return key, typedClause } } else { - // If there are mutliple properties of the same name we can just take the first as - // we have no other reasonable way of identifing which property they mean if multiple + // If there are multiple properties of the same name we can just take the first as + // we have no other reasonable way of identifying which property they mean if multiple // consumer specified requestables are available. Aggregate dependencies should not // impact this as they are added after selects. index := mapping.FirstIndexOfName(sourceKey) @@ -743,8 +741,8 @@ func toGroupBy(source *parserTypes.GroupBy, mapping *core.DocumentMapping) *Grou indexes := make([]int, len(source.Fields)) for i, fieldName := range source.Fields { - // If there are mutliple properties of the same name we can just take the first as - // we have no other reasonable way of identifing which property they mean if multiple + // If there are multiple properties of the same name we can just take the first as + // we have no other reasonable way of identifying which property they mean if multiple // consumer specified requestables are available. Aggregate dependencies should not // impact this as they are added after selects. key := mapping.FirstIndexOfName(fieldName) @@ -767,8 +765,8 @@ func toOrderBy(source *parserTypes.OrderBy, mapping *core.DocumentMapping) *Orde fieldIndexes := make([]int, len(fields)) currentMapping := mapping for i, field := range fields { - // If there are mutliple properties of the same name we can just take the first as - // we have no other reasonable way of identifing which property they mean if multiple + // If there are multiple properties of the same name we can just take the first as + // we have no other reasonable way of identifying which property they mean if multiple // consumer specified requestables are available. Aggregate dependencies should not // impact this as they are added after selects. fieldIndex := currentMapping.FirstIndexOfName(field) diff --git a/query/graphql/mapper/select.go b/query/graphql/mapper/select.go index c95573feac..9a4d3ca823 100644 --- a/query/graphql/mapper/select.go +++ b/query/graphql/mapper/select.go @@ -16,7 +16,7 @@ import "github.com/sourcenetwork/defradb/core" // // It wraps child Fields belonging to this Select. type Select struct { - // Targeting infomation used to restrict or format the result. + // Targeting information used to restrict or format the result. Targetable // The document mapping for this select, describing how items yielded diff --git a/query/graphql/mapper/targetable.go b/query/graphql/mapper/targetable.go index d5802daebe..b25d4d78b7 100644 --- a/query/graphql/mapper/targetable.go +++ b/query/graphql/mapper/targetable.go @@ -127,11 +127,11 @@ type Targetable struct { // of documents returned. Limit *Limit - // An optional grouping clause, that can be specifed to group results by property + // An optional grouping clause, that can be specified to group results by property // value. 
 	GroupBy *GroupBy
 
-	// An optional order clause, that can be specifed to order results by property
+	// An optional order clause, that can be specified to order results by property
 	// value
 	OrderBy *OrderBy
 }
diff --git a/query/graphql/parser/doc.go b/query/graphql/parser/doc.go
index 66c2a7d367..2373c8928c 100644
--- a/query/graphql/parser/doc.go
+++ b/query/graphql/parser/doc.go
@@ -8,12 +8,11 @@
 // by the Apache License, Version 2.0, included in the file
 // licenses/APL.txt.
 
-// Package parser provides a structured proxy to the underlying
-// GraphQL AST and parser. Additionally it evaluates the parsed
-// filter conditions on a document.
-//
-// Given an already parsed GraphQL ast.Document, this package
-// can further parse the document into the DefraDB GraphQL
-// Query structure, representing, Select statements, fields,
-// filters, arguments, directives, etc.
+/*
+Package parser provides a structured proxy to the underlying GraphQL AST and parser.
+Additionally it evaluates the parsed filter conditions on a document.
+
+Given an already parsed GraphQL ast.Document, this package can further parse the document into the
+DefraDB GraphQL Query structure, representing Select statements, fields, filters, arguments, directives, etc.
+*/
 package parser
diff --git a/query/graphql/parser/filter.go b/query/graphql/parser/filter.go
index e4ec889ec7..cdb0b8f088 100644
--- a/query/graphql/parser/filter.go
+++ b/query/graphql/parser/filter.go
@@ -35,7 +35,7 @@ type Filter struct {
 // type condition
 
 // NewFilter parses the given GraphQL ObjectValue AST type
-// and extracts all the filter conditions into a usable map
+// and extracts all the filter conditions into a usable map.
 func NewFilter(stmt *ast.ObjectValue) (*Filter, error) {
 	conditions, err := ParseConditions(stmt)
 	if err != nil {
@@ -46,7 +46,7 @@ func NewFilter(stmt *ast.ObjectValue) (*Filter, error) {
 	}, nil
 }
 
-// NewFilterFromString creates a new filter from a string
+// NewFilterFromString creates a new filter from a string.
 func NewFilterFromString(body string) (*Filter, error) {
 	if !strings.HasPrefix(body, "{") {
 		body = "{" + body + "}"
diff --git a/query/graphql/parser/mutation.go b/query/graphql/parser/mutation.go
index 0b01bdd149..7b5f087f60 100644
--- a/query/graphql/parser/mutation.go
+++ b/query/graphql/parser/mutation.go
@@ -66,9 +66,8 @@ func (m Mutation) GetRoot() parserTypes.SelectionType {
 	return parserTypes.ObjectSelection
 }
 
-// ToSelect returns a basic Select object, with the same Name,
-// Alias, and Fields as the Mutation object. Used to create a
-// Select planNode for the mutation return objects
+// ToSelect returns a basic Select object, with the same Name, Alias, and Fields as
+// the Mutation object. Used to create a Select planNode for the mutation return objects.
 func (m Mutation) ToSelect() *Select {
 	return &Select{
 		Name:   m.Schema,
diff --git a/query/graphql/parser/query.go b/query/graphql/parser/query.go
index f573e0ccfc..75b5a99fcd 100644
--- a/query/graphql/parser/query.go
+++ b/query/graphql/parser/query.go
@@ -98,11 +98,10 @@ func (c Field) GetRoot() parserTypes.SelectionType {
 
 // ParseQuery parses a root ast.Document, and returns a
 // formatted Query object.
-// Requires a non-nil doc, will error if given a nil
-// doc
+// Requires a non-nil doc, will error if given a nil doc.
 func ParseQuery(doc *ast.Document) (*Query, error) {
 	if doc == nil {
-		return nil, errors.New("ParseQuery requires a non nil ast.Document")
+		return nil, errors.New("ParseQuery requires a non-nil ast.Document")
 	}
 	q := &Query{
 		Statement: doc,
@@ -128,7 +127,7 @@ func ParseQuery(doc *ast.Document) (*Query, error) {
 			}
 			q.Mutations = append(q.Mutations, mdef)
 		} else {
-			return nil, errors.New("Unkown graphql operation type")
+			return nil, errors.New("Unknown GraphQL operation type")
 		}
 	}
 }
diff --git a/query/graphql/parser/types/types.go b/query/graphql/parser/types/types.go
index 5c9718e491..9de9366f39 100644
--- a/query/graphql/parser/types/types.go
+++ b/query/graphql/parser/types/types.go
@@ -8,6 +8,9 @@
 // by the Apache License, Version 2.0, included in the file
 // licenses/APL.txt.
 
+/*
+Package types defines the GraphQL types used by the query service.
+*/
 package types
 
 import "github.com/graphql-go/graphql/language/ast"
diff --git a/query/graphql/planner/commit.go b/query/graphql/planner/commit.go
index 0171b97ceb..806a975235 100644
--- a/query/graphql/planner/commit.go
+++ b/query/graphql/planner/commit.go
@@ -149,9 +149,8 @@ func (p *Planner) commitSelectLatest(parsed *mapper.CommitSelect) (*commitSelect
 	return commit, nil
 }
 
-// commitSelectBlock is a CommitSelect node intialized witout a headsetScanNode, and is
-// expected to be given a target CID in the mapper.CommitSelect object. It returns
-// a single commit if found
+// commitSelectBlock is a CommitSelect node initialized without a headsetScanNode, and is expected
+// to be given a target CID in the mapper.CommitSelect object. It returns a single commit if found.
 func (p *Planner) commitSelectBlock(parsed *mapper.CommitSelect) (*commitSelectNode, error) {
 	dag := p.DAGScan(parsed)
 	if parsed.Cid != "" {
diff --git a/query/graphql/planner/delete.go b/query/graphql/planner/delete.go
index e2d3d2e92c..63186c7405 100644
--- a/query/graphql/planner/delete.go
+++ b/query/graphql/planner/delete.go
@@ -86,7 +86,7 @@ func (n *deleteNode) Next() (bool, error) {
 
 		n.isDeleting = false
 
-		// lets release the results dockeys slice memory
+		// let's release the results dockeys slice memory
 		results.DocKeys = nil
 	}
 
diff --git a/query/graphql/planner/doc.go b/query/graphql/planner/doc.go
index 6303c8c3e2..d376655b18 100644
--- a/query/graphql/planner/doc.go
+++ b/query/graphql/planner/doc.go
@@ -8,46 +8,37 @@
 // by the Apache License, Version 2.0, included in the file
 // licenses/APL.txt.
 
-// Package planner creates DefraDB GraphQL Query Plans.
-//
-// DefraDB Query Planner
-// =====================
-//
-// The DefraDB Query Planner creates an Execution Plan from the currently defined schemas,
-// collections, query, and any possible indexes.
-//
-// An execution plan is a hierarchical structure that represents how and what data to
-// query the database for. It defines what should be done at each execution step, and if it
-// should be done in parallel or sequentially.
-//
-// The plan is structured as a graph of nodes using the Volcano iterator Method Query
-//Evaluation approach.
-// Volcano - An Extensible and Parallel Query Evaluation System
-// [Paper](https://paperhub.s3.amazonaws.com/dace52a42c07f7f8348b08dc2b186061.pdf).
-//
-// This method enables extensibility, optimization, and potential parallelism.
-//
-// Each node in the plan graph implements the planNode interface. Each node is responsible
-// for generating exactly one result document. The last leaf in the graph is often a
-// data source, that generates the incoming data. All the nodes above the source leaf
-// node continuously processes the document, until it reaches the top root node. Some
-// nodes may loop continuously until they can produce a result document. Eg. JoinNodes
-// will collection batches of results from their source node, and compute joins with one
-// or more other source nodes, it will loop until it finds a result document that
-// satisfies the join predicate.
-//
-// The plan is executed as defined above, result by result, by iteratively calling the
-// Next() method. Which will either return True or False, depending on if it successfully
-// produced a record, which can be accessed via the Values() method.
-//
-// The plan starts with a base ast.Document, which represents the entire provided
-// request query string, parsed into an appropriate AST Document. The AST Document
-// is generated by the github.com/graphql-go/graphql package. It is then further
-// parsed using a native DefraDB GraphQL Parser
-// (github.com/sourcenetwork/defradb/query/graphql/parser), which converts the complex
-// AST Document, into a manageable structure, with all the relevant query information
-// readily available.
-//
-// More details about the DefraDB Query Planner can be found in the DefraDB Technical
-// Specification Document.
+/*
+Package planner creates DefraDB GraphQL Query Plans.
+
+The DefraDB Query Planner creates an Execution Plan from the currently defined schemas, collections, query, and any
+possible indexes.
+
+An execution plan is a hierarchical structure that represents how and what data to query the database for. It defines
+what should be done at each execution step, and whether it should be done in parallel or sequentially.
+
+The plan is structured as a graph of nodes using the Volcano iterator method of query evaluation (Volcano - An
+Extensible and Parallel Query Evaluation System,
+https://paperhub.s3.amazonaws.com/dace52a42c07f7f8348b08dc2b186061.pdf).
+
+This method enables extensibility, optimization, and potential parallelism.
+
+Each node in the plan graph implements the planNode interface. Each node is responsible for generating exactly one
+result document. The last leaf in the graph is often a data source that generates the incoming data. All the nodes
+above the source leaf node continuously process the document, until the top root node is reached. Some nodes may loop
+continuously until they can produce a result document. E.g. JoinNodes will collect batches of results from their source
+node, and compute joins with one or more other source nodes, looping until a result document that satisfies the join
+predicate is found.
+
+The plan is executed as defined above, result by result, by iteratively calling the Next() method, which returns true
+or false depending on whether it successfully produced a record; the record can be accessed via the Values() method.
+
+The plan starts with a base ast.Document, which represents the entire provided request query string, parsed into an
+appropriate AST Document. The AST Document is generated by the https://github.com/graphql-go/graphql package. It is
+then further parsed using a native DefraDB GraphQL Parser
+(https://github.com/sourcenetwork/defradb/query/graphql/parser), which converts the complex AST Document into a
+manageable structure, with all the relevant query information readily available.
+
+More details about the DefraDB Query Planner can be found in the DefraDB Technical Specification Document.
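+
+As an illustrative sketch only (the real planNode interface carries more methods, and the result type shown here is
+an assumption), the volcano-style iteration pattern boils down to:
+
+	type planNode interface {
+		Next() (bool, error)            // advance; true if a new result document was produced
+		Values() map[string]interface{} // the current result document
+	}
+
+	for hasNext, err := plan.Next(); hasNext && err == nil; hasNext, err = plan.Next() {
+		doc := plan.Values()
+		_ = doc // consume the result document
+	}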
+*/
 package planner
diff --git a/query/graphql/planner/update.go b/query/graphql/planner/update.go
index c5912d06ca..b0e6a1303d 100644
--- a/query/graphql/planner/update.go
+++ b/query/graphql/planner/update.go
@@ -91,7 +91,7 @@ func (n *updateNode) Next() (bool, error) {
 		}
 
 		n.isUpdating = false
 
-		// lets release the results dockeys slice memory
+		// let's release the results dockeys slice memory
 		results.DocKeys = nil
 	}
 
diff --git a/query/graphql/schema/descriptions.go b/query/graphql/schema/descriptions.go
index d5e703e1e4..f4668a61d1 100644
--- a/query/graphql/schema/descriptions.go
+++ b/query/graphql/schema/descriptions.go
@@ -146,7 +146,7 @@ func (g *Generator) CreateDescriptions(
 			// field associated with a related type, as
 			// its defined down below in the IsObject block.
 			if _, exists := desc.GetField(fname); exists {
-				// lets make sure its an _id field, otherwise
+				// let's make sure it's an _id field, otherwise
 				// we might have an error here
 				if !strings.HasSuffix(fname, "_id") {
 					return nil, fmt.Errorf("Error: found a duplicate field '%s' for type %s", fname, t.Name())
diff --git a/query/graphql/schema/doc.go b/query/graphql/schema/doc.go
index 6b7c05dd34..7614abaefa 100644
--- a/query/graphql/schema/doc.go
+++ b/query/graphql/schema/doc.go
@@ -8,9 +8,8 @@
 // by the Apache License, Version 2.0, included in the file
 // licenses/APL.txt.
 
-/* Package graphql provides the necessary schema tooling, including parsing,
-   validation, and generation for developer defined types for the GraphQL
-   implementation of DefraDB.
+/*
+Package schema provides the necessary schema tooling, including parsing, validation, and generation
+for developer-defined types for the GraphQL implementation of DefraDB.
 */
-
 package schema
diff --git a/query/graphql/schema/generate.go b/query/graphql/schema/generate.go
index 03f8ffd0e2..7acfb9c0af 100644
--- a/query/graphql/schema/generate.go
+++ b/query/graphql/schema/generate.go
@@ -190,7 +190,7 @@ func (g *Generator) fromAST(ctx context.Context, document *ast.Document) ([]*gql
 		return nil, err
 	}
 
-	// now lets generate the mutation types.
+	// now let's generate the mutation types.
 	mutationType := g.manager.schema.MutationType()
 	for _, t := range g.typeDefs {
 		fs, err := g.GenerateMutationInputForGQLType(t)
diff --git a/query/graphql/schema/manager.go b/query/graphql/schema/manager.go
index ccca3fbca9..f0d35e16a2 100644
--- a/query/graphql/schema/manager.go
+++ b/query/graphql/schema/manager.go
@@ -60,9 +60,8 @@ func (s *SchemaManager) Schema() *gql.Schema {
 }
 
 // ResolveTypes ensures all necessary types are defined, and
-// resolves any remaning thunks/closures defined on object
-// fields.
-// Should be called *after* all dependant types have been added
+// resolves any remaining thunks/closures defined on object fields.
+// It should be called *after* all dependent types have been added.
 func (s *SchemaManager) ResolveTypes() error {
 	// basically, this function just refreshes the
 	// schema.TypeMap, and runs the internal
diff --git a/query/graphql/schema/root.go b/query/graphql/schema/root.go
index 6a29939122..e1d44a32e8 100644
--- a/query/graphql/schema/root.go
+++ b/query/graphql/schema/root.go
@@ -14,7 +14,7 @@ import (
 	gql "github.com/graphql-go/graphql"
 )
 
-// orderingEnum is an enum for the Ordering argument
+// orderingEnum is an enum for the Ordering argument.
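+// For example (field and value names are illustrative only), a query could then
+// request ordered results with: User(order: {age: DESC}).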
 var orderingEnum = gql.NewEnum(gql.EnumConfig{
 	Name: "Ordering",
 	Values: gql.EnumValueConfigMap{
@@ -27,7 +27,7 @@ var orderingEnum = gql.NewEnum(gql.EnumConfig{
 	},
 })
 
-// booleanOperatorBlock filter block for boolean types
+// booleanOperatorBlock is the filter block for boolean types.
 var booleanOperatorBlock = gql.NewInputObject(gql.InputObjectConfig{
 	Name: "BooleanOperatorBlock",
 	Fields: gql.InputObjectConfigFieldMap{
@@ -49,7 +49,7 @@ var booleanOperatorBlock = gql.NewInputObject(gql.InputObjectConfig{
 	},
 })
 
-// dateTimeOperatorBlock filter block for DateTime types
+// dateTimeOperatorBlock is the filter block for DateTime types.
 var dateTimeOperatorBlock = gql.NewInputObject(gql.InputObjectConfig{
 	Name: "DateTimeOperatorBlock",
 	Fields: gql.InputObjectConfigFieldMap{
@@ -80,7 +80,7 @@ var dateTimeOperatorBlock = gql.NewInputObject(gql.InputObjectConfig{
 	},
 })
 
-// floatOperatorBlock filter block for Float types
+// floatOperatorBlock is the filter block for Float types.
 var floatOperatorBlock = gql.NewInputObject(gql.InputObjectConfig{
 	Name: "FloatOperatorBlock",
 	Fields: gql.InputObjectConfigFieldMap{
@@ -111,7 +111,7 @@ var floatOperatorBlock = gql.NewInputObject(gql.InputObjectConfig{
 	},
 })
 
-// intOperatorBlock filter block for Int types
+// intOperatorBlock is the filter block for Int types.
 var intOperatorBlock = gql.NewInputObject(gql.InputObjectConfig{
 	Name: "IntOperatorBlock",
 	Fields: gql.InputObjectConfigFieldMap{
@@ -142,7 +142,7 @@ var intOperatorBlock = gql.NewInputObject(gql.InputObjectConfig{
 	},
 })
 
-// stringOperatorBlock filter block for string types
+// stringOperatorBlock is the filter block for string types.
 var stringOperatorBlock = gql.NewInputObject(gql.InputObjectConfig{
 	Name: "StringOperatorBlock",
 	Fields: gql.InputObjectConfigFieldMap{
@@ -164,7 +164,7 @@ var stringOperatorBlock = gql.NewInputObject(gql.InputObjectConfig{
 	},
 })
 
-// idOperatorBlock filter block for ID types
+// idOperatorBlock is the filter block for ID types.
 var idOperatorBlock = gql.NewInputObject(gql.InputObjectConfig{
 	Name: "IDOperatorBlock",
 	Fields: gql.InputObjectConfigFieldMap{
diff --git a/query/graphql/schema/schema.go b/query/graphql/schema/schema.go
index 7d9b912320..967cd1a3c1 100644
--- a/query/graphql/schema/schema.go
+++ b/query/graphql/schema/schema.go
@@ -8,6 +8,10 @@
 // by the Apache License, Version 2.0, included in the file
 // licenses/APL.txt.
 
+/*
+Package schema provides the necessary schema tooling, including parsing, validation, and generation for
+developer-defined types for the GraphQL implementation of DefraDB.
+*/
 package schema
 
 import (
diff --git a/tests/bench/README.md b/tests/bench/README.md
index 702b1dd491..3e73d8667b 100644
--- a/tests/bench/README.md
+++ b/tests/bench/README.md
@@ -1,7 +1,7 @@
 # DefraDB Benchmark Suite
 This folder contains the DefraDB Benchmark Suite, its related code, sub packages, utilities, and data generators.
 
-The goal of this suite is to provide an insight to DefraDBs performance, and to provide a quantitative approach to performance analysis and comparison. As such, the benchmark results should be used soley as a relative basis, and not concrete absolute values.
+The goal of this suite is to provide insight into DefraDB's performance, and to provide a quantitative approach to performance analysis and comparison. As such, the benchmark results should be used solely as a relative basis, and not as concrete absolute values.
 
 > Database benchmarking is a notorious complex issue to provide fair evaluations, that are void of contrived examples aimed to put the database "best foot forward".
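+
+As a sketch (the flags shown are standard Go tooling, but the exact package layout of this suite is assumed, not
+verified), the benchmarks can typically be run from this folder with:
+
+```
+go test -bench=. -benchmem ./...
+```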