Local Benchmarks

Alberto Sonnino edited this page Mar 21, 2021 · 16 revisions

When running benchmarks, the codebase is automatically compiled with the feature flag benchmark. This enables the nodes to print special log lines that the Python scripts then read to compute performance metrics. These log lines are clearly indicated with comments in the code; make sure not to modify or delete them, otherwise the benchmark scripts will fail to interpret the logs.
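To illustrate why those log lines are fragile, here is a minimal sketch of how a parsing script can recover a value from a log line with a regular expression. The log format shown is hypothetical (the real format is defined in the code); the point is that any change to the line breaks the pattern match.

```python
import re

# Hypothetical example: the real log format is defined in the codebase.
# Suppose a node prints a line like this when the benchmark feature is on:
log_line = "[INFO] Committed B512 -> 1024 B"

# A parsing script might recover the payload size with a regex; if the log
# line is edited, the pattern no longer matches and parsing fails.
match = re.search(r"Committed B\d+ -> (\d+) B", log_line)
assert match is not None, "benchmark log line was modified or deleted"
payload_bytes = int(match.group(1))
print(payload_bytes)  # 1024
```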

Parametrize the benchmark

After cloning the repo and installing all dependencies, you can use Fabric to run benchmarks on your local machine. Locate the task called local in the file fabfile.py:

@task
def local(ctx):
    ...

The task specifies two types of parameters: the benchmark parameters and the node parameters. The benchmark parameters look as follows:

bench_params = {
    'nodes': 4,
    'rate': 1_000,
    'tx_size': 512,
    'duration': 20,
}

They specify the number of nodes to deploy (nodes), the input rate (in tx/s) at which the clients submit transactions to the system (rate), the size of each transaction in bytes (tx_size), and the duration of the benchmark in seconds (duration). The benchmarking script deploys one client per node and divides the input rate equally among the clients. For instance, if you configure the testbed with 4 nodes and an input rate of 1,000 tx/s (as in the example above), the scripts deploy 4 clients, each submitting transactions to one node at a rate of 250 tx/s.
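The rate division above is simple integer arithmetic; a quick sketch (variable names are illustrative, not taken from the benchmarking scripts):

```python
# Illustration of how the input rate is divided among clients.
nodes = 4
rate = 1_000  # total input rate in tx/s

# One client per node, each submitting an equal share of the rate.
rate_per_client = rate // nodes
print(rate_per_client)  # 250 tx/s per client
```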

The node parameters contain the configuration of the consensus and the mempool:

node_params = {
    'consensus': {
        'timeout_delay': 1_000,
        'sync_retry_delay': 10_000,
        'max_payload_size': 500,
        'min_block_delay': 0
    },
    'mempool': {
        'queue_capacity': 10_000,
        'sync_retry_delay': 10_000,
        'max_payload_size': 15_000,
        'min_block_delay': 0
    }
}

They are defined as follows:

  • timeout_delay (consensus): Nodes trigger a view-change when this timeout (in milliseconds) is reached.
  • queue_capacity (mempool): Maximum number of payloads that the mempool keeps in memory. When this capacity is exceeded, the mempool starts dropping incoming clients' transactions.
  • sync_retry_delay (consensus and mempool): Nodes re-broadcast sync requests when this timeout (in milliseconds) is reached.
  • max_payload_size (consensus and mempool): Maximum size of a payload (in bytes).
  • min_block_delay (consensus and mempool): Minimum delay between the creation of two consecutive blocks (in milliseconds).
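These parameters are serialized to JSON configuration files read by the nodes. A minimal sketch of that serialization, assuming a hypothetical filename (.parameters.json is an illustration, not necessarily the name the scripts use):

```python
import json

# Sketch: node parameters serialized to a hidden JSON configuration file.
node_params = {
    'consensus': {
        'timeout_delay': 1_000,
        'sync_retry_delay': 10_000,
        'max_payload_size': 500,
        'min_block_delay': 0,
    },
    'mempool': {
        'queue_capacity': 10_000,
        'sync_retry_delay': 10_000,
        'max_payload_size': 15_000,
        'min_block_delay': 0,
    },
}

# The leading dot makes the file hidden on Unix-like systems.
with open('.parameters.json', 'w') as f:
    json.dump(node_params, f, indent=4)
```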

Run the benchmark

Once you have specified both bench_params and node_params as desired, run:

$ fab local

This command first recompiles your code in release mode (with the benchmark feature flag activated), ensuring that you always benchmark the latest version of your code. This may take a long time the first time you run it. It then generates the configuration files and keys for each node and runs the benchmark with the specified parameters. It finally parses the logs and displays a summary of the execution similar to the one below. All configuration and key files are hidden JSON files; i.e., their names start with a dot (.), such as .committee.json.

-----------------------------------------
 SUMMARY:
-----------------------------------------
 + CONFIG:
 Committee size: 4 nodes
 Input rate: 1,000 tx/s
 Transaction size: 512 B
 Execution time: 20 s

 Consensus max payloads size: 500 B
 Consensus min block delay: 0 ms
 Mempool max payloads size: 15,000 B
 Mempool min block delay: 0 ms

 + RESULTS:
 Consensus TPS: 966 tx/s
 Consensus BPS: 494,627 B/s
 Consensus latency: 1 ms

 End-to-end TPS: 966 tx/s
 End-to-end BPS: 494,576 B/s
 End-to-end latency: 4 ms
-----------------------------------------
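As a sanity check, the reported bytes-per-second figures follow directly from the TPS and the transaction size; the small discrepancy in the sample summary comes from the TPS being rounded to an integer.

```python
# Back-of-the-envelope check of the figures in the sample summary:
# BPS should equal TPS multiplied by the transaction size.
tps = 966          # consensus TPS from the sample summary
tx_size = 512      # bytes per transaction
print(tps * tx_size)  # 494,592 B/s, matching the reported 494,627 B/s
                      # up to rounding of the TPS figure
```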

The next section provides a step-by-step tutorial to run benchmarks on Amazon Web Services (AWS).