Update docs for review comments
zalegrala committed Oct 12, 2021
1 parent a1dc937 commit 6916a70
Showing 3 changed files with 37 additions and 43 deletions.
46 changes: 16 additions & 30 deletions docs/tempo/website/operations/deployment.md
@@ -52,8 +52,11 @@ which is the single binary deployment mode.

A single binary mode deployment runs all top-level components in a single
process, forming an instance of Tempo. The single binary mode is the simplest
to deploy but cannot horizontally scale. Refer to
[Architecture]({{< relref "./architecture" >}}) for descriptions of the
components.

To enable this mode, `-target=all` is used, which is the default.
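As a minimal sketch of a single binary configuration (the `server` block and port number are illustrative assumptions; `target: all` can also be omitted entirely because it is the default):

```yaml
# Minimal single binary configuration sketch.
target: all                # the default; shown here for clarity
server:
  http_listen_port: 3200   # assumed listen port for this sketch
```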

Find docker-compose deployment examples at:

@@ -62,43 +65,26 @@ Find docker-compose deployment examples at:

## Scalable single binary

A scalable single binary mode deployment may have more than one single binary
mode Tempo instance. All components are deployed in one binary, but the
deployment is capable of horizontally scaling. Matching components, such as
the distributors, within each instance are aware of each other; this is known
as clustering. The components form a
[consistent hash ring]({{< relref "./consistent_hash_ring" >}}) to coordinate
operations among the multiple Tempo instances. This mode offers some
flexibility of scaling without the complexity of the full microservices
deployment.

The configuration of a scalable single binary defines `kvstore`. Here is an
example `memberlist` configuration:

```yaml
target: scalable-single-binary
ingester:
  lifecycler:
    ring:
      kvstore:
        store: memberlist
```

Additionally, the `queriers` must know the DNS name that will contain the
addresses of all other instances. For example:

```yaml
querier:
  frontend_worker:
    frontend_address: tempo.lab.example.com:9095
```

Each of the `queriers` will perform a DNS lookup for the `frontend_address`
and connect to the addresses found within the DNS record.

To enable this mode, `-target=scalable-single-binary` is used.

Find a docker-compose deployment example at:

- [https://github.com/grafana/tempo/tree/main/example/docker-compose/scalable-single-binary](https://github.com/grafana/tempo/tree/main/example/docker-compose/scalable-single-binary)

## Microservices

In microservices mode, components are deployed in distinct processes. Scaling
is per component, which allows for greater flexibility in scaling and more
granular failure domains. This is the preferred method for a production
deployment, but it is also the most complex.

The configuration associated with each component's deployment specifies a
`target`. For example, to deploy a `querier`, the configuration would contain
`target: querier`.
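As a hedged sketch, such a per-component configuration fragment might look like the following (only the `target` line comes from the text above; everything else a real deployment needs is omitted):

```yaml
# Run only the querier component in this process.
# Other required blocks (server, storage, etc.) omitted from this sketch.
target: querier
```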
15 changes: 9 additions & 6 deletions example/docker-compose/scalable-single-binary/readme.md
@@ -1,6 +1,9 @@
## Scalable Single Binary

In this example, Tempo is configured to write data to MinIO, which presents an
S3-compatible API. Additionally, `memberlist` is enabled to demonstrate how a
single binary can run all services and still make use of the
cluster-awareness that `memberlist` provides.
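The storage side of such a setup can be sketched as a configuration fragment. The bucket name and endpoint here are assumptions for this example; the credentials match the `tempo`/`supersecret` login used elsewhere in this walkthrough, and exact field names should be checked against the Tempo storage documentation:

```yaml
storage:
  trace:
    backend: s3              # any S3-compatible API; MinIO in this example
    s3:
      bucket: tempo          # assumed bucket name
      endpoint: minio:9000   # assumed MinIO address inside the compose network
      access_key: tempo
      secret_key: supersecret
      insecure: true         # plain HTTP for the local demo
```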

1. First start up the local stack.

@@ -25,10 +28,10 @@

```console
scalable-single-binary-tempo3-1 "/tempo -target=scal…" t
scalable-single-binary-vulture-1 "/tempo-vulture -pro…" vulture running
```

2. If you're interested, you can see the WAL/blocks as they are being created. Navigate to MinIO at
http://localhost:9001 and use the username/password of `tempo`/`supersecret`.

3. The synthetic-load-generator is now printing out trace IDs it's flushing into Tempo. To view its logs use:

```console
docker-compose logs -f synthetic-load-generator
```

@@ -46,12 +49,12 @@ Logs are in the form

```
Emitted traceId <traceid> for service frontend route /cart
```

Copy one of these trace IDs.

4. Navigate to [Grafana](http://localhost:3000/explore) and paste the trace ID to request it from Tempo.
Also notice that you can query Tempo metrics from the Prometheus data source set up in Grafana.

5. To stop the setup use:

```console
docker-compose down -v
19 changes: 12 additions & 7 deletions integration/microservices/README.md
@@ -1,26 +1,31 @@
# tempo-load-test

This example aims to make it easier to measure and analyze Tempo performance
in microservices mode. There are already many examples for running Tempo under
load, but they use the single-binary approach and are not representative of
what is occurring in larger installations. Here Tempo is run with separate
containers for the distributor and ingesters, and a replication factor of 3,
meaning that the distributor will mirror all incoming traces to 3 ingesters.

![dashboard](./dashboard.png)

# What this example contains

1. Tempo in micro-services mode
1. 1x distributor
1. 3x ingesters
1. `ReplicationFactor=3`, meaning that the distributor mirrors incoming traces
1. S3/Min.IO virtual storage
1. Dashboard and metrics using
1. Prometheus
1. Grafana
1. cadvisor - to gather container CPU usage and other metrics
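The `ReplicationFactor=3` item above might be expressed as a configuration fragment like the following (the exact nesting is an assumption to check against the Tempo configuration reference):

```yaml
ingester:
  lifecycler:
    ring:
      replication_factor: 3   # mirror each incoming trace to 3 ingesters
```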

# Instructions

This example is expected to be used in conjunction with Tempo development in a
rapid feedback loop. It is assumed you have a working Go installation and a
copy of Tempo already cloned somewhere.

1. Build the tempo container
1. Run `make docker-tempo`
