diff --git a/docs/sources/tempo/troubleshooting/_index.md b/docs/sources/tempo/troubleshooting/_index.md
index 26824cd5992..ba1ba9a2750 100644
--- a/docs/sources/tempo/troubleshooting/_index.md
+++ b/docs/sources/tempo/troubleshooting/_index.md
@@ -16,18 +16,18 @@ In addition, the [Tempo runbook](https://github.com/grafana/tempo/blob/main/oper
 
 ## Sending traces
 
-- [Spans are being refused with "pusher failed to consume trace data"]({{< relref "./max-trace-limit-reached" >}})
-- [Is Grafana Alloy sending to the backend?]({{< relref "./agent" >}})
+- [Spans are being refused with "pusher failed to consume trace data"](https://grafana.com/docs/tempo//troubleshooting/max-trace-limit-reached/)
+- [Is Grafana Alloy sending to the backend?](https://grafana.com/docs/tempo//troubleshooting/alloy/)
 
 ## Querying
 
-- [Unable to find my traces in Tempo]({{< relref "./unable-to-see-trace" >}})
-- [Error message "Too many jobs in the queue"]({{< relref "./too-many-jobs-in-queue" >}})
-- [Queries fail with 500 and "error using pageFinder"]({{< relref "./bad-blocks" >}})
-- [I can search traces, but there are no service name or span name values available]({{< relref "./search-tag" >}})
-- [Error message `response larger than the max ( vs )`]({{< relref "./response-too-large" >}})
-- [Search results don't match trace lookup results with long-running traces]({{< relref "./long-running-traces" >}})
+- [Unable to find my traces in Tempo](https://grafana.com/docs/tempo//troubleshooting/unable-to-see-trace/)
+- [Error message "Too many jobs in the queue"](https://grafana.com/docs/tempo//troubleshooting/too-many-jobs-in-queue/)
+- [Queries fail with 500 and "error using pageFinder"](https://grafana.com/docs/tempo//troubleshooting/bad-blocks/)
+- [I can search traces, but there are no service name or span name values available](https://grafana.com/docs/tempo//troubleshooting/search-tag/)
+- [Error message `response larger than the max ( vs )`](https://grafana.com/docs/tempo//troubleshooting/response-too-large/)
+- [Search results don't match trace lookup results with long-running traces](https://grafana.com/docs/tempo//troubleshooting/long-running-traces/)
 
 ## Metrics-generator
 
-- [Metrics or service graphs seem incomplete]({{< relref "./metrics-generator" >}})
+- [Metrics or service graphs seem incomplete](https://grafana.com/docs/tempo//troubleshooting/metrics-generator/)
diff --git a/docs/sources/tempo/troubleshooting/agent.md b/docs/sources/tempo/troubleshooting/alloy.md
similarity index 57%
rename from docs/sources/tempo/troubleshooting/agent.md
rename to docs/sources/tempo/troubleshooting/alloy.md
index b959e484cdb..1fc938a0c95 100644
--- a/docs/sources/tempo/troubleshooting/agent.md
+++ b/docs/sources/tempo/troubleshooting/alloy.md
@@ -5,6 +5,7 @@ description: Gain visibility on how many traces are being pushed to Grafana Allo
 weight: 472
 aliases:
 - ../operations/troubleshooting/agent/
+- ./agent.md # /docs/tempo//troubleshooting/agent.md
 ---
 
 # Troubleshoot Grafana Alloy
@@ -33,6 +34,23 @@ exporter_sent_spans_ratio_total
 exporter_send_failed_spans_ratio_total
 ```
 
+Alloy exposes a Prometheus scrape endpoint, `/metrics`, on its HTTP server that you can use to check metrics locally, for example by opening `http://localhost:12345/metrics` in a browser.
+The endpoint exposes the Alloy component and controller metrics.
+Refer to the [Monitor the Grafana Alloy component controller](https://grafana.com/docs/alloy/latest/troubleshoot/controller_metrics/) documentation for more information.
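+
+If you already use Alloy to collect metrics, you can also scrape this endpoint so the span counters end up in your own Prometheus-compatible store. The following is a minimal sketch, not a complete pipeline: it assumes the default listen address `127.0.0.1:12345` and a `prometheus.remote_write` component named `metrics` with a placeholder URL, so adjust both for your deployment.
+
+```alloy
+// Scrape Alloy's own /metrics endpoint.
+prometheus.scrape "alloy_self" {
+  targets    = [{"__address__" = "127.0.0.1:12345"}]
+  forward_to = [prometheus.remote_write.metrics.receiver]
+}
+
+// Placeholder destination; replace the URL with your Prometheus-compatible endpoint.
+prometheus.remote_write "metrics" {
+  endpoint {
+    url = "https://prometheus.example.com/api/v1/write"
+  }
+}
+```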
+
+### Check metrics in Grafana Cloud
+
+In your Grafana Cloud instance, you can check metrics using the `grafanacloud-usage` data source.
+To view the metrics, follow these steps:
+
+1. From your Grafana instance, select **Explore** in the left menu.
+1. Change the data source to `grafanacloud-usage`.
+1. Type the metric that you want to verify in the query text box. If you start typing `grafanacloud_traces_`, you can use autocomplete to browse the list of available metrics.
+
+Refer to [Cloud Traces usage metrics](https://grafana.com/docs/grafana-cloud/cost-management-and-billing/understand-your-invoice/usage-limits/#cloud-traces-usage) for a list of metrics related to tracing usage.
+
+![Use Explore to check the metrics for traces sent to Grafana Cloud](/media/docs/tempo/screenshot-tempo-trouble-metrics-search.png)
+
 ## Trace span logging
 
 If metrics and logs are looking good, but you are still unable to find traces in Grafana Cloud, you can configure Alloy to output all the traces it receives to the [console](https://grafana.com/docs/tempo//configuration/grafana-alloy/automatic-logging/).
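+
+The linked page has the full procedure. As a rough sketch of the idea, assuming you receive traces over OTLP, already have an `otelcol.exporter.otlp` component named `tempo`, and run a recent Alloy version that includes the `otelcol.exporter.debug` component, you can forward the same traces to the debug exporter so they're also logged by the Alloy process:
+
+```alloy
+otelcol.receiver.otlp "default" {
+  grpc {}
+  http {}
+
+  output {
+    // Send traces to the backend as usual and to the debug exporter for console output.
+    traces = [otelcol.exporter.otlp.tempo.input, otelcol.exporter.debug.console.input]
+  }
+}
+
+// Logs each received span; use sparingly, because detailed output is verbose.
+otelcol.exporter.debug "console" {
+  verbosity = "detailed"
+}
+```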