Ingester data received has increased several fold since update to 2.7 #4606
Comments
Hi @remi-hackle. gRPC compression was disabled by default in #4429, which landed in v2.7. We found that Tempo performed better without compression between components.
@mapno Ah, I see, thanks for the answer.
You can restore compression between distributors and ingesters with this config:

```yaml
distributor:
  ingester_client:
    grpc_client_config:
      grpc_compression: "snappy"
```
It's mentioned here: https://grafana.com/docs/tempo/latest/release-notes/v2-7/#grpc-compression-disabled
Do you think it should be more prominent? I think the read path change is important but less impactful. Perhaps we should revert the write path change?
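For completeness, the release notes linked above also cover re-enabling compression on the metrics-generator and read paths. A minimal sketch of those blocks, assuming the option names from the v2.7 release notes apply to your deployment (verify against your Tempo version before use):

```yaml
# Sketch based on the v2.7 release notes; adjust to your configuration layout.
metrics_generator_client:
  grpc_client_config:
    grpc_compression: "snappy"   # distributor -> metrics-generator traffic

querier:
  frontend_worker:
    grpc_client_config:
      grpc_compression: "snappy" # querier -> query-frontend (read path) traffic
```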
Hi Joe, I believe it’s worth noting, as mentioned above, that this comes with increased network costs in large clusters. An alternative approach could be enabling compression by default, while advising that it can be turned off to reduce CPU usage, though that would increase network traffic. In any case, for OSS setups, cost allocations are crucial, and these changes need to be better communicated along with their potential consequences.
I'll be happy to update the docs to clarify.
I've added documentation for gRPC compression to the Configuration doc to make it more visible: #4626
I noticed something strange after updating Tempo to 2.7.0 this time.
After updating to that version, the amount of data received by the ingesters increased several-fold.
I didn't see any major changes in the release notes for this,
so is there an option change or special configuration that I didn't notice?
Updated to tempo 2.7.0 around 12:00 on January 20, 2025
7 days' worth of metrics
Rolled back to 2.6 at around 10:50 today (1/24/2025)
metric numbers since rolling back to 2.6.
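For anyone wanting to compare the same numbers across versions, here is a minimal sketch of a Prometheus recording rule that tracks per-second bytes received by the ingester pods. It assumes Kubernetes with cAdvisor metrics available, and the namespace and pod-name pattern (`tempo`, `tempo-ingester.*`) are hypothetical; adjust the selectors to your deployment:

```yaml
# Sketch: aggregate network receive rate across ingester pods.
# The namespace and pod regex below are assumptions, not values from this issue.
groups:
  - name: tempo-ingester-network
    rules:
      - record: namespace:tempo_ingester_network_receive_bytes:rate5m
        expr: |
          sum(
            rate(container_network_receive_bytes_total{namespace="tempo", pod=~"tempo-ingester.*"}[5m])
          )
```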