[netpath] Add rate limit to dynamic paths #33841
base: main
Conversation
Test changes on VM
Use this command from test-infra-definitions to manually test this PR's changes on a VM: inv aws.create-vm --pipeline-id=55445725 --os-family=ubuntu
Note: This applies to commit acb0a73
Static quality checks ✅
Please find below the results from static quality gates.
Info
Uncompressed package size comparison
Comparison with ancestor
Diff per package
Decision
Regression Detector
Regression Detector Results
Metrics dashboard
Baseline: ba9f4f1
Optimization Goals: ✅ No significant changes detected
perf | experiment | goal | Δ mean % | Δ mean % CI | trials | links |
---|---|---|---|---|---|---|
➖ | quality_gate_logs | % cpu utilization | +3.41 | [+0.28, +6.54] | 1 | Logs |
➖ | quality_gate_idle_all_features | memory utilization | +0.70 | [+0.63, +0.76] | 1 | Logs bounds checks dashboard |
➖ | quality_gate_idle | memory utilization | +0.35 | [+0.31, +0.39] | 1 | Logs bounds checks dashboard |
➖ | uds_dogstatsd_to_api_cpu | % cpu utilization | +0.35 | [-0.52, +1.22] | 1 | Logs |
➖ | tcp_syslog_to_blackhole | ingress throughput | +0.07 | [-0.00, +0.14] | 1 | Logs |
➖ | file_to_blackhole_0ms_latency_http1 | egress throughput | +0.05 | [-0.81, +0.91] | 1 | Logs |
➖ | file_to_blackhole_0ms_latency | egress throughput | +0.02 | [-0.75, +0.79] | 1 | Logs |
➖ | file_to_blackhole_500ms_latency | egress throughput | +0.02 | [-0.76, +0.80] | 1 | Logs |
➖ | uds_dogstatsd_to_api | ingress throughput | +0.02 | [-0.27, +0.31] | 1 | Logs |
➖ | tcp_dd_logs_filter_exclude | ingress throughput | -0.01 | [-0.03, +0.01] | 1 | Logs |
➖ | file_to_blackhole_300ms_latency | egress throughput | -0.02 | [-0.66, +0.63] | 1 | Logs |
➖ | file_to_blackhole_1000ms_latency_linear_load | egress throughput | -0.04 | [-0.51, +0.43] | 1 | Logs |
➖ | file_to_blackhole_0ms_latency_http2 | egress throughput | -0.06 | [-0.93, +0.82] | 1 | Logs |
➖ | file_to_blackhole_100ms_latency | egress throughput | -0.09 | [-0.83, +0.65] | 1 | Logs |
➖ | file_to_blackhole_1000ms_latency | egress throughput | -0.09 | [-0.91, +0.72] | 1 | Logs |
➖ | file_tree | memory utilization | -0.29 | [-0.36, -0.21] | 1 | Logs |
Bounds Checks: ✅ Passed
perf | experiment | bounds_check_name | replicates_passed | links |
---|---|---|---|---|
✅ | file_to_blackhole_0ms_latency | lost_bytes | 10/10 | |
✅ | file_to_blackhole_0ms_latency | memory_usage | 10/10 | |
✅ | file_to_blackhole_0ms_latency_http1 | lost_bytes | 10/10 | |
✅ | file_to_blackhole_0ms_latency_http1 | memory_usage | 10/10 | |
✅ | file_to_blackhole_0ms_latency_http2 | lost_bytes | 10/10 | |
✅ | file_to_blackhole_0ms_latency_http2 | memory_usage | 10/10 | |
✅ | file_to_blackhole_1000ms_latency | memory_usage | 10/10 | |
✅ | file_to_blackhole_1000ms_latency_linear_load | memory_usage | 10/10 | |
✅ | file_to_blackhole_100ms_latency | lost_bytes | 10/10 | |
✅ | file_to_blackhole_100ms_latency | memory_usage | 10/10 | |
✅ | file_to_blackhole_300ms_latency | lost_bytes | 10/10 | |
✅ | file_to_blackhole_300ms_latency | memory_usage | 10/10 | |
✅ | file_to_blackhole_500ms_latency | lost_bytes | 10/10 | |
✅ | file_to_blackhole_500ms_latency | memory_usage | 10/10 | |
✅ | quality_gate_idle | intake_connections | 10/10 | bounds checks dashboard |
✅ | quality_gate_idle | memory_usage | 10/10 | bounds checks dashboard |
✅ | quality_gate_idle_all_features | intake_connections | 10/10 | bounds checks dashboard |
✅ | quality_gate_idle_all_features | memory_usage | 10/10 | bounds checks dashboard |
✅ | quality_gate_logs | intake_connections | 10/10 | |
✅ | quality_gate_logs | lost_bytes | 10/10 | |
✅ | quality_gate_logs | memory_usage | 10/10 |
Explanation
Confidence level: 90.00%
Effect size tolerance: |Δ mean %| ≥ 5.00%
Performance changes are noted in the perf column of each table:
- ✅ = significantly better comparison variant performance
- ❌ = significantly worse comparison variant performance
- ➖ = no significant change in performance
A regression test is an A/B test of target performance in a repeatable rig, where "performance" is measured as "comparison variant minus baseline variant" for an optimization goal (e.g., ingress throughput). Due to intrinsic variability in measuring that goal, we can only estimate its mean value for each experiment; we report uncertainty in that value as a 90.00% confidence interval denoted "Δ mean % CI".
For each experiment, we treat a change in performance as a "regression" -- a change worth investigating further -- if all of the following criteria are true:
- Its estimated |Δ mean %| ≥ 5.00%, indicating the change is big enough to merit a closer look.
- Its 90.00% confidence interval "Δ mean % CI" does not contain zero, indicating that if our statistical model is accurate, there is at least a 90.00% chance there is a difference in performance between baseline and comparison variants.
- Its configuration does not mark it "erratic".
CI Pass/Fail Decision
✅ Passed. All Quality Gates passed.
- quality_gate_idle, bounds check memory_usage: 10/10 replicas passed. Gate passed.
- quality_gate_idle, bounds check intake_connections: 10/10 replicas passed. Gate passed.
- quality_gate_logs, bounds check memory_usage: 10/10 replicas passed. Gate passed.
- quality_gate_logs, bounds check intake_connections: 10/10 replicas passed. Gate passed.
- quality_gate_logs, bounds check lost_bytes: 10/10 replicas passed. Gate passed.
- quality_gate_idle_all_features, bounds check intake_connections: 10/10 replicas passed. Gate passed.
- quality_gate_idle_all_features, bounds check memory_usage: 10/10 replicas passed. Gate passed.
```go
pathtestBudget := math.MaxInt
// scale the pathtest budget based on how many minutes have passed since the last flush
if f.config.MaxPerMinute > 0 {
	elapsedMinutes := float64(elapsed) / float64(time.Minute)
	// in rare cases (e.g. when channels are blocked), a long time can pass between
	// flushes; clamp the elapsed time to 1 minute to avoid a huge budget
	elapsedMinutes = math.Min(elapsedMinutes, 1.0)
	pathtestBudget = int(elapsedMinutes * float64(f.config.MaxPerMinute))
}
```
@pimlu Shall we use "golang.org/x/time/rate" to avoid reimplementing rate-limiting logic? It seems to already be used in multiple places in datadog-agent.
Thanks for the suggestion, golang.org/x/time/rate is awesome; I changed this to use it. It also lets us pass in mocked times, which is great because the tests still work nicely.
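For readers unfamiliar with the package, here is a minimal, self-contained sketch of how golang.org/x/time/rate can express the budget logic above. The 150-per-minute rate and burst of 75 mirror the defaults discussed later in this PR; everything else is illustrative, not the PR's actual code:

```go
package main

import (
	"fmt"
	"time"

	"golang.org/x/time/rate"
)

func main() {
	// 150 pathtests per minute, with a burst of 75 (30 seconds' worth).
	limiter := rate.NewLimiter(rate.Limit(150.0/60.0), 75)

	// AllowN takes an explicit timestamp, which is what makes the limiter
	// easy to drive with a mocked clock in tests.
	now := time.Now()
	allowed := 0
	for limiter.AllowN(now, 1) {
		allowed++
	}
	fmt.Printf("allowed %d pathtests before being rate limited\n", allowed) // 75
}
```

Because the limiter carries its own token bucket, the flush path no longer needs to clamp elapsed time by hand; the burst parameter plays that role.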
pkg/config/setup/config.go (outdated)

```diff
@@ -471,6 +471,7 @@ func InitConfig(config pkgconfigmodel.Setup) {
 	config.BindEnvAndSetDefault("network_path.collector.pathtest_ttl", "15m")
 	config.BindEnvAndSetDefault("network_path.collector.pathtest_interval", "5m")
 	config.BindEnvAndSetDefault("network_path.collector.flush_interval", "10s")
+	config.BindEnvAndSetDefault("network_path.collector.pathtest_max_per_minute", 1500)
```
1500 traceroutes per minute seems quite high, that's 25 traceroutes per second.
For comparison, if one traceroute takes about 2 seconds, then with 4 workers (the default) we can run about 120 traceroutes per minute ((60/2)*4).
So, maybe, 150 traceroutes per host per minute might be a better default?
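To make that capacity math explicit, a throwaway calculation (the values are the assumptions from the comment above, not measured numbers):

```go
package main

import "fmt"

func main() {
	// Assumptions: one traceroute takes ~2 seconds and the collector
	// runs 4 workers by default.
	secondsPerTraceroute := 2
	workers := 4

	// Each worker completes 60/2 = 30 traceroutes per minute, so four
	// workers sustain roughly 120 traceroutes per minute in total.
	total := (60 / secondsPerTraceroute) * workers
	fmt.Printf("~%d traceroutes per minute\n", total) // ~120
}
```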
Good callout, updated to 150!
What does this PR do?
This PR adds a rate limit to the dynamic paths collector. It offers a guarantee that no more than N traceroutes will be run per host, per minute. This limit is currently set to 150 by default.
It is configurable by setting network_path.collector.pathtest_max_per_minute. If it's set to 0, the limit is disabled.

Additionally, there is network_path.collector.pathtest_max_burst_duration, which is not intended to be changed by the customer. It determines how large the rate limiter's burst is. It is 30s by default, so, for example, if pathtest_max_per_minute is 150, then the burst is 75 (which is 30 seconds' worth of path tests, half a minute).
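As a sanity check of that arithmetic, the burst is just the rate multiplied by the burst window. A standalone calculation with hypothetical variable names (not the PR's actual identifiers):

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	// Defaults described above.
	maxPerMinute := 150
	maxBurstDuration := 30 * time.Second

	// burst = pathtests per minute * burst window in minutes: 150 * 0.5 = 75
	burst := int(float64(maxPerMinute) * maxBurstDuration.Minutes())
	fmt.Println(burst) // 75
}
```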
Motivation
Increase confidence we can deploy dynamic paths to more clusters.
Describe how you validated your changes
PathTestStore has some new tests. Also, npCollectorImpl tests should still pass.
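To illustrate the mocked-time testing mentioned in the review thread, here is a sketch of the kind of deterministic test golang.org/x/time/rate enables; the test name and structure are hypothetical, not the PR's actual tests:

```go
package pathteststore_test

import (
	"testing"
	"time"

	"golang.org/x/time/rate"
)

// TestRateLimitBudget drives the limiter with explicit timestamps, so the
// test is deterministic and never sleeps.
func TestRateLimitBudget(t *testing.T) {
	// 150 pathtests per minute, burst of 75, matching the PR's defaults.
	limiter := rate.NewLimiter(rate.Limit(150.0/60.0), 75)

	now := time.Now()

	// The full burst should be available immediately.
	for i := 0; i < 75; i++ {
		if !limiter.AllowN(now, 1) {
			t.Fatalf("expected pathtest %d to be allowed within the burst", i)
		}
	}
	if limiter.AllowN(now, 1) {
		t.Fatal("expected the 76th pathtest to be rate limited")
	}

	// Advancing the mocked clock by one minute refills 150 tokens,
	// capped at the burst of 75.
	if !limiter.AllowN(now.Add(time.Minute), 75) {
		t.Fatal("expected the budget to refill after one minute")
	}
}
```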
Possible Drawbacks / Trade-offs
I'm not a fan of the way I init the PathTestStore in start(). It needs to happen after statsd.Client is ready, though, so I'm not sure if there's a better way.