RUM-5555 Benchmarks: Collect Memory Metric #1993
Conversation
Force-pushed from e7e13e0 to 436afd3
Force-pushed from c0165f1 to 121b2fd
Nice work, looks great! 🙌
I have a general question: will this benchmark work on iOS only, or is it compatible with all supported platforms?
///
/// - Parameter series: The timeseries.
func submit(series: [Serie]) throws {
    var data = try series.reduce(Data()) { data, serie in
Should we expect large payloads here? If so, what happens if it exceeds a certain size?
There is no caching, so we will upload 3 metrics per payload, with only one point per metric. Each upload happens every 10 seconds, as configured here. So each payload will be very small 👍
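For illustration, a minimal sketch of how such a small payload could be built with the reduce shown above — assuming `Serie` is Encodable and that series are concatenated as newline-delimited JSON (both are assumptions, not confirmed by this diff):

import Foundation

// Hypothetical sketch: build one small payload from a handful of series.
// Assumes `Serie` is Encodable and the intake accepts newline-delimited JSON.
struct Serie: Encodable {
    let metric: String
    let points: [Double] // one point per metric in this setup
}

func payload(for series: [Serie]) throws -> Data {
    let encoder = JSONEncoder()
    return try series.reduce(Data()) { data, serie in
        var data = data
        data.append(try encoder.encode(serie))
        data.append(0x0A) // newline separator between series
        return data
    }
}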
        run: run
    )
)
case .profiling:
How will the .profiling use case differ from .metrics in practice?
Yes, we will perform separate runs for baseline, metrics, and profiling. This way we avoid skewing metrics data with profiling overhead.
It looks great 👌 and the abstraction makes sense. I left a few suggestions and found one blocking issue.
_ = meter.createDoubleObservableGauge(name: "ios.benchmark.memory") { metric in
    do {
        let mem = try Memory.footprint()
        metric.observe(value: mem, labels: labels)
question/ How does this work? We create the Gauge metric ✅ but when / how frequently will this closure be called?
It is based on the Asynchronous Gauge from the otel specs, where callback functions are called only when the Meter is being observed.
The meter is observed when there is a push; in our case that will be every 10s.
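A simplified model of that pull-based flow, to illustrate the timing (the types and names here are illustrative, not the opentelemetry-swift API):

import Foundation

// Illustrative model: an asynchronous gauge's callback runs only when a
// reader collects it, never on its own schedule.
final class ObservableGaugeModel {
    private let callback: () -> Double
    init(_ callback: @escaping () -> Double) { self.callback = callback }
    func collect() -> Double { callback() } // invoked by the reader on each push
}

let gauge = ObservableGaugeModel { 512.0 /* e.g. current memory footprint */ }

// A periodic reader: with a 10 s push interval, the callback above fires
// exactly once every 10 seconds.
_ = Timer.scheduledTimer(withTimeInterval: 10, repeats: true) { _ in
    print("exporting gauge point:", gauge.collect())
}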
Got it, thanks!
/// Replacement for the otel `DatadogExporter` for metrics.
///
/// This version does not store data to disk; it uploads directly to the intake.
/// Additionally, it does not crash.
Is this crash a known issue in opentelemetry-swift? If so, let's link it here so we can go back to the OOB exporter once it's fixed. If there are no reports, let's file one.
I haven't reported it just yet, will do.
Basically, these force unwraps can fail.
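For illustration, the difference in a generic Swift example (not the exporter's actual code):

import Foundation

enum ExporterError: Error { case invalidEndpoint(String) }

// Force unwrapping `URL(string: endpoint)!` traps the whole process on
// malformed input; a throwing variant surfaces the failure to the caller.
func makeURL(_ endpoint: String) throws -> URL {
    guard let url = URL(string: endpoint) else {
        throw ExporterError.invalidEndpoint(endpoint)
    }
    return url
}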
case baseline
case metrics
case profiling
suggestion/ This is quite a crucial piece for understanding the whole Benchmark automation. Can we add comments explaining what instrumentation is activated during each run?
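Something along these lines, for example (the enum name and the comment wording are my guesses at the intended semantics, based on this thread):

/// The benchmark run type, which decides what instrumentation is active.
enum Run {
    /// No SDK instrumentation; measures the bare app as a reference point.
    case baseline
    /// Benchmark metrics (e.g. memory) are collected and uploaded.
    case metrics
    /// The profiler is attached; kept separate from `metrics` so profiling
    /// overhead does not skew the metric values.
    case profiling
}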
What and why?
Collect Memory metrics during benchmarks.

How?
Scenario abstraction: split the interface from the instrumentation to facilitate baseline runs.
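A rough sketch of what that split could look like (the protocol and function names are assumptions for illustration):

/// Hypothetical sketch: the scenario describes what runs, while the
/// instrumentation decides what is measured, so a baseline run can reuse
/// the same scenario with instrumentation switched off.
protocol Scenario {
    func run() throws
}

protocol Instrumentation {
    func start()
    func stop()
}

func execute(_ scenario: Scenario, instrumentation: Instrumentation?) throws {
    instrumentation?.start()          // nil for baseline runs
    defer { instrumentation?.stop() }
    try scenario.run()
}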