
RUM-5555 Benchmarks: Collect Memory Metric #1993

Merged
merged 7 commits into from
Aug 20, 2024

Conversation

maxep
Member

@maxep maxep commented Aug 13, 2024

What and why?

Collect Memory metrics during benchmarks.

How?

  • Create a custom OpenTelemetry exporter for metrics: the provided one is faulty.
  • Redesign the Scenario abstraction: split the interface from the instrumentation to facilitate baseline runs.
  • Measure the memory footprint.
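
The memory-footprint measurement mentioned above can be sketched as follows. This is an illustrative assumption, not the implementation that landed: the `Memory.footprint()` name mirrors the API referenced later in this PR, and the body uses the common `task_vm_info` / `phys_footprint` mach technique on Apple platforms.

```swift
import Foundation

/// Hypothetical sketch of `Memory.footprint()`; the real implementation in
/// this PR may differ. Reads the process footprint via `task_vm_info`.
enum Memory {
    enum Error: Swift.Error { case kernel(kern_return_t) }

    /// Returns the current process memory footprint (`phys_footprint`) in bytes.
    static func footprint() throws -> Double {
        var info = task_vm_info_data_t()
        var count = mach_msg_type_number_t(
            MemoryLayout<task_vm_info_data_t>.size / MemoryLayout<integer_t>.size
        )
        let kr = withUnsafeMutablePointer(to: &info) { infoPtr in
            infoPtr.withMemoryRebound(to: integer_t.self, capacity: Int(count)) { intPtr in
                task_info(mach_task_self_, task_flavor_t(TASK_VM_INFO), intPtr, &count)
            }
        }
        guard kr == KERN_SUCCESS else { throw Error.kernel(kr) }
        return Double(info.phys_footprint)
    }
}
```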

Review checklist

  • Feature or bugfix MUST have appropriate tests (unit, integration)
  • Make sure each commit and the PR mention the Issue number or JIRA reference
  • Add a CHANGELOG entry for user-facing changes

@maxep maxep self-assigned this Aug 13, 2024
@maxep maxep force-pushed the maxep/RUM-5555/memory-metrics branch from e7e13e0 to 436afd3 Compare August 13, 2024 14:40
@maxep maxep force-pushed the maxep/RUM-5555/memory-metrics branch from c0165f1 to 121b2fd Compare August 14, 2024 08:50
Base automatically changed from feature/continuous-benchmarks to develop August 14, 2024 14:21
@maxep maxep marked this pull request as ready for review August 14, 2024 14:22
@maxep maxep requested review from a team as code owners August 14, 2024 14:22
Member

@mariedm mariedm left a comment

Nice work, looks great! 🙌

I have a general question: will this benchmark work on iOS only, or is it compatible with all supported platforms?

///
/// - Parameter series: The timeseries.
func submit(series: [Serie]) throws {
var data = try series.reduce(Data()) { data, serie in
Member

Should we expect large payloads here? If so, what happens if it exceeds a certain size?

Member Author

There is no caching, so we will upload 3 metrics per payload, with only one point per metric. Each upload happens every 10 seconds, as configured here, so each payload will be very small 👍
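
The `submit(series:)` snippet quoted earlier reduces the series into a single payload. A minimal sketch of that idea, assuming a simplified `Serie` shape and newline-separated JSON encoding (both assumptions, not the exporter's actual wire format):

```swift
import Foundation

/// Hypothetical stand-in for the exporter's serie type.
struct Serie: Encodable {
    let metric: String
    let points: [[Double]] // [timestamp, value] pairs
}

/// Sketch of the reduce-based concatenation: each serie is JSON-encoded and
/// the encodings are joined, newline-separated, into one small payload.
func payload(for series: [Serie]) throws -> Data {
    let encoder = JSONEncoder()
    return try series.reduce(Data()) { data, serie in
        var data = data
        if !data.isEmpty {
            data.append(0x0A) // "\n" between series
        }
        data.append(try encoder.encode(serie))
        return data
    }
}
```

With 3 metrics and one point each per 10-second push, the resulting payload stays tiny, which is why no size limit handling is needed here.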

run: run
)
)
case .profiling:
Member

How will the .profiling use case differ from .metrics in practice?

Member Author

Yes, we will perform different runs for baseline, metrics, and profiling. This way we avoid skewing the metrics data with profiling overhead.

@maxep maxep requested a review from mariedm August 19, 2024 12:55
Member

@ncreated ncreated left a comment

It looks great 👌 and the abstraction makes sense. I left a few suggestions and found one blocking issue.

Comment on lines +118 to +121
_ = meter.createDoubleObservableGauge(name: "ios.benchmark.memory") { metric in
    do {
        let mem = try Memory.footprint()
        metric.observe(value: mem, labels: labels)
Member

question/ How does it work? We create the Gauge metric ✅ but when / how frequently will this closure be called?

Member Author

It is based on the Asynchronous Gauge from the otel specs, where callback functions are called only when the Meter is being observed.
The meter is observed when there is a push; in our case that is every 10s.
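
That contract can be illustrated with a minimal stand-in (not the opentelemetry-swift API): the closure is stored at creation and only runs when the reader collects, i.e. on each push.

```swift
/// Simplified stand-in for an asynchronous (observable) gauge. The callback
/// is stored, not executed, at creation; it runs once per collection.
final class ObservableGaugeStandIn {
    private let callback: () -> Double
    private(set) var observations: [Double] = []

    init(_ callback: @escaping () -> Double) {
        self.callback = callback // stored, nothing is measured yet
    }

    /// Invoked by the periodic exporter at each push interval (every 10 s here).
    func collect() {
        observations.append(callback())
    }
}
```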

Member

Got it, thanks!

Comment on lines +14 to +17
/// Replacement of otel `DatadogExporter` for metrics.
///
/// This version does not store data to disk, it uploads to the intake directly.
/// Additionally, it does not crash.
Member

Is this crash a known issue in opentelemetry-swift? If so, let's link it here so we can go back to the OOB exporter once it's fixed. If there are no reports, let's file one.

Member Author

I haven't reported it just yet, but I will.
Basically, these force unwraps can fail.

Comment on lines 55 to 57
case baseline
case metrics
case profiling
Member

suggestion/ This is a crucial piece for understanding the whole Benchmark automation. Can we add comments explaining what instrumentation is activated during each run?
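
A hedged sketch of what such documentation could look like, with the case descriptions inferred from this PR's discussion (the enum name and the wording that actually landed may differ):

```swift
/// Hypothetical documented version of the run cases quoted above.
enum Run: CaseIterable {
    /// No SDK instrumentation enabled; provides reference values
    /// (e.g. the memory footprint of the bare scenario).
    case baseline
    /// SDK instrumentation enabled and benchmark metrics (e.g. memory)
    /// collected and pushed every 10 s.
    case metrics
    /// SDK instrumentation enabled with the profiler attached; run
    /// separately so profiling overhead does not skew the metrics data.
    case profiling
}
```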

@maxep maxep requested a review from ncreated August 20, 2024 11:40
@maxep maxep merged commit 6d875cc into develop Aug 20, 2024
2 checks passed
@maxep maxep deleted the maxep/RUM-5555/memory-metrics branch August 20, 2024 13:00
@mariedm mariedm mentioned this pull request Sep 11, 2024