Add Google Protobuf Instrumentation #6166
Conversation
Datadog Report. Branch report: ✅ 0 Failed, 558477 Passed, 4621 Skipped, 46h 31m 20.16s Total Time
Execution-Time Benchmarks Report ⏱️

Execution-time results for samples comparing the PR branch against master. Execution-time benchmarks measure the whole time it takes to execute a program and are intended to capture one-off costs. Cases where the PR's execution-time results are worse than the latest master results are shown in red. Thresholds were applied when comparing the execution times. Note that these results are based on a single point-in-time result for each branch; for full results, see the dashboard. Graphs show the p99 interval based on the mean and StdDev of the test run, as well as the mean value of the run. Summarized from the charts, as mean (p99 interval) in ms:

| Sample | Runtime | Baseline: PR / master | CallTarget+Inlining+NGEN: PR / master |
|---|---|---|---|
| FakeDbCommand | .NET Framework 4.6.2 | 69 (66–73) / 69 (67–71) | 993 (969–1018) / 987 (967–1008) |
| FakeDbCommand | .NET Core 3.1 | 102 (99–105) / 102 (100–104) | 672 (655–688) / 670 (652–688) |
| FakeDbCommand | .NET 6 | 89 (87–91) / 89 (87–91) | 626 (609–643) / 633 (610–656) |
| HttpMessageHandler | .NET Framework 4.6.2 | 190 (186–194) / 190 (186–193) | 1,099 (1069–1129) / 1,094 (1071–1117) |
| HttpMessageHandler | .NET Core 3.1 | 270 (265–274) / 269 (265–273) | 865 (827–903) / 861 (829–893) |
| HttpMessageHandler | .NET 6 | 262 (258–265) / 261 (257–266) | 841 (807–876) / 841 (810–872) |
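The p99 interval shown in these reports is derived from the run's mean and StdDev rather than measured directly. As a language-agnostic sketch (Python here), assuming normally distributed measurements and a standard two-sided z-interval; the exact multiplier and formula the CI uses are assumptions, not taken from the report:

```python
import math

def p99_interval(mean, stddev, n):
    """Approximate a 99% interval for the mean from summary statistics.

    Assumes normality; 2.576 is the two-sided z-value for 99% coverage
    (an assumption about the report's method, for illustration only).
    """
    half_width = 2.576 * stddev / math.sqrt(n)
    return mean - half_width, mean + half_width

# e.g. the FakeDbCommand baseline: mean 69ms, with assumed stddev/run count
lo, hi = p99_interval(mean=69.0, stddev=5.0, n=20)
```

The interval is symmetric around the mean, which matches how the graphs draw the mean as a diamond inside the p99 band.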
Throughput/Crank Report ⚡

Throughput results for AspNetCoreSimpleController comparing the PR branch against master and benchmarks/2.9.0. Cases where throughput results for the PR are worse than latest master (5% drop or greater) are shown in red. Note that these results are based on a single point-in-time result for each branch; for full results, see one of the many, many dashboards! Summarized from the charts (total requests):

| Scenario | Branch | Linux x64 | Linux arm64 | Windows x64 |
|---|---|---|---|---|
| Baseline | This PR (6166) | 11.162M | 9.764M | 9.861M |
| Baseline | master | 11.223M | 9.546M | n/a |
| Baseline | benchmarks/2.9.0 | 11.045M | 9.586M | n/a |
| Automatic | This PR (6166) | 7.265M | 6.449M | 6.395M |
| Automatic | master | 7.369M | 6.583M | n/a |
| Automatic | benchmarks/2.9.0 | 7.885M | n/a | n/a |
| Trace stats | master | 7.636M | 6.861M | n/a |
| Manual | master | 11.208M | 9.767M | n/a |
| Manual + Automatic | This PR (6166) | 6.810M | 5.942M | 5.984M |
| Manual + Automatic | master | 6.860M | 5.972M | n/a |
| DD_TRACE_ENABLED=0 | master | 10.386M | 8.948M | n/a |
Benchmarks Report for tracer 🐌

Benchmarks for #6166 compared to master. Thresholds were used for comparing the benchmark speeds; allocation changes below 0.5% are ignored.

Benchmark details:

Benchmarks.Trace.ActivityBenchmark - Same speed ✔️ Same allocations ✔️
Benchmarks.Trace.AgentWriterBenchmark - Same speed ✔️ Same allocations ✔️
Benchmarks.Trace.AspNetCoreBenchmark - Same speed ✔️ Same allocations ✔️
Benchmarks.Trace.CIVisibilityProtocolWriterBenchmark - Same speed ✔️ Same allocations ✔️
Benchmarks.Trace.DbCommandBenchmark - Same speed ✔️ Same allocations ✔️
Benchmarks.Trace.ElasticsearchBenchmark - Same speed ✔️ Same allocations ✔️
Benchmarks.Trace.GraphQLBenchmark - Faster 🎉 Same allocations ✔️
| Benchmark | base/diff | Base Median (ns) | Diff Median (ns) | Modality |
|---|---|---|---|---|
| Benchmarks.Trace.GraphQLBenchmark.ExecuteAsync‑net6.0 | 1.133 | 1,443.30 | 1,274.00 | |

Raw results:

| Branch | Method | Toolchain | Mean | StdError | StdDev | Gen 0 | Gen 1 | Gen 2 | Allocated |
|---|---|---|---|---|---|---|---|---|---|
| master | ExecuteAsync | net6.0 | 1.44μs | 0.83ns | 3.21ns | 0.0137 | 0 | 0 | 952 B |
| master | ExecuteAsync | netcoreapp3.1 | 1.63μs | 3.62ns | 14ns | 0.0122 | 0 | 0 | 952 B |
| master | ExecuteAsync | net472 | 1.77μs | 0.313ns | 1.17ns | 0.145 | 0 | 0 | 915 B |
| #6166 | ExecuteAsync | net6.0 | 1.27μs | 1.69ns | 6.56ns | 0.0135 | 0 | 0 | 952 B |
| #6166 | ExecuteAsync | netcoreapp3.1 | 1.62μs | 4.56ns | 17.7ns | 0.0129 | 0 | 0 | 952 B |
| #6166 | ExecuteAsync | net472 | 1.81μs | 0.64ns | 2.48ns | 0.145 | 0 | 0 | 915 B |
Benchmarks.Trace.HttpClientBenchmark - Same speed ✔️ Same allocations ✔️

Raw results:

| Branch | Method | Toolchain | Mean | StdError | StdDev | Gen 0 | Gen 1 | Gen 2 | Allocated |
|---|---|---|---|---|---|---|---|---|---|
| master | SendAsync | net6.0 | 4.32μs | 1ns | 3.74ns | 0.0324 | 0 | 0 | 2.31 KB |
| master | SendAsync | netcoreapp3.1 | 5.23μs | 1.88ns | 7.3ns | 0.0393 | 0 | 0 | 2.85 KB |
| master | SendAsync | net472 | 7.38μs | 2.24ns | 8.68ns | 0.495 | 0 | 0 | 3.12 KB |
| #6166 | SendAsync | net6.0 | 4.4μs | 2.03ns | 7.6ns | 0.0308 | 0 | 0 | 2.31 KB |
| #6166 | SendAsync | netcoreapp3.1 | 5.18μs | 1.87ns | 7.24ns | 0.0391 | 0 | 0 | 2.85 KB |
| #6166 | SendAsync | net472 | 7.44μs | 2.55ns | 9.55ns | 0.492 | 0 | 0 | 3.12 KB |
Benchmarks.Trace.ILoggerBenchmark - Same speed ✔️ Same allocations ✔️

Raw results:

| Branch | Method | Toolchain | Mean | StdError | StdDev | Gen 0 | Gen 1 | Gen 2 | Allocated |
|---|---|---|---|---|---|---|---|---|---|
| master | EnrichedLog | net6.0 | 1.45μs | 0.721ns | 2.7ns | 0.0228 | 0 | 0 | 1.64 KB |
| master | EnrichedLog | netcoreapp3.1 | 2.14μs | 1.27ns | 4.74ns | 0.0222 | 0 | 0 | 1.64 KB |
| master | EnrichedLog | net472 | 2.52μs | 0.772ns | 2.89ns | 0.249 | 0 | 0 | 1.57 KB |
| #6166 | EnrichedLog | net6.0 | 1.49μs | 0.58ns | 2.17ns | 0.0231 | 0 | 0 | 1.64 KB |
| #6166 | EnrichedLog | netcoreapp3.1 | 2.28μs | 0.951ns | 3.68ns | 0.0216 | 0 | 0 | 1.64 KB |
| #6166 | EnrichedLog | net472 | 2.58μs | 1.03ns | 3.98ns | 0.249 | 0 | 0 | 1.57 KB |
Benchmarks.Trace.Log4netBenchmark - Same speed ✔️ Same allocations ✔️

Raw results:

| Branch | Method | Toolchain | Mean | StdError | StdDev | Gen 0 | Gen 1 | Gen 2 | Allocated |
|---|---|---|---|---|---|---|---|---|---|
| master | EnrichedLog | net6.0 | 112μs | 123ns | 475ns | 0.0561 | 0 | 0 | 4.28 KB |
| master | EnrichedLog | netcoreapp3.1 | 116μs | 144ns | 537ns | 0 | 0 | 0 | 4.28 KB |
| master | EnrichedLog | net472 | 151μs | 224ns | 868ns | 0.672 | 0.224 | 0 | 4.46 KB |
| #6166 | EnrichedLog | net6.0 | 112μs | 163ns | 631ns | 0.0563 | 0 | 0 | 4.28 KB |
| #6166 | EnrichedLog | netcoreapp3.1 | 117μs | 156ns | 606ns | 0.0586 | 0 | 0 | 4.28 KB |
| #6166 | EnrichedLog | net472 | 150μs | 201ns | 780ns | 0.674 | 0.225 | 0 | 4.46 KB |
Benchmarks.Trace.NLogBenchmark - Same speed ✔️ Same allocations ✔️

Raw results:

| Branch | Method | Toolchain | Mean | StdError | StdDev | Gen 0 | Gen 1 | Gen 2 | Allocated |
|---|---|---|---|---|---|---|---|---|---|
| master | EnrichedLog | net6.0 | 2.95μs | 1.38ns | 5.36ns | 0.0311 | 0 | 0 | 2.2 KB |
| master | EnrichedLog | netcoreapp3.1 | 4.13μs | 1.53ns | 5.72ns | 0.029 | 0 | 0 | 2.2 KB |
| master | EnrichedLog | net472 | 4.94μs | 1.29ns | 5ns | 0.321 | 0 | 0 | 2.02 KB |
| #6166 | EnrichedLog | net6.0 | 3.05μs | 0.886ns | 3.31ns | 0.0305 | 0 | 0 | 2.2 KB |
| #6166 | EnrichedLog | netcoreapp3.1 | 4.24μs | 1.19ns | 4.6ns | 0.0297 | 0 | 0 | 2.2 KB |
| #6166 | EnrichedLog | net472 | 4.93μs | 1.95ns | 7.29ns | 0.32 | 0 | 0 | 2.02 KB |
Benchmarks.Trace.RedisBenchmark - Same speed ✔️ Same allocations ✔️

Raw results:

| Branch | Method | Toolchain | Mean | StdError | StdDev | Gen 0 | Gen 1 | Gen 2 | Allocated |
|---|---|---|---|---|---|---|---|---|---|
| master | SendReceive | net6.0 | 1.41μs | 1.49ns | 5.79ns | 0.0157 | 0 | 0 | 1.14 KB |
| master | SendReceive | netcoreapp3.1 | 1.76μs | 0.847ns | 3.28ns | 0.015 | 0 | 0 | 1.14 KB |
| master | SendReceive | net472 | 2.12μs | 7.29ns | 28.2ns | 0.183 | 0 | 0 | 1.16 KB |
| #6166 | SendReceive | net6.0 | 1.32μs | 1.11ns | 4.16ns | 0.0159 | 0 | 0 | 1.14 KB |
| #6166 | SendReceive | netcoreapp3.1 | 1.78μs | 0.438ns | 1.64ns | 0.0152 | 0 | 0 | 1.14 KB |
| #6166 | SendReceive | net472 | 2.11μs | 1.32ns | 5.1ns | 0.183 | 0 | 0 | 1.16 KB |
Benchmarks.Trace.SerilogBenchmark - Same speed ✔️ Same allocations ✔️

Raw results:

| Branch | Method | Toolchain | Mean | StdError | StdDev | Gen 0 | Gen 1 | Gen 2 | Allocated |
|---|---|---|---|---|---|---|---|---|---|
| master | EnrichedLog | net6.0 | 2.73μs | 1.03ns | 3.85ns | 0.0218 | 0 | 0 | 1.6 KB |
| master | EnrichedLog | netcoreapp3.1 | 3.79μs | 1.75ns | 6.79ns | 0.0227 | 0 | 0 | 1.65 KB |
| master | EnrichedLog | net472 | 4.44μs | 2.6ns | 9.73ns | 0.322 | 0 | 0 | 2.04 KB |
| #6166 | EnrichedLog | net6.0 | 2.67μs | 5.49ns | 21.3ns | 0.0216 | 0 | 0 | 1.6 KB |
| #6166 | EnrichedLog | netcoreapp3.1 | 3.8μs | 1.89ns | 7.33ns | 0.021 | 0 | 0 | 1.65 KB |
| #6166 | EnrichedLog | net472 | 4.49μs | 5.56ns | 21.5ns | 0.323 | 0 | 0 | 2.04 KB |
Benchmarks.Trace.SpanBenchmark - Slower ⚠️ Same allocations ✔️

Slower ⚠️ in #6166:

| Benchmark | diff/base | Base Median (ns) | Diff Median (ns) | Modality |
|---|---|---|---|---|
| Benchmarks.Trace.SpanBenchmark.StartFinishSpan‑netcoreapp3.1 | 1.233 | 541.36 | 667.46 | |
| Benchmarks.Trace.SpanBenchmark.StartFinishSpan‑net6.0 | 1.213 | 386.53 | 469.05 | |

Raw results:

| Branch | Method | Toolchain | Mean | StdError | StdDev | Gen 0 | Gen 1 | Gen 2 | Allocated |
|---|---|---|---|---|---|---|---|---|---|
| master | StartFinishSpan | net6.0 | 387ns | 0.469ns | 1.82ns | 0.00807 | 0 | 0 | 576 B |
| master | StartFinishSpan | netcoreapp3.1 | 542ns | 1.1ns | 4.24ns | 0.00782 | 0 | 0 | 576 B |
| master | StartFinishSpan | net472 | 597ns | 1.37ns | 5.31ns | 0.0917 | 0 | 0 | 578 B |
| master | StartFinishScope | net6.0 | 601ns | 0.998ns | 3.87ns | 0.00983 | 0 | 0 | 696 B |
| master | StartFinishScope | netcoreapp3.1 | 760ns | 1.81ns | 6.99ns | 0.00935 | 0 | 0 | 696 B |
| master | StartFinishScope | net472 | 843ns | 2.07ns | 8ns | 0.104 | 0 | 0 | 658 B |
| #6166 | StartFinishSpan | net6.0 | 469ns | 0.519ns | 2.01ns | 0.00802 | 0 | 0 | 576 B |
| #6166 | StartFinishSpan | netcoreapp3.1 | 668ns | 0.877ns | 3.4ns | 0.00762 | 0 | 0 | 576 B |
| #6166 | StartFinishSpan | net472 | 578ns | 1.28ns | 4.95ns | 0.0917 | 0 | 0 | 578 B |
| #6166 | StartFinishScope | net6.0 | 564ns | 0.746ns | 2.89ns | 0.00972 | 0 | 0 | 696 B |
| #6166 | StartFinishScope | netcoreapp3.1 | 734ns | 0.615ns | 2.38ns | 0.00931 | 0 | 0 | 696 B |
| #6166 | StartFinishScope | net472 | 819ns | 1.41ns | 5.45ns | 0.104 | 0 | 0 | 658 B |
Benchmarks.Trace.TraceAnnotationsBenchmark - Same speed ✔️ Same allocations ✔️

Raw results:

| Branch | Method | Toolchain | Mean | StdError | StdDev | Gen 0 | Gen 1 | Gen 2 | Allocated |
|---|---|---|---|---|---|---|---|---|---|
| master | RunOnMethodBegin | net6.0 | 612ns | 0.778ns | 3.01ns | 0.0098 | 0 | 0 | 696 B |
| master | RunOnMethodBegin | netcoreapp3.1 | 941ns | 1.79ns | 6.94ns | 0.00942 | 0 | 0 | 696 B |
| master | RunOnMethodBegin | net472 | 1.12μs | 2.26ns | 8.77ns | 0.104 | 0 | 0 | 658 B |
| #6166 | RunOnMethodBegin | net6.0 | 637ns | 0.695ns | 2.69ns | 0.00987 | 0 | 0 | 696 B |
| #6166 | RunOnMethodBegin | netcoreapp3.1 | 939ns | 2.19ns | 8.21ns | 0.00938 | 0 | 0 | 696 B |
| #6166 | RunOnMethodBegin | net472 | 1.07μs | 0.772ns | 2.68ns | 0.105 | 0 | 0 | 658 B |
Force-pushed from 40cb1af to daf650f.
Force-pushed from 41f0d81 to 209f25c.
Summary of changes

Added the code of Microsoft.OpenApi to the tracer, removed/changed the parts that require more recent versions of the language/framework than we have, and updated the generated files.

Reason for change

It's going to be needed for the protobuf instrumentation in #6166, and doing it in a separate PR limits the size of the diff in the actual change.

Implementation details

@bouwkast ran the vendoring tool to add the files, since it apparently doesn't work on Mac.

Test coverage

Other details

Looking at the size of artefacts published on the CI, this increases the size of the dll by ~180KiB.

Co-authored-by: Steven Bouwkamp <[email protected]>
Force-pushed from 72fd7b2 to 164139c.
Force-pushed from 164139c to d2f7ece.
…the items property
Datadog Report. Branch report: ✅ 0 Failed, 573075 Passed, 5449 Skipped, 46h 6m 5.07s Total Time
👍
Mostly just nitpicks and a couple test questions
Resolved review threads on tracer/src/Datadog.Trace/ClrProfiler/AutoInstrumentation/Protobuf/SchemaExtractor.cs (two marked outdated).
/// - the type ID as defined in https://github.com/protocolbuffers/protobuf/blob/main/src/google/protobuf/type.proto
/// - the zero-based depth (allowing a nested type to be distinguished)
/// If a field references a sub-message, that sub-message is expanded first, before the current field is added to the hash.
/// Extraction stops after 10 nested messages; this affects the hash, so it must be kept consistent across tracers.
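The extraction rules quoted above can be illustrated with a language-agnostic sketch (Python here). The field representation, hash function, and byte encoding are assumptions for illustration only; they are not the tracer's actual implementation, which is exactly why the cross-tracer consistency question below matters:

```python
import hashlib

MAX_DEPTH = 10  # extraction stops after 10 nested messages; part of the hash contract


def hash_schema(fields, hasher=None, depth=0):
    """Feed (field number, type ID, depth) for every field into one hash.

    `fields` is a list of (number, type_id, sub_message_or_None) tuples.
    A referenced sub-message is expanded first, before the referencing
    field itself is added to the hash.
    """
    if hasher is None:
        hasher = hashlib.sha256()
    if depth >= MAX_DEPTH:  # stop descending past the depth limit
        return hasher
    for number, type_id, sub in sorted(fields, key=lambda f: f[0]):
        if sub is not None:
            hash_schema(sub, hasher, depth + 1)  # expand sub-message first
        hasher.update(f"{number}:{type_id}:{depth};".encode())
    return hasher


# Hypothetical schema: a Person message whose field 4 references PhoneNumber.
phone = [(1, 9, None), (2, 14, None)]   # string + enum fields (type IDs from type.proto)
person = [(1, 9, None), (4, 11, phone)]  # 11 = TYPE_MESSAGE
digest = hash_schema(person).hexdigest()
```

Because the depth value is mixed into every field's contribution, two schemas that differ only in nesting produce different digests, which is why the depth limit itself must stay identical across tracers.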
How do we plan to enforce this? Do we have system tests that cover it?
It doesn't appear to be configurable here, so that seems good.
Would it be best to link to the locations in the other tracers where this hash is calculated?
Yes, I was thinking it's pretty brittle as I was writing it. The only mechanism I know of to ensure consistency between tracers is system tests; we should probably write one with a proto schema hitting as many branches as possible and use that to enforce this rule.
Resolved review thread on tracer/src/Datadog.Trace/ClrProfiler/AutoInstrumentation/Protobuf/SchemaExtractor.cs.
@@ -0,0 +1,14 @@
<Project Sdk="Microsoft.NET.Sdk">
  <PropertyGroup>
    <ApiVersion Condition="'$(ApiVersion)' == ''">3.28.2</ApiVersion>
Wondering if you tested with older versions as well?
Hmm, I don't think I did. Do you think I should just run it once locally, or add test cases?
I was just thinking of manually swapping it to an older version locally, just to see that it works.
Wow, turns out testing on older versions is super annoying, because you have to use the protobuf generator (protoc) that matches the version of protobuf you are using.
WORK = 3;
}

message PhoneNumber {
How feasible would it be to test the path where we hit the limits for the number of properties and/or the extraction depth?
Number of properties, I don't know; I think it's something we'd rarely encounter, and I'm not even sure my code is 100% aligned with what Java does.
The extraction depth is tested indirectly in the protobuf exploration tests, and I think we should test it in a system test; not sure it adds a lot of value to also test it here.
Co-authored-by: Steven Bouwkamp <[email protected]>
Force-pushed from ced8ee6 to 9da4b7f.
This reverts commit 02d20a4.
Summary of changes

This reverts commit 02d20a4.

Reason for change

This has broken the master build, so reverting to unblock the pipeline.
Summary of changes

Same as #6166, except with:
- fixes by @bouwkast from #6640
- gave up on generating the proto object file on the fly
- fixed a couple of inconsistency issues discovered when system-testing against the Java instrumentation (the depth calculation was off by one, the int values of types were not in sync, and we were not using the full name for references)

Co-authored-by: Steven Bouwkamp <[email protected]>
Summary of changes
Adds a new instrumentation for Google Protobuf (not https://github.com/protobuf-net/protobuf-net)
The instrumentation doesn't create any new spans; it only adds tags on existing ones. The main one is a tag containing the full protobuf schema as OpenAPI JSON. Since extracting the schema is costly, we only do it once every 30 seconds (hardcoded for now).
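The once-every-30-seconds gating can be sketched as follows (Python for brevity; the class name and the injectable clock are hypothetical, not the PR's actual types):

```python
import time

EXTRACTION_INTERVAL_S = 30.0  # hardcoded in the PR for now


class SchemaSampler:
    """Allows at most one schema extraction per interval."""

    def __init__(self, clock=time.monotonic):
        self._clock = clock
        self._last = float("-inf")  # so the first call is always allowed

    def should_extract(self):
        now = self._clock()
        if now - self._last >= EXTRACTION_INTERVAL_S:
            self._last = now
            return True
        return False


# Simulated clock: only the first call within a 30s window passes.
t = [0.0]
sampler = SchemaSampler(clock=lambda: t[0])
first, second = sampler.should_extract(), sampler.should_extract()
t[0] = 31.0
third = sampler.should_extract()
```

Using a monotonic clock avoids spurious extractions (or long silences) when the wall clock jumps.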
Reason for change
On request from the DSM team, as an equivalent of its Java counterpart: https://github.com/DataDog/dd-trace-java/tree/master/dd-java-agent/instrumentation/protobuf
Implementation details

The schema cache follows the same approach as DbConnectionCache: it uses a cache while cardinality is small, but if there are too many different values, it stops caching.

Test coverage
Added a sample app that serializes and deserializes a protobuf message. Spans are created manually, since this integration doesn't create any.
Other details
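The DbConnectionCache-style caching mentioned under implementation details (cache while cardinality is small, give up entirely past a bound) can be sketched like this; the class name, bound, and API here are hypothetical illustrations, not the tracer's actual code:

```python
MAX_CARDINALITY = 100  # hypothetical bound; the real one lives with the cache


class BoundedCache:
    """Caches computed values until too many distinct keys are seen,
    then disables itself to avoid unbounded memory growth."""

    def __init__(self, max_size=MAX_CARDINALITY):
        self._max = max_size
        self._items = {}
        self._disabled = False

    def get_or_add(self, key, factory):
        if self._disabled:
            return factory()  # high cardinality: always recompute
        if key not in self._items:
            if len(self._items) >= self._max:
                self._disabled = True  # too many distinct values: stop caching
                self._items.clear()    # release what was cached so far
                return factory()
            self._items[key] = factory()
        return self._items[key]


calls = []
cache = BoundedCache(max_size=2)
cache.get_or_add("a", lambda: calls.append("a"))
cache.get_or_add("a", lambda: calls.append("a"))  # second lookup served from cache
```

Disabling the cache outright (rather than evicting) trades hit rate for a hard memory ceiling, which suits tag extraction where recomputing is safe, just slower.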