From 6c9a840372137166a3fe86e6fb4923a1729a26c0 Mon Sep 17 00:00:00 2001
From: Martin Kuba
Date: Wed, 27 Sep 2023 17:22:26 -0700
Subject: [PATCH 1/5] docs: added performance benchmarking doc

---
 CHANGELOG.md                        |  2 ++
 doc/contributing/benchmark-tests.md | 53 +++++++++++++++++++++++++++++
 2 files changed, 55 insertions(+)
 create mode 100644 doc/contributing/benchmark-tests.md

diff --git a/CHANGELOG.md b/CHANGELOG.md
index f8a0051fab0..5a330b09e72 100644
--- a/CHANGELOG.md
+++ b/CHANGELOG.md
@@ -21,6 +21,8 @@ For experimental package changes, see the [experimental CHANGELOG](experimental/

 ### :books: (Refine Doc)

+* docs(contributing): added guidelines for adding benchmark tests [#4169](https://github.com/open-telemetry/opentelemetry-js/pull/4169)
+
 ### :house: (Internal)

 * test: added a performance benchmark test for span creation [#4105](https://github.com/open-telemetry/opentelemetry-js/pull/4105)

diff --git a/doc/contributing/benchmark-tests.md b/doc/contributing/benchmark-tests.md
new file mode 100644
index 00000000000..141475eccb9
--- /dev/null
+++ b/doc/contributing/benchmark-tests.md
@@ -0,0 +1,53 @@
+
+# Performance Benchmark Testing Guide
+
+Benchmark tests are intended to measure performance of small units of code.
+
+It is recommended that operations that have a high impact on the performance of the SDK (or potential for) are accompanied by a benchmark test. This helps end-users understand the performance trend over time, and it also helps maintainers catch performance regressions.
+
+Benchmark tests are run automatically with every release, and the results are available at .
+
+## Running benchmark tests
+
+Performance benchmark tests can be run from the root for all modules or from a single module directory only for that module:
+
+``` bash
+# benchmark all modules
+npm run test:bench
+
+# benchmark a single module
+cd packages/opentelemetry-sdk-trace-base
+npm run test:bench
+```
+
+## Adding a benchmark test
+
+Unlike unit tests, benchmark tests should be written in plain JavaScript (not Typescript).
+
+Add a new test file in folder `test/performance/benchmark` using the following as a template:
+
+``` javascript
+const Benchmark = require('benchmark');
+
+const suite = new Benchmark.Suite();
+
+suite.on('cycle', event => {
+  console.log(String(event.target));
+});
+
+suite.add('new benchmark test', function() {
+  // write code to test ...
+});
+
+suite.run();
+```
+
+## Automatically running benchmark tests
+
+If you want your test to run automatically with every release (to track trend over time), register the new test file by requiring it in `test/performance/benchmark/index.js`.
+
+Add the `test:bench` script in package.json, if the module does not contain.
+
+``` json
+"test:bench": "node test/performance/benchmark/index.js"
+```

From 63a80d5281095be56b2830e637d086234b9202bd Mon Sep 17 00:00:00 2001
From: Martin Kuba
Date: Tue, 3 Oct 2023 07:53:58 -0700
Subject: [PATCH 2/5] updated instructions for test:bench script

---
 doc/contributing/benchmark-tests.md | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/doc/contributing/benchmark-tests.md b/doc/contributing/benchmark-tests.md
index 141475eccb9..86bbae6a0af 100644
--- a/doc/contributing/benchmark-tests.md
+++ b/doc/contributing/benchmark-tests.md
@@ -46,8 +46,8 @@ suite.run();

 ## Automatically running benchmark tests

 If you want your test to run automatically with every release (to track trend over time), register the new test file by requiring it in `test/performance/benchmark/index.js`.
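For illustration, the registration file referenced above could be as small as the following sketch; the required file names are hypothetical placeholders for whatever benchmark files the module actually contains.

``` javascript
// Hypothetical test/performance/benchmark/index.js for a module with two benchmark files.
// Each require() executes the corresponding Benchmark.js suite when this file is run.
require('./span');
require('./my-new-benchmark');
```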
-Add the `test:bench` script in package.json, if the module does not contain.
+Add the `test:bench` script in package.json, if the module does not contain it already.

 ``` json
-"test:bench": "node test/performance/benchmark/index.js"
+"test:bench": "node test/performance/benchmark/index.js | tee .benchmark-results.txt"
 ```

From 77620d071a461994d4b0d83a2424ff96be547b48 Mon Sep 17 00:00:00 2001
From: Martin Kuba
Date: Tue, 3 Oct 2023 11:14:18 -0700
Subject: [PATCH 3/5] updated cadence of automated runs

Co-authored-by: Tyler Benson
---
 doc/contributing/benchmark-tests.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/doc/contributing/benchmark-tests.md b/doc/contributing/benchmark-tests.md
index 86bbae6a0af..962a9a811b9 100644
--- a/doc/contributing/benchmark-tests.md
+++ b/doc/contributing/benchmark-tests.md
@@ -44,7 +44,7 @@ suite.run();

 ## Automatically running benchmark tests

-If you want your test to run automatically with every release (to track trend over time), register the new test file by requiring it in `test/performance/benchmark/index.js`.
+If you want your test to run automatically with every merge to main (to track trend over time), register the new test file by requiring it in `test/performance/benchmark/index.js`.

 Add the `test:bench` script in package.json, if the module does not contain it already.

From dd24b7bf6831c3dff0296376cf392f12c34a33cd Mon Sep 17 00:00:00 2001
From: Martin Kuba
Date: Tue, 3 Oct 2023 15:32:08 -0700
Subject: [PATCH 4/5] update broken links

---
 doc/metrics.md | 4 ++--
 doc/tracing.md | 4 ++--
 2 files changed, 4 insertions(+), 4 deletions(-)

diff --git a/doc/metrics.md b/doc/metrics.md
index e2f2b5a1472..314fcbaf391 100644
--- a/doc/metrics.md
+++ b/doc/metrics.md
@@ -286,7 +286,7 @@ await myTask()

 ## Describing a instrument measurement

-Using attributes, kind, and the related [semantic conventions](https://github.com/open-telemetry/opentelemetry-specification/tree/main/specification/metrics/semantic_conventions), we can more accurately describe the measurement in a way our metrics backend will more easily understand. The following example uses these mechanisms, which are described below, for recording a measurement
+Using attributes, kind, and the related [semantic conventions](https://github.com/open-telemetry/semantic-conventions/blob/main/docs/general/metrics.md), we can more accurately describe the measurement in a way our metrics backend will more easily understand. The following example uses these mechanisms, which are described below, for recording a measurement
 of a HTTP request. Each metric instruments allows to associate a description, unit of measure, and the value type.

@@ -343,7 +343,7 @@ One problem with metrics names and attributes is recognizing, categorizing, and
 The use of semantic conventions is always recommended where applicable, but they are merely conventions. For example, you may find that some name other than the name suggested by the semantic conventions more accurately describes your metric, you may decide not to include a metric attribute which is suggested by semantic conventions for privacy reasons, or you may wish to add a custom attribute which isn't covered by semantic conventions. All of these cases are fine, but please keep in mind that if you stray from the semantic conventions, the categorization of metrics in your metrics backend may be affected.
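As a concrete illustration of describing an instrument measurement as discussed above, a histogram with a description, unit, value type, and semantic-convention-style attributes might look like the following sketch; the instrument name, attribute names, and recorded values are illustrative only.

``` javascript
// Illustrative sketch only: a histogram described with a description, unit, and value type,
// recorded with semantic-convention-style attributes.
const { metrics, ValueType } = require('@opentelemetry/api');

const meter = metrics.getMeter('my-service');
const requestDuration = meter.createHistogram('http.server.duration', {
  description: 'Duration of inbound HTTP requests',
  unit: 'ms',
  valueType: ValueType.DOUBLE,
});

// Record a single measurement, described by attributes of the request.
requestDuration.record(42.7, {
  'http.method': 'GET',
  'http.status_code': 200,
});
```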
-_See the current metrics semantic conventions in the OpenTelemetry Specification repository: <https://github.com/open-telemetry/opentelemetry-specification/tree/main/specification/metrics/semantic_conventions>_
+_See the current metrics semantic conventions in the OpenTelemetry Specification repository: <https://github.com/open-telemetry/semantic-conventions/blob/main/docs/general/metrics.md>_

 [spec-overview]: https://github.com/open-telemetry/opentelemetry-specification/blob/main/specification/overview.md
diff --git a/doc/tracing.md b/doc/tracing.md
index fb3371d4fa5..77787549bcf 100644
--- a/doc/tracing.md
+++ b/doc/tracing.md
@@ -76,7 +76,7 @@ server.on("GET", "/user/:id", onGet);

 ## Describing a Span

-Using span relationships, attributes, kind, and the related [semantic conventions](https://github.com/open-telemetry/opentelemetry-specification/tree/main/specification/trace/semantic_conventions), we can more accurately describe the span in a way our tracing backend will more easily understand. The following example uses these mechanisms, which are described below.
+Using span relationships, attributes, kind, and the related [semantic conventions](https://github.com/open-telemetry/semantic-conventions/blob/main/docs/general/trace.md), we can more accurately describe the span in a way our tracing backend will more easily understand. The following example uses these mechanisms, which are described below.

 ```typescript
 import { NetTransportValues, SemanticAttributes } from '@opentelemetry/semantic-conventions';
@@ -209,6 +209,6 @@ Consumer spans represent the processing of a job created by a producer and may s
 One problem with span names and attributes is recognizing, categorizing, and analyzing them in your tracing backend. Between different applications, libraries, and tracing backends there might be different names and expected values for various attributes. For example, your application may use `http.status` to describe the HTTP status code, but a library you use may use `http.status_code`. In order to solve this problem, OpenTelemetry uses a library of semantic conventions which describe the name and attributes which should be used for specific types of spans. The use of semantic conventions is always recommended where applicable, but they are merely conventions. For example, you may find that some name other than the name suggested by the semantic conventions more accurately describes your span, you may decide not to include a span attribute which is suggested by semantic conventions for privacy reasons, or you may wish to add a custom attribute which isn't covered by semantic conventions. All of these cases are fine, but please keep in mind that if you stray from the semantic conventions, the categorization of spans in your tracing backend may be affected.

-_See the current trace semantic conventions in the OpenTelemetry Specification repository: <https://github.com/open-telemetry/opentelemetry-specification/tree/main/specification/trace/semantic_conventions>_
+_See the current trace semantic conventions in the OpenTelemetry Specification repository: <https://github.com/open-telemetry/semantic-conventions/blob/main/docs/general/trace.md>_

 [spec-overview]: https://github.com/open-telemetry/opentelemetry-specification/blob/main/specification/overview.md

From f44530e6582b8c9115cbec4eb41d9136ea2b9274 Mon Sep 17 00:00:00 2001
From: Marc Pichler
Date: Wed, 4 Oct 2023 08:54:17 +0200
Subject: [PATCH 5/5] Update doc/contributing/benchmark-tests.md

---
 doc/contributing/benchmark-tests.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/doc/contributing/benchmark-tests.md b/doc/contributing/benchmark-tests.md
index 962a9a811b9..e97b1659b84 100644
--- a/doc/contributing/benchmark-tests.md
+++ b/doc/contributing/benchmark-tests.md
@@ -5,7 +5,7 @@ Benchmark tests are intended to measure performance of small units of code.
 It is recommended that operations that have a high impact on the performance of the SDK (or potential for) are accompanied by a benchmark test. This helps end-users understand the performance trend over time, and it also helps maintainers catch performance regressions.

-Benchmark tests are run automatically with every release, and the results are available at .
+Benchmark tests are run automatically with every merge to main, and the results are available at .

 ## Running benchmark tests
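Putting the guide together, a complete benchmark file might look like the following sketch. It follows the template from the guide above; the file name, suite entry name, tracer name, and the use of `@opentelemetry/sdk-trace-base` are illustrative choices rather than requirements.

``` javascript
// Illustrative example of a benchmark file, e.g. test/performance/benchmark/span.js,
// measuring how quickly spans can be created and ended with the base trace SDK.
const Benchmark = require('benchmark');
const { BasicTracerProvider } = require('@opentelemetry/sdk-trace-base');

const provider = new BasicTracerProvider();
const tracer = provider.getTracer('benchmark');

const suite = new Benchmark.Suite();

suite.on('cycle', event => {
  // Benchmark.js reports each test as "<name> x <n> ops/sec" here.
  console.log(String(event.target));
});

suite.add('create and end a span', function() {
  const span = tracer.startSpan('test-operation');
  span.end();
});

suite.run();
```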