chore(performance): Make improved tests for search #1983
Conversation
📝 Walkthrough

This pull request introduces two new performance testing scripts for K6 and enhances the workflow configuration by adding new test suite paths.
Actionable comments posted: 1
🧹 Nitpick comments (1)
tests/k6/tests/serviceowner/performance/serviceOwnerRandomSearch.js (1)
Lines 10-11: Consider adding a comment explaining search terms

The test data arrays for positive and negative matches could benefit from a brief comment explaining what makes these terms likely to match or not match, to make the test more maintainable.
```diff
-const texts = [ "påkrevd", "rapportering", "sammendrag", "Utvidet Status", "ingen HTML-støtte", "et eller annet", "Skjema", "Skjema for rapportering av et eller annet", "Maks 200 tegn", "liste" ];
-const texts_no_hit = [ "sjøvegan", "larvik", "kvalsund", "jøssheim", "sørli"];
+// Common terms that should appear in dialogs
+const texts = [ "påkrevd", "rapportering", "sammendrag", "Utvidet Status", "ingen HTML-støtte", "et eller annet", "Skjema", "Skjema for rapportering av et eller annet", "Maks 200 tegn", "liste" ];
+// Terms unlikely to appear in test dialogs
+const texts_no_hit = [ "sjøvegan", "larvik", "kvalsund", "jøssheim", "sørli"];
```
📜 Review details
Configuration used: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (3)
- .github/workflows/dispatch-k6-performance.yml (1 hunks)
- tests/k6/common/k6-utils.js (1 hunks)
- tests/k6/tests/serviceowner/performance/serviceOwnerRandomSearch.js (1 hunks)
🧰 Additional context used
🪛 GitHub Actions: serviceowner search 5/3m/1
tests/k6/tests/serviceowner/performance/serviceOwnerRandomSearch.js
[error] 1-1: Response status: expected 500 to equal 200. The test failed due to an unexpected server error.
[error] 1-1: http_req_failed: 10.64% (28 out of 263 requests failed).
⏰ Context from checks skipped due to timeout of 90000ms (2)
- GitHub Check: Dry run deploy infrastructure / Deploy to test
- GitHub Check: build / build-and-test
🔇 Additional comments (7)
tests/k6/tests/serviceowner/performance/serviceOwnerRandomSearch.js (5)
Lines 28-41: Well-structured filter combinations for comprehensive testing

The test is designed with a good variety of filter combinations that cover different search scenarios, approaching the search functionality from multiple angles, which is excellent for performance analysis.
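A rough sketch of the pattern being praised here; the labels and most filter names are illustrative assumptions, though `createdAfter` and `createdBefore` do appear elsewhere in this review:

```javascript
// Hypothetical filter-combination table for a k6 search test (not the file's exact contents).
// Each entry pairs a metric label with the query parameters that combination exercises.
const filter_combos = [
    { label: 'search-only', filters: ['Search'] },
    { label: 'search-with-dates', filters: ['Search', 'createdAfter', 'createdBefore'] },
    { label: 'dates-only', filters: ['createdAfter', 'createdBefore'] },
];
```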
Lines 46-56: Good performance metrics configuration

The configuration captures detailed percentiles (95th, 99th, 99.5th, 99.9th), which is excellent for identifying performance outliers. Setting up separate thresholds for each filter combination allows for granular performance tracking.
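As a point of reference, a minimal sketch of how such percentiles and per-combination thresholds are typically wired up in k6 options (the script's actual values may differ):

```javascript
// Sketch: capture high percentiles and track each filter combination separately.
export const options = {
    summaryTrendStats: ['avg', 'med', 'p(95)', 'p(99)', 'p(99.5)', 'p(99.9)'],
    thresholds: {
        // One threshold pair per filter combination, keyed by the request's name tag
        'http_req_duration{name:search-only}': [],
        'http_req_failed{name:search-only}': ['rate<=0.0'],
    },
};
```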
Lines 96-98: Good implementation of trace logging

The conditional addition of the traceparent to the request tags is a good practice for debugging. It allows detailed request tracing when needed without adding overhead to normal test runs.
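A minimal sketch of that conditional tagging, assuming a traceCalls environment flag and the jslib uuidv4 helper (names here are illustrative, not the script's exact code):

```javascript
import { uuidv4 } from 'https://jslib.k6.io/k6-utils/1.4.0/index.js';

// Only tag requests with a traceparent when tracing is explicitly requested
const traceCalls = (__ENV.traceCalls || 'false') === 'true';

function buildParams(token, label) {
    const traceparent = uuidv4();
    const params = {
        headers: { Authorization: 'Bearer ' + token, traceparent: traceparent },
        tags: { name: label },
    };
    if (traceCalls) {
        params.tags.traceparent = traceparent; // no tagging overhead in normal runs
    }
    return params;
}
```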
Line 3: Verify URL import is properly used

The script correctly imports the URL utility and uses it effectively at line 100 to build the request URL with search parameters. This validates the addition of the URL export in the k6-utils.js file.
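For illustration, usage along these lines would match the description (the base URL and parameter values are placeholders, and the direct jslib import just keeps the sketch self-contained):

```javascript
import http from 'k6/http';
import { URL } from 'https://jslib.k6.io/url/1.0.0/index.js';

// Build the request URL with search parameters; url.toString() carries the encoded query
const url = new URL('https://example.invalid/api/v1/serviceowner/dialogs');
url.searchParams.append('Search', 'rapportering');
url.searchParams.append('createdAfter', '2025-01-01T00:00:00Z');

const res = http.get(url.toString());
```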
Lines 105-110: ❓ Verification inconclusive

Investigate server errors in pipeline

The pipeline failures show 10.64% of requests failing with 500 status codes. This could be related to the date range issue or server-side problems.
🏁 Script executed:
```bash
#!/bin/bash
# Check for patterns in the code that might cause the observed 500 errors

# Look for problematic filter combinations that might trigger server errors
echo "Checking filter combinations in the test file:"
grep -A 15 "const filter_combos" tests/k6/tests/serviceowner/performance/serviceOwnerRandomSearch.js

# Check implementation of date filters in the API handler
echo "Searching for date filter implementation in the codebase:"
rg -A 5 "createdAfter|createdBefore" --type cs
```

Length of output: 1372
Action Required: Confirm Server-Side Handling of Date Filters

The performance test in tests/k6/tests/serviceowner/performance/serviceOwnerRandomSearch.js invokes various filter combinations via the `filter_combos` array, including ones that use `createdAfter` and `createdBefore`. While these filters are clearly exercised in the tests, our search for their handling in the C# codebase didn't return any direct implementations. Given the observed 10.64% failure rate with 500 errors in the pipeline, it's important to manually verify that the backend (or API handler) properly supports these date filters and that the tests match the expected API behavior.

- In file: tests/k6/tests/serviceowner/performance/serviceOwnerRandomSearch.js
- Lines ~105-110 contain the test snippet invoking the service owner dialog list.
- The `filter_combos` array (printed via grep) includes several date-filter related entries.
- Next step: Please review the server-side endpoint(s) to confirm that parameters like `createdAfter` and `createdBefore` are correctly implemented and validated. If these filters are intended to trigger specific behaviors or if they are unsupported, adjustments might be necessary to prevent the 500 errors.

tests/k6/common/k6-utils.js (1)
Line 2: Clean addition of URL export.

The export of the URL module from the specified library is well-structured and aligns with the existing code pattern. This addition will support the URL handling needed in the new service owner random search performance test.
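The added export presumably amounts to a one-liner along these lines (the jslib version is an assumption):

```javascript
// tests/k6/common/k6-utils.js -- sketch of the added re-export
export { URL } from 'https://jslib.k6.io/url/1.0.0/index.js';
```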
.github/workflows/dispatch-k6-performance.yml (1)
Line 56: Path added correctly for the new performance test.

The new test path has been properly added to the options list, making the service owner random search test available for selection in the workflow dispatch UI.

Consider maintaining consistent naming conventions across test files. Other test files use kebab-case (e.g., create-dialog.js, serviceowner-search.js), while this new one uses camelCase (serviceOwnerRandomSearch.js). This is just a minor observation that doesn't affect functionality.
Actionable comments posted: 1
🧹 Nitpick comments (5)
tests/k6/tests/enduser/performance/enduserRandomSearch.js (5)
Lines 44-55: Consider adding specific duration thresholds.

While you've set up thresholds for HTTP request failures, the duration thresholds are empty arrays. For performance testing, it's valuable to define specific duration thresholds (e.g., p(95) < 500 ms) to ensure the search API meets performance expectations.
```diff
 for (var filter of filter_combos) {
-    options.thresholds[[`http_req_duration{name:${filter.label}}`]] = [];
+    options.thresholds[[`http_req_duration{name:${filter.label}}`]] = ['p(95)<500', 'p(99)<1000'];
     options.thresholds[[`http_req_failed{name:${filter.label}}`]] = ['rate<=0.0'];
 }
```
Lines 57-65: Consider using more modern JavaScript syntax.

The function looks good, but it's using var for variable declarations instead of the more modern let or const, which provide better scoping.

```diff
 function get_query_params(endUser) {
-    var search_params = {};
-    var filter_combo = randomItem(filter_combos);
-    var label = filter_combo.label
+    const search_params = {};
+    const filter_combo = randomItem(filter_combos);
+    const label = filter_combo.label;
     for (var filter of filter_combo.filters) {
         search_params[filter] = get_filter_value(filter, label, endUser)
     }
     return [search_params, label];
 }
```
Line 61: Update for-loop variable declaration.

Use let instead of var for better scoping.

```diff
-    for (var filter of filter_combo.filters) {
+    for (let filter of filter_combo.filters) {
```
Lines 80-89: Remove redundant null check in setup function.

The function already has a default parameter value, so the null check is unnecessary.

```diff
 export function setup(numberOfEndUsers = defaultNumberOfEndUsers) {
     const tokenOptions = {
         scopes: "digdir:dialogporten"
     }
-    if (numberOfEndUsers === null) {
-        numberOfEndUsers = defaultNumberOfEndUsers;
-    }
     const endusers = getEndUserTokens(numberOfEndUsers, tokenOptions);
     return endusers
 }
```
Lines 115-120: Consider adding more detailed assertions.

While the current assertions check for a 200 status code and valid JSON, you might want to add more specific assertions about the response content to ensure the search functionality is working correctly.

```diff
 describe('Perform enduser dialog list', () => {
     let r = http.get(url.toString(), paramsWithToken);
     expectStatusFor(r).to.equal(200);
     expect(r, 'response').to.have.validJsonBody();
+    // Add assertions for response content
+    const body = r.json();
+    expect(body, 'response body').to.include('dialogs');
+    expect(body, 'response body').to.include('pageSize');
     return r
 });
```
📜 Review details
Configuration used: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (2)
- .github/workflows/dispatch-k6-performance.yml (1 hunks)
- tests/k6/tests/enduser/performance/enduserRandomSearch.js (1 hunks)
🚧 Files skipped from review as they are similar to previous changes (1)
- .github/workflows/dispatch-k6-performance.yml
🧰 Additional context used
🧠 Learnings (1)
tests/k6/tests/enduser/performance/enduserRandomSearch.js (1)
Learnt from: dagfinno
PR: Altinn/dialogporten#1843
File: tests/k6/tests/enduser/performance/enduser-search.js:5-5
Timestamp: 2025-02-12T10:32:18.061Z
Learning: The hardcoded values in k6 performance tests (e.g., defaultNumberOfEndUsers=2799, setupTimeout='10m') are temporary and will be made configurable via environment variables once Altinn-testtools is fixed.
🔇 Additional comments (5)
tests/k6/tests/enduser/performance/enduserRandomSearch.js (5)
Lines 1-8: Imports and configuration look good!

The script correctly imports all necessary K6 modules and utilities. The use of environment variables for configuration (traceCalls and NUMBER_OF_ENDUSERS) is a good practice for test parameterization.
Line 8: Consider making the hardcoded value configurable.

I see from the retrieved learning that these hardcoded values are temporary and will be made configurable via environment variables once Altinn-testtools is fixed. Just flagging this as a reminder.
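A minimal sketch of this kind of environment-driven configuration, using the default value from the learnings note above; the variable names follow the review, the rest is assumed:

```javascript
// Sketch: parameterizing a k6 test via environment variables.
const traceCalls = (__ENV.traceCalls || 'false') === 'true';

// Temporary hardcoded default, to become configurable once Altinn-testtools is fixed
const defaultNumberOfEndUsers = 2799;
const numberOfEndUsers = __ENV.NUMBER_OF_ENDUSERS
    ? parseInt(__ENV.NUMBER_OF_ENDUSERS, 10)
    : defaultNumberOfEndUsers;
```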
Lines 10-23: Test data looks comprehensive and well-organized.

The arrays of test texts and resources provide good coverage for testing various search scenarios.
Lines 28-42: Filter combinations provide good test coverage.

The variety of filter combinations enables thorough testing of the search functionality with different parameter combinations.
Lines 91-121: Main test function looks well-structured and robust.

The test function correctly selects a random end user, generates appropriate query parameters, and makes a request with proper headers and authentication. The assertions correctly verify that the response has a 200 status code and a valid JSON body.
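Pieced together from the observations above, the overall flow is roughly as follows. This is a sketch only: get_query_params is simplified, the base URL is a placeholder, and the real script uses shared helpers from the repository rather than these direct jslib imports:

```javascript
import http from 'k6/http';
import { randomItem } from 'https://jslib.k6.io/k6-utils/1.4.0/index.js';
import { URL } from 'https://jslib.k6.io/url/1.0.0/index.js';
import { describe, expect } from 'https://jslib.k6.io/k6chaijs/4.3.4.3/index.js';

const baseUrl = 'https://example.invalid/api/v1/enduser/'; // placeholder

// Simplified stand-in for the script's get_query_params(endUser) helper
function get_query_params() {
    const combos = [
        { label: 'search-only', params: { Search: 'rapportering' } },
        { label: 'dates-only', params: { createdAfter: '2025-01-01T00:00:00Z' } },
    ];
    const combo = randomItem(combos);
    return [combo.params, combo.label];
}

export default function (endusers) {
    const endUser = randomItem(endusers); // endusers provided by setup()
    const [searchParams, label] = get_query_params();

    // Build the request URL from the chosen filter combination
    const url = new URL(baseUrl + 'dialogs');
    for (const [key, value] of Object.entries(searchParams)) {
        url.searchParams.append(key, value);
    }

    const params = {
        headers: { Authorization: 'Bearer ' + endUser.token },
        tags: { name: label }, // lets thresholds track each combination separately
    };

    describe('Perform enduser dialog list', () => {
        const r = http.get(url.toString(), params);
        expect(r.status, 'response status').to.equal(200);
    });
}
```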
…ed by performance tests
Actionable comments posted: 0
🧹 Nitpick comments (1)
tests/k6/tests/performancetest_data/01-create-dialog.js (1)
Lines 4-14: Consider documenting the function's purpose.

The cleanUp function plays an important role in the test setup by adding the sentinel performance value to search tags, but it lacks documentation explaining its purpose and return value. Adding a JSDoc comment would improve code readability and maintainability.

```diff
+/**
+ * Adds a sentinel performance value to the search tags of the dialog payload
+ * to make it identifiable in performance tests.
+ *
+ * @param {Object} originalPayload - The original dialog payload
+ * @returns {Object} Modified payload with sentinel value in search tags
+ * @throws {Error} If the payload is invalid
+ */
 function cleanUp(originalPayload) {
     if (!originalPayload || typeof originalPayload !== 'object') {
         throw new Error('Invalid payload');
     }
     const payload = {
         ...originalPayload,
         searchTags: [...(originalPayload.searchTags || []), { "value": sentinelPerformanceValue }]
     };
     return payload
 }
```
📜 Review details
Configuration used: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (3)
- tests/k6/common/config.js (1 hunks)
- tests/k6/tests/enduser/performance/enduserRandomSearch.js (1 hunks)
- tests/k6/tests/performancetest_data/01-create-dialog.js (1 hunks)
🚧 Files skipped from review as they are similar to previous changes (1)
- tests/k6/tests/enduser/performance/enduserRandomSearch.js
⏰ Context from checks skipped due to timeout of 90000ms (2)
- GitHub Check: Dry run deploy infrastructure / Deploy to test
- GitHub Check: build / build-and-test
🔇 Additional comments (3)
tests/k6/common/config.js (1)
Line 73: ❓ Verification inconclusive

Updated sentinel value for performance testing

This change renames the sentinel value from "dialogporten-e2e-sentinel-performance" to "dialogporten-performance-sentinel", which provides better naming consistency by following the pattern established in line 72 with "dialogporten-e2e-sentinel". This update aligns with the PR objectives to enhance the testing framework for performance testing.
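For context, the relevant constants in config.js would end up looking roughly like this (the export names are partly assumed; the values are quoted from this review):

```javascript
// tests/k6/common/config.js -- sketch of the sentinel constants around lines 72-73
export const sentinelValue = 'dialogporten-e2e-sentinel';                    // existing e2e sentinel (line 72)
export const sentinelPerformanceValue = 'dialogporten-performance-sentinel'; // renamed performance sentinel (line 73)
```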
🏁 Script executed:
```bash
#!/bin/bash
# Check if the new sentinel value is properly used in the new performance test scripts mentioned in the PR
echo "Checking for references to the new sentinel value in performance test scripts..."
rg "dialogporten-performance-sentinel" tests/k6/tests/ --type js
```

Length of output: 235
Action Required: Verify Updated Sentinel Usage in Performance Test Scripts

The updated sentinel value in tests/k6/common/config.js ("dialogporten-performance-sentinel") correctly follows the desired naming pattern. However, our automated search in the tests/k6/tests/ directory did not find any direct references to this new value. Please manually verify that the performance test scripts are indeed importing and utilizing the updated sentinel from the common config. If the tests are still referencing the old value or not importing the constant as expected, the necessary updates should be applied.

tests/k6/tests/performancetest_data/01-create-dialog.js (2)
Line 13: Function simplification improves maintainability.

The cleanUp function has been simplified to only add search tags with a sentinel value and return the payload, removing previous complexity related to activities and visibleFrom handling. This aligns with the PR's objective of improving search tests by focusing the function on its core purpose.
Lines 1-35: ❓ Verification inconclusive

Verify integration with new performance tests.

This simplified function is part of a broader change introducing new performance testing scripts. Ensure that the dialog payload created by this function properly integrates with the new serviceOwnerRandomSearch.js and enduserRandomSearch.js scripts mentioned in the PR summary.
🏁 Script executed:
```bash
#!/bin/bash
# Check how the dialog creation function is used in the new performance test scripts
echo "Checking usage in serviceOwnerRandomSearch.js:"
rg -A 3 "create-dialog" tests/k6/tests/serviceowner/performance/serviceOwnerRandomSearch.js || echo "No direct references found"

echo -e "\nChecking usage in enduserRandomSearch.js:"
rg -A 3 "create-dialog" tests/k6/tests/enduser/performance/enduserRandomSearch.js || echo "No direct references found"

echo -e "\nChecking how the sentinel value is used in search tests:"
rg -A 3 "sentinelPerformanceValue" tests/k6/tests/serviceowner/performance/serviceOwnerRandomSearch.js tests/k6/tests/enduser/performance/enduserRandomSearch.js
```

Length of output: 791
Performance Test Integration Requires Manual Verification

The change introduces a simplified function for creating dialog payloads for performance testing. However, the automated search in the expected performance test files (i.e. serviceOwnerRandomSearch.js and enduserRandomSearch.js) did not yield direct references to this function or usage of the sentinelPerformanceValue. Please verify manually that:

- The dialog payload created by this function (including the appended sentinelPerformanceValue) is appropriately integrated within the new performance test scripts.
- The integration is either handled indirectly or the payload adjustments are correctly reflected where needed.
🤖 I have created a release *beep* *boop*

---

## [1.57.2](v1.57.1...v1.57.2) (2025-03-03)

### Bug Fixes

* Add missing prefixes in Swagger TypeNameConverter exclude list ([#1992](#1992)) ([88925c1](88925c1))
* **ci:** Wait for infra deployment before deploying apps in prod pipeline ([#1989](#1989)) ([224e837](224e837))

### Miscellaneous Chores

* **performance:** Make improved tests for search ([#1983](#1983)) ([9412a85](9412a85))

---

This PR was generated with [Release Please](https://github.com/googleapis/release-please). See [documentation](https://github.com/googleapis/release-please#release-please).
Improved test for serviceowner search, using combinations of query options
Description
Related Issue(s)
Verification
Documentation
(in the `docs`-directory, Altinnpedia or a separate linked PR in altinn-studio-docs, if applicable)