
chore(performance): Make improved tests for search #1983

Merged: 10 commits into main from performance/improve-serviceowner-search-tests, Mar 3, 2025

Conversation

dagfinno
Collaborator

Improved tests for service owner search, using combinations of query options.

Description

Related Issue(s)

Verification

  • Your code builds clean without any errors or warnings
  • Manual testing done (required)
  • Relevant automated test added (if you find this hard, leave it and we'll help out)

Documentation

  • Documentation is updated (either in the docs-directory, Altinnpedia, or a separate linked PR in altinn-studio-docs, if applicable)

@dagfinno dagfinno added the performance Issue related to performance label Feb 28, 2025
@dagfinno dagfinno requested review from a team as code owners February 28, 2025 11:46
Contributor

coderabbitai bot commented Feb 28, 2025

📝 Walkthrough

Walkthrough

This pull request introduces two new K6 performance testing scripts and adds their paths as test suite options in the workflow configuration. The scripts, serviceOwnerRandomSearch.js and enduserRandomSearch.js, exercise search performance for service owners and end users, respectively. In addition, the common utilities now re-export the URL module from an external library. The overall structure and functionality of the existing workflow remain unchanged.
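
As a rough sketch of the shape such a script takes (the endpoint, filter pool, and values below are illustrative assumptions, not the exact PR code):

```js
import http from 'k6/http';
import { check } from 'k6';
import { randomItem } from 'https://jslib.k6.io/k6-utils/1.2.0/index.js';
import { URL } from 'https://jslib.k6.io/url/1.0.0/index.js';

// Hypothetical filter pool; the real scripts combine many more query options.
const filterPool = [
    { search: 'rapportering' },
    { search: 'skjema', createdAfter: '2024-01-01T00:00:00Z' },
];

export default function () {
    // Placeholder endpoint; the real scripts resolve this from common config.
    const url = new URL('https://localhost/api/v1/serviceowner/dialogs');
    const combo = randomItem(filterPool);
    for (const key in combo) {
        url.searchParams.append(key, combo[key]);
    }
    const res = http.get(url.toString());
    // The real scripts additionally validate the JSON body.
    check(res, { 'status is 200': (r) => r.status === 200 });
}
```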

Changes

| File | Change Summary |
|------|----------------|
| .github/workflows/dispatch-k6-performance.yml | Added two new test suite options with paths tests/k6/tests/serviceowner/performance/serviceOwnerRandomSearch.js and tests/k6/tests/enduser/performance/enduserRandomSearch.js to the testSuitePath inputs in the workflow configuration. |
| tests/k6/common/k6-utils.js | Introduced a new export statement for URL from https://jslib.k6.io/url/1.0.0/index.js to expand the available utility functions. |
| tests/k6/tests/serviceowner/performance/serviceOwnerRandomSearch.js | Added a new performance testing script that performs HTTP GET requests with dynamically generated query parameters and response validations. |
| tests/k6/tests/enduser/performance/enduserRandomSearch.js | Added a new performance testing script that simulates end-user interactions with a dialog service, including dynamic query generation and error handling. |
| tests/k6/common/config.js | Updated the exported constant sentinelPerformanceValue from "dialogporten-e2e-sentinel-performance" to "dialogporten-performance-sentinel". |
| tests/k6/tests/performancetest_data/01-create-dialog.js | Removed the constant ACTIVITY_TYPE_INFORMATION and simplified the cleanUp function to only append the sentinel search tag to the payload. |

Possibly related PRs

  • chore(performance): run tests in k8s #1660: Related to the changes in the main PR as both involve modifications to the K6 performance testing workflows, specifically in the context of adding new test options and adjusting input parameters.
  • chore(performance): Adding breakpoint tests #1793: Related to the addition of new test scripts for K6 performance testing, which aligns with the enhancements made in the retrieved PR that introduces breakpoint tests and modifies workflow configurations for performance testing.

Suggested labels

enhancement, tests

Suggested reviewers

  • oskogstad
  • arealmaas

Contributor

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 1

🧹 Nitpick comments (1)
tests/k6/tests/serviceowner/performance/serviceOwnerRandomSearch.js (1)

10-11: Consider adding a comment explaining search terms

The test data arrays for positive and negative matches could benefit from a brief comment explaining what makes these terms likely to match or not match, to make the test more maintainable.

-const texts = [ "påkrevd", "rapportering", "sammendrag", "Utvidet Status", "ingen HTML-støtte", "et eller annet", "Skjema", "Skjema for rapportering av et eller annet", "Maks 200 tegn", "liste" ];
-const texts_no_hit = [ "sjøvegan", "larvik", "kvalsund", "jøssheim", "sørli"];
+// Common terms that should appear in dialogs
+const texts = [ "påkrevd", "rapportering", "sammendrag", "Utvidet Status", "ingen HTML-støtte", "et eller annet", "Skjema", "Skjema for rapportering av et eller annet", "Maks 200 tegn", "liste" ];
+// Terms unlikely to appear in test dialogs
+const texts_no_hit = [ "sjøvegan", "larvik", "kvalsund", "jøssheim", "sørli"];
📜 Review details

Configuration used: .coderabbit.yaml
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 7a506b3 and 54c1b2f.

📒 Files selected for processing (3)
  • .github/workflows/dispatch-k6-performance.yml (1 hunks)
  • tests/k6/common/k6-utils.js (1 hunks)
  • tests/k6/tests/serviceowner/performance/serviceOwnerRandomSearch.js (1 hunks)
🧰 Additional context used
🪛 GitHub Actions: serviceowner search 5/3m/1
tests/k6/tests/serviceowner/performance/serviceOwnerRandomSearch.js

[error] 1-1: Response status: expected 500 to equal 200. The test failed due to an unexpected server error.


[error] 1-1: http_req_failed: 10.64% 28 out of 263 requests failed.

⏰ Context from checks skipped due to timeout of 90000ms (2)
  • GitHub Check: Dry run deploy infrastructure / Deploy to test
  • GitHub Check: build / build-and-test
🔇 Additional comments (7)
tests/k6/tests/serviceowner/performance/serviceOwnerRandomSearch.js (5)

28-41: Well-structured filter combinations for comprehensive testing

The test is designed with a good variety of filter combinations covering different search scenarios. Approaching the search functionality from multiple angles like this is excellent for performance analysis.
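
For reference, a hedged sketch of what such a filter_combos structure might look like (labels and entries are illustrative; only search, createdAfter, and createdBefore are confirmed elsewhere in this review):

```js
// Illustrative shape only; the real array in the PR has more combinations.
const filter_combos = [
    { label: 'search', filters: ['search'] },
    { label: 'search-createdafter', filters: ['search', 'createdAfter'] },
    { label: 'createdafter-createdbefore', filters: ['createdAfter', 'createdBefore'] },
];
```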


46-56: Good performance metrics configuration

The configuration captures detailed percentiles (95th, 99th, 99.5th, 99.9th) which is excellent for identifying performance outliers. Setting up separate thresholds for each filter combination allows for granular performance tracking.
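
A minimal sketch of such a configuration, assuming k6's summaryTrendStats option and tag-filtered threshold keys (the exact threshold values in the PR may differ):

```js
// Tiny stand-in for the filter_combos array sketched earlier in this review.
const filter_combos = [{ label: 'search' }, { label: 'createdafter-createdbefore' }];

export const options = {
    // Report tail latencies beyond k6's default trend stats.
    summaryTrendStats: ['avg', 'min', 'med', 'max', 'p(95)', 'p(99)', 'p(99.5)', 'p(99.9)'],
    thresholds: {},
};

// One duration/failure threshold pair per filter combination label,
// matching the pattern quoted later in this review:
for (const filter of filter_combos) {
    options.thresholds[`http_req_duration{name:${filter.label}}`] = [];
    options.thresholds[`http_req_failed{name:${filter.label}}`] = ['rate<=0.0'];
}
```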


96-98: Good implementation of trace logging

The conditional addition of the traceparent to the request tags is a good practice for debugging. This allows detailed request tracing when needed without adding overhead to normal test runs.
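
A hedged sketch of this pattern (function and variable names are assumed; traceCalls matches the environment flag mentioned later in this thread):

```js
import http from 'k6/http';

// Tracing is opt-in via an environment flag.
const traceCalls = (__ENV.traceCalls || 'false') === 'true';

// label and traceparent are assumed to be produced by the surrounding test;
// the traceparent tag is only attached when tracing is explicitly enabled.
function getRequest(url, label, traceparent) {
    const params = { tags: { name: label } };
    if (traceCalls) {
        params.tags.traceparent = traceparent;
    }
    return http.get(url, params);
}
```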


3-3: Verify URL import is properly used

The script correctly imports the URL utility and uses it effectively at line 100 to build the request URL with search parameters. This validates the addition of the URL export in the k6-utils.js file.


105-110:

❓ Verification inconclusive

Investigate server errors in pipeline

The pipeline failures show 10.64% of requests failed with 500 status codes. This could be related to the date range issue or server-side problems.


🏁 Script executed:

#!/bin/bash
# Check for patterns in the code that might cause the observed 500 errors

# Look for problematic filter combinations that might trigger server errors
echo "Checking filter combinations in the test file:"
grep -A 15 "const filter_combos" tests/k6/tests/serviceowner/performance/serviceOwnerRandomSearch.js

# Check implementation of date filters in the API handler
echo "Searching for date filter implementation in the codebase:"
rg -A 5 "createdAfter|createdBefore" --type cs

Length of output: 1372


Action Required: Confirm Server-Side Handling of Date Filters

The performance test in tests/k6/tests/serviceowner/performance/serviceOwnerRandomSearch.js is invoking various filter combinations via the filter_combos array, including ones that use "createdAfter" and "createdBefore". While these filters are clearly exercised in the tests, our search for their handling in the C# codebase didn’t return any direct implementations. Given the observed 10.64% failure rate with 500 errors in the pipeline, it’s important to manually verify that the backend (or API handler) properly supports these date filters and that the tests match the expected API behavior.

  • In File: tests/k6/tests/serviceowner/performance/serviceOwnerRandomSearch.js
    • Lines: ~105-110 contain the test snippet invoking the service owner dialog list.
    • The filter_combos array (printed via grep) includes several date-filter related entries.
  • Next Step: Please review the server-side endpoint(s) to confirm that parameters like "createdAfter" and "createdBefore" are correctly implemented and validated. If these filters are intended to trigger specific behaviors or if they are unsupported, adjustments might be necessary to prevent the 500 errors.
tests/k6/common/k6-utils.js (1)

2-2: Clean addition of URL export.

The export of the URL module from the specified library is well-structured and aligns with the existing code pattern. This addition will support URL handling functionality needed in the new service owner random search performance test.
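
For reference, the added line is presumably a one-line re-export along these lines (exact formatting may differ):

```js
// tests/k6/common/k6-utils.js
export { URL } from 'https://jslib.k6.io/url/1.0.0/index.js';
```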

.github/workflows/dispatch-k6-performance.yml (1)

56-56: Path added correctly for the new performance test.

The new test path has been properly added to the options list, making the service owner random search test available for selection in the workflow dispatch UI.

Consider maintaining consistent naming conventions across test files. Other test files use kebab-case (e.g., create-dialog.js, serviceowner-search.js), while this new one uses camelCase (serviceOwnerRandomSearch.js). This is just a minor observation that doesn't affect functionality.

Contributor

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 1

🧹 Nitpick comments (5)
tests/k6/tests/enduser/performance/enduserRandomSearch.js (5)

44-55: Consider adding specific duration thresholds.

While you've set up thresholds for HTTP request failures, the duration thresholds are empty arrays. For performance testing, it's valuable to define specific duration thresholds (e.g., p95 < 500) to ensure the search API meets performance expectations.

for (var filter of filter_combos) {
-    options.thresholds[[`http_req_duration{name:${filter.label}}`]] = [];
+    options.thresholds[[`http_req_duration{name:${filter.label}}`]] = ['p(95)<500', 'p(99)<1000'];
    options.thresholds[[`http_req_failed{name:${filter.label}}`]] = ['rate<=0.0'];
}

57-65: Consider using more modern JavaScript syntax.

The function looks good, but it's using var for variable declarations instead of the more modern let or const, which provide better scoping.

function get_query_params(endUser) {
-    var search_params = {};
-    var filter_combo = randomItem(filter_combos);
-    var label = filter_combo.label
+    const search_params = {};
+    const filter_combo = randomItem(filter_combos);
+    const label = filter_combo.label;
    for (var filter of filter_combo.filters) {
        search_params[filter] = get_filter_value(filter, label, endUser)
    }
    return [search_params, label];
}

61-61: Update for loop variable declaration.

Use let instead of var for better scoping.

-    for (var filter of filter_combo.filters) {
+    for (let filter of filter_combo.filters) {

80-89: Remove redundant null check in setup function.

The function already has a default parameter value, so the null check is unnecessary.

export function setup(numberOfEndUsers = defaultNumberOfEndUsers) {
    const tokenOptions = {
        scopes: "digdir:dialogporten"
    }
-    if (numberOfEndUsers === null) {
-        numberOfEndUsers = defaultNumberOfEndUsers;
-    }
    const endusers = getEndUserTokens(numberOfEndUsers, tokenOptions);
    return endusers
}

115-120: Consider adding more detailed assertions.

While the current assertions check for a 200 status code and valid JSON, you might want to add more specific assertions about the response content to ensure the search functionality is working correctly.

describe('Perform enduser dialog list', () => {
    let r = http.get(url.toString(), paramsWithToken);
    expectStatusFor(r).to.equal(200);
    expect(r, 'response').to.have.validJsonBody();
+    // Add assertions for response content
+    const body = r.json();
+    expect(body, 'response body').to.include('dialogs');
+    expect(body, 'response body').to.include('pageSize');
    return r
});
📜 Review details

Configuration used: .coderabbit.yaml
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 54c1b2f and 53e5b74.

📒 Files selected for processing (2)
  • .github/workflows/dispatch-k6-performance.yml (1 hunks)
  • tests/k6/tests/enduser/performance/enduserRandomSearch.js (1 hunks)
🚧 Files skipped from review as they are similar to previous changes (1)
  • .github/workflows/dispatch-k6-performance.yml
🧰 Additional context used
🧠 Learnings (1)
tests/k6/tests/enduser/performance/enduserRandomSearch.js (1)
Learnt from: dagfinno
PR: Altinn/dialogporten#1843
File: tests/k6/tests/enduser/performance/enduser-search.js:5-5
Timestamp: 2025-02-12T10:32:18.061Z
Learning: The hardcoded values in k6 performance tests (e.g., defaultNumberOfEndUsers=2799, setupTimeout='10m') are temporary and will be made configurable via environment variables once Altinn-testtools is fixed.
🔇 Additional comments (5)
tests/k6/tests/enduser/performance/enduserRandomSearch.js (5)

1-8: Imports and configuration look good!

The script correctly imports all necessary K6 modules and utilities. The use of environment variables for configuration (traceCalls and NUMBER_OF_ENDUSERS) is a good practice for test parameterization.
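
A hedged sketch of how those two environment variables are typically read in k6 (defaults are illustrative; 2799 matches the temporarily hardcoded value noted in the learnings below):

```js
// Illustrative env-var wiring; the real script's names are per the review above.
const traceCalls = (__ENV.traceCalls || 'false') === 'true';
const defaultNumberOfEndUsers = parseInt(__ENV.NUMBER_OF_ENDUSERS || '2799', 10);
```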


8-8: Consider making the hardcoded value configurable.

I see from the retrieved learning that these hardcoded values are temporary and will be made configurable via environment variables once Altinn-testtools is fixed. Just flagging this as a reminder.


10-23: Test data looks comprehensive and well-organized.

The arrays of test texts and resources provide good coverage for testing various search scenarios.


28-42: Filter combinations provide good test coverage.

The variety of filter combinations enables thorough testing of the search functionality with different parameter combinations.


91-121: Main test function looks well-structured and robust.

The test function correctly selects a random end user, generates appropriate query parameters, and makes a request with proper headers and authentication. The assertions correctly verify that the response has a 200 status code and valid JSON body.

@dagfinno dagfinno temporarily deployed to test March 3, 2025 07:13 — with GitHub Actions Inactive
Contributor

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 0

🧹 Nitpick comments (1)
tests/k6/tests/performancetest_data/01-create-dialog.js (1)

4-14: Consider documenting the function's purpose.

The cleanUp function plays an important role in the test setup by adding the sentinel performance value to search tags, but lacks documentation explaining its purpose and return value. Adding a JSDoc comment would improve code readability and maintainability.

+/**
+ * Adds a sentinel performance value to the search tags of the dialog payload
+ * to make it identifiable in performance tests.
+ * 
+ * @param {Object} originalPayload - The original dialog payload
+ * @returns {Object} Modified payload with sentinel value in search tags
+ * @throws {Error} If the payload is invalid
+ */
function cleanUp(originalPayload) {
    if (!originalPayload || typeof originalPayload !== 'object') {
        throw new Error('Invalid payload');
    }

    const payload = {
        ...originalPayload,
        searchTags: [...(originalPayload.searchTags || []), { "value": sentinelPerformanceValue }]
    };
    return payload
}
📜 Review details

Configuration used: .coderabbit.yaml
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 53e5b74 and eba085a.

📒 Files selected for processing (3)
  • tests/k6/common/config.js (1 hunks)
  • tests/k6/tests/enduser/performance/enduserRandomSearch.js (1 hunks)
  • tests/k6/tests/performancetest_data/01-create-dialog.js (1 hunks)
🚧 Files skipped from review as they are similar to previous changes (1)
  • tests/k6/tests/enduser/performance/enduserRandomSearch.js
⏰ Context from checks skipped due to timeout of 90000ms (2)
  • GitHub Check: Dry run deploy infrastructure / Deploy to test
  • GitHub Check: build / build-and-test
🔇 Additional comments (3)
tests/k6/common/config.js (1)

73-73:

❓ Verification inconclusive

Updated sentinel value for performance testing

This change renames the sentinel value from "dialogporten-e2e-sentinel-performance" to "dialogporten-performance-sentinel", which provides better naming consistency by following the pattern established in line 72 with "dialogporten-e2e-sentinel". This update aligns with the PR objectives to enhance the testing framework for performance testing.
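
The change presumably amounts to the one-line constant rename sketched below (the line-72 constant's name is an assumption based on this comment; only its value is confirmed):

```js
// tests/k6/common/config.js
export const sentinelValue = 'dialogporten-e2e-sentinel';                     // line 72, unchanged (name assumed)
export const sentinelPerformanceValue = 'dialogporten-performance-sentinel';  // line 73, renamed in this PR
```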


🏁 Script executed:

#!/bin/bash
# Check if the new sentinel value is properly used in the new performance test scripts mentioned in the PR

echo "Checking for references to the new sentinel value in performance test scripts..."
rg "dialogporten-performance-sentinel" tests/k6/tests/ --type js

Length of output: 235


Action Required: Verify Updated Sentinel Usage in Performance Test Scripts

The updated sentinel value in tests/k6/common/config.js ("dialogporten-performance-sentinel") correctly follows the desired naming pattern. However, our automated search in the tests/k6/tests/ directory did not find any direct references to this new value. Please manually verify that the performance test scripts are indeed importing and utilizing the updated sentinel from the common config. If the tests are still referencing the old value or not importing the constant as expected, the necessary updates should be applied.

tests/k6/tests/performancetest_data/01-create-dialog.js (2)

13-13: Function simplification improves maintainability.

The cleanUp function has been simplified to only add search tags with a sentinel value and return the payload, removing previous complexity related to activities and visibleFrom handling. This aligns with the PR's objective of improving search tests by focusing the function on its core purpose.


1-35:

❓ Verification inconclusive

Verify integration with new performance tests.

This simplified function is part of a broader change introducing new performance testing scripts. Ensure that the dialog payload created by this function properly integrates with the new serviceOwnerRandomSearch.js and enduserRandomSearch.js scripts mentioned in the PR summary.


🏁 Script executed:

#!/bin/bash
# Check how the dialog creation function is used in the new performance test scripts

echo "Checking usage in serviceOwnerRandomSearch.js:"
rg -A 3 "create-dialog" tests/k6/tests/serviceowner/performance/serviceOwnerRandomSearch.js || echo "No direct references found"

echo -e "\nChecking usage in enduserRandomSearch.js:"
rg -A 3 "create-dialog" tests/k6/tests/enduser/performance/enduserRandomSearch.js || echo "No direct references found"

echo -e "\nChecking how the sentinel value is used in search tests:"
rg -A 3 "sentinelPerformanceValue" tests/k6/tests/serviceowner/performance/serviceOwnerRandomSearch.js tests/k6/tests/enduser/performance/enduserRandomSearch.js

Length of output: 791


Performance Test Integration Requires Manual Verification

The change introduces a simplified function for creating dialog payloads for performance testing. However, the automated search in the expected performance test files (i.e. serviceOwnerRandomSearch.js and enduserRandomSearch.js) did not yield direct references to this function or usage of the sentinelPerformanceValue. Please verify manually that:

  • The dialog payload created by this function (including the appended sentinelPerformanceValue) is appropriately integrated within the new performance test scripts.
  • The integration is either handled indirectly or the payload adjustments are correctly reflected where needed.

@dagfinno dagfinno merged commit 9412a85 into main Mar 3, 2025
23 checks passed
@dagfinno dagfinno deleted the performance/improve-serviceowner-search-tests branch March 3, 2025 08:46
oskogstad pushed a commit that referenced this pull request Mar 3, 2025
🤖 I have created a release *beep* *boop*
---


## [1.57.2](v1.57.1...v1.57.2) (2025-03-03)


### Bug Fixes

* Add missing prefixes in Swagger TypeNameConverter exclude list ([#1992](#1992)) ([88925c1](88925c1))
* **ci:** Wait for infra deployment before deploying apps in prod pipeline ([#1989](#1989)) ([224e837](224e837))


### Miscellaneous Chores

* **performance:** Make improved tests for search ([#1983](#1983)) ([9412a85](9412a85))

---
This PR was generated with [Release Please](https://github.com/googleapis/release-please). See [documentation](https://github.com/googleapis/release-please#release-please).