setup() per scenario #1638
Comments
Currently, you can say that a scenario should start after X time of the test has passed, where the idea is that it will actually start at that time, not that [...]. Also, this will mean that if the [...]
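For context, a minimal sketch of the existing startTime scenario option being referred to here; the scenario name, durations, and URL below are made up for illustration:

    import http from 'k6/http';

    export const options = {
        scenarios: {
            // Hypothetical scenario name; it starts only after 30s of test runtime have passed.
            delayed_scenario: {
                executor: 'constant-vus',
                vus: 5,
                duration: '1m',
                startTime: '30s',
            },
        },
    };

    export default function () {
        http.get('https://test.k6.io/'); // placeholder URL
    }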
Yeah, given that VUs can be reused between scenarios, something like this will be a corner case factory. Probably not impossible, but much, much trickier than #785
Given these two things: [...]
I think we can use both of these to implement 90+% of this proposed feature with no additional changes to k6, almost completely avoiding the corner case factory I mentioned above. Essentially, if the per-scenario VU init function is something like this:

    import http from 'k6/http';

    function vuInitForScenarioX() {
        return new SharedObject('vuInitForScenarioXData', function () {
            // This function will be called only once, and the request below will be executed only once,
            // but other VUs will get a memory-safe read-only reference to the data.
            let resp = http.get(/* ... */);
            return {foo: 123, data: JSON.parse(resp.body), /* ... */};
        });
    }

The only potential drawback to this that I can think of is that for distributed/cloud tests that run on multiple instances, the lambda will be executed once on every instance. This may actually be a feature for some use cases, but it's definitely going to be a drawback for others. Still, for those, the global [...]

So, yeah, given that per-scenario per-VU init functions are more flexible and can be used to implement 90+% of the per-scenario setup() [...]
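For illustration, here is a sketch of how such a per-scenario init function might be wired up with the existing exec scenario option. It assumes the SharedObject semantics described above; neither SharedObject nor dedicated per-scenario init functions exist in k6 at the time of this issue, and the scenario name and executor settings below are made up:

    // Assumed: vuInitForScenarioX() is defined as in the snippet above.
    // Called in the init context, so every VU gets a reference; by assumption,
    // the expensive work inside SharedObject runs only once per instance.
    const dataForScenarioX = vuInitForScenarioX();

    export const options = {
        scenarios: {
            scenarioX: {
                executor: 'per-vu-iterations',
                vus: 10,
                iterations: 20,
                exec: 'scenarioX',
            },
        },
    };

    export function scenarioX() {
        // Only this scenario's exec function touches the scenario-specific data.
        console.log(dataForScenarioX.foo);
    }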
Ability to have a per-scenario setup() function.
Feature Description
The current setup() function is global for the test script. It gets executed only once before the test starts.
The usefulness of the global setup() function is limited when many scenarios are used.
The setup() function isn't scenario-aware, so it's not possible to return different data depending on the scenario.
While it's possible to return an object {} with data for all scenarios from the setup() function, it's not a very good user experience.
Sample script showing the limitation:
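A minimal sketch, with made-up scenario names, URLs, and data keys (not the reporter's original script): the single global setup() must bundle data for every scenario into one object, and each exec function has to pick out its own slice.

    import http from 'k6/http';

    export const options = {
        scenarios: {
            browse: { executor: 'constant-vus', vus: 5, duration: '30s', exec: 'browse' },
            checkout: { executor: 'constant-vus', vus: 2, duration: '30s', exec: 'checkout' },
        },
    };

    // One global setup() for the whole test: it has to prepare and return data
    // for every scenario, even though each scenario only needs its own part.
    export function setup() {
        return {
            browse: { catalog: http.get('https://test.k6.io/').body.length }, // placeholder "data"
            checkout: { token: 'dummy-token' },
        };
    }

    export function browse(data) {
        // Every exec function receives the whole setup() result and must pick out its part.
        console.log(data.browse.catalog);
    }

    export function checkout(data) {
        console.log(data.checkout.token);
    }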
Suggested Solution
This is open for discussion, but the obvious suggestion is to follow the same pattern as with the exec function. The scenario-specific setup() function should be executed right before the scenario starts rather than at the beginning of the test.
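A hypothetical sketch of what this could look like, following the exec pattern. The setup scenario option and the function names below do not exist in k6 and are shown purely to illustrate the proposal:

    export const options = {
        scenarios: {
            checkout: {
                executor: 'constant-vus',
                vus: 2,
                duration: '30s',
                exec: 'checkout',
                // Hypothetical option: run this function once, right before the scenario starts,
                // and pass its return value to the scenario's exec function.
                setup: 'checkoutSetup',
            },
        },
    };

    export function checkoutSetup() {
        return { token: 'dummy-token' };
    }

    export function checkout(data) {
        // data here would be the return value of checkoutSetup(), not the global setup().
        console.log(data.token);
    }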