feat: experimental single tab run mode for component testing #23104
Our config isn't in the greatest state, given we don't validate against this list. As is, e2e testers could set these even if the server didn't honor them. It'd be better to add this to the list of breaking e2e test configuration changes lower in this file.
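For illustration, a minimal sketch (assuming the flag lands as `experimentalSingleTabRunMode`, matching the PR title, and is scoped to component testing only) of how the option would appear in a user's `cypress.config.ts`, with the e2e-scoped usage being the case the breaking-config list would reject:

```ts
// cypress.config.ts - a minimal sketch, assuming the experimental flag is
// exposed as `experimentalSingleTabRunMode` and is valid only under the
// `component` testing type.
import { defineConfig } from 'cypress'

export default defineConfig({
  component: {
    // Run all component specs in a single browser tab during `cypress run`.
    experimentalSingleTabRunMode: true,
    devServer: {
      framework: 'react',
      bundler: 'webpack',
    },
  },
  // Setting the flag under `e2e` is what the breaking e2e configuration
  // list discussed above would reject as invalid:
  // e2e: { experimentalSingleTabRunMode: true }, // component-only option
})
```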
Good idea. Also a good chance to solicit feedback from CT users. What do you think of this?
We reset the state here, much like we do between tabs, so this minimizes the chance of negatively impacting spec isolation.
Maybe this doesn't matter since this is experimental, but if something goes wrong, this loses the test retries for CT.
Oh true, this seems like a problem - something we will definitely need to highlight in the notes, unless we can find a fix prior to releasing this experiment (if releasing with this drawback is even acceptable; we'll need to update the tech brief with it... good point).
Actually @emilyrohrbough I don't get it - I tried to use `retries` but it works okay: 4c99b4d#diff-ff7a203eea50ba8d34ac184c54310e07e4f279447a923ba9bec878e643b9e289R699-R700. Is there another way of configuring this, or an edge case that I'm missing?
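For reference, this is the standard shape of CT retries configuration (generic Cypress `retries` syntax, not something introduced by this PR), which is presumably what the linked test exercises:

```ts
// cypress.config.ts - standard Cypress retries configuration for component
// tests; shown for reference, not specific to this PR.
import { defineConfig } from 'cypress'

export default defineConfig({
  component: {
    retries: {
      runMode: 2,  // retry failed tests up to 2 times in `cypress run`
      openMode: 0, // no retries in interactive mode
    },
    devServer: {
      framework: 'react',
      bundler: 'webpack',
    },
  },
})
```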
The retries are only for the initial browser launch, right? I'm not sure we'd need retries for when we're navigating in this new way.
We might be dropping the server reset(). Does it make single tab move if we reset this regardless of the flag?
What do you mean "move"?
Also, I don't exactly remember why we reset here at all - I'm not sure how this impacts single tab run mode uniquely; I'm guessing anything that impacts regular run mode will also impact single tab run mode.
Does this mean our system tests aren't in a great state if we aren't testing against our supported versions?
What's happening is we are falling back to the bundled version of webpack that we ship with Cypress. In open mode we actually show a warning in the launchpad. In run mode, we don't do that - we print the warning in the CLI. In that sense, the system tests are doing exactly what they should - warning us we are missing a dependency that we should install.
We could add `webpack` to the package.json for this test project if we'd like to get rid of this warning. The warning should probably be "If you're experiencing problems, install the missing dependency or downgrade to a supported version and restart Cypress." In most cases, people won't get to this point, since they'll use the launchpad to configure everything, and we have a screen to check and help them install the correct dependencies.
I'm assuming no, but with our current system tests, is there any way to benchmark the performance gains from this flag?
Also, how interactive are the tests in this system test? Just curious if it'd highlight potential test isolation leaks, or if they are pretty "simple" tests that should be reliable regardless.
They are fairly simple at this point - I don't think they will reveal any isolation issues, but having some that do sure would be nice. I'm not entirely sure of the best example of one, though - something that modifies `window.top`? 🤔
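As a purely hypothetical illustration (these specs are not part of the PR), a pair of component spec files that deliberately leak state onto the AUT window; the same idea would apply to `window.top`. If single tab run mode fails to reset between specs, the second spec would catch it:

```ts
// leaky-a.cy.ts - hypothetical spec that deliberately pollutes window state.
describe('leaky spec A', () => {
  it('writes a value onto the window', () => {
    cy.window().then((win) => {
      (win as any).__leakedBySpecA = 'hello'
    })
  })
})

// isolation-check-b.cy.ts - hypothetical spec run afterwards; if spec
// isolation holds in single tab run mode, the leaked value must be gone.
describe('isolation check B', () => {
  it('does not see state left behind by the previous spec', () => {
    cy.window().then((win) => {
      expect((win as any).__leakedBySpecA).to.be.undefined
    })
  })
})
```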
As for performance and tracking regressions, no... I think we should look into this, probably as a fast follow. One idea I had was that we could just run `app-component-tests` twice using the system test infrastructure, once with the experimental flag and once without, then print a report and make some assertions. It might cost us a bit of extra $, so I'll need to look into this more closely. I think there's some talk of using Grafana in #team-e2e-testing, but this seems like it won't land anytime soon.
I'll try to add something small in here to at least verify it is faster with the experimental flag.
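Outside the system test infrastructure, a rough sketch of how such a comparison could be scripted with the Cypress Module API. The project path is hypothetical, and it assumes the experimental flag can be toggled via the `config` override; if experiments must live in the config file, two `configFile` variants would be needed instead:

```ts
// benchmark-single-tab.ts - a rough sketch, not part of this PR.
// Runs the same component-test project twice via the Cypress Module API,
// once with `experimentalSingleTabRunMode` and once without, and compares
// wall-clock duration.
import cypress from 'cypress'

async function timedRun(experimentalSingleTabRunMode: boolean) {
  const results = await cypress.run({
    project: './packages/app', // hypothetical project path
    testingType: 'component',
    config: { experimentalSingleTabRunMode },
  })

  if (results.status === 'failed') {
    throw new Error(`cypress.run failed: ${results.message}`)
  }

  return results.totalDuration
}

async function main() {
  const withoutFlag = await timedRun(false)
  const withFlag = await timedRun(true)

  console.log(`multi-tab:  ${withoutFlag}ms`)
  console.log(`single-tab: ${withFlag}ms`)
  console.log(`delta:      ${withoutFlag - withFlag}ms`)
}

main().catch((err) => {
  console.error(err)
  process.exit(1)
})
```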