End-to-End Tests on Generated Pipelines using Puppeteer #580
for more information, see https://pre-commit.ci
The most reliable way would probably be to… Though, is this just for the tutorial data? Did you want to add cases from the general data-based testing suite? If it's tutorial-only, it might be worth finishing up #530 to simplify the issue altogether.
You mean this PR? #530
Either one is fine. It depends on how comprehensive we want to be (e.g. we could quite easily run through all the YAML pipelines automatically). It would be great if all the tests worked for any user without any configuration (i.e. we include the data somehow), but there are ways to seamlessly transition between the two states.
Ah, yes, #530
That's pretty tough to do in a way that scales well (even in Actions we rely on caching) and stays interoperable. The simplest approach is for us to require the data to exist, and the simplest way for the data to exist is to use tools specifically built for data transfer.
#530 still requires that the testing data is on the user's computer, so we'll still have to install it in a default location on the CI. You're right, though, that we will want users to interact with a multi-subject, multi-session dataset for the tutorial, so it would be nice to have that merged. I'll use the generated dataset for our manual pipeline test.
Got it, thank you for the clarification. I'll use the approach you've defined above.
What do you mean by 'testing data'? That PR does not require the GIN example data; it generates synthetic files, if those are the ones you're talking about. And since we're the ones making them, we have full control over where they get saved.
Oh, I see, I read your changes incorrectly. Do you want to run E2E tests using the testing data at all, or do you think that generating these files is sufficient, at least for now?
I'll continue to think about it. Since this mainly came up as a way to automatically keep the tutorial screenshots up to date, I think focusing on that first, while leaving open the possibility of a full real-data-based testing suite in the future, might be the best way to go.
@garrettmflynn Is it possible to track coverage via the E2E tests? That could reduce the number of other tests we need to add to increase it.
Hmm, it looks possible, but it could be somewhat complicated. I'm having some issues figuring it out because of blocked tests after adding the DANDI_CACHE environment variable, but I'll try a few things and get back to you once that's sorted out.
@garrettmflynn Has this been replaced?
Technically, no, though we have pretty much decided not to pursue this. This particular PR prepares the E2E tests to run through all of the generated test pipelines: something outside the scope of the tutorials, but still possibly useful. Do we want to pursue this or just close it out?
@garrettmflynn CI does not like something in the dependencies at the moment, on every platform it seems.
…aWithoutBorders/nwb-guide into full-pipeline-puppeteer
Thanks for flagging! Should be fixed. I was also able to knock out the TDT issue here; turns out it was more related to these changes than I thought.
Let me know when all CI checks are passing. It also looks like the docs have a build issue. Are all the tutorial changes intended in this PR? I had thought they were from a different branch.
src/renderer/src/stories/pages/guided-mode/data/GuidedMetadata.js (review comment, resolved)
Looked over everything; just one small question about the form of an error message, and the rest looks good to go!
Co-authored-by: Cody Baker <[email protected]>
@garrettmflynn Thanks for the hard work on this, this is gonna be great for maintainability!
This PR will implement an end-to-end testing suite that runs through generated and manual pipelines using Puppeteer.
@CodyCBakerPhD Before we can push forward on this reliably, we need the GIN testing data downloaded in the GitHub Actions workflow. I noticed how you're doing this in NeuroConv, but it wasn't clear how to specify where the datasets are downloaded to. For this case, I've assumed that the E2E tests will find the GIN data at `~/NWB_GUIDE/test-data`, as it isn't clear how to pass this path dynamically using `vitest`. Can you clarify how we'd install the GIN data at this path for GitHub Actions to use?
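For context, a hedged sketch of what such a workflow step might look like, assuming a datalad-based fetch similar to NeuroConv's CI. This is not an existing GUIDE workflow, and the dataset URL and exact commands should be verified before relying on them:

```yaml
# Hypothetical GitHub Actions step (not an existing GUIDE workflow): fetch
# GIN data to the fixed path the E2E tests expect. Assumes git-annex is
# available on the runner; the URL mirrors NeuroConv's ephys testing data.
- name: Install GIN test data
  run: |
    pip install datalad
    datalad install -s https://gin.g-node.org/NeuralEnsemble/ephy_testing_data ~/NWB_GUIDE/test-data
    cd ~/NWB_GUIDE/test-data && datalad get .
```

Caching the resulting directory with `actions/cache` would keep repeated runs from re-downloading the data.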