Specific E2E tests failing due to data mismatches #782
Comments
This commit matches the data for the affected end-to-end tests but is not a fix for the issue, so the tests have been skipped by @sam-glendenning.
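For reference, the skipping itself is just Mocha's it.skip (Cypress uses the Mocha interface), which reports the test as pending instead of running it. A minimal sketch with a hypothetical test name, not the real spec contents:

```ts
// studies.spec.ts (sketch only; the test name here is hypothetical)
// it.skip marks the test as pending so it no longer fails CI, while keeping
// the body in place to be re-enabled once the data mismatch is resolved.
it.skip('should display the expected study data', () => {
  // ...original test body unchanged...
});
```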
Found some further problems that point to issues with the DataGateway API date and time code. Goel and I have found a test in studies.spec.ts with mismatching data between the CI and preprod for one particular study it tests for. In this test, Study 4 is the expected result but Study 325 sometimes appears instead. If you go on preprod, you'll see that Study 325 has a start date of 2081, but on the CI Study 325 has a start date of 2000, ahead of Study 4. What's even more confusing is that when Goel and I tested this locally using the preprod e2e settings, the data we received also had a start date of 2000 for Study 325.
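To illustrate why the ordering matters: the test asserts on whichever study ends up in the first row, so whether STUDY 4 or STUDY 325 appears depends entirely on how their start dates sort in the generated data. A rough sketch of that kind of first-row check (the selector is hypothetical, not copied from studies.spec.ts):

```ts
// Sketch of a first-row assertion; which study lands in row 1 depends on the
// start dates in the data, hence STUDY 4 on the CI vs STUDY 325 on preprod.
it('should show the expected study in the first row', () => {
  cy.get('[aria-rowindex="1"]').contains('STUDY 4');
});
```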
More e2e tests have been found to fail due to data mismatches. See #758 (comment)
I believe this is caused by preprod not having quite the newest version of the frontend. On master (my VSCode window), the investigation start date is being displayed (as per the ISIS recommended changes) but preprod is displaying the study start date (hail the lack of a configured language file!). Once preprod contains the ISIS recommended changes, I think this issue will be fixed, though I think the expected study name will need to be changed to STUDY 325.
That makes sense now @MRichards99. Once preprod has been updated with the latest changes, we can check the tests again.
Nice work Matt, I would never have noticed that!
The differences between the file sizes (on the more information panel) were due to preprod's TopCAT caching. Louise cleared the cache and the file sizes are now the same between preprod and my local instance. Like the study tests, the expected sizes might have to be changed (I think they're expecting the old values), but that issue is now resolved. Just the timezone issue to investigate, one for tomorrow me thinks.
That's awesome @MRichards99! 😃
Ah, interesting. Sounds like another thing we'll have to remember to do if we ever regenerate the data on preprod machines - clear the cache.
The inconsistent timezone issue was occurring because the preprod machine was set to BST. Louise set the JVM timezone to UTC (we didn't want to set the system timezone to UTC in case RIG's automation changed it back) and this has fixed it. I've spent the afternoon trying to force the generator script to add UTC to all the dates it generates, but it doesn't seem to work. I'm going to add a note to the API's documentation about this, as ensuring we use UTC seems to be the best/easiest workaround. I'll unskip the tests, update the expected data (they just need to be changed back) and then make a PR for it at some point.
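For anyone who hits this again: the underlying problem is that the same instant formats to different wall-clock values (and sometimes a different calendar date) depending on the process timezone, which is why a preprod machine on BST and a CI machine on UTC can disagree. A small TypeScript illustration with a made-up timestamp (the real values come from the API, not from code like this):

```ts
// One instant, two renderings: the formatting timezone decides what the table
// (and therefore the e2e expectation) ends up seeing.
const instant = new Date('2019-06-10T23:30:00Z');

// Rendered in UTC the date is still the 10th...
console.log(instant.toLocaleString('en-GB', { timeZone: 'UTC' }));
// "10/06/2019, 23:30:00"

// ...but rendered in UK local time (BST, UTC+1) it rolls over to the 11th.
console.log(instant.toLocaleString('en-GB', { timeZone: 'Europe/London' }));
// "11/06/2019, 00:30:00"
```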
Great work, Matt, I'll keep an eye on the process. We can say the issue will be closed when that future PR is merged.
I've made the changes for this, but it can't be reviewed until preprod has been redeployed with #756 on. @louise-davies could you let me know when this is done please? No rush from me on this, just so I know when to change the PR to 'ready for review' :)
@MRichards99 master has now been updated as of earlier today |
These skips were missed; after removing them, the tests were also found to pass.
Adding a filtering-by-date-between test. Also trying to fix a filtering-by-multiple-columns test which occasionally resulted in a failure.
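For context, a date-between filter test in Cypress tends to look something like the sketch below; the route, aria-labels and assertion are hypothetical placeholders rather than the ones used in the actual spec:

```ts
describe('date filtering (sketch)', () => {
  it('should filter rows to those between two dates', () => {
    cy.visit('/browse/investigation'); // hypothetical route

    // Hypothetical filter inputs; the real spec's labels may differ.
    cy.get('input[aria-label="Start Date filter from"]').type('2019-01-01');
    cy.get('input[aria-label="Start Date filter to"]').type('2019-12-31');

    // Placeholder assertion: a row inside the date range is still shown.
    cy.get('[aria-rowindex="1"]').contains('2019');
  });
});
```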
The skip is added back in because the API on preprod is producing data with different dates from the one used in the CI. This is possibly because preprod's API is not up to date.
Some of these were missed and had been skipped either due to the data mismatch issue that is now fixed, or the one-to-many issue that is also now fixed on the API side of things.
Description:
This is a full issue for the comment @MRichards99 made on PR #756, which had failing end-to-end tests for DataView and has been discussed with @sam-glendenning. The current issue is that table/isis/facilityCycles.spec.ts and table/isis/investigations.spec.ts fail due to data mismatches between what the CI receives in the table and what preprod contains (or what is received when testing locally against preprod).

The first issue is that there are timezone differences between the data which the e2e tests use on the CI and the preprod data, which is on the deployed development version and is used when running e2e tests locally. The differences are due to a +01:00 on the test data (as shown below).

CI screenshot:
Preprod screenshot:
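To make the +01:00 difference concrete: a timestamp that carries an explicit +01:00 offset denotes a different instant from the same wall-clock time marked as UTC, so once both are normalised to UTC the displayed value shifts by an hour. A short sketch with a made-up timestamp:

```ts
// The same wall-clock string, with and without the +01:00 offset seen on preprod.
const withOffset = new Date('2019-06-10T08:00:00+01:00'); // 07:00 UTC
const asUtc = new Date('2019-06-10T08:00:00Z');           // 08:00 UTC

// Normalised to UTC (as the CI data is), the two differ by an hour.
console.log(withOffset.toISOString()); // "2019-06-10T07:00:00.000Z"
console.log(asUtc.toISOString());      // "2019-06-10T08:00:00.000Z"
```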
Another example of this is in table/isis/investigations.spec.ts, in the beforeEach for the test "should be able to view details". Here the CI test data shows the size as 10.2 GB whilst the preprod test data shows 10.27 GB (shown in the screenshots below).

CI screenshot:
Preprod screenshot:
After further testing, we have also found data mismatches in the Studies table. Studies on preprod have different start dates than they do locally, causing some tests that check which items appear first in the list to fail. See the tests for 'STUDY 4' in this describe.
Acceptance criteria: