PR: Skip several failing tests on Windows and one on a specific CI build #6044
Conversation
This is just a comment, I'm not arguing against this PR, but it seems like we have been disabling a lot of tests on Windows lately... I'm not sure that is a good thing... Currently, even before this PR, the Windows build is about 2% behind the Linux build in test coverage. @CAM-Gerlach Have you seen some sort of correlation between the tests you are disabling and the issues we are receiving from our users on Windows? I'm under the impression we are receiving a lot of bug reports about the introspection one.
@jnsebgosselin Thanks, really appreciate the feedback. I went through the tests in more detail to determine why they were failing and if they might have any relation to the issues folks were reporting. The results:
However, there remains the issue that the user configuration, specifically keyboard shortcuts and …
Thank you very much @CAM-Gerlach for further investigating this issue! Just to be sure I understood correctly: you are skipping some of the tests because they are failing on your machine, even if they are passing on our CI servers? When I look at our currently open PRs, everything seems to pass just fine on our CI. We can always investigate and re-enable the tests later, so it's not a big deal. I'll try to investigate this on my end also; I just want to understand well what is going on. IMO, the solution for failing tests should ideally not be to skip them :D. But well... we are unfunded and short-handed, so I understand this is probably the best temporary fix for now...
Not all of them :)
Indeed! So at the moment, I'm only skipping two additional ones on Windows that were already being skipped on CI or on certain Windows builds, and I fixed the rest that were failing so they'll at least pass on a pure stock Spyder. To summarize, the remaining proximate issues that should probably be fixed (not necessarily in this PR), in rough (IMHO) priority order:
I will hopefully push a new version shortly with a fix for 2., assuming my suggestion above works, and roll back the targeted skip of the breakpoint check. Thanks!
Well, that's what I get for not running the full test suite; silly me. Would it be best to just create a new script file (e.g. …
Hello @CAM-Gerlach! Thanks for updating the PR.
CAM-Gerlach EDIT: Should be able to ignore these ^ as they are intentional.
Thank you very much @CAM-Gerlach. I understand better what you want to do now.
@ccordoba12 Any thoughts on how 1. above (several tests have hardcoded default keybindings and fail if they have been changed) should be addressed? That's the final thing I'd like to see fixed, or at least worked around, in this PR (the others are minor/secondary or not directly related). Otherwise I, or anyone else using custom keyboard shortcuts, still can't run the tests locally, as the first non-skipped test (…
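One way the hardcoded-keybinding problem could be worked around is for tests to look bindings up from the same configuration the app reads, instead of assuming the shipped default. This is a minimal illustrative sketch only: `get_configured_shortcut` and the `SHORTCUTS` dict are hypothetical stand-ins, not Spyder's actual config API.

```python
# Hypothetical sketch: resolve the binding from configuration rather
# than hardcoding the shipped default in the test, so a user's custom
# shortcuts don't break the local test run.
SHORTCUTS = {"editor/toggle breakpoint": "F11"}  # stands in for user config


def get_configured_shortcut(action, default):
    """Return the user's binding for an action, or the default."""
    return SHORTCUTS.get(action, default)


def test_toggle_breakpoint_shortcut():
    # Drive whichever key sequence is actually configured.
    key = get_configured_shortcut("editor/toggle breakpoint", "F12")
    assert key  # some binding always resolves
```

The same lookup-with-fallback shape applies whatever the real configuration layer looks like.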
Yes, we should run our tests with a clean configuration. I tried to do that some time ago, but I couldn't make it work reliably. Perhaps you could give it a try. |
But this is not for this PR, please leave it for another one. |
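The clean-configuration idea above could look roughly like this. It is a sketch under assumptions: `SPYDER_CONFIG_DIR` is a hypothetical variable name used only for illustration, not necessarily the app's real redirection mechanism.

```python
import os
import tempfile


def clean_config_dir(environ=None):
    """Create an empty temporary config directory and point the
    (hypothetical) SPYDER_CONFIG_DIR variable at it, so tests see
    pristine defaults instead of the user's customized settings."""
    environ = os.environ if environ is None else environ
    path = tempfile.mkdtemp(prefix="clean-config-")
    environ["SPYDER_CONFIG_DIR"] = path
    return path
```

In a real pytest setup this would more likely live in `conftest.py` as an autouse fixture built on the `tmp_path` and `monkeypatch` fixtures, so the redirection is undone after each test.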
@ccordoba12 Okay; thanks much for your insight. Anything else you need me to do for this PR? Should I create a new issue for the clean config problem and we can discuss it there? |
Nop, I'll merge it right away :-)
Yes, please. |
Per @ccordoba12's request, I went through the tests and disabled the 4 that were consistently failing, at least for me on Windows. I also skipped a test that randomly failed on one specific Travis build (Py3.5 with PyQt5; the Py2 and PyQt4 builds were already skipped). It didn't fail on any of the other CIs or builds, had nothing to do with the code I added, and succeeded when I merely changed some whitespace in my next otherwise-identical commit, so I was almost sure it was spurious; it already had flaky(3) but failed all three times. I ran the tests about 20 times in total over the course of my testing.
A few key points to look at in reviewing:
- Was the scope of my `skipif`s appropriate (skipping the test for Windows, but not drilling down into my specific Python or PyQt version, nor generalizing to all builds)? This was more or less consistent with the other `skipif`s, but I wasn't sure exactly what @ccordoba12 wanted.
- Is it okay to skip a test that, despite `flaky`, fails on only a few runs out of many builds, as I did?

Thanks!
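For reference, the two `skipif` granularities being asked about might be sketched like this. Test names and skip conditions are illustrative only, not Spyder's actual tests; where retries are wanted, the `flaky` plugin's `@flaky(max_runs=3)` decorator would be stacked on top.

```python
import os
import sys

import pytest


# Option 1: skip on every Windows build, without pinning a specific
# Python or Qt version (the "not drilling down" approach above).
@pytest.mark.skipif(os.name == 'nt',
                    reason="Consistently fails on Windows")
def test_windows_sensitive_behavior():
    assert True


# Option 2: skip only the one CI build where the spurious failure
# appeared (Python 3.5 on CI, per the description above).
@pytest.mark.skipif(os.environ.get('CI') is not None
                    and sys.version_info[:2] == (3, 5),
                    reason="Spuriously fails on this CI build only")
def test_flaky_on_one_ci_build():
    assert True
```

The condition in `skipif` is evaluated at collection time, so environment variables like `CI` must be set before pytest starts.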