Parallel tests (UNIX) #1709
Merged
Conversation
…orater single TestCase classes)
It turns out on CI we save just a bunch of seconds, but there are occasional false positives.
giampaolo added a commit that referenced this pull request on Apr 30, 2020
Refactor test runner.py with a saner unittest-based class hierarchy, so that the --parallel arg affects all test suites (all, by-name, failed). Also change the Makefile, which can now be used like this: make test-process ARGS=--parallel
giampaolo added a commit that referenced this pull request on May 2, 2020
Even though I recently implemented parallel tests on UNIX (#1709), the TestFetchAllProcesses class is the slowest one to run, because it gets all possible info for all processes in one go. Since it is in fact a single unit test, it is not parallelized by the test runner. Here I used multiprocessing.Pool to do the trick. On my main Linux box (8 cores):

Before:
----------------------------------------------------------------------
Ran 1 test in 2.511s

After:
----------------------------------------------------------------------
Ran 1 test in 0.931s

On Windows (virtualized, 4 cores):

Before:
----------------------------------------------------------------------
Ran 1 test in 13.752s

After:
----------------------------------------------------------------------
Ran 1 test in 3.951s
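For illustration, here is a minimal sketch of the multiprocessing.Pool approach described in that commit; the helper names are hypothetical and this is not psutil's actual test code:

```python
import multiprocessing

import psutil


def fetch_proc_info(pid):
    """Collect as much info as possible for a single PID."""
    try:
        return psutil.Process(pid).as_dict()
    except psutil.NoSuchProcess:
        # The process may have died between pids() and Process().
        return None


def fetch_all_procs_parallel():
    """Fetch info for all running processes using a pool of workers."""
    with multiprocessing.Pool() as pool:
        results = pool.map(fetch_proc_info, psutil.pids())
    return [info for info in results if info is not None]


if __name__ == "__main__":
    infos = fetch_all_procs_parallel()
    print("fetched info for %d processes" % len(infos))
```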
OK, this is big. We are now able to run tests in parallel on UNIX (and we keep using the unittest module). On my Linux box with 8 logical cores, all 551 unit tests complete in half the time: around 6 seconds instead of 12. Same on FreeBSD. This is also great news because the CI builds should hopefully complete sooner. Currently we have 10 builds running on each commit:
We now depend on the super cute concurrencytest lib, which takes care of the internal fork/parallelization details.
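For context, basic concurrencytest usage with a plain unittest suite looks roughly like this (a sketch based on the library's documented API, not psutil's actual runner; the discovery path is an assumption):

```python
import unittest

from concurrencytest import ConcurrentTestSuite, fork_for_tests

# Load the test suite as usual.
loader = unittest.TestLoader()
suite = loader.discover("psutil/tests")  # path is an assumption

# Distribute the tests across forked worker processes.
concurrent_suite = ConcurrentTestSuite(suite, fork_for_tests(8))

runner = unittest.TextTestRunner(verbosity=1)
runner.run(concurrent_suite)
```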
The tricky part, though, was that some tests need to be run serially. To achieve that I added a `@serialrun` decorator which can be used to mark TestCase classes. On start, the test runner splits the test suite in two based on that marker, then runs the parallel and serial suites separately, but as a single test run.

Prior to this I also had to refactor quite a lot of stuff and get rid of the TESTFN global variable (#1734), since a global test file name doesn't play nicely when accessed by multiple tests at the same time (and no, including `os.getpid()` as part of the file name wasn't enough). We now have:
And here's the test output (an actual one):
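Separately, going back to the `@serialrun` split described above, here is a minimal, hypothetical sketch of the idea; the decorator, the suite-splitting helper, and the runner below are illustrative assumptions, not psutil's actual implementation:

```python
import unittest

from concurrencytest import ConcurrentTestSuite, fork_for_tests


def serialrun(cls):
    """Mark a TestCase class so the runner keeps it out of the parallel pool."""
    cls._serialrun = True
    return cls


def iter_tests(suite):
    """Flatten nested TestSuite objects into individual test cases."""
    for item in suite:
        if isinstance(item, unittest.TestSuite):
            yield from iter_tests(item)
        else:
            yield item


def split_suite(suite):
    """Split a suite into (parallel, serial) based on the @serialrun marker."""
    parallel, serial = unittest.TestSuite(), unittest.TestSuite()
    for test in iter_tests(suite):
        if getattr(test.__class__, "_serialrun", False):
            serial.addTest(test)
        else:
            parallel.addTest(test)
    return parallel, serial


def run(suite, workers=8):
    """Run parallelizable tests in forked workers, then the serial ones."""
    runner = unittest.TextTestRunner(verbosity=1)
    parallel, serial = split_suite(suite)
    runner.run(ConcurrentTestSuite(parallel, fork_for_tests(workers)))
    runner.run(serial)
```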