too many worker processes spawned in multi-project repo #9236
Comments
Looking at this quickly, it seems like only scanning for the haste map is affected; tests should still be scheduled to run on `maxWorkers` workers. Short-term, I think dividing workers about evenly per project should help, but that heuristic is obviously not perfect and needs heavy testing under various circumstances. Imagine having a small project A and a big project B: both would get 4 cores – project A will be collected sooner, which would leave only 4 cores for project B while we have 4 more available, such a waste. And I don't think we currently have a way to dynamically change the worker count for the haste map, or even in `jest-worker`.
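A minimal sketch of that short-term heuristic (a hypothetical helper, not actual Jest code), assuming workers are split evenly with a floor of one per project:

```ts
// Hypothetical sketch of the "divide workers about evenly per project"
// heuristic discussed above; not actual Jest code.
function workersPerProject(maxWorkers: number, numProjects: number): number {
  // Even share per project, but never fewer than one worker.
  return Math.max(1, Math.floor(maxWorkers / numProjects));
}

workersPerProject(8, 2); // -> 4 each, as in the project A/B example above
workersPerProject(8, 226); // -> 1 each, yet that is still 226 processes total
```

Note the floor of one worker per project means the total can still exceed `maxWorkers` when there are more projects than cores, which is exactly the concern raised below.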
Yes, that's right.
Maybe I don't understand the idea, but does that even make sense when there are more projects than cores?
Or we could run these promises in a sequence instead of in parallel (`Promise.all`); I think that was the original intention. It should also get rid of my concerns about an optimal solution.
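A sketch of that sequential alternative, with a hypothetical `buildHasteMap` standing in for the per-project work that is currently started all at once via `Promise.all`:

```ts
type ProjectConfig = {rootDir: string};

// Hypothetical stand-in for the per-project haste-map build.
declare function buildHasteMap(project: ProjectConfig): Promise<void>;

async function buildSequentially(projects: Array<ProjectConfig>): Promise<void> {
  // One project at a time: each build gets the full maxWorkers pool to
  // itself, so at most maxWorkers child processes exist at any moment.
  for (const project of projects) {
    await buildHasteMap(project);
  }
}
```

The trade-off, as noted in the next comment, is that workers can sit idle at the tail end of each project's build.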
Would it make sense to hoist the worker pool so that we can give the same pool to every HasteMap instance? This way there will never be too many workers, and they will all be fully utilized (they won't be sitting around doing nothing while they wait for other workers to finish, which is what would happen if we just chained the promises in jest-core).
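A sketch of that hoisting idea. The `workerPool` option and `buildHasteMap` signature below are hypothetical – jest-haste-map does not currently accept an injected pool:

```ts
import {Worker} from 'jest-worker';

// Hypothetical: a per-project build that accepts a shared pool.
declare function buildHasteMap(
  projectRoot: string,
  opts: {workerPool: Worker},
): Promise<void>;

const maxWorkers = 8;
const projectRoots = ['/repo/project-a', '/repo/project-b']; // hypothetical paths

// One pool for the whole run: a hard global cap of maxWorkers processes.
const sharedPool = new Worker('/path/to/haste-worker.js', {
  numWorkers: maxWorkers,
});

// Projects can still be scanned concurrently; idle workers pick up tasks
// from whichever project has work left instead of waiting on their own.
async function buildAll(): Promise<void> {
  await Promise.all(
    projectRoots.map(root => buildHasteMap(root, {workerPool: sharedPool})),
  );
}
```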
Seems like a good idea to create the worker farm in jest-core. I didn't want to spend too much time on this, as this solution already works better than the current one, and there's some ongoing work (@scotthovestadt should have more details) to improve stuff around haste maps in the future anyway.
@scotthovestadt, will the changes you're making address this issue, or should I start working on my own PR to fix this? I'm not very familiar with this codebase, but it's a pretty big issue for my team.
This issue is stale because it has been open for 1 year with no activity. Remove stale label or comment or this will be closed in 14 days. |
This issue was closed because it has been stalled for 7 days with no activity. Please open a new issue if the issue is still relevant, linking to this one. |
This issue has been automatically locked since there has not been any recent activity after it was closed. Please open a new issue for related bugs. |
🐛 Bug Report
I've been dealing with out-of-memory and/or EAGAIN errors for a few months now, ever since I upgraded to Jest 20 and started using the multi-project runner. I've been running with `--runInBand` as a workaround.

I believe this is happening because Jest creates too many worker processes when building the haste-map cache if you have multiple projects and `maxWorkers` > 1. Each project builds its own haste map, and each haste map spawns up to `maxWorkers` workers. See https://github.com/facebook/jest/blob/master/packages/jest-haste-map/src/index.ts#L736-L740

This means that instead of restricting the number of workers to `maxWorkers` overall, it restricts it to `maxWorkers` per project (`maxWorkers * numProjects`). In my case that's 226 projects and 8 CPU cores, so `8 * 226 = 1808` worker processes. When I run jest it chugs for about a minute, then either runs out of memory or hits the OS's process limit.

Note: this is only a problem while it is building the cache. Once the cache is built, jest will start actually testing the files, and does so with the right number of workers.
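Schematically, the per-project behavior described above amounts to something like this (simplified; not the literal jest-haste-map source):

```ts
import {Worker} from 'jest-worker';

// Each HasteMap instance creates its OWN worker farm, capped at maxWorkers...
function createProjectWorkerFarm(maxWorkers: number): Worker {
  return new Worker('/path/to/jest-haste-map/worker.js', {
    numWorkers: maxWorkers, // the cap applies per instance, not globally
  });
}

// ...so N projects scanning in parallel yield N * maxWorkers processes:
// 226 projects * 8 workers = 1808 child processes.
```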
To Reproduce
You should be able to reproduce this in any multi-project setup. Just run the tests in multi-project mode and watch how many worker processes get spawned while it's building the haste-map cache, before the tests start running.
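For example, a minimal multi-project configuration that should exhibit this might look like the following (hypothetical project paths; `jest.config.ts` assumes a Jest version with TypeScript config support):

```ts
// jest.config.ts — run `jest` here and watch the OS process list while the
// haste-map cache is being built, before any tests start.
import type {Config} from '@jest/types';

const config: Config.InitialOptions = {
  maxWorkers: 8,
  projects: [
    '<rootDir>/packages/project-a',
    '<rootDir>/packages/project-b',
  ],
};

export default config;
```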
I was able to reproduce it with the jest repo itself: when I did this, I saw approximately 25 worker processes running. I expected no more than 8.
Expected behavior
It should never spawn more than `maxWorkers` processes.

Link to repl or repo (highly encouraged)
See above. I was able to reproduce this with the jest repo.
envinfo