Remove Jest test matrix #14088
Comments
Job added to Upwork: https://www.upwork.com/jobs/~0172a4a7d9f592a3ac
Current assignee @kevinksullivan is eligible for the External assigner, not assigning anyone new.
Triggered auto assignment to Contributor-plus team member for initial proposal review - @Santhosh-Sellavel
Triggered auto assignment to @thienlnam
Proposal

We can remove the matrix. I noticed @roryabraham added a method to get the number of CPU cores and assign maximum workers accordingly, "because it seemed like there might’ve been zombie threads sitting around never finishing even though all tests had passed", as noted in this comment. If that is an intended feature to implement, we can add that. A sample run with this action can be found at https://github.com/Prince-Mendiratta/expensify-app/actions/runs/3859244680.

Also, why are we not using actions/cache@v3? I tried it and it works well. I propose that we use the latest version of cache, further considering that the workflow currently pins actions/cache@v1. Is there a specific reason why we do not cache npm modules as well? We could cache npm modules and restore them whenever the hash of the lock file is unchanged, re-installing only when it changes.

```diff
diff --git a/.github/workflows/test.yml b/.github/workflows/test.yml
index 72dde5fe38..72099e68c9 100644
--- a/.github/workflows/test.yml
+++ b/.github/workflows/test.yml
@@ -6,37 +6,17 @@ on:
types: [opened, synchronize]
branches-ignore: [staging, production]
-env:
- # Number of parallel jobs for jest tests
- CHUNKS: 3
jobs:
- config:
- runs-on: ubuntu-latest
- name: Define matrix parameters
- outputs:
- MATRIX: ${{ steps.set-matrix.outputs.MATRIX }}
- JEST_CHUNKS: ${{ steps.set-matrix.outputs.JEST_CHUNKS }}
- steps:
- - name: Set Matrix
- id: set-matrix
- uses: actions/github-script@v6
- with:
- # Generate matrix array i.e. [0, 1, 2, ...., CHUNKS - 1] for test job
- script: |
- core.setOutput('MATRIX', Array.from({ length: Number(process.env.CHUNKS) }, (v, i) => i + 1));
- core.setOutput('JEST_CHUNKS', Number(process.env.CHUNKS) - 1);
-
test:
- needs: config
if: ${{ github.actor != 'OSBotify' || github.event_name == 'workflow_call' }}
runs-on: ubuntu-latest
- name: test (job ${{ fromJSON(matrix.chunk) }})
+ name: test (shard ${{ fromJSON(matrix.chunk) }})
env:
CI: true
strategy:
fail-fast: false
matrix:
- chunk: ${{fromJson(needs.config.outputs.MATRIX)}}
+ chunk: [1, 2, 3]
steps:
# This action checks-out the repository, so the workflow can access it.
@@ -55,7 +35,11 @@ jobs:
echo "Error: Automatic provisioning style is not allowed!"
exit 1
fi
-
+
+ - name: Get number of CPU cores
+ id: cpu-cores
+ uses: SimenB/github-actions-cpu-cores@31e91de0f8654375a21e8e83078be625380e2b18
+
- name: Cache Jest cache
id: cache-jest-cache
uses: actions/cache@v1
@@ -64,11 +48,9 @@ jobs:
key: ${{ runner.os }}-jest
- name: All Unit Tests
- if: ${{ fromJSON(matrix.chunk) < fromJSON(env.CHUNKS) }}
# Split the jest based test files in multiple chunks/groups and then execute them in parallel in different jobs/runners.
- run: npx jest --listTests --json | jq -cM '[_nwise(length / ${{ fromJSON(needs.config.outputs.JEST_CHUNKS) }} | ceil)]' | jq '[[]] + .' | jq '.[${{ fromJSON(matrix.chunk) }}] | .[] | @text' | xargs npm test
+ run: npx jest --shard=${{ fromJSON(matrix.chunk) }}/${{ strategy.job-total }} --max-workers ${{ steps.cpu-cores.outputs.count }}
- name: Pull Request Tests
# Pull request related tests will be run in separate runner in parallel.
- if: ${{ fromJSON(matrix.chunk) == fromJSON(env.CHUNKS) }}
run: tests/unit/getPullRequestsMergedBetweenTest.sh
```
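To make the node_modules caching idea above concrete, here is a minimal sketch of what such steps could look like; the step names, cache key, and use of npm ci are my own assumptions rather than anything taken from the actual Expensify workflow:

```yaml
# Hypothetical caching steps, keyed on the lock file hash as suggested above.
# If package-lock.json has not changed, the cached node_modules is restored
# and the full install can be skipped.
- name: Cache node modules
  id: cache-node-modules
  uses: actions/cache@v3
  with:
    path: node_modules
    key: ${{ runner.os }}-node-modules-${{ hashFiles('package-lock.json') }}

- name: Install node modules
  if: steps.cache-node-modules.outputs.cache-hit != 'true'
  run: npm ci
```

The cache-hit output of actions/cache is what lets the install step be skipped when the lock file has not changed.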
Sorry for the lack of communication on my part here. I have a draft PR which reimplements the matrix in a much simpler way using Jest's --shard argument.
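For reference, --shard=n/m is a built-in Jest CLI option (available since Jest 28) that runs the n-th of m roughly equal slices of the test suite. The minimal job below is only a generic illustration of how the flag pairs with a matrix, not the contents of the draft PR:

```yaml
jobs:
  test:
    runs-on: ubuntu-latest
    strategy:
      fail-fast: false
      matrix:
        chunk: [1, 2, 3]
    steps:
      - uses: actions/checkout@v3
      # Jest itself decides which test files belong to shard N of M,
      # so no jq-based chunking script is needed.
      - name: Run one shard of the Jest suite
        run: npx jest --shard=${{ matrix.chunk }}/${{ strategy.job-total }}
```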
Updated Proposal

I've been working on and testing different kinds of configurations and have added my understanding from those tests. We can choose either of these workflows depending on what we're looking for.

Background

In the current implementation, the OP of the PR manually divided the Jest tests into chunks and then executed them as batches/groups. This is an approach that was used in older versions of Jest, where the --shard option was not yet available.

Testing workflow

I forked the repo and created new tests with different configurations. I then created a PR to the repo through another account, committing just text files to trigger the GitHub Actions, and analysed how the results differ. The two major contributing factors to the total time are installing node modules and running the Jest tests. Both of them take around 1:30 - 3:00 minutes each, depending on the optimization.

Base
Workflow log - https://github.com/Prince-Mendiratta/expensify-app/actions/workflows/test.yml

Jest Optimisation

Workflow log - https://github.com/Prince-Mendiratta/expensify-app/actions/workflows/test2.yml

2 shards, Jest optimised

Workflow log - https://github.com/Prince-Mendiratta/expensify-app/actions/workflows/test5.yml

3 shards, cached node_modules

Workflow log - https://github.com/Prince-Mendiratta/expensify-app/actions/workflows/test6.yml

3 shards, cached node_modules, Jest cached

Workflow log - https://github.com/Prince-Mendiratta/expensify-app/actions/workflows/test7.yml

Side-by-side comparison

Overall, I believe that with the use of caching we can reduce the time significantly whilst still maintaining simplicity with either of my solutions. We could further improve times to under 1 minute by using a large runner: with more cores, we can increase the number of shards as well as the max workers per shard, speeding things up by a huge amount. However, that is something that needs to be discussed properly first, since maintaining such an implementation will require proper planning.
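The test7.yml workflow linked above is not reproduced here, but a rough sketch of a job combining all of those optimisations (3 shards, cached node_modules, a cached Jest cache, and CPU-aware max workers) might look something like the following; cache paths, keys, and step names are assumptions on my part:

```yaml
# Illustrative only - not copied from the linked test7.yml workflow.
jobs:
  test:
    runs-on: ubuntu-latest
    strategy:
      fail-fast: false
      matrix:
        chunk: [1, 2, 3]
    steps:
      - uses: actions/checkout@v3

      # Restore node_modules keyed on the lock file; reinstall only on a miss.
      - name: Cache node modules
        id: cache-node-modules
        uses: actions/cache@v3
        with:
          path: node_modules
          key: ${{ runner.os }}-node-modules-${{ hashFiles('package-lock.json') }}
      - name: Install node modules
        if: steps.cache-node-modules.outputs.cache-hit != 'true'
        run: npm ci

      # Reuse Jest's on-disk cache between runs.
      - name: Cache Jest cache
        uses: actions/cache@v3
        with:
          path: .jest-cache
          key: ${{ runner.os }}-jest

      # Size --max-workers to the runner's actual core count.
      - name: Get number of CPU cores
        id: cpu-cores
        uses: SimenB/github-actions-cpu-cores@31e91de0f8654375a21e8e83078be625380e2b18

      - name: Run one shard of the Jest suite
        run: npx jest --shard=${{ matrix.chunk }}/${{ strategy.job-total }} --max-workers ${{ steps.cpu-cores.outputs.count }} --cacheDirectory .jest-cache
```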
Discussed 1:1 with @thienlnam and I'll take this over as I have more context.
I'll go ahead and unassign @Santhosh-Sellavel and myself, but let me know if I am missing something here, @roryabraham.
@roryabraham Just wondering, is it a bad idea to cache node_modules as well, as suggested in my proposal, considering the use case for Expensify?
@roryabraham Uh oh! This issue is overdue by 2 days. Don't forget to update your issues!
@roryabraham Eep! 4 days overdue now. Issues have feelings too...
This issue has not been updated in over 14 days. @roryabraham eroding to Weekly issue.
Triggered auto assignment to Contributor Plus for review of internal employee PR - @mananjadhav
PR ready for review: #13943
No payments due here, so we're done.
There are a number of problems with the current Jest testing workflow:
- When a chunk of tests fails, xargs surfaces only its generic exit code of 123 rather than the exit status of the failing Jest command
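The xargs behaviour referenced above is easy to demonstrate; this hypothetical step (not from any Expensify workflow) shows how a real failure status gets collapsed into a generic 123:

```yaml
- name: Demonstrate xargs exit-code masking
  run: |
    # Both invocations fail with status 7, but GNU xargs reports 123 whenever
    # any invocation exits with 1-125, so CI never sees the real failure code.
    printf 'chunk-a\nchunk-b\n' | xargs -I{} sh -c 'exit 7' || echo "xargs exited with $?"
```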