ci: try out the sccache for caching rust code #10445
Conversation
Force-pushed from e287d9f to e91041a
Force-pushed from 23e7b18 to 6fbf8e4
cc @Ekleog It looks like we’re hitting GitHub’s rate limits (HTTP 429) when interacting with the GH Actions cache this way :( This significantly impacts cache utilization (note the cache write errors).
On the other hand, a big benefit of this approach is that the cache is shared between parallel workflows, so once one job finishes compiling a particular crate, all of the follow-up jobs can immediately reuse the artifact. Relevant reading: mozilla/sccache#1485
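For context, the GHA cache backend this PR experiments with is usually wired up along the following lines. This is a minimal sketch based on sccache’s documented GitHub Actions setup, not a verbatim excerpt of this PR’s workflow; job/step names and the action version pin are illustrative.

```yaml
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      # Installs a pinned sccache binary and puts it on PATH.
      - uses: mozilla-actions/sccache-action@v0.0.4
      - name: Build with sccache in front of rustc
        run: cargo build --release
        env:
          # Route every rustc invocation through sccache.
          RUSTC_WRAPPER: sccache
          # Use the GitHub Actions cache service as the storage backend;
          # this is the traffic that runs into the HTTP 429 rate limits above.
          SCCACHE_GHA_ENABLED: "true"
```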
Codecov Report
All modified and coverable lines are covered by tests ✅
Additional details and impacted files:
@@            Coverage Diff             @@
##           master   #10445      +/-   ##
==========================================
- Coverage   71.98%   71.94%   -0.05%
==========================================
  Files         720      720
  Lines      146559   146747     +188
  Branches   146559   146747     +188
==========================================
+ Hits       105507   105570      +63
- Misses      36181    36307     +126
+ Partials     4871     4870       -1
Flags with carried forward coverage won't be shown. ☔ View full report in Codecov by Sentry.
So I fiddled with this a bit more, hoping to improve my local setup, and I'm finding that even locally …
Never mind the previous message: sccache does in fact work, but the gains in my environment leave a lot to be desired. We should definitely set it up for CI, though that will require a GCS bucket.
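For reference, sccache’s GCS backend is configured entirely through environment variables, so a dedicated bucket would slot into the workflow roughly like this. A sketch only: the bucket name and the credential-provisioning step are hypothetical, not part of this PR.

```yaml
- name: Build with sccache backed by GCS
  run: cargo build --release
  env:
    RUSTC_WRAPPER: sccache
    # Hypothetical bucket name; the real one would come from the project's GCP setup.
    SCCACHE_GCS_BUCKET: nearcore-sccache
    # CI jobs need to both read and write the cache.
    SCCACHE_GCS_RW_MODE: READ_WRITE
    # Path to a service-account key written by an earlier (assumed) auth step.
    SCCACHE_GCS_KEY_PATH: ${{ runner.temp }}/gcs-key.json
```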
Hmm… is this actually something that would go away after running 3-4 builds, once the cache is hot? After everything is cached there should be far fewer cache writes (because reads would be hot), so hopefully we could just ignore this and not have to go the full-blown GCS cache route?
I believe both reads and writes are rate-limited.
Edit: Actually, looking at the paragraph just below the one I linked, the issue probably comes from the “Create too much content on GitHub in a short amount of time” secondary rate limit. So I’d guess we still have a chance to be fine even with just a hot cache, because cache writes are rate-limited more aggressively than cache reads.
15k requests doesn't sound nearly enough to me (if that is indeed the limit) -- a single fresh … By my estimation, 15k cache lookups would only be enough for about 2 builds, even on a fully primed cache without any writes, since every compilation unit triggers at least one lookup.
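To make that back-of-envelope estimate explicit (the per-build lookup count below is inferred from the “2 builds” figure above, not a number stated anywhere in the thread):

$$
\frac{15\,000\ \text{requests per rate-limit window}}{\approx 7\,500\ \text{cache lookups per fresh build}} \approx 2\ \text{builds}
$$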
Hmm, WDYT about just testing it out, given how unclear the GitHub documentation is? I’d suggest we land this, wait for a few days, look at what happened, and roll back if it didn’t actually work out; in that case we can then work on a proper GCS backend.
I don’t have a good reason to oppose that.
Force-pushed from 6fbf8e4 to 381a5bf
- run: pip3 install --user -r pytest/requirements.txt
- run: cargo llvm-cov show-env | grep -v RUSTFLAGS | tr -d "'" >> "$GITHUB_ENV"
- run: echo "RUSTC_WORKSPACE_WRAPPER=$PWD/scripts/rustc-coverage-wrapper.sh" >> "$GITHUB_ENV"
- run: echo "RUSTC_WORKSPACE_WRAPPER=$PWD/scripts/coverage-wrapper-rustc" >> "$GITHUB_ENV"
- run: echo "CARGO=1" >> "$GITHUB_ENV"
Just for my understanding, what’s the CARGO=1 variable about?
mozilla/sccache#2040 is the issue to read.
Thanks!
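For readers piecing the coverage job together: cargo composes the two wrapper variables (for workspace members it effectively runs `$RUSTC_WRAPPER $RUSTC_WORKSPACE_WRAPPER rustc <args>`), which is how sccache and the coverage wrapper can coexist. The sketch below is my reading of the diff above plus cargo’s documented wrapper behaviour, assuming `RUSTC_WRAPPER=sccache` is set elsewhere in the workflow; it is not a verbatim excerpt.

```yaml
env:
  # Outermost wrapper: sccache caches the compilation.
  RUSTC_WRAPPER: sccache
  # Inner wrapper, applied only to workspace crates: injects coverage instrumentation.
  RUSTC_WORKSPACE_WRAPPER: ./scripts/coverage-wrapper-rustc
  # Workaround related to the interaction described in mozilla/sccache#2040
  # (see the linked issue for the details).
  CARGO: "1"
```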
Looks like we're consistently taking +5 minutes overall since this landed. Unfortunately, the cache failure statistics do not appear to have improved over time, though there are no "read failures" that I can see.
Yup, that’s what it looks like to me too :’( Want to make the rollback PR, or should I?