.sccache_check is on the hot path and causes rate limiting errors #2070
The check only happens when the server starts. How is that the hot path?
I'm also surprised to see it from AWS. We have dozens of worker nodes and thousands of builds per day, which is not a crazy number, but I frequently see that error in the logs. I see that others also reported the same or similar issues:
S3 has rate limits: many reads and writes to a single key can hit rate limits far before the underlying partition is throttled. Even 20-30 PUTs to a single key within a very short period of time will exhaust it. On versioned buckets the limit is lower still, especially if many millions of versions exist for the key.
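One way around a per-key throttle, sketched below purely as an illustration (this is not current sccache behavior), is to give every writer its own check object so PUTs spread across distinct keys; the pid-plus-nanoseconds suffix is invented for the example.

```rust
use std::process;
use std::time::{SystemTime, UNIX_EPOCH};

// Illustrative only: derive a per-process key so concurrent server starts
// do not all PUT the same ".sccache_check" object. The suffix scheme
// (pid + subsecond nanos) is invented for this sketch.
fn per_writer_check_key() -> String {
    let nanos = SystemTime::now()
        .duration_since(UNIX_EPOCH)
        .map(|d| d.subsec_nanos())
        .unwrap_or(0);
    format!(".sccache_check_{}_{}", process::id(), nanos)
}

fn main() {
    // Each server start would write (and ideally delete) its own object,
    // spreading load across keys instead of hammering a single one.
    println!("{}", per_writer_check_key());
}
```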
Seeing the same on GCS (also for `.sccache_check`).
My sccache error log shows this:
So I think in my case at least, this is happening because my Rust compiler (via …). On GCS, I believe you are only allowed one mutation per second on the same object, so even though this is not really what you would call a "hot path", it still hits the rate limit.
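A common mitigation for this kind of transient throttling (S3 answers with 503 SlowDown, GCS with 429) is to retry the probe with exponential backoff. Below is a minimal sketch of the idea using only the Rust standard library, assuming the check itself performs no retries; the flaky closure merely stands in for the actual storage call.

```rust
use std::thread::sleep;
use std::time::Duration;

// Retry `op` up to `max_attempts` times, doubling the delay after each
// failure. A generic sketch, not sccache's actual retry logic.
fn retry_with_backoff<T, E, F>(mut op: F, max_attempts: u32) -> Result<T, E>
where
    F: FnMut() -> Result<T, E>,
{
    let mut delay = Duration::from_millis(100);
    let mut attempt = 1;
    loop {
        match op() {
            Ok(v) => return Ok(v),
            Err(e) if attempt >= max_attempts => return Err(e),
            Err(_) => {
                sleep(delay); // back off: 100ms, 200ms, 400ms, ...
                delay *= 2;
                attempt += 1;
            }
        }
    }
}

fn main() {
    // Hypothetical flaky operation standing in for the bucket probe.
    let mut calls = 0;
    let result = retry_with_backoff(
        || {
            calls += 1;
            if calls < 3 { Err("rate limited") } else { Ok("write ok") }
        },
        5,
    );
    println!("{result:?} after {calls} calls");
}
```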
In case anyone else runs into this: Running …
Hey, I'm seeing a lot of rate limiting errors at the storage check (S3 backend). The `.sccache_check` file that is used for that check is on the hot path. What do you think about making it configurable and exposing it as an environment variable? Each actor could have its own file that it checks for read/write access. That would help mitigate the issue. WDYT?

Example of the error:
The code: sccache/src/cache/cache.rs, lines 481 to 544 at 69be532
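For reference, here is a minimal sketch of what the proposed override could look like. `SCCACHE_CHECK_KEY` is a hypothetical variable name chosen for the example; sccache does not currently implement it.

```rust
use std::env;

// Hypothetical: SCCACHE_CHECK_KEY is not a real sccache option; it stands
// in for the configurable check-file name proposed above.
fn storage_check_key() -> String {
    env::var("SCCACHE_CHECK_KEY").unwrap_or_else(|_| ".sccache_check".to_owned())
}

fn main() {
    // e.g. SCCACHE_CHECK_KEY=.sccache_check_runner_17 sccache --start-server
    // would let each actor probe its own object instead of a shared key.
    println!("probing storage via {}", storage_check_key());
}
```

With something like this, each worker node could export its own key, and no single object would ever see more than one writer.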