403: You have exceeded a secondary rate limit #192
Comments
Hm, I'm not sure if this is the rate limit of your org or the rate limit of the changesets action bot. @Andarist, do you happen to know?
I don't really think it's possible for me to go over my personal rate limit, so I assume it's due to the changesets action bot. I'm also experiencing this error, for the second time now, and I only have about 4 repos and have barely run the action in days. Maybe related to this? semantic-release/semantic-release#2204
To the best of my understanding, it's related to this specific token. Looking into the GitHub docs, we can see that the primary rate limit for this kind of token is 1000 requests per hour: https://docs.github.com/en/rest/overview/resources-in-the-rest-api#requests-from-github-actions However, the report here is about the "secondary rate limit", which the docs describe separately.
So my line of thinking is that one should always first receive an error about the primary rate limit being exceeded, and only if they keep making requests should the secondary rate limit kick in. That doesn't quite match the information provided in this issue. I'm not sure what to do here because I lack the required information. Perhaps somebody could create a GitHub support ticket about this to get a better understanding of what could have caused it?
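For context, here is a minimal sketch (illustrative only, not part of the action) of how one could check whether the primary quota still has headroom when the 403 shows up; the token handling is an assumption:

```ts
import { Octokit } from "@octokit/rest";

// Check the primary REST quotas; GET /rate_limit does not count against them.
async function checkPrimaryLimit(token: string): Promise<void> {
  const octokit = new Octokit({ auth: token });
  const { data } = await octokit.rest.rateLimit.get();

  console.log("core:", data.resources.core.remaining, "/", data.resources.core.limit);
  console.log("search:", data.resources.search.remaining, "/", data.resources.search.limit);
  // Plenty of remaining requests here while the action still fails with a 403 points
  // at the "secondary rate limit" (abuse detection) rather than the primary quota.
}

checkPrimaryLimit(process.env.GITHUB_TOKEN ?? "").catch(console.error);
```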
I'm seeing this too. It might make sense to add retries that respect the Retry-After header?
That's probably a good idea. It would be great to first add some logs for this error and log the value returned in that header.
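To make the idea concrete, here is a rough sketch of such a retry; this is not the action's implementation, and the helper name and retry budget are made up:

```ts
import { Octokit } from "@octokit/rest";

// Retry a search request when the secondary rate limit is hit, honoring retry-after.
async function searchWithRetry(octokit: Octokit, q: string, maxRetries = 2) {
  for (let attempt = 0; ; attempt++) {
    try {
      return await octokit.rest.search.issuesAndPullRequests({ q });
    } catch (error: any) {
      const isSecondary =
        error.status === 403 && /secondary rate limit/i.test(error.message ?? "");
      if (!isSecondary || attempt >= maxRetries) throw error;

      // Octokit normalizes header names to lowercase on the error's response object.
      const retryAfter = Number(error.response?.headers?.["retry-after"] ?? 60);
      console.warn(`Secondary rate limit hit; retrying in ${retryAfter}s (attempt ${attempt + 1})`);
      await new Promise((resolve) => setTimeout(resolve, retryAfter * 1000));
    }
  }
}
```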
We are hitting a secondary rate limit for this action as well, and it looks related to search API usage. We have a theory that it relates to this part of the code: the query it builds on L294 looks like the same one that triggers the request that gets hit by secondary rate limits.
The full log is attached below for reference[1]. It absolutely looks like the same issue mentioned above[2]. I haven't looked into how to avoid using the search API to figure out whether there is an existing PR to update or a new one needs to be created, and it doesn't look like semantic-release has worked out a solution either. [1] Full error log:
[2] #192 (comment)
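For readers following along, here is a hypothetical reconstruction of the kind of search call described above; the query string, function name, and branch parameters are illustrative, not copied from L294:

```ts
import * as github from "@actions/github";

// Look for an already-open release PR via the search API (one request per run).
// Note that search has its own, much smaller quota than the core REST API.
async function findExistingReleasePr(token: string, versionBranch: string, baseBranch: string) {
  const octokit = github.getOctokit(token);
  const repo = `${github.context.repo.owner}/${github.context.repo.repo}`;

  const query = `repo:${repo} state:open is:pull-request head:${versionBranch} base:${baseBranch}`;
  const result = await octokit.rest.search.issuesAndPullRequests({ q: query });

  return result.data.items[0]; // undefined when no release PR exists yet
}
```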
Given that the action uses Octokit already, could this be solved by just using https://github.com/octokit/plugin-retry.js/?
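For reference, the wiring would look roughly like the sketch below. One caveat: per its docs, plugin-retry skips most 4xx responses (including 403) by default and recommends combining it with the throttling plugin for rate-limit errors, so on its own it may not cover this case:

```ts
import { Octokit } from "@octokit/core";
import { retry } from "@octokit/plugin-retry";

// Build an Octokit constructor with automatic retries baked in.
const RetryingOctokit = Octokit.plugin(retry);

// Failed requests are retried with the plugin's defaults; 403 secondary-rate-limit
// responses are excluded by default, which is why throttling is discussed below.
const octokit = new RetryingOctokit({ auth: process.env.GITHUB_TOKEN });
```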
Was anyone able to figure out the underlying cause? It seems that most* people facing this issue see the following headers:
These headers imply that we are far from reaching the API rate limit, although the error message does say that it is a secondary rate limit we're exceeding. I'm not sure what might be causing it either, since this issue happens randomly, sometimes even when invoking the API for the first time in days. Reading through the docs and the linked comments leads me to believe that this is a bug in the GitHub search API. *sample 403 responses when encountering this issue:
I'm seeing many instances of this error as well. Maybe @DanRigby can shed some light on this?
Yeah, I've also seen it more often lately, and at very suspicious times (when there was almost no other activity in the whole organization). I'd very much like to do something about it, but quite frankly I'm not sure how to handle this in Changesets because it mostly seems to be out of our control.
@Andarist What if we simply retry the search query multiple times whenever we encounter this 403? Admittedly, this isn't a great solution: we don't really know if it will solve the underlying issue, which doesn't even lie in this repo. Maybe I'm just desperate because the publishing workflow is my most vulnerable CI pipeline.
Octokit provides a "throttle and retry" plugin[1] that:
It would be interesting to see whether it is possible to wrap the Octokit client[2] with the plugin using the method outlined in its documentation. [1] https://github.com/octokit/plugin-throttling.js edit: I see the plugin approach was already discussed in an earlier comment.
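A minimal sketch of that wrapping, assuming the plugin's currently documented callback shape (older releases expose the retry count on options.request.retryCount rather than as a fourth argument):

```ts
import { Octokit } from "@octokit/core";
import { throttling } from "@octokit/plugin-throttling";

// Build an Octokit constructor with the throttling plugin hooked in.
const ThrottledOctokit = Octokit.plugin(throttling);

const octokit = new ThrottledOctokit({
  auth: process.env.GITHUB_TOKEN,
  throttle: {
    onRateLimit: (retryAfter, options, client, retryCount) => {
      client.log.warn(`Primary quota hit for ${options.method} ${options.url}`);
      return retryCount < 2; // retry at most twice
    },
    onSecondaryRateLimit: (retryAfter, options, client, retryCount) => {
      client.log.warn(
        `Secondary rate limit for ${options.method} ${options.url}; retry-after: ${retryAfter}s`
      );
      return retryCount < 2;
    },
  },
});
```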
We reached the secondary limit three times in the aws-sdk-js-codemod repo as well. Example run: https://github.com/awslabs/aws-sdk-js-codemod/actions/runs/3858163665/jobs/6576401193
@Andarist do you want to try out using https://github.com/octokit/plugin-throttling.js/ and https://github.com/octokit/plugin-retry.js/? I can help with that.
Any updates regarding this issue? I'm seeing this as well.
I've been seeing this recently too. Is there an update on a potential fix? Or has anyone found a workaround (other than a manual retry of the action)?
Any updates on this? It is very troublesome for our release schedule.
I just hit this same issue today. PS: hi @danieldelcore!
…s after hitting secondary rate limits (#286)

* fix: prevent hitting github secondary rate limits

  Adds the octokit plugin for throttling / rate-limiting to fix the problem where action runs get blocked with a 403 error[1]. The `github.getOctokit`[2] function accepts a list of plugins, so this passes in the `throttling` plugin[3] to be hooked into the octokit instance. It also needs some configuration to set up the `throttle` mechanisms, passed in to the `getOctokit` function.

  [1]: #192
  [2]: https://github.com/actions/toolkit/blob/main/packages/github/src/github.ts#LL18C40-L18C40
  [3]: https://github.com/octokit/plugin-throttling.js

* refactor: change rate limit callback signatures

  Based on additional docs, the callbacks seem to have changed their signatures[1]. This change aligns this implementation with the docs[2].

  [1]: https://octokit.github.io/rest.js/v19#throttling
  [2]: https://github.com/octokit/plugin-throttling.js/blob/v5.1.1/src/index.ts#L90-L91

* chore: add changeset
* wire up typed throttle options
* Upgrade TS
* refactor: use console based logging
* Update .changeset/rotten-carrots-pump.md

Co-authored-by: Mateusz Burzyński <[email protected]>
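The integration point described in that commit looks roughly like the following; the function name, logging, and retry counts here are illustrative rather than the action's exact code:

```ts
import * as github from "@actions/github";
import { throttling } from "@octokit/plugin-throttling";

// github.getOctokit accepts additional Octokit plugins, so the throttling plugin
// and its configuration can be passed straight through.
export function setupOctokit(token: string) {
  return github.getOctokit(
    token,
    {
      throttle: {
        onRateLimit: (retryAfter: number, options: any, _client: any, retryCount: number) => {
          console.warn(`Request quota exhausted for ${options.method} ${options.url}`);
          return retryCount <= 2; // retry, then give up after a couple of attempts
        },
        onSecondaryRateLimit: (retryAfter: number, options: any, _client: any, retryCount: number) => {
          console.warn(
            `Secondary rate limit hit for ${options.method} ${options.url}; retrying after ${retryAfter}s`
          );
          return retryCount <= 2;
        },
      },
    },
    throttling
  );
}
```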
Can confirm: I updated to the new version, and the limit was hit again unfortunately 😢 @Andarist
Please provide the CI logs for the run; this is a hard issue to reproduce at will, and it would help immensely to get the logs for a failure.
@varl I have a public repo where I test this stuff. This is the job where it failed again: https://github.com/blissful-group/math-lib/actions/runs/4913661788/jobs/8774090637 edit: added the step configuration
Hm, weirdly there are no logs at all coming from the throttling plugin that we added, as if that logic was never called.
Note that it is still possible to hit secondary rate limits, since this is set up so that it only retries twice before failing. This is what I was "hoping" to see in the logs: that it retried and then failed. But I don't see any of the console-based logging from the changesets action in the job. @Andarist, I think we'd need to hook up logging from core[1] to see if the throttling plugin is doing what it should. Should I start doing that, or would you prefer to do it? Either way is cool with me. edit: A first pass could be to do it for the throttling plugin specifically and one other place that is always run, just to be able to verify that logging is functioning as expected; then, if logging behaves as intended, we'd implement it throughout in a second pass.
If you could do this, that would definitely help us ship this faster.
Sounds fine, although we don't have a lot of logs really, so I think it's doable in a single pass without much more time put into it.
@Andarist Alright, I did it in one pass. I skipped
@Blissful89 If you manage to spot another error with 1.4.3 of the action, I'd love to see the logs.
Will try it out again soon!
@varl I've included a snippet; I can't link it like last time because this is a failing prod release on a private repo. Hopefully it helps.
Great, I see the problem. Thanks!
There were additional uses of octokit that I missed the first time around 😞. I've fixed them in #291.
Nice. Unfortunately GitHub is having issues right now (see the status page), so we can't really check it at the moment, I guess?
Yeah, and I broke a test on CI. Looking into it.
Seems to be coming back online; you can probably just re-run the failed jobs.
@marcovanharten-cpi @Blissful89 1.4.4 is live, if you want to try it again. :)
Did, and so far so good! Maybe you can finally put this one to rest 😉
Hitting this in 2024. The relevant code is at lines 345 to 348 in c62ef97. What does this action do with issues?
Of course, pull requests and issues are the same thing 😊
@WoodyWoodsta did you note whether it retried the action in the logs? It should log according to this:
It will only make 3 attempts (the initial one, then two retries) before failing. If it fails after 3 attempts, you need to manually re-run the job and it should go through (unless it's still secondary rate limited by GitHub; in that case it's a wait-and-pray situation). The fixes applied to this issue are mitigation strategies to make the issue less likely to appear, but it's not a perfect fix, and as more people rely on this action, the more common it becomes. Changing the retry count is another mitigation; however, it needs to be balanced, and we should respect the API responses for rate limiting as well[1]:
@varl Yep, it's doing 3 retries and then failing. Do I understand correctly that your API limits are used, not mine? The use of this action in our organisation is low enough that, if it were counting against our API request limits, we shouldn't be rate limited. If so, is there a way the requests can be made against our limits?
Alright, then it's working as expected. That is good, at least.
I just contributed some work to help mitigate this specific issue. To your point, the rate limits on the GitHub API are strict by design. If this were a "primary rate limit", using a different token could make a difference. However, the issue at hand is the "secondary rate limit", and from their docs I don't see any mention of secondary rate limits behaving differently in that case[2].
If you are on GitHub Enterprise, the API limits are different; I presume that includes the secondary rate limits. You could try forking the action to your organisation and running it from there, e.g. by pointing the `uses:` line of your workflow at the fork. cc: @WoodyWoodsta
This was the basis for my question. But looking at the secondary rate limit docs, it's a bit hazy, and I can see how we'd possibly be hitting the limit if this action is performing a lot of API requests.
@WoodyWoodsta Yeah, I'm just not sure if it's actually shared, or if it's based on something other than the token. It seems to me that what we're probably observing is a usage effect: more people use the action, so more people trip over the issue. Thinking about it some more... I dunno, man. I just hit that re-run button in the failed workflow. 😅
This is what solved it for us:
We now use a forked version of changesets/action as a workaround: changesets/action#192
Hey folks, I seem to be running into some rate limits when running the "Create Release PR or Publish to npm" job.
It looks like my org/account hasn't hit a rate limit, so I'm wondering if it has done so on the changesets org.
My action config looks like so: https://github.com/CodeshiftCommunity/CodeshiftCommunity/blob/main/.github/workflows/release.yml
Currently using changesets/cli 2.6.2
Failing build: https://github.com/CodeshiftCommunity/CodeshiftCommunity/runs/6961550448?check_suite_focus=true