429 too many requests or unknown state #8295
It was reported in #synapse-dev:matrix.org that this was seen on both v1.20.0rc2 and rc3. Some logs:
Here are some logs for me trying to send to @tulir
I did notice a small amount of

After upgrading to rc3 I see loads of
Are people still seeing this, out of interest? I think I saw someone mention that the problem had gone away for them.
It did seem to go away briefly, but now my logs have tons of 429s again. In the past ~14 hours I have >500 429s from asra.gr, colab.de, riot.firechicken.net and utwente.io, and between 10 and 100 from a bunch of other servers.
Cool, can you send me your full logs for today then please?
So looking at the logs it appears that the following happens:

1. We receive a transaction from a remote server.
2. We take out the per-room lock to process the events in it.
3. We fetch any missing prev events from the remote server.
However, if the remote host is performing slowly, step 3 can take a long time to return, causing the room lock to be held for extended periods of time. At some point the server sending the transaction times out the request and resends it. Since the retry has the same transaction ID, it gets queued up in the transaction linearizer (not a …).

The net result here is that when fetching the prev events from the remote host takes too long, we end up stacking up transaction requests from that remote, which fills up the federation ratelimiter (it only allows 3 active requests at a time) and causes the server to reject further requests with 429.

I'm not at all sure why this is an issue in the v1.20.0 RCs; I don't think anything has changed in this area. It's possible there is some performance regression somewhere that is causing this failure mode to be hit more often, I guess.

Things that we can do to "fix" this:
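For what it's worth, the failure mode is easy to reproduce in isolation. Below is a minimal sketch (plain asyncio; `ConcurrencyLimiter`, `handle_transaction` and friends are made-up names for illustration, not Synapse's actual ratelimiter or linearizer code) of retried transactions stacking up behind a per-room lock until a three-slot concurrency limit starts answering with 429:

```python
import asyncio

MAX_CONCURRENT = 3  # mirrors "it only allows 3 active requests at a time"


class ConcurrencyLimiter:
    """Reject new work once a remote server already has MAX_CONCURRENT requests in flight."""

    def __init__(self) -> None:
        self._in_flight = {}  # origin -> number of in-flight requests

    def acquire(self, origin: str) -> bool:
        if self._in_flight.get(origin, 0) >= MAX_CONCURRENT:
            return False  # caller should answer with HTTP 429
        self._in_flight[origin] = self._in_flight.get(origin, 0) + 1
        return True

    def release(self, origin: str) -> None:
        self._in_flight[origin] -= 1


room_lock = asyncio.Lock()  # stands in for the per-room lock (one room only)
limiter = ConcurrencyLimiter()


async def fetch_missing_prev_events() -> None:
    # Stand-in for step 3: the slow remote takes a while to return the prev events.
    await asyncio.sleep(1)


async def handle_transaction(origin: str) -> int:
    """Return the HTTP status we would send back to the origin server."""
    if not limiter.acquire(origin):
        return 429
    try:
        async with room_lock:  # the lock is held for the whole slow fetch
            await fetch_missing_prev_events()
        return 200
    finally:
        limiter.release(origin)


async def main() -> None:
    # The origin times out and resends the same transaction. Each retry takes
    # up another limiter slot while the earlier attempts are stuck behind the
    # room lock, so everything after the third attempt is rejected.
    attempts = [handle_transaction("remote.example") for _ in range(5)]
    print(await asyncio.gather(*attempts))  # -> [200, 200, 200, 429, 429]


asyncio.run(main())
```

Running it prints `[200, 200, 200, 429, 429]`: the first attempt sits in the slow prev-event fetch, the retries queue behind the room lock while still occupying limiter slots, and every further attempt gets a 429.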
I believe this can be closed. @erikjohnston please re-open if this is incorrect!
Since upgrading to the RC we noticed that federation is sometimes very slow, and saw a number of responses with a 429 status.