Unstable API #13
What would be your way of dealing with it? Should we wait a little longer and retry, or give up?
I would probably ask the API maintainer whether that is a rate limit, a server capacity issue, or HAFAS being HAFAS.
Okay, so this issue is twofold: detect the HAFAS errors, and find a better strategy for dealing with them.
Does tagging @derhuerst work?
It does! 🎉 There are three different aspects here:
You could do that. On the other hand, I've wanted to add this to
That is yet another improvement on my todo list: Access HAFAS via a range of IP addresses, to prevent rate-limiting. Let me know if you can help with this.
Unfortunately that's almost impossible for now:
🎉
We should implement this either way. Even if transport.rest handles retries, it may still encounter a situation where HAFAS just doesn't want to respond.
Do you log those errors? Maybe we can find a pattern which allows us to better time the retries.
I'd like that :)
This makes a lot of sense. This could also prevent
Sure, we could set up a number of relays for the API, we just need to create a protocol for that.
I think what he meant is to circumvent transport.rest and access HAFAS directly. That would be a lot of work but may prevent rate limiting issues. It doesn't solve the underlying issue, though. I would push that faaaaar back on the project plan. Thank you for your answers.
I do keep those logs. I've been planning to publish them somewhere for a while now, but haven't managed to so far. Not sure how well you can time them; the downtimes of HAFAS APIs seem pretty much random.
PR welcome!
I could, but this just has an advisory function, and most people won't obey it. I like the idea of the API being able to handle this much more. Let's see.
I already did (read from derhuerst/vbb-rest#29 (comment) onwards), but the setup was overly complex and annoying. Now it's using Caddy. There are two more options I want to evaluate though: making use of the IPv6 address pool of the VPS it's currently running on, and SOCKS proxies. Both have the advantage that they don't increase the maintenance effort much, in contrast to the HTTP-based load balancing setup. Let me know if you want to help, via doing PRs or by running the VPS.
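For the SOCKS-proxy option mentioned above, Go's standard `net/http` client can route requests through a SOCKS5 proxy via a `socks5://` proxy URL, so a rotating-proxy setup would not need much code. A minimal sketch (the proxy address is a placeholder, and this constructs the client without connecting anywhere):

```go
package main

import (
	"fmt"
	"net/http"
	"net/url"
	"time"
)

// newProxiedClient returns an *http.Client that routes all requests
// through the given SOCKS5 proxy. net/http understands socks5://
// proxy URLs natively. The address used below is a placeholder.
func newProxiedClient(proxyAddr string) (*http.Client, error) {
	u, err := url.Parse(proxyAddr)
	if err != nil {
		return nil, err
	}
	return &http.Client{
		Transport: &http.Transport{Proxy: http.ProxyURL(u)},
		Timeout:   10 * time.Second,
	}, nil
}

func main() {
	client, err := newProxiedClient("socks5://127.0.0.1:1080")
	fmt.Println(client != nil, err)
}
```

Rotating between several such clients (one per proxy) would spread requests across source addresses; whether that actually avoids HAFAS rate limiting is an assumption to verify.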
You can always spawn
I have opened a thread in the
An update:
This should be solved now. The application uses v5 and I've seen no problems since.
Sometimes `departures` cannot query the departures and responds with:

```
Could not query departures Error: json: cannot unmarshal object into Go value of type []main.result
```

I looked up with `curl` what the actual error is:

In my opinion `departures` should be a bit clearer about what the issue is, e.g. `HAFAS error: Service Unavailable`, and/or use a more stable API (I'm unsure how feasible it is to switch the API).
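The unmarshal error above suggests the API returns a JSON object (presumably an error payload) where the client expects an array. One way to surface a clearer message is to fall back to decoding the error shape when decoding the array fails. This is only a sketch: the `hafasError` field names and the `result` struct are assumptions, not the actual response format or the project's types.

```go
package main

import (
	"encoding/json"
	"fmt"
)

// result stands in for the departure entries the client expects;
// the fields are illustrative.
type result struct {
	Direction string `json:"direction"`
	When      string `json:"when"`
}

// hafasError mirrors the shape of an error object the API is assumed
// to return instead of the usual array; the field names are guesses.
type hafasError struct {
	Error bool   `json:"error"`
	Msg   string `json:"msg"`
}

// parseDepartures tries the expected []result first; if the body is a
// JSON object instead, it surfaces it as a readable HAFAS error rather
// than a raw unmarshal failure.
func parseDepartures(body []byte) ([]result, error) {
	var results []result
	if err := json.Unmarshal(body, &results); err == nil {
		return results, nil
	}
	var he hafasError
	if err := json.Unmarshal(body, &he); err == nil && he.Error {
		return nil, fmt.Errorf("HAFAS error: %s", he.Msg)
	}
	return nil, fmt.Errorf("unexpected response: %s", body)
}

func main() {
	_, err := parseDepartures([]byte(`{"error":true,"msg":"Service Unavailable"}`))
	fmt.Println(err)
}
```

With this fallback the CLI would print `HAFAS error: Service Unavailable` instead of the `cannot unmarshal object into Go value` message.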