Issue syncing past 12th June - incorrect blockflowhost #572
Comments
Do you have the …? If it doesn't work, can you share your docker and/or docker-compose config?
Hi, we do use the following ENV variables:
I do not think there is such an env variable. These are the application configuration settings:
The env variable BLOCKFLOW_DIRECT_CLIQUE_ACCESS=false did not resolve the issue. It is still reaching out over 127.0.0.1.
Hey, could you please try to set the …?
you can maybe take some inspiration from our alephium-stack docker-compose as well as the corresponding user.conf |
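For example, a minimal docker-compose sketch in that spirit could look like the following. This is not the actual alephium-stack file: the service names, image tags, the "daemon" network alias and the BLOCKFLOW_PORT override are assumptions for illustration.

```yaml
# Minimal sketch, assuming a default compose network shared by both services.
# Service names, image tags, the "daemon" alias and BLOCKFLOW_PORT are assumptions.
version: "3"
services:
  alephium:
    image: alephium/alephium:latest
    networks:
      default:
        aliases:
          - daemon                            # short alias the backend can resolve
    ports:
      - "12973:12973"                         # node REST API (adjust if remapped)

  explorer-backend:
    image: alephium/explorer-backend:latest
    environment:
      - BLOCKFLOW_HOST=daemon                 # point at the node service instead of 127.0.0.1
      - BLOCKFLOW_PORT=12973                  # assumed override; verify against application.conf
      - BLOCKFLOW_DIRECT_CLIQUE_ACCESS=false  # setting discussed above
    depends_on:
      - alephium
```

The key point is that BLOCKFLOW_HOST must resolve from inside the explorer-backend container, which is why a compose service name or network alias works while 127.0.0.1 does not.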
Tried from scratch and observed the same issue - it synced until the 12th of June and then stopped with the error. However, I managed to get it to sync using your suggestion. Needless to say, I have … and had tried before using fluxdaemon_alphexplorer:39973 - that results in an error. Only using daemon:39973 works properly.
That's very strange that it suddenly stops syncing at the exact same date. I suspect the error is actually not that the node is unreachable.
Yes, it always stops on the 12th of June, on all the nodes. No, I can't fetch it over 127.0.0.1:12973. That is the thing: it should not be reaching out over 127.0.0.1:12973, but over what is configured as the blockflow host, fluxdaemon_alphexplorer:12973.
@TheTrunk were you able to solve your issue? Do you need more help?
While running under Docker we configure the env var "BLOCKFLOW_HOST=fluxdaemon_alphexplorer".
The log of the backend confirms this on init with …
However, it fails as it is reaching out to 127.0.0.1.
Doing a curl to fluxdaemon_alphexplorer from our backend container works well; doing it to 127.0.0.1 fails, as expected.
There is most likely a bug where the configured env variable is not respected and the backend falls back to 127.0.0.1 as the default.
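For context, a simplified sketch of how this is wired in the compose file; only BLOCKFLOW_HOST=fluxdaemon_alphexplorer is taken from the description above, the image name and surrounding layout are illustrative.

```yaml
# Simplified sketch of the backend service as described above.
# Only BLOCKFLOW_HOST reflects the reported setup; the rest is illustrative.
explorer-backend:
  image: alephium/explorer-backend:latest
  environment:
    - BLOCKFLOW_HOST=fluxdaemon_alphexplorer   # expected target for blockflow requests
  # Observed behaviour: despite this value being logged at init,
  # block requests are still sent to 127.0.0.1.
```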