[BUG] Statistics are not saved, starting from zero after restart of the docker image #133

Closed · daniel-l opened this issue Apr 27, 2024 · 24 comments · Fixed by #135
Labels: bug (Something isn't working)

Comments

@daniel-l

Describe the bug
Statistics are not saved; they start from zero after a restart of the Docker image.

To Reproduce
Steps to reproduce the behavior:

  1. Use Zoraxy, accessing reverse proxied services.
  2. Statistics get generated (number of "Req. Today" etc.)
  3. Restart Zoraxy's docker image
  4. Statistics are reset and start from zero.

Expected behavior
Statistics get saved and are loaded after every restart of the docker image.

Additional context
The docker image is set up with a volume to save data/config across sessions:

    volumes:
      - ./zoraxy:/opt/zoraxy/config/

Configs such as the configured proxy hosts and custom access lists get saved there; the statistics do not.

@daniel-l added the bug label Apr 27, 2024
@PassiveLemon
Collaborator

I believe this would depend on how Zoraxy itself caches statistics. If it caches to a specific location, you could create a volume for it. I'll have to wait and see what Toby says to know for sure.

@tobychui
Owner

Cannot reproduce this bug while running Zoraxy natively without docker.

@PassiveLemon For your information, the statistics are stored in memory and only written to sys.db when either the daily ticker ticks (every day at midnight) or the application receives an os.Interrupt or syscall.SIGTERM signal (e.g. Ctrl+C in a running terminal window). You might want to check whether the application is shut down correctly and whether sys.db persists after a restart.
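
For illustration, one way to test whether a graceful shutdown actually happens is to stop the container by hand and then check the database file on the mounted volume (the container name "zoraxy" and the host path are taken from the report above as assumptions, as is sys.db living under the mounted config directory):

    # Send SIGTERM and allow up to 30 seconds for the flush before
    # Docker falls back to SIGKILL
    docker stop -t 30 zoraxy

    # The database on the mounted volume should have just been written
    ls -al ./zoraxy/sys.db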

@daniel-l
Author

daniel-l commented Apr 28, 2024

The file sys.db already existed yesterday; I just checked whether its modified time has updated since:

$ ls -al sys.db
-rw------- 1 root root 122880 28. Apr 02:00 sys.db

The file also grew a bit since yesterday.

Also, I just shut down and restarted the container - yesterday's stats were indeed saved. Everything since then (today's stats) was thrown away - the file did not grow at all. File info after the restart:

$ ls -al sys.db
-rw------- 1 root root 122880 28. Apr 11:43 sys.db

@tobychui
Owner

Ok, this surely seems like a "docker not gracefully closing zoraxy" issue. I will let @PassiveLemon take over from here. Thanks for the bug report!

@PassiveLemon
Collaborator

That would make sense; there is no graceful shutdown implemented in the container. Docker sends a SIGTERM to the running process (the shell, in this case), but it doesn't respond to it, so after the grace period Docker SIGKILLs everything, which results in an improper save.
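
A minimal sketch of the kind of fix, assuming the image starts Zoraxy through an entrypoint script (the script name and binary path are placeholders, not the actual layout of the image):

    #!/bin/sh
    # entrypoint.sh (illustrative): `exec` replaces the shell with the
    # Zoraxy process, so the SIGTERM that Docker sends on `docker stop`
    # reaches Zoraxy directly and triggers its save-on-shutdown path.
    exec ./zoraxy "$@"

Without `exec` (or with a shell-form ENTRYPOINT), the shell remains the foreground process, never passes the signal on, and the whole process tree ends up SIGKILLed after the grace period.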

@CorneliusCornbread

This seems to be happening again with Zoraxy 3.0.4; it works fine on 3.0.3.

@PassiveLemon reopened this May 18, 2024
@PassiveLemon
Collaborator

Just to revisit this, @CorneliusCornbread, is this still a problem on the latest version?

@CorneliusCornbread

Just to revisit this, @CorneliusCornbread, is this still a problem on the latest version?

Yup, upgrading just wiped all my settings again :/

@PassiveLemon
Collaborator

Settings themselves, or statistics? If it's settings, try updating version by version until 3.0.8, which should help resolve future updates.

@CorneliusCornbread

Settings themselves, or statistics? If it's settings, try updating version by version until 3.0.8, which should help resolve future updates.

Everything; it's completely fresh. Reverting solves nothing; all my settings are permanently lost.

Worse yet, trying to recreate my proxies on the new version simply results in Cloudflare always saying my service is down, despite using the exact same certs and settings.

Frankly, I'm not sure what updating version by version would do; the Zoraxy config folder has always been completely empty no matter the version.

@PassiveLemon
Collaborator

Would you mind sending your Dockerfile/run command? If the config folder has always been empty, then it's possible you have configured the volume incorrectly.

@CorneliusCornbread

Would you mind sending your Dockerfile/run command? If the config folder has always been empty, then it's possible you have configured the volume incorrectly.

I'm using the Unraid image; the zoraxy folder gets created without issue.

[screenshot]

I restarted the container and it seems to allow me to proxy with my certs again. It also seems to have saved my information this time. However, I still need to recreate all my proxies from scratch.

@PassiveLemon
Collaborator

I don't know what Unraid image you are using, but the one I know of is very out of date. I can't seem to find it, though.

@CorneliusCornbread

I don't know what Unraid image you are using, but the one I know of is very out of date. I can't seem to find it, though.

I found it on the app store; comparing it with my current image, the only difference is the repository: passivelemon/zoraxy-docker:latest

@PassiveLemon
Collaborator

PassiveLemon commented Jul 18, 2024

Yep, that's the outdated image. The new one is at zoraxydocker/zoraxy.
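
For a compose-based setup, switching over is just a matter of pointing the image field at that repository (service name and tag are illustrative):

    services:
      zoraxy:
        image: zoraxydocker/zoraxy:latest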

@CorneliusCornbread

Yep, that's the outdated image. The new one is at zoraxydocker/zoraxy.

I remember I had to change the image a while ago; I've been using that one ever since.

It seems like changing the listen port requires Zoraxy to be restarted before proxying behaves correctly. It all seems to be working now. I can only hope that it won't also break between future updates.

@PassiveLemon
Collaborator

Ah, sorry, I misinterpreted your message; glad to hear it's working now. A new update system was added in 3.0.8, which should hopefully reduce breakage by updating your config file for you. I don't know how it works, so you'd have to take Toby's word for it.

@CorneliusCornbread

Just updated today, and this is still happening. The entire configuration was reset again.

@PassiveLemon
Collaborator

PassiveLemon commented Aug 6, 2024

#133 (comment)
I just looked at this image again and noticed that the config is at /zoraxy/config and not /opt/zoraxy/config. As a result, every time the config is modified, it's the copy inside the container that changes, not the one on the host, so when the container is restarted the changes are lost. If you are using that Unraid image, that would be the issue.
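
For anyone on that image, a mount only persists the config if it targets the container path the image actually uses (host path taken from the original report, container path from the comment above):

    volumes:
      # The outdated image keeps its config under /zoraxy/config rather
      # than /opt/zoraxy/config, so the bind mount must point there for
      # changes to survive a container restart.
      - ./zoraxy:/zoraxy/config/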

@CorneliusCornbread

#133 (comment) I just looked at this image again and noticed that the config is at /zoraxy/config and not /opt/zoraxy/config. As a result, every time the config is modified, it's the copy inside the container that changes, not the one on the host, so when the container is restarted the changes are lost. If you are using that Unraid image, that would be the issue.

That sounds exactly like the problem. So you're saying that if I change the config path in the image, that'll solve this problem?

@PassiveLemon
Collaborator

If you are able to, then yes, I am pretty sure that should resolve it

@CorneliusCornbread

If you are able to, then yes, I am pretty sure that should resolve it

Looks like it's fixed to me
[screenshot]

@CorneliusCornbread

Also, I reached out on the Ibracorp Discord, since they're listed as the authors of the Unraid image. Normally I don't bother if an image is a little wrong, but the image for Zoraxy hasn't been updated in almost a year and it's very wrong, so hopefully they see my message and update it.

@PassiveLemon
Collaborator

Thank you for contacting them; hopefully they update it so this issue can be avoided.
