Tracker restarted when memory is full #566
Hi @WarmBeer, I've been thinking about this problem and I want to share some thoughts.

**Is the tracker actually crashing?**

The tracker was restarted on the demo environment, but I suppose it was because of the container healthcheck. We are using this:

```dockerfile
HEALTHCHECK --interval=5s --timeout=5s --start-period=3s --retries=3 \
    CMD /usr/bin/http_health_check http://localhost:${HEALTH_CHECK_API_PORT}/health_check \
    || exit 1
```

And this compose configuration:

```yaml
tracker:
  image: torrust/tracker:develop
  container_name: tracker
  tty: true
  restart: unless-stopped
  environment:
    - USER_ID=${USER_ID}
    - TORRUST_TRACKER_DATABASE=${TORRUST_TRACKER_DATABASE:-sqlite3}
    - TORRUST_TRACKER_DATABASE_DRIVER=${TORRUST_TRACKER_DATABASE_DRIVER:-sqlite3}
    - TORRUST_TRACKER_API_ADMIN_TOKEN=${TORRUST_TRACKER_API_ADMIN_TOKEN:-MyAccessToken}
  networks:
    - backend_network
  ports:
    - 6969:6969/udp
    - 7070:7070
    - 1212:1212
  volumes:
    - ./storage/tracker/lib:/var/lib/torrust/tracker:Z
    - ./storage/tracker/log:/var/log/torrust/tracker:Z
    - ./storage/tracker/etc:/etc/torrust/tracker:Z
  logging:
    options:
      max-size: "10m"
      max-file: "10"
```

Notice the `restart: unless-stopped` policy. It might be that the tracker continues allocating more memory (using swap) without panicking, as you mentioned yesterday in the meeting, and that the container is restarted because it becomes unhealthy, given the compose configuration. I have not checked it, but you can check it using Docker. You can run the tracker limiting its memory with:
NOTE: I do not know why the […]. You can set the option […].

Anyway, it seems (from what I've read in some quick research) that Rust panics when it can't allocate more memory. I've also seen a way to "capture" that event:

```rust
#![feature(alloc_error_hook)] // `set_alloc_error_hook` is unstable (nightly only)

use std::alloc::{set_alloc_error_hook, Layout};

fn main() {
    // The hook receives the layout of the allocation that failed.
    set_alloc_error_hook(|layout: Layout| {
        // Custom action on allocation error, like logging
        eprintln!("memory allocation of {} bytes failed", layout.size());
        std::process::abort();
    });

    // Rest of your code
}
```

**Limit memory consumption by limiting concurrent requests**

You are now working on this: you are controlling the amount of used memory and deleting torrents when you reach the limit. I've been thinking about an alternative. Instead of directly controlling memory consumption, we could control concurrent requests. In theory, limiting the number of concurrent requests would indirectly limit the amount of memory used.

We could measure the processing time of each request and set a maximum. When we go over the maximum response time, we can start rejecting new requests. It would be similar to what @da2ce7 did by limiting active requests to 50, except that the limit would be set dynamically (see the sketch after the scenarios below). In theory, if we stop accepting new requests, memory consumption should not increase.

This proposal has other advantages:
Disadvantages:
But this would work only if reducing the load means reducing memory consumption. The question is: can memory consumption still grow while we limit the number of requests? In theory, peers should announce themselves every 2 minutes. For normal cases (that is, peers behaving well) we can have different scenarios, like:
If we limit concurrent requests to 1 per second, we have in the torrent repository:
Assuming:
With 1 request per second (type 1):
With 1 request per second (type 2):
With N requests per second (type 1):
With N requests per second (type 2):
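As a rough, back-of-the-envelope illustration (the per-peer size is a made-up assumption, not a measured figure): if each peer entry takes about 200 bytes and stale peers are dropped after the 120-second announce interval, then N announce requests per second keep roughly N × 120 peer entries alive, i.e. about N × 120 × 200 bytes. At 1 request per second that is ~24 KB; at 1,000 requests per second it is ~24 MB.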
If we find the worst-case scenario (the one that consumes the most memory), we can limit the number of concurrent requests directly, or indirectly by rejecting new requests when the response time is high. This approach assumes:

Does the option […]?
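A minimal sketch of the dynamic-limit idea, using `tokio::sync::Semaphore` (all names and thresholds here are hypothetical; this is not how the tracker currently handles requests):

```rust
use std::future::Future;
use std::sync::Arc;
use std::time::{Duration, Instant};

use tokio::sync::Semaphore;

/// Hypothetical limiter: rejects new requests when no permit is free.
struct LoadLimiter {
    permits: Arc<Semaphore>,
    max_response_time: Duration,
}

impl LoadLimiter {
    fn new(max_concurrent: usize, max_response_time: Duration) -> Self {
        Self {
            permits: Arc::new(Semaphore::new(max_concurrent)),
            max_response_time,
        }
    }

    /// Runs `handler` only if a permit is available; otherwise rejects immediately.
    async fn handle<F, T>(&self, handler: F) -> Result<T, &'static str>
    where
        F: Future<Output = T>,
    {
        // `try_acquire` fails fast instead of queueing, so pending work
        // (and therefore memory) stays bounded.
        let _permit = self.permits.try_acquire().map_err(|_| "server busy")?;

        let started = Instant::now();
        let response = handler.await;

        // A real implementation could shrink or grow the permit count here,
        // based on how `started.elapsed()` compares to `max_response_time`.
        let _too_slow = started.elapsed() > self.max_response_time;

        Ok(response)
    }
}

#[tokio::main]
async fn main() {
    let limiter = LoadLimiter::new(50, Duration::from_millis(100));

    match limiter.handle(async { "announce handled" }).await {
        Ok(body) => println!("{body}"),
        Err(err) => println!("rejected: {err}"),
    }
}
```

Here `50` mirrors the fixed limit @da2ce7 used; the dynamic part would adjust the number of permits over time instead of keeping it constant.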
I'm going to close this issue. I assume the tracker was restarted by the Docker healthcheck. We could apply memory consumption limits in the future if we consider it useful, but for other reasons. The system should only degrade as more memory is consumed.
Relates to: #567
We are running the live demo in a droplet with 1GB of RAM.
The tracker health check endpoint is https://tracker.torrust-demo.com/health_check.
On the 29th of December I changed the tracker configuration from `remove_peerless_torrents = true` to `remove_peerless_torrents = false`.
With this change, the tracker does not clean up torrents without peers. That makes the data structure containing the torrent data grow indefinitely.
That makes the process run out of memory approximately every 5 hours:
I guess the tracker container crashes and is restarted again; otherwise, I suppose the process would simply die.
We should limit the memory usage. This could happen even with the option `remove_peerless_torrents` enabled.

cc @WarmBeer @da2ce7
[Memory usage graphs: last 24 hours and last 7 days]