udp: handle udp requests concurrently #644
Conversation
Hi @da2ce7 did you read my comment about dynamically limiting the number of concurrent requests?
Branch updated: 7d8ec06 to 72c8348
This is not related to udp requests, as udp requests are non-blocking.
Codecov Report

@@            Coverage Diff             @@
##           develop     #644      +/-  ##
===========================================
+ Coverage    77.03%   77.61%    +0.57%
===========================================
  Files          130      131        +1
  Lines         8519     8555       +36
===========================================
+ Hits          6563     6640       +77
+ Misses        1956     1915       -41

View full report in Codecov by Sentry.
@josecelano I mean, I doubt there is any real improvement above 5 requests at the same time. The limit is the locking speed of our data structures and the kernel sending the udp packets.
@da2ce7 my point is:
The idea is to reject requests at the high level when it detects that the core level is taking too long to respond. This should balance the load and avoid server degradation. Anyway, that limit could be implemented on top of this PR once it is merged. I think it can be easily changed.
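The load-shedding idea described above can be sketched as a simple admission gate: admit a request only while the number of in-flight requests is below a limit, and reject the rest outright instead of queueing them. This is a minimal stdlib sketch under stated assumptions (the `AdmissionGate` type and its methods are invented for illustration; the actual tracker uses Tokio tasks, not this struct):

```rust
use std::sync::atomic::{AtomicUsize, Ordering};

/// Hypothetical load-shedding gate: admits a request only while the
/// number of in-flight requests is below `limit`, rejecting the rest.
struct AdmissionGate {
    in_flight: AtomicUsize,
    limit: usize,
}

impl AdmissionGate {
    fn new(limit: usize) -> Self {
        Self { in_flight: AtomicUsize::new(0), limit }
    }

    /// Try to admit one request; returns false when the server is saturated.
    fn try_admit(&self) -> bool {
        let mut current = self.in_flight.load(Ordering::Relaxed);
        loop {
            if current >= self.limit {
                return false; // shed load instead of degrading every request
            }
            match self.in_flight.compare_exchange(
                current,
                current + 1,
                Ordering::AcqRel,
                Ordering::Relaxed,
            ) {
                Ok(_) => return true,
                Err(actual) => current = actual, // raced with another thread; retry
            }
        }
    }

    /// Release the slot when the request handler finishes.
    fn done(&self) {
        self.in_flight.fetch_sub(1, Ordering::AcqRel);
    }
}

fn main() {
    let gate = AdmissionGate::new(2);
    assert!(gate.try_admit());
    assert!(gate.try_admit());
    assert!(!gate.try_admit()); // third concurrent request is rejected
    gate.done();
    assert!(gate.try_admit()); // a slot freed up, so we admit again
    println!("ok");
}
```

Because `try_admit` never blocks, the limit itself can later be driven by a measured signal (e.g. response latency) rather than a constant, which is what "dynamically limiting" would amount to.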
Hi @da2ce7 I think it's good functionality and a good implementation, but it's not compatible with dynamically changing the number of active requests, since the 50 limit is hardcoded in the type.
I would merge it, but if we want to make it dynamic in the future we will need to change this code.
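To illustrate the "hardcoded in the type" concern: if the limit is part of the type (for example a const generic parameter), every new limit is a different type, whereas a runtime field can be loaded from configuration without touching the type. This is a hypothetical sketch; the names are invented and do not match the PR's actual types:

```rust
/// Hypothetical: limit baked into the type. Changing the limit means
/// changing every place the type is named.
struct ActiveRequestsFixed<const LIMIT: usize> {
    count: usize,
}

impl<const LIMIT: usize> ActiveRequestsFixed<LIMIT> {
    fn new() -> Self {
        Self { count: 0 }
    }
    fn is_full(&self) -> bool {
        self.count >= LIMIT
    }
}

/// Hypothetical: limit carried as a runtime value, e.g. read from the
/// tracker configuration, so it can differ per deployment.
struct ActiveRequestsDynamic {
    count: usize,
    limit: usize,
}

impl ActiveRequestsDynamic {
    fn new(limit: usize) -> Self {
        Self { count: 0, limit }
    }
    fn is_full(&self) -> bool {
        self.count >= self.limit
    }
}

fn main() {
    let fixed: ActiveRequestsFixed<50> = ActiveRequestsFixed::new();
    assert!(!fixed.is_full());

    // the limit now comes from a runtime value instead of the type
    let mut dynamic = ActiveRequestsDynamic::new(2);
    dynamic.count = 2;
    assert!(dynamic.is_full());
    println!("ok");
}
```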
match result {
    Ok(udp_request) => {
        trace!("Received Request from: {}", udp_request.from);
        Self::make_response(tracker.clone(), socket.clone(), udp_request).await;
Hi @da2ce7 I would rename make_response to handle_request.
ACK 72c8348
Hi @da2ce7 another concern is whether we should spawn a new thread for each request, or whether it would be better to have a fixed number of workers, each one processing many requests sequentially. See #611 (comment). If I'm not wrong, this is the approach of some web servers like Nginx and Apache. Below is ChatGPT's feedback:

Spawning a New Thread per Request (up to 50)

Advantages:
Disadvantages:
Using a Fixed Number of Worker Threads

Advantages:
Disadvantages:
Conclusion

The choice between these two approaches depends on your specific requirements and constraints:
In many modern applications, especially those that are I/O-bound like network applications, an asynchronous or event-driven model (such as using an event loop and non-blocking I/O) can often provide a more scalable solution than either of these threading models. This might be worth considering depending on the specifics of your application and its environment.
@josecelano Tokio has a lightweight threading model, with a global thread pool that matches the number of execution cores of the system. While there is some overhead in creating Tokio tasks, they are far lighter-weight than system threads. I think that having a fixed set of worker threads dedicated to udp, while more optimal, would not give a significant benefit over the lightweight tasks used here. The "costly" part of my code is the yield_now() call: it happens when there are around 50 active tasks, and it temporarily pauses the handling of new requests, giving the other tasks an opportunity to finish up... One attack/abuse would be to deliberately exhaust the task pool, causing many unnecessary pauses for the yield... However, the answer to this is a two-level task pool, where tasks are grouped by the client that created the request:
With such a design you would need to connect with many different clients to fill the pool; each client gets its own pool. The client-local pool would block when full, but the tasks grouped by client would yield when full.
Hi @da2ce7 I did not remember that Tokio tasks are not equal to threads. Thank you for your explanation. I'm going to merge it; my other concern is not clear yet. @WarmBeer is working on limiting memory consumption. I have the feeling that we need a global solution for limiting both CPU and memory consumption. If @WarmBeer's solution works, we can later limit the CPU consumption to avoid system degradation.
Extracted from #557
Upgrades the udp implementation to handle requests concurrently; no timeout is needed, as udp is non-blocking.
Hard-coded limit of 50 concurrent requests. With trivial changes this limit could be made configurable; however, I don't think there would really be much benefit to that. The main concern is memory usage, and since udp is non-blocking each request should return very quickly. It could be set to 5000 and it should still be fine.