fix: return bound endpoints after server started #113
base: main
So I'm personally not a big fan of taking `&mut self` for this, as it leads to confusing internal state and to wrapping things in `Option` just so this method can take their contents.

What if we renamed this `start` method to `spawn` and, in addition to returning the endpoints, returned a type wrapping both the `JoinSet` representing the spawned server's tasks and the `CancellationToken` that can be used to stop the server? The type could then have a `stop` method to stop the server and also implement `Future`, so a caller can await it to join the tasks in the set.
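A minimal sketch of the kind of handle being suggested, assuming tokio's `JoinSet` and tokio-util's `CancellationToken`; the `ServerHandle` name and the unit task output are placeholders rather than the crate's actual API:

```rust
use std::future::Future;
use std::pin::Pin;
use std::task::{Context, Poll};

use tokio::task::JoinSet;
use tokio_util::sync::CancellationToken;

/// Hypothetical handle returned by `spawn`: it owns the spawned tasks and the
/// token used to request shutdown.
pub struct ServerHandle {
    tasks: JoinSet<()>,
    token: CancellationToken,
}

impl ServerHandle {
    /// Signal the server's tasks to stop and wait for them to finish.
    pub async fn stop(mut self) {
        self.token.cancel();
        while self.tasks.join_next().await.is_some() {}
    }
}

impl Future for ServerHandle {
    type Output = ();

    fn poll(mut self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll<Self::Output> {
        // Join every task in the set; resolve once the set is empty.
        loop {
            match self.tasks.poll_join_next(cx) {
                Poll::Ready(Some(_)) => continue,
                Poll::Ready(None) => return Poll::Ready(()),
                Poll::Pending => return Poll::Pending,
            }
        }
    }
}
```

With that shape, `spawn` could take `self` by value and return the endpoints together with a `ServerHandle`, so nothing in `Server` needs to become `Option` just to be moved out.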
Agree, I was just about to ask why required fields were becoming optional. I'd much rather have spawning produce a dedicated type that represents the info for a started service.
Was just trying to minimize the patch, since the existing code was `&mut self`, and I encountered issues like #112 when working on #111. Also wasn't sure why there was a struct called `Server` if it wasn't meant to be used after calling a start-like method.

Things became `Option`-like in the config so `self` wasn't partially moved. If I had continued in the same direction the PR was going, I'd have flattened the `Config`'s members into `Server` and not kept a `config` member in the `Server` struct.

See the other comment about providing a `CancellationToken` to chain a shutdown procedure rather than explicitly telling dependencies to shut down.
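For context, this is the pattern being described: with a `&mut self` start method, fields that need to be consumed end up wrapped in `Option` so they can be taken without partially moving `self`. A minimal illustration with hypothetical fields, not the actual `Server`:

```rust
use std::net::{SocketAddr, TcpListener};

struct Server {
    // Wrapped in `Option` only so a `&mut self` method can take ownership of
    // the listener without partially moving out of `self`.
    listener: Option<TcpListener>,
}

impl Server {
    fn start(&mut self) -> std::io::Result<SocketAddr> {
        // `take` leaves `None` behind, so a second `start` call fails at
        // runtime instead of compile time, which is the confusing state in question.
        let listener = self.listener.take().expect("server already started");
        let addr = listener.local_addr()?;
        // ... hand `listener` off to spawned tasks ...
        Ok(addr)
    }
}
```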
The existing code for the `bind` method is `&mut self`, but the former `run` method was `self`; both are now combined into your `start` method.

That said, I think perhaps we could better model our `Server` off of hyper's `Server`; theirs is already a bound and listening server upon construction, so there doesn't need to be a `start` (or `spawn`) method.

That is, you construct/build a server and it internally binds based on its configuration; the binding can be inspected via a method. In the case of hyper there is only a single bound address for the server, but our implementation might have multiple bound addresses, as there might be multiple hyper servers internally (e.g. API, monitoring, etc.).

Finally, it implements `Future`, so that you simply await it for it to run to graceful completion.
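A rough sketch of that shape, assuming hyper 0.14 and a running tokio runtime; the `Server`, its placeholder service, and the single API address are illustrative, not the existing warg-server type:

```rust
use std::convert::Infallible;
use std::future::Future;
use std::net::SocketAddr;
use std::pin::Pin;
use std::task::{Context, Poll};

use hyper::service::{make_service_fn, service_fn};
use hyper::{Body, Request, Response};

/// Illustrative only: already bound and listening after construction, exposes
/// its bound addresses, and implements `Future` so callers simply await it.
pub struct Server {
    addrs: Vec<SocketAddr>,
    inner: Pin<Box<dyn Future<Output = hyper::Result<()>> + Send>>,
}

impl Server {
    pub fn bind(api_addr: SocketAddr) -> hyper::Result<Self> {
        let make_svc = make_service_fn(|_conn| async {
            Ok::<_, Infallible>(service_fn(|_req: Request<Body>| async {
                Ok::<_, Infallible>(Response::new(Body::from("ok")))
            }))
        });

        // Binding happens at construction time, so the bound address is known
        // immediately (useful when the configured port is 0).
        let api = hyper::Server::try_bind(&api_addr)?.serve(make_svc);
        let addrs = vec![api.local_addr()];

        Ok(Self {
            addrs,
            inner: Box::pin(api),
        })
    }

    /// Inspect the bound addresses; plural because there may eventually be
    /// multiple internal hyper servers (API, monitoring, etc.).
    pub fn local_addrs(&self) -> &[SocketAddr] {
        &self.addrs
    }
}

impl Future for Server {
    type Output = hyper::Result<()>;

    fn poll(mut self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll<Self::Output> {
        self.inner.as_mut().poll(cx)
    }
}
```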
Cool, yes, I'd follow some of the patterns already used by existing servers, like the `hyper` server used here.
Could this task be merged with the next task? That is to say, after the `server` future completes in the next task, we simply wait on the stop handle for the core service.

Given this design, I would expect a 1:1 correspondence between a task in the task set and a running hyper server (API in this case, but soon a monitoring server, etc.). Each would shut down in its own way.
I'd think, due to the nature of fanning out a cancellation signal chain, that a cancellation token should be cloned for each owned dependency, somewhat similar to how Go's Context works. Otherwise there will effectively be for-loop boilerplate written to cancel "children" separately as the server grows. That approach also allows things to shut down in parallel easily if needed.

But yes, we could keep things simple for now and merge the shutdown tasks instead of chaining them through the tokens themselves. However, I'd either want to use a broadcast channel instead, or an additional token for draining in #111.
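For reference, a minimal sketch of the fan-out pattern with tokio-util's `CancellationToken`; the dependency names are placeholders:

```rust
use tokio_util::sync::CancellationToken;

#[tokio::main]
async fn main() {
    // One root token owned by the server; each owned dependency gets a child
    // token, so cancelling the root fans out to all of them at once.
    let root = CancellationToken::new();

    let api_token = root.child_token();
    let monitoring_token = root.child_token();

    let api = tokio::spawn(async move {
        // Stand-in for the API server's run loop.
        api_token.cancelled().await;
        println!("API server shutting down");
    });
    let monitoring = tokio::spawn(async move {
        monitoring_token.cancelled().await;
        println!("monitoring server shutting down");
    });

    // A single cancel replaces per-dependency shutdown calls, and the
    // dependencies are free to shut down in parallel.
    root.cancel();
    let _ = tokio::join!(api, monitoring);
}
```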
I don't think we want the core service to be stopped in parallel to the hyper server shutting down, as it might race.

If we call `stop` on the core service, it will break out of its loop and the task will complete; note that the `CoreService` instance itself won't be dropped as a result, as there are references to it from the hyper server.

If this is done in parallel to the hyper server shutting down, then it's possible a request being drained attempts to enqueue work for the core service to do; if it so happens that the core service has already stopped and the send limit for the message queue is reached, it'll block waiting for the core service to remove a message, which will never occur since its task has ended.

We also don't want to close the message queue's sender as part of stopping the core service, as this would result in a panic (even a graceful error back to the client here is not ideal) and a reset of the connection for any request attempting to enqueue work for the core service after it has stopped.

I believe the correct order of operations here is to shut down the hyper server and then join on the core service, like we had before.
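Concretely, the ordering described above might look like the following sketch, assuming hyper 0.14's `with_graceful_shutdown` and a placeholder service; `stop_core` stands in for `handle.stop()` on the core service:

```rust
use std::convert::Infallible;
use std::net::SocketAddr;

use hyper::service::{make_service_fn, service_fn};
use hyper::{Body, Request, Response};
use tokio_util::sync::CancellationToken;

/// Hypothetical wiring of the described order: the hyper server drains on
/// cancellation first; only after it returns is the core service stopped.
async fn serve_then_stop_core(
    addr: SocketAddr,
    token: CancellationToken,
    stop_core: impl std::future::Future<Output = ()>,
) -> hyper::Result<()> {
    let make_svc = make_service_fn(|_conn| async {
        Ok::<_, Infallible>(service_fn(|_req: Request<Body>| async {
            Ok::<_, Infallible>(Response::new(Body::from("ok")))
        }))
    });

    // Graceful shutdown: stop accepting, then drain in-flight requests.
    // Drained requests may still enqueue work, so the core service stays up.
    hyper::Server::bind(&addr)
        .serve(make_svc)
        .with_graceful_shutdown(async move { token.cancelled().await })
        .await?;

    // Only after the server has fully drained do we stop and join the core service.
    stop_core.await;
    Ok(())
}
```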
Note that eventually the core service won't be part of `warg-server`, but instead another backend service for it to talk to via IPC (i.e. a real distributed and reliable message queue) for processing new record entries; it's the limiting factor of why we can't spin up multiple `warg-server` instances right now.
I was merely suggesting making parallel shutdown possible; I didn't point out a use case yet where we'd apply it now. I'm well aware of needing to keep dependencies going, like the core service, while a user of them drains, like the API server.

We share the same view of the shutdown order, but what are your thoughts on using a `CancellationToken` to signal some thing(s) to shut down, versus an explicit function call on a dependency to shut it down? Please note I'm not suggesting a change in this PR to inject a token into `CoreService` itself; the `handle.stop().await` would remain. This helps a bit when/if drain needs to be handled before an actual shutdown.