Add a minimum connections configuration to the ConnectionPool #1822
Conversation
At the moment this doesn't solve the issue outlined in the motivation; it only stops some connections from idling.
When connections are added to the pool they start in the idle state and are only connected lazily, once the existing connections are under enough load.
We need to bring up the minimum number of connections when we initialize the pool and, if necessary, when connections are closed. We'll also need some tests to validate this behaviour.
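To make the requested behaviour concrete, here is a minimal self-contained sketch (toy types, not the grpc-swift implementation) of a pool that brings up its minimum connections eagerly on init and tops back up when a connection closes:

```swift
// Hypothetical sketch: `ToyPool`, `ConnectionState`, and all names below
// are illustrative only, not grpc-swift types.
enum ConnectionState { case idle, connecting, ready, closed }

final class ToyPool {
    let minConnections: Int
    private(set) var states: [ConnectionState]

    init(capacity: Int, minConnections: Int) {
        self.minConnections = minConnections
        // All connections start idle...
        self.states = Array(repeating: .idle, count: capacity)
        // ...then the first `minConnections` are started eagerly,
        // rather than waiting for load to bring them up lazily.
        for i in 0 ..< min(minConnections, capacity) {
            self.states[i] = .connecting
        }
    }

    /// Called when a connection closes: start a replacement if the pool
    /// has dropped below its minimum.
    func connectionClosed(at index: Int) {
        self.states[index] = .closed
        let active = self.states.filter { $0 == .connecting || $0 == .ready }.count
        if active < self.minConnections,
           let idle = self.states.firstIndex(of: .idle) {
            self.states[idle] = .connecting
        }
    }
}
```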
```diff
@@ -324,6 +332,9 @@ internal final class ConnectionManager: @unchecked Sendable {
   /// attempts should be made at all.
   private let connectionBackoff: ConnectionBackoff?

+  /// Whether this connection should be allowed to go idle (and thus be closed when the idle timer fires).
+  internal let idleBehavior: IdleBehavior
```
It doesn't look like this property is ever used?
It's used from the GRPCIdleHandler
Why do we store it here if it's only used in the idle handler? Can't we just pass it to the idle handler on init?
The thing is, the minimum-connections configuration lives in, and is passed in via, the ConnectionPool: we need the context of the pool to know how many connections are currently being kept open and to decide whether a given connection should be kept from going idle. The idle handler is created by the ConnectionManager in startConnecting(backoffIterator:muxPromise:connectTimeoutOverride:) for each connection, so the manager has to be aware of the idle behaviour in order to pass it down to the handler.
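The wiring being described can be sketched with simplified stand-in types (these are not the actual grpc-swift declarations): the pool decides the behaviour, the manager stores it only so it can hand it to each handler it creates.

```swift
// Illustrative sketch only; names mirror the discussion, not the real API.
enum IdleBehavior { case closeWhenIdleTimerFires, neverGoIdle }

struct IdleHandler {
    let idleBehavior: IdleBehavior
    /// Returns whether the connection should be closed when the idle
    /// timer fires.
    func idleTimerFired() -> Bool {
        self.idleBehavior == .closeWhenIdleTimerFires
    }
}

final class Manager {
    // Stored on the manager because a fresh handler is created per
    // connection attempt inside `startConnecting`.
    let idleBehavior: IdleBehavior

    init(idleBehavior: IdleBehavior) {
        self.idleBehavior = idleBehavior
    }

    func startConnecting() -> IdleHandler {
        IdleHandler(idleBehavior: self.idleBehavior)
    }
}
```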
Force-pushed from e141ff4 to 3606b44.
Two nits, looks good otherwise!
```swift
/// The number of active (i.e. connecting or ready) connections in the pool.
internal var activeConnections: Int {
  self.pool.eventLoop.assertInEventLoop()
  return self.pool._connections.values.reduce(0) {
    $0 &+ (($1.manager.sync.isReady || $1.manager.sync.isConnecting) ? 1 : 0)
  }
}

/// The number of connections in the pool in transient failure state.
internal var transientFailureConnections: Int {
  self.pool.eventLoop.assertInEventLoop()
  return self.pool._connections.values.reduce(0) {
    $0 &+ ($1.manager.sync.isTransientFailure ? 1 : 0)
  }
}
```
I think these are unused now?
Ah, I'm using them in tests.
Force-pushed from ca94621 to b066219.
LGTM, thanks Gus!
Motivation
Sometimes, depending on the characteristics of the traffic, requests arrive in spikes.
When a large spike of requests arrives, many connections are opened, and they are subsequently closed by the idle timer. When the next spike arrives there are no open connections available, so latency suffers because new connections have to be established first.
Modifications
Add a ConnectionPool configuration option that allows a minimum number of connections to remain open, that is, that prevents that number of connections from going idle.
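For illustration, usage might look like the following; the property name (`minimumConnections`) and its location on the channel-pool configuration are assumptions for this sketch, not confirmed by this thread:

```swift
// Hypothetical usage sketch: keep at least two connections open across
// idle periods. Property names are assumed, not taken from the PR diff.
var pool = GRPCChannelPool.Configuration.ConnectionPool.defaults
pool.connectionsPerEventLoop = 16
pool.minimumConnections = 2  // the new option (name assumed)
```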
Result
The configured number of connections will remain open even when there are no open streams on them.