Mplex salvage operations, part II #102
Conversation
Partial review.
Horrible API; what can you do?
I pushed an additional commit, making the timer channel draining robust.
I'm concerned that this is going to be really slow.
- Can we benchmark it?
- Maybe we can lazily try to reserve more memory (on demand)? That is, if we're under pressure, we use the pre-reserved buffers. If we're not under pressure, we temporarily reserve more, then free the larger buffer.
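The lazy scheme suggested above could be sketched as follows. This is a hypothetical illustration, not the PR's actual API: the `MemoryManager` interface, the `getBufferLazy` helper, and the priority value are all assumptions.

```go
package main

import (
	"errors"
	"fmt"
)

const BufferSize = 4096

// MemoryManager is a stand-in for the resource-manager interface assumed
// in this thread; ReserveMemory can fail when we are under memory pressure.
type MemoryManager interface {
	ReserveMemory(size int, prio uint8) error
	ReleaseMemory(size int)
}

// getBufferLazy tries to reserve a fresh buffer on demand and falls back
// to the pre-reserved pool only when the reservation fails. The returned
// bool reports whether an extra reservation is held (and must be released
// when the buffer is returned).
func getBufferLazy(mm MemoryManager, pool chan []byte) ([]byte, bool) {
	if err := mm.ReserveMemory(BufferSize, 128); err == nil {
		return make([]byte, BufferSize), true
	}
	select {
	case buf := <-pool:
		return buf, false
	default:
		return nil, false
	}
}

// pressuredManager always refuses reservations, simulating memory pressure.
type pressuredManager struct{}

func (pressuredManager) ReserveMemory(int, uint8) error { return errors.New("out of memory") }
func (pressuredManager) ReleaseMemory(int)              {}

func main() {
	pool := make(chan []byte, 1)
	pool <- make([]byte, BufferSize) // one pre-reserved buffer
	buf, extra := getBufferLazy(pressuredManager{}, pool)
	// Under pressure we fall back to the pre-reserved pool, holding no
	// extra reservation.
	fmt.Println(len(buf), extra)
}
```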
multiplex.go (Outdated)

	if !recvTimeout.Stop() {
		select {
		case <-recvTimeout.C:
This doesn't do what you think it does.
Basically, the timeout can fire immediately after this check and then we'll timeout on the read immediately.
no, it has been stopped.
- If we can guarantee that the timer was started before we called stop, we should read and block.
- If we can't guarantee that, we should find a way to guarantee that.
- We can't guarantee that the timer channel has been drained.
- We could use a boolean variable indicating whether it was drained and decide based on that.

Side note: I really hate timers; they are a .45 footgun.
Fixed, using a variable to track whether the channel has been drained.
The benchmark puts us within about 10% of raw performance. We could try to get more buffers and return them if unused, I guess; we can do that in getBuffer/putBuffer.
multiplex.go (Outdated)

	var err error
	for i := 0; i < MaxBuffers; i++ {
		// up-front reserve memory for the essential buffers (1 input, 1 output + the reader buffer)
		if err := mp.memoryManager.ReserveMemory(3*BufferSize, 255); err != nil {
Where's the input/output buffer? I see the reader buffer, but that's it.
we get it from the pool and return it.
See getBuffer*/putBuffer*.
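A get/put pair over an up-front-reserved pool could look like the following sketch. Everything here is illustrative (names, `MaxBuffers` value); it only shows the shape of a fixed-size free list, not the PR's implementation.

```go
package main

import "fmt"

const (
	BufferSize = 4096
	MaxBuffers = 2 // illustrative; not the PR's value
)

// bufPool is a fixed-size free list: all memory is reserved up front, so
// getBuffer/putBuffer never touch the memory manager on the hot path.
type bufPool struct {
	free chan []byte
}

func newBufPool() *bufPool {
	p := &bufPool{free: make(chan []byte, MaxBuffers)}
	for i := 0; i < MaxBuffers; i++ {
		p.free <- make([]byte, BufferSize)
	}
	return p
}

// getBuffer blocks until a buffer is free, bounding total memory in use.
func (p *bufPool) getBuffer() []byte { return <-p.free }

// putBuffer returns a (possibly resliced) buffer to the free list,
// restoring its full capacity.
func (p *bufPool) putBuffer(b []byte) { p.free <- b[:cap(b)] }

func main() {
	p := newBufPool()
	b := p.getBuffer()
	fmt.Println(len(b)) // 4096
	p.putBuffer(b[:10]) // reslicing is fine; capacity is restored on put
	fmt.Println(len(p.getBuffer()))
}
```

Blocking in `getBuffer` is what enforces the memory bound: a connection can never hold more than `MaxBuffers` buffers at once.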
10% on a single stream? I guess we do have double-buffering, so maybe that's helping. But I would consider allocating the memory if we have it. I expect we'll want to use the same technique for yamux, etc. as well; none of the issues we're running into here are mplex-specific.
I want to take a look tomorrow morning, but don't block on my review. It looks mostly correct (modulo the timer issue), but I want to reassure myself that we're correctly counting buffers. I'm fine merging this, then testing on the gateways, then maybe considering reserving more memory on demand if there are any performance issues.
Fixed the timer issue in 152fa80. Let's sleep on it and take another look tomorrow.
No significant performance impact on a cross-Atlantic single-stream data transfer. As we know, this is a totally insufficient way to measure muxer performance, but at least this one metric is not getting worse.
Eh, that's probably good enough for mplex. I was mostly concerned that it would slow to a crawl on any non-loopback connection. |
This is part two of the salvage operations in #99.
We limit the buffers used by mplex to 4KB, so that we can cut our memory precommit down to just 36KB per connection.