pool the transient maps used for Msg Pack, Truncate and Len #1006
Conversation
This improves runtime by 20-40% and, more importantly, significantly reduces memory usage and allocations.
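For context, a minimal sketch of the technique being proposed, with illustrative names (`getMap`/`putMap`) and `map[string]int` assumed as the compression-map type; the actual change is in the PR diff:

```go
// Package pool sketches the transient-map pooling proposed in this PR.
package pool

import "sync"

// compressionPool caches the transient maps used while packing,
// truncating and measuring messages, so each call reuses a map
// instead of allocating a fresh one.
var compressionPool = sync.Pool{
	New: func() interface{} { return make(map[string]int) },
}

// getMap fetches a cleared map from the pool.
func getMap() map[string]int {
	return compressionPool.Get().(map[string]int)
}

// putMap clears a map and returns it to the pool for reuse.
func putMap(m map[string]int) {
	for k := range m {
		delete(m, k)
	}
	compressionPool.Put(m)
}
```

The catch, raised below, is that sync.Pool retains whatever it is handed, so a handful of unusually large maps can stay resident indefinitely.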
Codecov Report
@@            Coverage Diff             @@
##           master    #1006      +/-   ##
==========================================
+ Coverage   54.63%   54.75%   +0.11%
==========================================
  Files          41       41
  Lines        9842     9859      +17
==========================================
+ Hits         5377     5398      +21
+ Misses       3438     3435       -3
+ Partials     1027     1026       -1
==========================================
Continue to review full report at Codecov.
thanks! this is somewhat concerning:
As this does include some (localhost) networking it might not be conclusive.
I don’t believe this is a correct usage of sync.Pool, see golang/go#23199.
There is a ton of variability in the timing of the benchmark results.
I think the specific issue that you're referring to is that we are not setting an upper bound on the size of objects that we store in the pool. I'm not terribly familiar with this part of DNS, but maybe @miekg could weigh in on what an appropriate upper limit would be (for reference BenchmarkPackMsgMassive stores 164 elements in the map)?
This does look to have real-world impact...?
We're storing compression pointers in the Pool; there is an upper limit to this (before we know we need them), so every different label (list) in a reply is stored. It's kinda hard to say what the exact limit is, but let's say a message is at most 64 KiB and each stored name takes at least 3 bytes: 65536 / 3 ≈ 21845.333.
So roughly 21845.333 elements in the map.
Oh, I forgot the null byte in my calculation. So it's divided by 4, not 3.
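Spelled out as Go constants (names invented for illustration, arithmetic taken from the thread):

```go
// A DNS message is at most 64 KiB; the thread estimates each stored
// name costs at least 4 bytes on the wire once the null byte is
// counted, which bounds how many entries the compression map can hold.
const (
	maxMsgSize    = 1 << 16                  // 65536 bytes
	minNameSize   = 4                        // corrected from 3
	maxMapEntries = maxMsgSize / minNameSize // 16384
)
```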
That just shows that there is tremendous variability (+/- ~50-65%) in the timing of the benchmarks.
So using a max map size of 16,384 (64 KiB / 4) could lead to storing maps of the following sizes (this is based on how much memory was allocated during map creation):

Which isn't terrible, but could lead to a large amount of data residing in the pool. This could be handled by bucketing the Pools, but this would be difficult as we don't know how big of a map each request will require. A compromise might be to use a limit of 8k instead - though this would decrease the effectiveness of the pool when and where it is needed most.

That said, it was CoreDNS that led me here and I don't think this will be an issue in its use case. The pool always presents the risk of caching large objects, but I think in a steady-state environment this really won't be an issue. Additionally, re-running the benchmarks on Ubuntu 18.04 (instead of my Mac with its garbage networking) shows an increase in
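A sketch of the size cap being discussed, using the 8k compromise as a hypothetical constant; the exact value (8k vs 16k) is the open question here:

```go
package pool

import "sync"

// maxPooledEntries is the compromise limit floated above.
const maxPooledEntries = 8192

var mapPool = sync.Pool{
	New: func() interface{} { return make(map[string]int) },
}

// put returns a map to the pool unless it has grown past the cap, in
// which case it is dropped and left for the garbage collector.
func put(m map[string]int) {
	if len(m) > maxPooledEntries {
		return
	}
	for k := range m {
		delete(m, k)
	}
	mapPool.Put(m)
}
```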
Thanks for the benchmarking. Let's merge. The compression pointer
I really do think we'll hit both golang/go#23199 and golang/go#27735 with this change and end up causing memory issues. It's definitely not the correct usage of sync.Pool. I've actually personally moved away from using sync.Pool in many cases because it's quite finicky.
I'm fine w/ reverting...
@tmthrgd what are you proposing? Keep? Revert? Change?
@miekg I think revert. While we may be able to do something here to reduce allocations, this isn’t correct as it stands.
@tmthrgd The concerns around sync.Pool really only become an issue when the system is heavily overloaded with goroutines, since this increases the size of the pool. There is a lot of benefit here, and it's hard to see how CoreDNS or any other responsible consumer of this library will run into the issue.
So the main problem with `sync.Pool` is that we need to bucket the things we put into it. Those need to be roughly of the same size? And this is true even if you put empty `map`s in there? Is that in a nutshell what's going wrong here?

The need for a map just to do compression right has long annoyed me (I added it) as being very wasteful, so doing something in this area is very much +1.
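A sketch of what that bucketing could look like (three pools; the thresholds and names are invented for illustration). Note the asymmetry the thread points out: we only know the right bucket when a map comes back, not when we hand one out, so `get` needs a size hint:

```go
package pool

import "sync"

// Three pools holding small, medium and large compression maps, so the
// elements within each pool are of roughly the same size.
var buckets = [3]sync.Pool{
	{New: func() interface{} { return make(map[string]int, 64) }},
	{New: func() interface{} { return make(map[string]int, 1024) }},
	{New: func() interface{} { return make(map[string]int, 16384) }},
}

// bucketFor picks a pool index from a map's entry count.
func bucketFor(n int) int {
	switch {
	case n <= 64:
		return 0
	case n <= 1024:
		return 1
	default:
		return 2
	}
}

// get hands out a map from the bucket matching a caller-supplied size
// hint, since the final size of the map is not known up front.
func get(hint int) map[string]int {
	return buckets[bucketFor(hint)].Get().(map[string]int)
}

// put clears a map and files it under the bucket matching its size.
func put(m map[string]int) {
	b := bucketFor(len(m))
	for k := range m {
		delete(m, k)
	}
	buckets[b].Put(m)
}
```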
And also having it fail right at the time when we need it most is bad. Overloading DNS servers is relatively easy, so this opens the door to cascading failure of all DNS instances. (The current behaviour might be worse, dunno, but it is more predictable.)
I've made a local change where I have 3 maps (small, medium and large), bucketed on length, so this should make all elements in there roughly the same size. Unless I'm holding it wrong, the benchcmp results are atrocious:
This adds several pools to cache compression maps. It uses 3 buckets to cache items of roughly the same size. This improves upon #1006, specifically fixing the use of sync.Pool. Signed-off-by: Miek Gieben <[email protected]>
see https://github.com/miekg/dns/pull/new/mappy, the
Revert "pool the transient maps used for Msg Pack, Truncate and Len (miekg#1006)" (miekg#1017). This reverts miekg#1006, see discussion on the PR. Def. worth exploring this further and pushing a more correct approach. This reverts commit 9578cae.