Pinned Heap Constantly Grows and Never Goes Down when Buffering Feature is Enabled For The Streaming Responses #2591
Comments
Test 3 Client Used: I also tested with a client using a DI-registered gRPC service.
As I expected, it reuses the connection; however, sometimes you can see a double allocation at the beginning.
What are you expecting to happen?
@JamesNK, why is it constantly growing? The client app starts, does some work (the server allocates X pinned memory), and shuts down. That X of pinned memory is never released. If it is preserved for later use, then why, when another client app starts an hour later and does the same work (so the same buffer size should be enough), does the server allocate the same X again (so now it is 2X)? Repeat this a few more times and all resources are used up.
Hmm, ok. When the client shuts down, is the server request ending? It looks like you're using cancellation to stop the request on the server side, but it's worth double-checking. I'll try out your code in a couple of days when I have time, if you haven't figured it out by then.
@JamesNK, that would be great. Thanks.
This is an issue in Kestrel. See dotnet/aspnetcore#27394 and dotnet/aspnetcore#55490. The fix is to flush between messages. That means either not using the buffer hint, or periodically flushing the response yourself.
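The periodic-flush workaround could be sketched as follows. This is an illustrative sketch, not the maintainer's code: the message loop, `CreateMessage` helper, and the flush-every-100-messages interval are assumptions; `GetHttpContext()` is the extension method from Grpc.AspNetCore.Server, and `HttpResponse.BodyWriter` is the standard ASP.NET Core `PipeWriter`.

```csharp
// Inside a server-streaming method that keeps the buffer hint enabled.
context.WriteOptions = new WriteOptions(WriteFlags.BufferHint);
var httpResponse = context.GetHttpContext().Response;

for (var i = 0; i < messageCount; i++)
{
    await responseStream.WriteAsync(CreateMessage(i)); // hypothetical helper

    // Flushing periodically hands the buffered (pinned) segments back to
    // Kestrel instead of letting them accumulate for the whole response.
    if (i % 100 == 99)
    {
        await httpResponse.BodyWriter.FlushAsync(context.CancellationToken);
    }
}
```

The trade-off is that each flush gives up some of the batching benefit the buffer hint was providing, so the interval is a tuning knob between throughput and buffer growth.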
Problem definition
Pinned Heap Constantly Grows and Never Goes Down when Buffering Feature is Enabled For The Streaming Responses

Application Used
For the test I'm using a simple gRPC server.
My gRPC server has two methods: SimpleStreaming and SimpleStreamingWithBuffer.
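The two methods described above presumably differ only in whether the buffer hint is set. A minimal sketch, assuming a hypothetical `Streamer` service and `Request`/`Reply` messages (the actual proto types are not shown in the issue):

```csharp
public class StreamingService : Streamer.StreamerBase
{
    public override async Task SimpleStreaming(
        Request request,
        IServerStreamWriter<Reply> responseStream,
        ServerCallContext context)
    {
        // Default write options: each WriteAsync flushes to the transport.
        for (var i = 0; i < 1000; i++)
        {
            await responseStream.WriteAsync(new Reply());
        }
    }

    public override async Task SimpleStreamingWithBuffer(
        Request request,
        IServerStreamWriter<Reply> responseStream,
        ServerCallContext context)
    {
        // BufferHint lets the server batch messages instead of flushing each
        // one; this is the path where the pinned-heap growth is observed.
        context.WriteOptions = new WriteOptions(WriteFlags.BufferHint);
        for (var i = 0; i < 1000; i++)
        {
            await responseStream.WriteAsync(new Reply());
        }
    }
}
```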
Test 1 (simulate jobs) Client Used
The first test simulates a job that starts periodically.
Run the client to call the plain method:

> dotnet GrpcDebug.ClientTest.dll --Runs=5 --RepeatsInRun=1
Run the client to call the method with buffering enabled:

> dotnet GrpcDebug.ClientTest.dll --Runs=5 --RepeatsInRun=1 --Type=buf
Comparison
As we can see, every new job run allocates more on the POH. The memory is not released even after 30 minutes, and subsequent runs keep allocating more.
Test 2 Client Used
The same client as in Test 1, but instead of creating a new connection for every run, it repeats the calls over the same connection.
Run the client to call the plain method:

> dotnet GrpcDebug.ClientTest.dll --Runs=1 --RepeatsInRun=5

Run the client to call the method with buffering enabled:

> dotnet GrpcDebug.ClientTest.dll --Runs=1 --RepeatsInRun=5 --Type=buf

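The connection reuse in Test 2 presumably amounts to creating one channel up front and issuing every repeat over it. A minimal sketch, where the address, client type, and call names are assumptions based on the test description, not the author's actual client code:

```csharp
// One GrpcChannel (one HTTP/2 connection) shared across all repeats,
// instead of a new channel per run as in Test 1.
using var channel = GrpcChannel.ForAddress("https://localhost:5001");
var client = new Streamer.StreamerClient(channel); // hypothetical client type

for (var repeat = 0; repeat < 5; repeat++)
{
    using var call = client.SimpleStreamingWithBuffer(new Request());
    await foreach (var reply in call.ResponseStream.ReadAllAsync())
    {
        // Consume the stream; per-connection buffers are reused across repeats.
    }
}
```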
Comparison
As we can see, reusing the existing connection partially mitigates the problem, but the buffered variant still allocates a lot.
Why it does is, to be honest, unclear to me.
The problem I see here is that badly behaved clients can break my server, and I have no way to protect against it.
If you know the reason for these excessive allocations, or if there is some configuration I'm missing, you are more than welcome to share.