Node.js process rss keeps rising when heapUsed/heapTotal does not (native memory leak)? #1942
I ran the exact same code with the same load with Node v6.16.0, and rss is always stable and the application is always stable.
@raj-esh - this is not well aligned with the earlier statement "I see the memory builds up in around 24 hours, and then gets cleared up instantly." A sharp decline like that is usually a manifestation of GC, so would you collect a GC log?
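A minimal sketch of how GC activity could be captured, assuming a GC trace is what was meant; the `--trace-gc` command-line flag is the usual route, and the in-process alternative below uses perf_hooks:

```js
// In-process GC observation via perf_hooks (Node 8.5+), as an alternative to
// launching with `node --trace-gc app.js` and capturing stdout.
const { PerformanceObserver } = require('perf_hooks');

const obs = new PerformanceObserver((list) => {
  for (const entry of list.getEntries()) {
    // entry.kind identifies the GC type (scavenge, mark-sweep, ...); duration is in milliseconds.
    console.log(`[gc] kind=${entry.kind} duration=${entry.duration.toFixed(2)}ms`);
  }
});
obs.observe({ entryTypes: ['gc'], buffered: true });
```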
ping @raj-esh
Sorry, I missed replying back. The graph and the rss figures are from different times, hence the discrepancies. My intention was to show that RSS increases without any load on the server. We tried the same on Node v8 and I do not see any issues there.
@raj-esh - thanks. Could you please collect a couple of heap dumps, one when you observe the buildup and one when there is none?
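A minimal sketch of taking such heap dumps on demand, assuming v8.writeHeapSnapshot() (available from Node 11.13; on Node 10 the third-party heapdump package provides an equivalent writeSnapshot()):

```js
// Write a heap snapshot whenever the process receives SIGUSR2, so one dump can be
// taken during the buildup and another afterwards.
const v8 = require('v8');

process.on('SIGUSR2', () => {
  const file = v8.writeHeapSnapshot(); // filename is generated automatically
  console.log(`heap snapshot written to ${file}`);
});
```

A dump can then be triggered with `kill -USR2 <pid>` and the two snapshots compared in Chrome DevTools.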
Actually I am having the same issue as well. Node version v10.16.0. Will follow up on this thread.
I think I am seeing similar behavior with v10.16.3. Are there any guidelines for inspecting non-heap memory usage, or any recommended tooling for debugging native allocations? I will try with the mentioned v6 LTS (Boron) and see how it behaves.
Did you notice this behaviour on your local machine? Cloud? Preproduction? Production?
On a stress-tested stage environment. Currently I have two leads: there are some usages of Buffer slicing I need to review, and the other is to stress the same codebase built with the Boron (v6) Node version. I am doing both today. But I am still missing some hints on how to inspect non-heap usage; I would be very glad if you could share some ideas or tooling for that.
FYI, after some digging into my app's behavior I have figured out the reason. Using node-mtrace I identified that my app makes an excessive number of small _Znmw allocations whose cumulative memory usage stays steady over time, while rss keeps growing. Following the idea in nodejs/node#21973 I tried jemalloc, and for my case it worked out perfectly: performance remained nearly the same while rss stayed steady over time.
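For context, jemalloc is typically swapped in at launch time via LD_PRELOAD rather than in application code. A small sketch, assuming a Linux host and a typical Debian/Ubuntu library path (the exact path is an assumption), that also verifies the preload took effect:

```js
// Started as, e.g.:  LD_PRELOAD=/usr/lib/x86_64-linux-gnu/libjemalloc.so.2 node app.js
// This check reads the process memory map (Linux only) to confirm jemalloc is actually loaded.
const fs = require('fs');

const maps = fs.readFileSync('/proc/self/maps', 'utf8');
console.log(maps.includes('jemalloc') ? 'jemalloc is loaded' : 'default glibc malloc in use');
```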
inactive, closing
We have a service deployed in Cloud Foundry with not much real-time traffic. The only requests hitting the service are health checks, 1-2 requests per second, but we still see the memory usage of the service grow over time, and it crashes with OOM every 3 days.
I was testing and monitoring this for over a week and could not pin down the actual issue. Can this be considered a native memory leak?
vcap@64280246-76bc-4c8b-66fa-b6d8:~$ ps -aux
USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND
vcap 153 0.0 0.0 9276 904 ? S Jun01 0:00 sh -c node app.js
vcap 154 0.1 1.5 1691384 510256 ? Sl Jun01 1:29 node app.js
Heap stats:
{
"processMemoryUsage": {
"rss": "496.01 MB",
"heapTotal": "63.61 MB",
"heapUsed": "51.52 MB",
"external": "17.82 MB"
},
"heapStatistics": {
"total_heap_size": "63.61 MB",
"total_heap_size_executable": "4 MB",
"total_physical_size": "62.38 MB",
"total_available_size": "368.87 MB",
"used_heap_size": "51.52 MB",
"heap_size_limit": "423.02 MB",
"malloced_memory": "0.01 MB",
"peak_malloced_memory": "4.2 MB"
}
}
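The figures above look like the output of process.memoryUsage() and v8.getHeapStatistics(). A minimal sketch of logging a subset of them periodically, so the rss-vs-heapUsed divergence can be tracked over time (the 60-second interval and MB formatting are assumptions):

```js
// Periodically log process and V8 heap statistics; a growing rss with a flat
// heapUsed/heapTotal points at native (non-V8) memory growth.
const v8 = require('v8');

const toMB = (bytes) => `${(bytes / 1024 / 1024).toFixed(2)} MB`;

setInterval(() => {
  const mem = process.memoryUsage();
  const heap = v8.getHeapStatistics();
  console.log(JSON.stringify({
    processMemoryUsage: {
      rss: toMB(mem.rss),
      heapTotal: toMB(mem.heapTotal),
      heapUsed: toMB(mem.heapUsed),
      external: toMB(mem.external),
    },
    heapStatistics: {
      total_heap_size: toMB(heap.total_heap_size),
      used_heap_size: toMB(heap.used_heap_size),
      heap_size_limit: toMB(heap.heap_size_limit),
    },
  }, null, 2));
}, 60 * 1000);
```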