
Big RAM usage #2267

Closed
ferdi2005 opened this issue Sep 3, 2019 · 15 comments
Labels
support Questions or unspecific problems

Comments

@ferdi2005

I use Hatchbox to deploy my apps to a 2GB server. Webpacker has very high memory usage, so the deploy stops because of a lack of RAM.
There are only a few JS files, and with Sprockets they compile successfully without high RAM usage.

@jakeNiemiec
Member

jakeNiemiec commented Sep 3, 2019

It looks like you set everything up properly in that repo. Can you post the deploy log to a gist (https://gist.github.com)?

Edit: If you set this line to false, it will save you some RAM: https://github.com/ferdi2005/concorsi-locali/blob/c82bf52eed056df0d1ad9f7ae567d1afa57b5f1b/config/webpacker.yml#L92

Edit: It looks like that limit might be a sales tactic for that platform.
[screenshot: the platform's prompt to upgrade to the next tier of RAM]

@jakeNiemiec jakeNiemiec added the support Questions or unspecific problems label Sep 3, 2019
@ferdi2005
Author

I don't think @excid3 would do something like that, especially since this is a 2GB server and he doesn't sell servers, only the deployment service!
I'll try setting that line to false, redeploy, and let you know.

https://gist.github.com/ferdi2005/556e7d9c6b33641ae392bcd5e747929c

@excid3
Contributor

excid3 commented Sep 3, 2019

Not a sales tactic, just what I've seen with many people's apps. You've got Rails, Passenger/Puma, Postgres, and Redis using RAM, then a new deploy compiling assets on top of all that, and you can easily get your Webpacker compile killed by the OS. Plus, these VPSes don't have swap for the RAM to bleed into temporarily, since they're SSD-backed and that would cause heavy wear on the disk.

This is likely because JavaScript doesn't get run through Babel in the asset pipeline. I don't think it's necessarily something that can be fixed in Webpacker or in Hatchbox; it's just the nature of how the two JavaScript pipelines differ.

@ferdi2005
Author

Thanks @excid3, but I had no doubt that this is a Webpacker problem, not a Hatchbox one.

@jakeNiemiec
Member

jakeNiemiec commented Sep 3, 2019

@excid3 Not sales tactic
"*Consider upgrading* your server to the next tier of RAM to *solve this problem*."

Up-selling like this in the VPS biz is perfectly legit, but crashing the process instead of letting garbage collection do its thing makes for a bad customer experience.

...compiling assets all at the same time and you can easily get your webpacker compile killed by the OS. Plus, these VPSes don't have swap for the RAM to bleed into temporarily since they're SSD backed and that'd cause heavy wear on the disk.

Surely you can set up the JS heap size limits on the VPS to respect the amount of RAM that the user has available from their plan? As memory consumption approaches the limit, it should spend more time on garbage collection in an effort to free unused memory. For all I know you might already do that, but, from the log, you can see that it cannot get past the yarn install stage. I suspect that this is a problem for other users.

Edit: @ferdi2005 I am unsure of how you deploy with Hatchbox, but you can add the argument --max-old-space-size=1536 when compiling. It should keep you under the 2GB limit with spare space for other things. More on this here: #1189 (comment)
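
As a rough sketch, a manual precompile with that flag could look like the following (assuming a standard Rails setup; the exact command Hatchbox runs may differ):

# Cap V8's old-generation heap at ~1.5 GB so webpack fits on a 2 GB server
NODE_OPTIONS="--max-old-space-size=1536" RAILS_ENV=production bundle exec rails assets:precompile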

@excid3
Contributor

excid3 commented Sep 3, 2019

I've not configured anything to crash the process; Ubuntu is killing the process. I also haven't tuned anything. It's simply installing Node and Yarn and running assets:precompile on deploy, without any tweaks.
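
Side note for anyone debugging the same thing: if the kernel's OOM killer ended the compile, a generic check like the following should show it in the kernel log (not specific to Hatchbox):

# Look for OOM-killer entries after a failed deploy
dmesg -T | grep -iE 'out of memory|killed process'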

I'll do some testing with the --max-old-space-size flag and see if that helps. 👍

@jakeNiemiec
Member

Quick note: that flag is a Node.js flag that gets passed to V8. If you want to apply the flag globally, I have read that you can do something like: export NODE_OPTIONS=--max-old-space-size=1536. Best of luck!
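
If your deploy user's shell picks up ~/.bashrc (that depends on how the deploy environment loads variables), a persistent version could be as simple as:

# Make the limit stick for future shells/deploys of this user
echo 'export NODE_OPTIONS=--max-old-space-size=1536' >> ~/.bashrc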

I've not configured anything to crash the process, Ubuntu is killing the process.

Thanks for clarifying. To be clear, this was not meant to be hostile toward you, but rather toward working with VPSes in general. Any service that reduces that burden is a welcome one 😅.

For anyone else who happens upon this, these types of problems are endemic to a) any system that has hard RAM limits and b) single-threaded engines that require garbage collection.

@inopinatus

I recently switched to using Node 12 (which is almost LTS) in production and memory usage halved during asset compilation.

@rromanchuk

@inopinatus just did the same and so far I'm seeing the same improvement, though only anecdotally at the moment.

I'm just using NodeSource's package:

# curl -sL https://deb.nodesource.com/setup_12.x | sudo -E bash -
# sudo apt-get install -y nodejs

Also, in case it might help others... I have a single staging machine that I'm really pushing the limits on, especially in terms of memory pressure. For reference: t3a.medium / Ubuntu 18.04.3 LTS (GNU/Linux 4.15.0-1056-aws x86_64). During Rails deploys/asset compilation I was getting memory pressure events that would trigger "cannot allocate memory" exceptions, crashing Puma workers and Sidekiq, dropping requests, and leaving unresponsive Docker containers, hung SSH sessions, and failed Ansible playbooks.

I'm not sure how I missed this, but I realized I didn't have any swap configured. I added 2G of swap to my EBS volume and now it's a world of difference. Even when compilation pushes my memory utilization over the edge, the system doesn't become entirely crippled. This is just a staging environment, so I'm much happier with a bunch of paging/swapping vs. hung SSH/failed deploys/etc.
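
For anyone wanting to do the same, a minimal sketch of adding a 2 GB swap file on Ubuntu (the size and the /swapfile path are just examples):

# Create and enable a 2 GB swap file
sudo fallocate -l 2G /swapfile
sudo chmod 600 /swapfile
sudo mkswap /swapfile
sudo swapon /swapfile
# Keep it enabled across reboots
echo '/swapfile none swap sw 0 0' | sudo tee -a /etc/fstab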

@medicharlachiranjeevi

I recently switched to using Node 12 (which is almost LTS) in production and memory usage halved during asset compilation.

Hi, I have updated to Node 12 but it still eats up to 5GB of RAM while deploying to production. Please help me out.

@dfabreguette

Hi! I'm also using Node v12, on the latest version of OSX.
I have the same issues, and things are getting really complicated as we have to restart the server every hour to reclaim memory.
I thought the problem could be on our side, but we have the exact same problem on brand-new apps...
How can we debug such a thing?

@catsofmath

We upgraded to Node 12.13.1 and it's really eating up RAM too! If anyone knows what to do, please let us know!

@dfabreguette

dfabreguette commented Dec 14, 2020

For the record, I fixed my problem with this: #2803 @stomachfat

@masato-hi
Contributor

If your server has many CPU cores, this may help:
#2143

@guillaumebriday
Member

We can't really do anything, I guess. It's more of a webpack issue.
