
[Hint] compile fails due to lack of memory in production. #2143

Closed
masato-hi opened this issue Jun 21, 2019 · 12 comments

Comments

@masato-hi
Contributor

Webpacker's production build runs minification in parallel.

See below for more information on parallelization.
https://github.com/webpack-contrib/terser-webpack-plugin#parallel

Parallelization may cause memory shortages because each process consumes memory.

The following can be used as a temporary workaround: change config/webpack/production.js as shown below.

process.env.NODE_ENV = process.env.NODE_ENV || 'production'

const environment = require('./environment')
const TerserPlugin = require('terser-webpack-plugin')

environment.config.optimization.minimizer.forEach(function(minimizer) {
  // Disable parallel minification so only one Terser process runs at a time
  if (minimizer instanceof TerserPlugin) {
    minimizer.options.parallel = false
  }
})

module.exports = environment.toWebpackConfig()
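
A variation on the same idea, not from the original comment: terser-webpack-plugin's parallel option also accepts a number, so you can cap the worker count instead of disabling parallelism entirely. A sketch, assuming the same config/webpack/production.js:

process.env.NODE_ENV = process.env.NODE_ENV || 'production'

const environment = require('./environment')
const TerserPlugin = require('terser-webpack-plugin')

environment.config.optimization.minimizer.forEach(function(minimizer) {
  if (minimizer instanceof TerserPlugin) {
    // Cap Terser at two worker processes instead of one per CPU core
    minimizer.options.parallel = 2
  }
})

module.exports = environment.toWebpackConfig()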

I will fix this issue with this pull request.
#2093

@BillyParadise

BillyParadise commented Jun 24, 2019

I fear this is not the golden solution. I'm still getting silent compile errors after updating config/webpack/production.js:

   00:08 deploy:assets:precompile
      01 ~/.rvm/bin/rvm default do bundle exec rake assets:precompile
      01 yarn install v1.16.0
      01 [1/4] Resolving packages...
      01 [2/4] Fetching packages...
      01 info [email protected]: The platform "linux" is incompatible with this module.
      01 info "[email protected]" is an optional dependency and failed compatibility check. Excluding it from installation.
      01 [3/4] Linking dependencies...
      01 [4/4] Building fresh packages...
      01 Done in 24.62s.
      01 Compiling…
      01 Compilation failed:
      01
   #<Thread:0x00007fb288288010@/Users/x/.rvm/gems/ruby-2.6.3/gems/sshkit-1.18.2/lib/sshkit/runners/parallel.rb:10 run> terminated with exception (report_on_exception is true):
Traceback (most recent call last):
	13: from /Users/x/.rvm/gems/ruby-2.6.3/gems/sshkit-1.18.2/lib/sshkit/runners/parallel.rb:12:in `block (2 levels) in execute'
	12: from /Users/x/.rvm/gems/ruby-2.6.3/gems/sshkit-1.18.2/lib/sshkit/backends/abstract.rb:29:in `run'
	11: from /Users/x/.rvm/gems/ruby-2.6.3/gems/sshkit-1.18.2/lib/sshkit/backends/abstract.rb:29:in `instance_exec'
	10: from /Users/x/.rvm/gems/ruby-2.6.3/gems/capistrano-rails-1.4.0/lib/capistrano/tasks/assets.rake:67:in `block (4 levels) in <top (required)>'
	 9: from /Users/x/.rvm/gems/ruby-2.6.3/gems/sshkit-1.18.2/lib/sshkit/backends/abstract.rb:89:in `within'
	 8: from /Users/x/.rvm/gems/ruby-2.6.3/gems/capistrano-rails-1.4.0/lib/capistrano/tasks/assets.rake:68:in `block (5 levels) in <top (required)>'
	 7: from /Users/x/.rvm/gems/ruby-2.6.3/gems/sshkit-1.18.2/lib/sshkit/backends/abstract.rb:97:in `with'
	 6: from /Users/x/.rvm/gems/ruby-2.6.3/gems/capistrano-rails-1.4.0/lib/capistrano/tasks/assets.rake:69:in `block (6 levels) in <top (required)>'
	 5: from /Users/x/.rvm/gems/ruby-2.6.3/gems/sshkit-1.18.2/lib/sshkit/backends/abstract.rb:78:in `execute'
	 4: from /Users/x/.rvm/gems/ruby-2.6.3/gems/sshkit-1.18.2/lib/sshkit/backends/abstract.rb:145:in `create_command_and_execute'
	 3: from /Users/x/.rvm/gems/ruby-2.6.3/gems/sshkit-1.18.2/lib/sshkit/backends/abstract.rb:145:in `tap'
	 2: from /Users/x/.rvm/gems/ruby-2.6.3/gems/sshkit-1.18.2/lib/sshkit/backends/abstract.rb:145:in `block in create_command_and_execute'
	 1: from /Users/x/.rvm/gems/ruby-2.6.3/gems/sshkit-1.18.2/lib/sshkit/backends/netssh.rb:169:in `execute_command'

I feel like I did when I was first starting Rails development years ago - totally lost with this Webpacker stuff. It feels half-baked. It would be nice to be able to deploy something.

@jakeNiemiec
Member

jakeNiemiec commented Jun 24, 2019

@BillyParadise In order to help debug, could you please post as much as you can of:

  • ./package.json
  • ./babel.config.js
  • ./config/webpack/environment.js
  • ./config/webpacker.yml
  • ./app/javascript/packs/application.js
  • the full error message (even the parts you wouldn't think are relevant)

all posted to a https://gist.github.com. Please tag me via @jakeNiemiec in a comment and I'll try to troubleshoot any problems directly in the gist.

The issue here is about memory limitations on virtualized servers.

@BillyParadise

Jake, I feel stupid... I found another issue that describes how to turn on logging, and it turned out to be just a missing Font Awesome file. HOWEVER, the lack of logging totally exacerbated the issue. Webpacker is alien territory, likely not only for me. We need our hands held during the transition.

I was about to delete my comment but you got here first. PLEASE make logging more verbose! It's easy enough for me to look stupid without help from others :)

@salimane

@BillyParadise how did you turn on the logging that helped you solve the problem?

@bb

bb commented Oct 2, 2019

@salimane I guess he was referring to #955 (comment):

I had to set webpack_compile_output: true in webpacker.yml for rails assets:precompile to output anything on failure.
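
For reference, that flag lives in config/webpacker.yml. A minimal sketch of where it goes; the surrounding keys are just an illustration of a typical default block:

default: &default
  source_path: app/javascript
  source_entry_path: packs
  # Print webpack's output during rails assets:precompile so failures aren't silent
  webpack_compile_output: true

production:
  <<: *default
  compile: false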

@rromanchuk

I've been using Webpacker for about a year now. I'm a horrible end user in that I know very little about the internals of the compilation process, so I always assume I misconfigured something somewhere. Over the past year I have been burned multiple times by memory pressure events. There isn't anything unique about the environments: pretty vanilla web servers on a t3a.medium.

One such occurrence was a deploy during peak time at 70% memory utilization: because of a cache miss, the precompile step of the deploy caused a cascading failure. The load balancer removed hosts, services were terminated, Puma threads were dropped, which then led to spot fleet instances being spun up and linear deploys/provisioning being blocked on asset compilation.

I'm seeing some solutions, but it would be great if there were opinionated defaults that would prevent rails assets:precompile from releasing bulls into our china shops. It should be reasonable to assume production deploys are going to happen on machines that are already busy running other important services.

Just now, my quiet staging server imploded because of memory pressure events, and my solution was to recover total memory footprint by reducing my Puma worker count to give the bull more space to do its thang. I'm moving my chinaware to the back of the shop, but it would be cool if, instead, no bulls were released in my shop at all.

I don't think this is unique to this specific project; I feel like people have been trying to tame this for years.
https://www.google.com/search?q=rails+precompile+assets+memory

Just a hot take from someone who should not be giving one.

@jakeNiemiec
Member

jakeNiemiec commented Nov 20, 2019

@rromanchuk Judging from your mis-post at 1:45 to your post now at 2:21, I am confident that posting some specific details would have been a more prudent use of that time (hot bull/chinaware analogy notwithstanding).

There are libraries & errors known for hanging processes. There are known solutions for high memory usage on virtual private servers.

(If you are not seeing any errors, see if setting this helps: #2316)

@medicharlachiranjeevi
Copy link

How can I use --max_old_space_size while running bundle exec rake assets:precompile in production?

duleorlovic added a commit to trkin/movebase that referenced this issue Jan 9, 2020
@duleorlovic

You can use an environment variable as in #2033 (comment).
In my case, with Capistrano, rbenv, and 1 GB of memory (about half of it already used):

# /home/deploy/myapp/.rbenv-vars
NODE_OPTIONS=--max-old-space-size=460
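
If you are not using rbenv-vars, the same option can also be passed inline for a one-off precompile; a sketch, with the heap limit (in MB) adjusted to whatever your server can spare:

NODE_OPTIONS=--max-old-space-size=460 RAILS_ENV=production bundle exec rake assets:precompile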

@jakeNiemiec
Member

@medicharlachiranjeevi Also, you need dashes instead of underscores. Here is a good discussion on the topic with examples: #1189 (comment)

@bishosilwal

I had the same issue and adding swap memory solved it. See the DigitalOcean guide on how to add swap memory.
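
For anyone following along, a minimal sketch of what such a guide boils down to on a typical Ubuntu server; the 2G size is just an example, and you should check the guide for your own distribution:

# create and enable a 2 GB swap file
sudo fallocate -l 2G /swapfile
sudo chmod 600 /swapfile
sudo mkswap /swapfile
sudo swapon /swapfile

# make the swap file persist across reboots
echo '/swapfile none swap sw 0 0' | sudo tee -a /etc/fstab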

@guillaumebriday
Member

Can this issue be closed?

Feel free to reopen this issue if needed
