Falcon no longer respects count, new default likely increases memory usage on shared hosts #233

Closed
trevorturk opened this issue Apr 3, 2024 · 8 comments · Fixed by #234

@trevorturk (Contributor)

I think there's an issue with Falcon 0.45.0 where it uses significantly more memory than 0.43.0.

I've isolated a diff of the minimal Gemfile(.lock) changes for my app here:

diff --git a/Gemfile b/Gemfile
index a72feb297..126e77f35 100644
--- a/Gemfile
+++ b/Gemfile
@@ -12,7 +12,7 @@ gem "chartkick"
 gem "coffee-rails"
 gem "connection_pool"
 gem "dogapi"
-gem "falcon", "0.43.0"
+gem "falcon"
 gem "geokit"
 gem "graphql"
 gem "hashie"
diff --git a/Gemfile.lock b/Gemfile.lock
index 7be649138..8ccc9c4b5 100644
--- a/Gemfile.lock
+++ b/Gemfile.lock
@@ -85,7 +85,7 @@ GEM
       fiber-annotation
       io-event (~> 1.5, >= 1.5.1)
       timers (~> 4.1)
-    async-container (0.16.13)
+    async-container (0.18.0)
       async
       async-io
     async-http (0.64.0)
@@ -102,6 +102,9 @@ GEM
       async
     async-pool (0.4.0)
       async (>= 1.25)
+    async-service (0.10.0)
+      async
+      async-container (~> 0.16)
     barnes (0.0.9)
       multi_json (~> 1)
       statsd-ruby (~> 1.1)
@@ -109,7 +112,6 @@ GEM
     bcrypt (3.1.20)
     bigdecimal (3.1.7)
     bindex (0.8.1)
-    build-environment (1.13.0)
     builder (3.2.4)
     bundler-audit (0.9.1)
       bundler (>= 1.2.0, < 3)
@@ -148,19 +150,19 @@ GEM
       railties (>= 5.0.0)
     faker (3.3.1)
       i18n (>= 1.8.11, < 2)
-    falcon (0.43.0)
+    falcon (0.45.0)
       async
-      async-container (~> 0.16.0)
+      async-container (~> 0.18)
       async-http (~> 0.57)
       async-http-cache (~> 0.4.0)
       async-io (~> 1.22)
-      build-environment (~> 1.13)
+      async-service (~> 0.10.0)
       bundler
       localhost (~> 1.1)
       openssl (~> 3.0)
       process-metrics (~> 0.2.0)
-      protocol-rack (~> 0.1)
-      samovar (~> 2.1)
+      protocol-rack (~> 0.5)
+      samovar (~> 2.3)
     ffi (1.16.3)
     fiber-annotation (0.2.0)
     fiber-local (1.0.0)
@@ -444,7 +446,7 @@ DEPENDENCIES
   dogapi
   factory_bot_rails
   faker
-  falcon (= 0.43.0)
+  falcon
   gem_updater
   geokit
   graphql

Screenshot of my memory usage metrics on Heroku (taken 2024-04-03; image not reproduced here).

Please let me know if there's anything I can do to help debug! Thank you!

@evkorotkov

I've noticed the same thing, but it looks like the problem isn't memory consumption per se but the wrong instance count. Falcon 0.44.0+ started ignoring the count option in the configuration and forks as many workers as there are processors (as if the option were empty). As a workaround, you can explicitly override the instance count with the ASYNC_CONTAINER_PROCESSOR_COUNT environment variable (see the example below).
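For instance, on Heroku (a hedged sketch of the workaround; the right value depends on how many workers your host can actually support):

# Set it as a Heroku config var:
heroku config:set ASYNC_CONTAINER_PROCESSOR_COUNT=1

# Or set it inline when starting the server:
ASYNC_CONTAINER_PROCESSOR_COUNT=1 bundle exec falcon host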

cc: @ioquatix

@trevorturk (Contributor, Author) commented Apr 3, 2024

Ah, perhaps count was (inadvertently?) dropped in the move to async-service in #226, but I'm not sure if the plan is to migrate to a new config style. I think we should probably still support count, or raise an error if it's being used when it shouldn't be.

I dug in a bit and see that Etc.nprocessors is used, which reminded me of the discussion around Puma in Rails here: rails/rails#50450 (comment). Having checked, IMO we should consider not using that value and instead defaulting to 1 or something similar. For example, on my Heroku Standard-1X dyno, which I believe should only run 1 Falcon process, I'm likely being assigned 1 virtual CPU on a machine that has 8:

$ heroku run rails console
Running rails console (Standard-1X)
irb(main):001> Etc.nprocessors
=> 8

This is somewhat of a separate issue, but if we need to move over to the new configuration style, we'll need to figure out what changes are required coming from the old style. For example, my config for Heroku is currently:

#!/usr/bin/env -S falcon host

load :rack

hostname = File.basename(__dir__)
port = ENV["PORT"] || 3000

rack hostname do
  append preload "preload.rb"
  cache false
  count ENV.fetch("FALCON_COUNT", 1).to_i
  endpoint Async::HTTP::Endpoint.parse("http://0.0.0.0:#{port}").with(protocol: Async::HTTP::Protocol::HTTP11)
end

...and I'd want to make sure to disable caching etc.

@trevorturk changed the title from "Memory usage increase from 0.43.0 -> 0.45.0" to "Falcon no longer respects count, default may cause memory issues on shared hosts" Apr 3, 2024
@trevorturk changed the title to "Falcon no longer respects count, default may spike memory use on shared hosts" Apr 3, 2024
@trevorturk changed the title to "Falcon no longer respects count, new default likely increases memory usage on shared hosts" Apr 3, 2024
@ioquatix (Member) commented Apr 3, 2024

This looks like a bug; I will investigate it today.

@ioquatix (Member) commented Apr 3, 2024

The count was not used when creating the container:

def setup(container)
  container_options = @evaluator.container_options

  container.run(name: self.name, **container_options) do |instance|

The container only picks up container_options[:count], and count was never inserted into those options, so it fell back to the processor count. We can add count to the default container options, which should fix this.
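A minimal sketch of that idea (not the exact patch from #234; it assumes the evaluator exposes the configured count key):

def setup(container)
  container_options = @evaluator.container_options

  # Hypothetical: merge the configured count into the options passed to
  # Async::Container#run, so an explicit `count ...` in falcon.rb takes
  # precedence over the Etc.nprocessors default.
  count = @evaluator.count
  container_options[:count] ||= count if count

  container.run(name: self.name, **container_options) do |instance|
    # ... start the server as before ...
  end
end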

ioquatix added a commit that referenced this issue Apr 3, 2024
* Insert `count` into default `container_options`. Fixes #233.
@ioquatix added this to the v0.45.1 milestone Apr 3, 2024
@ioquatix (Member) commented Apr 3, 2024

This should be fixed in v0.45.1 - please let me know if you have any issues.

@trevorturk (Contributor, Author)

Thank you! ☺️

@trevorturk (Contributor, Author)

Confirmed fixed for me. Thank you for the quick fix!

@evkorotkov

Confirmed fixed for me too. Many thanks!
