Producing not working + crash for forked processes #19
@thijsc I'm trying to produce from a forked process, but apparently nothing is written to the socket (i.e. nothing is produced).

```ruby
require "rdkafka"

puts Rdkafka::LIBRDKAFKA_VERSION
puts Rdkafka::VERSION

config = {:"bootstrap.servers" => "kafka-a.vm.skroutz.gr:9092"}
producer = Rdkafka::Config.new(config).producer

fork do
  puts "Producing message"
  producer.produce(topic: "test-rb", payload: "foo").wait
  puts "this is never printed"
end

puts Process.wait
```

The script blocks at the call to `wait`. The forked process writes nothing to the socket (verified with tcpdump), the consumer never sees any messages, and the forked process seems to be stuck in a loop.

Is my script supposed to work, or am I missing something? Thanks!

P.S. I'm on Debian Linux with Ruby 2.6.3p62.
This indeed does not work; rdkafka does not survive a fork. It will work if you move the producer creation into the fork block. I'm not sure yet how to make this clear and user-friendly — any thoughts on that?
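The reason it cannot work: librdkafka runs its own background threads, and `fork(2)` copies only the calling thread into the child, so every internal librdkafka thread is simply gone there. A pure-Ruby sketch of the same effect (no Kafka involved):

```ruby
# fork(2) copies only the calling thread: any thread created before the
# fork (like librdkafka's internal polling/IO threads) is dead in the child.
worker = Thread.new { sleep } # stands in for a pre-fork internal thread

pid = fork do
  puts "pre-fork thread alive in child? #{worker.alive?}"   # => false

  # A resource created *inside* the fork gets its own live threads.
  child_worker = Thread.new { sleep }
  puts "child-created thread alive? #{child_worker.alive?}" # => true
  child_worker.kill
end
Process.wait(pid)

puts "pre-fork thread alive in parent? #{worker.alive?}"    # => true
worker.kill
```

This is why moving `Rdkafka::Config.new(config).producer` inside the fork block works: the producer's threads then belong to the child process.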
So there is no way to reuse a producer object across multiple forks? 9882ce4 gave me the impression that it should work (isn't that spec essentially doing the same thing?). If it doesn't, then that part of the README shouldn't have been removed, I guess(?)

Creating the producer after the fork defeats the purpose of what I'm trying to achieve: create a producer in the parent and reuse it across the children (this would be an ideal use case for Resque, for example).
After reading @edenhill's comment in https://github.com/edenhill/librdkafka/blob/master/tests/0079-fork.c#L37-L42, it's apparent that this use case is not possible in librdkafka. Perhaps we should bring back the relevant README section stating that the client must be created after forking?
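One way to enforce "create the client after forking" automatically is to memoize the client per PID, so a forked child transparently builds its own instance instead of inheriting the parent's dead one. This is only a sketch, not anything rdkafka-ruby ships; `build_client` is a hypothetical factory standing in for `Rdkafka::Config#producer`:

```ruby
# Memoizes a client per process: if Process.pid has changed since the
# last use (i.e. we are in a forked child), a fresh client is built.
class ForkSafeClient
  def initialize(&build_client)
    @build_client = build_client # hypothetical factory, e.g. config.producer
    @pid = nil
    @client = nil
  end

  def client
    if @pid != Process.pid       # first use, or we forked since last use
      @client = @build_client.call
      @pid = Process.pid
    end
    @client
  end
end

holder = ForkSafeClient.new { "client-for-#{Process.pid}" }
parent_client = holder.client

fork do
  # The child's PID differs, so a fresh client is built automatically.
  puts holder.client == parent_client # => false
end
Process.wait
```

The usual caveat applies: each child pays the cost of creating its own producer, which is exactly what librdkafka requires.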
That's impossible, I'm afraid. When you fork, you create a separate Unix process, and that process does not share state with the original one. Even if the producer survived the fork, it would still be a copy in a separate process, not a reused instance. I actually wrote a blog post about this a few years ago that might be good reading :-). You'd have to set something up with Unix sockets, for example, to communicate between the processes and reuse a single producer.
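A minimal sketch of that setup, using a plain pipe instead of Unix sockets: forked children hand payloads to the parent, and only the parent touches the single producer. The `produce` lambda below is a stand-in for a real `producer.produce` call, so nothing here actually talks to Kafka.

```ruby
# Children cannot share the parent's rdkafka producer, but they can send
# payloads to the parent over a pipe; the parent then does all the producing.
reader, writer = IO.pipe

# Stand-in for `producer.produce(topic: "test-rb", payload: payload)`.
produce = ->(payload) { puts "produced: #{payload}" }

pids = 2.times.map do |i|
  fork do
    reader.close                          # child only writes
    writer.puts "message from child #{i}"
    writer.close
  end
end

writer.close                              # parent only reads
pids.each { |pid| Process.wait(pid) }
reader.each_line { |line| produce.call(line.chomp) }
```

In a real Resque-like deployment you'd likely use a Unix socket or a broker-backed queue instead of an anonymous pipe, and drain it from a background thread in the parent rather than after all children exit.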
A C rdkafka instance does not survive a fork. Producing and polling do not work, and `rd_kafka_destroy` has a failing assertion that crashes the process when it is called via https://github.com/appsignal/rdkafka-ruby/blob/master/lib/rdkafka/config.rb#L149

A fix that stops `rd_kafka_destroy` from crashing in this scenario was added in librdkafka: confluentinc/librdkafka@8c67e42. We should move to that release when it's out at the end of February. For the failing produces and polls I see a few options: