Error in onPartitionsAssigned in parallel consumer #326
Hi there! :) Welcome to the project…
Thank you so much @astubbs. Yes, the consumer group ID was used earlier with Kafka Streams. But now I am getting the error below, and after this error my application gets closed.
Are you using KEY ordering and have records with NULL keys? If so: #318 is also fixed in: Which is at the top of the merge queue (I just got back from PTO and some other stuff, so there's a bit of a domino queue of new stuff to get merged :)
Progress?
Yes @astubbs, the issue was resolved after switching to UNORDERED ordering.
Ok, good to hear! Do you know if you're using null keys? Just want to confirm it is indeed the same issue.
@astubbs yes, you were right. The key was null in my case.
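For reference, the workaround discussed above (switching from KEY to UNORDERED ordering) is configured via `ParallelConsumerOptions`. A minimal sketch against the 0.5.x API — the topic name, concurrency value, and `process` handler are illustrative, not from this thread:

```java
import io.confluent.parallelconsumer.ParallelConsumerOptions;
import io.confluent.parallelconsumer.ParallelConsumerOptions.ProcessingOrder;
import io.confluent.parallelconsumer.ParallelStreamProcessor;
import org.apache.kafka.clients.consumer.Consumer;

import java.util.List;

public class UnorderedPcExample {

    static ParallelStreamProcessor<String, String> build(Consumer<String, String> kafkaConsumer) {
        var options = ParallelConsumerOptions.<String, String>builder()
                // UNORDERED avoids the KEY-ordering issue with null-keyed records (#318)
                .ordering(ProcessingOrder.UNORDERED)
                .maxConcurrency(16)              // illustrative value
                .consumer(kafkaConsumer)
                .build();

        var processor = ParallelStreamProcessor.createEosStreamProcessor(options);
        processor.subscribe(List.of("my-topic")); // illustrative topic
        // Each record is handed to the handler on the processor's thread pool
        processor.poll(context -> process(context.getSingleConsumerRecord()));
        return processor;
    }

    static void process(org.apache.kafka.clients.consumer.ConsumerRecord<String, String> record) {
        // hypothetical business logic
        System.out.println("processing offset " + record.offset());
    }
}
```

Note that UNORDERED trades away per-key ordering guarantees for throughput, so it only fits workloads where records with the same key need not be processed sequentially.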
If we enable PC and reuse the existing consumer group, is it possible to avoid the error? Any ideas or suggestions? Thanks!
What's the use case for reusing the group? Is it to try to use the same offsets? Interesting. So we have two situations:
How should we distinguish between the two? Since 2 is probably the exception, how about an option to explicitly "ignore" any existing data in the metadata payload, which is off by default? cc @nachomdo
Thanks for the quick response @astubbs. Yes, we wanted to explore whether we can reuse the same offsets. I'd consider it similar to a completely different Kafka client application using an existing consumer group: even though it's a different application, that doesn't prevent it from reusing the offsets. PC uses the Kafka client under the hood, so it could behave the same way as a native Kafka client. My two cents.
It's absolutely no problem to use the same offsets as far as PC is concerned; I had just assumed it would be a mistake :) We'll probably add an option that you have to turn on for it to be ignored (although it'd only be encountered once).
When you say to add an option for it to be ignored, does it mean that we won't see the error once we turn the option on? Also, will PC resume consuming messages from the existing offsets of the consumer group? Thanks!
Yes, just a warning, and it'll only show the first time; going forward it won't show after PC installs its own metadata.
Yup!
That would be great! Thanks
Hi Team,
I want to use the parallel consumer in one of our Spring services to process a Kafka stream.
I am using parallel-consumer-core 0.5.1.0 but getting the exception below. We are on a secured Kafka cluster.
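For context, wiring PC against a secured cluster mostly comes down to the standard Kafka client security properties on the underlying consumer. A minimal sketch assuming SASL_SSL with the PLAIN mechanism — broker address, group ID, and credentials are placeholders, and your cluster may use a different mechanism (e.g. SCRAM or mTLS):

```java
import org.apache.kafka.clients.CommonClientConfigs;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.config.SaslConfigs;
import org.apache.kafka.common.serialization.StringDeserializer;

import java.util.Properties;

public class SecuredConsumerFactory {

    static KafkaConsumer<String, String> create() {
        Properties props = new Properties();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "broker:9093");   // placeholder
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "my-service-group");       // placeholder
        // PC manages offset commits itself; auto-commit must stay disabled
        props.put(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG, "false");
        // Security settings for a SASL_SSL cluster (mechanism/credentials are assumptions)
        props.put(CommonClientConfigs.SECURITY_PROTOCOL_CONFIG, "SASL_SSL");
        props.put(SaslConfigs.SASL_MECHANISM, "PLAIN");
        props.put(SaslConfigs.SASL_JAAS_CONFIG,
                "org.apache.kafka.common.security.plain.PlainLoginModule required "
                        + "username=\"user\" password=\"secret\";");          // placeholders
        return new KafkaConsumer<>(props, new StringDeserializer(), new StringDeserializer());
    }
}
```

The resulting consumer is then passed into `ParallelConsumerOptions.builder().consumer(...)` rather than polled directly.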