Steps to reproduce:
Started ZooKeeper and a single broker on my MacBook.
Created three topics, each with one partition and a replication factor of one. The topic names differ only in the last character.
Using a producer program, pumped 333 messages into each topic with almost no delay.
Started a Java program that uses the parallel consumer to subscribe to and consume the messages from the three topics using KEY ordering (see the sketch after these steps).
Noticed that not all of the messages show up in the poll lambda, and the Kafka topic details show the expected CURRENT-OFFSET for only one of the topics.
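For reference, a minimal sketch of the kind of reproducer described above. This is illustrative only: the topic names, bootstrap address, key distribution, and group id are assumptions, and it targets the 0.4-style parallel-consumer API where the poll lambda receives a ConsumerRecord directly (newer releases pass a PollContext instead).

```java
import io.confluent.parallelconsumer.ParallelConsumerOptions;
import io.confluent.parallelconsumer.ParallelConsumerOptions.ProcessingOrder;
import io.confluent.parallelconsumer.ParallelStreamProcessor;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringDeserializer;
import org.apache.kafka.common.serialization.StringSerializer;

import java.util.List;
import java.util.Properties;

public class KeyOrderingRepro {

    // Hypothetical names; the real topics differ only in the last character.
    static final List<String> TOPICS =
            List.of("myTopicprefix-topic-1", "myTopicprefix-topic-2", "myTopicprefix-topic-3");

    public static void main(String[] args) {
        // Assumes the three topics already exist, as described in the steps above.
        // 1. Pump 333 messages into each topic with almost no delay.
        Properties pp = new Properties();
        pp.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        pp.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        pp.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        try (KafkaProducer<String, String> producer = new KafkaProducer<>(pp)) {
            for (String topic : TOPICS) {
                for (int i = 0; i < 333; i++) {
                    // Reuse a small set of keys so KEY ordering has work to serialize per key.
                    producer.send(new ProducerRecord<>(topic, "key-" + (i % 10), "value-" + i));
                }
            }
            producer.flush();
        }

        // 2. Subscribe to all three topics with the parallel consumer using KEY ordering.
        Properties cp = new Properties();
        cp.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        cp.put(ConsumerConfig.GROUP_ID_CONFIG, "key-ordering-repro");
        cp.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        cp.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        cp.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest");
        cp.put(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG, "false"); // parallel consumer manages commits

        var options = ParallelConsumerOptions.<String, String>builder()
                .ordering(ProcessingOrder.KEY)
                .consumer(new KafkaConsumer<>(cp))
                .build();

        ParallelStreamProcessor<String, String> processor =
                ParallelStreamProcessor.createEosStreamProcessor(options);
        processor.subscribe(TOPICS);

        // 3. In the failing scenario, some records never reach this lambda and their
        //    offsets are never committed for two of the three topics.
        processor.poll(rec ->
                System.out.printf("Consumed %s / %s / %d%n", rec.topic(), rec.key(), rec.offset()));
    }
}
```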
If one restarts the Java program, then additional (but not all) messages are consumed. After restarting a couple more times, all the messages have been consumed and the topic details show the correct CURRENT-OFFSET for each topic.
If one deletes/purges the topics and pumps 333 messages into the first topic, then 333 more messages into each of the first and second topics, then 333 into each of the three topics, everything works fine.
I enabled trace logging for a subsequent run and see:
Returning 3 fetched records at offset FetchPosition{offset=232, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[192.168.2.58:9092 (id: 0 rack: null)], epoch=0}} for assigned partition myTopicprefix-topic-1-0",
But later in the logs I see only one:
{"timestamp":"2022-01-05T16:45:43.143Z","message":"asyncPoll - Consumed a record (232)
And
{"timestamp":"2022-01-05T16:45:47.999Z","message":"asyncPoll - user function finished ok.
If the above is performed with just one topic, there is no problem. If it is done with two fresh topics, the issue also occurs.
If UNORDERED is used instead of KEY then everything works fine.
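To make the last point concrete, the only difference between the failing and the working configuration is the ordering option, using the same ParallelConsumerOptions builder as in the sketch above (newConsumer() is a hypothetical helper that builds the underlying KafkaConsumer):

```java
// KEY ordering across the freshly created topics: some records never reach the poll lambda.
var failing = ParallelConsumerOptions.<String, String>builder()
        .ordering(ParallelConsumerOptions.ProcessingOrder.KEY)
        .consumer(newConsumer()) // hypothetical helper returning a configured KafkaConsumer
        .build();

// UNORDERED processing of the same topics: all messages are consumed and offsets committed.
var working = ParallelConsumerOptions.<String, String>builder()
        .ordering(ParallelConsumerOptions.ProcessingOrder.UNORDERED)
        .consumer(newConsumer())
        .build();
```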
When you say "the Kafka topic details show an expected CURRENT-OFFSET for only one of the topics", do you mean that only one of the topics has an offset committed?
astubbs added a commit to astubbs/parallel-consumer that referenced this issue on May 21, 2022 (commit message quoted below).
OK, thanks. This is fixed in the linked PR. Will be merging this week.
astubbs changed the title from "Load test with with two or more new topics results in missed messages" to "Sybscriving to two or more new topics results in missed messages" on May 24, 2022.
astubbs changed the title from "Sybscriving to two or more new topics results in missed messages" to "Subscribing to two or more topics with KEY ordering, results in messages of the same Key never being processed" on May 24, 2022.
…ic to shard key (#315)
- Try to reproduce issue #184 - run against multiple topics
- Introduce ShardKey
- Test for null keys in KEY ordering - records get queued into a null key queue - see #314
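The commit above mentions introducing a ShardKey. I have not inspected the actual implementation, but the symptom is consistent with work being sharded by record key alone, so that records carrying the same key in different topics collapse into a single shard and some of them are never handed to the user function. A purely hypothetical sketch of what keying the shard by topic as well might look like:

```java
import org.apache.kafka.clients.consumer.ConsumerRecord;

// Hypothetical illustration only, not the library's real ShardKey class.
// With key-only sharding, "key-1" in topic A and "key-1" in topic B land in the same shard;
// including the topic keeps per-key work queues distinct per topic.
record ShardKey(String topic, Object key) {
    static ShardKey of(ConsumerRecord<?, ?> rec) {
        return new ShardKey(rec.topic(), rec.key());
    }
}
```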