
Maxretry more flexible #194

Closed

Conversation

nevesenin

Hi,

we are currently evaluating whether we can switch from our own Bunny wrapper gem to Sneakers. To make Sneakers fit our existing infrastructure, the MaxRetry handler needs some more configuration options. I'd like to discuss the idea.

Merten

Merten Falk added 3 commits December 15, 2015 16:15
- To make the retry handler compatible with existing RabbitMQ setups, there are a few more config options for the retry and error queues/exchanges
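
For a sense of what that means in practice, here is a rough sketch of how such options might be passed to a worker. Only retry_timeout appears in the diff discussed below; the other option names (retry_exchange, retry_routing_key) and the queue name are illustrative guesses, not the exact names used in this pull request.

    # Hypothetical worker configuration. Only :retry_timeout is visible in the
    # diff below; the other option names are illustrative placeholders.
    require 'sneakers'
    require 'sneakers/handlers/maxretry'

    class OrderWorker
      include Sneakers::Worker
      from_queue 'orders',
                 :handler           => Sneakers::Handlers::Maxretry,
                 :retry_timeout     => 60_000,         # ms a message waits in the retry queue
                 :retry_exchange    => 'orders-retry', # assumed option: retry exchange name
                 :retry_routing_key => 'orders'        # assumed option: routing key on requeue

      def work(msg)
        ack!
      end
    end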
@jondot
Owner

jondot commented Dec 17, 2015

Hi Merten,
Thanks for taking the time to try Sneakers out! The handlers were designed in a modular way so that you can use whatever handlers you'd like with Sneakers.

Generally there are 3 ways people can contribute handlers:

  1. Commit to the core handlers (in this case, let's just get a thumbs up from one more person using MaxRetry)
  2. Add handlers as you see fit to core Sneakers (even if they are variants of existing ones, so you could call this one, for example, AdvancedMaxRetry)
  3. If the handlers you use contain sensitive logic / IP, you can maintain your own repo like a sneakers-handlers, or put them into your worker repo in a folder containing any handlers you'd like (see the sketch below)

So what I'm trying to say is: any form of handler contribution is legitimate, so feel free to keep them coming :)
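
As a rough illustration of option 3, a handler kept outside core is just a class implementing the callbacks Sneakers invokes. The callback names below mirror the built-in Oneshot/Maxretry handlers; the class name and method bodies are placeholders, not code from this pull request.

    # Sketch of a stand-alone handler living in your own repo or worker folder.
    module MyHandlers
      class LoggingHandler
        def initialize(channel, queue, opts)
          @channel = channel
          @queue   = queue
          @opts    = opts
        end

        def acknowledge(hdr, props, msg)
          @channel.acknowledge(hdr.delivery_tag, false)
        end

        def reject(hdr, props, msg, requeue = false)
          @channel.reject(hdr.delivery_tag, requeue)
        end

        def error(hdr, props, msg, err)
          Sneakers.logger.warn("work failed: #{err.inspect}")
          reject(hdr, props, msg)
        end

        def timeout(hdr, props, msg)
          reject(hdr, props, msg)
        end

        def noop(hdr, props, msg); end
      end
    end

    # Wired up per worker with the :handler option:
    #   from_queue 'my-queue', :handler => MyHandlers::LoggingHandler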

Regarding the current pull request: @rud @justin-yesware, what do you think?

@rud
Contributor

rud commented Dec 17, 2015

Thank you for the ping @jondot. A few comments:

  • The current MaxRetry is indeed quite rigid, and these changes do look like they would fix some of that rigidity. I'd say some integration test scenarios would be in order, as this is tricky territory to modify, and I totally understand they're not the first order of business. The current code here looks like it pretty much defaults to the current behaviour, is that correctly understood?
  • I'm all for modernizing or cleaning up code style, but I'd personally prefer it as a separate pull request from the semantic changes. I'd suggest using rubocop for consistency; it's quite neat for this sort of cleanup work.

Hope this is helpful.

@nevesenin
Author

Thanks for the feedback. Yes, the current behavior doesn't change with the defaults. I agree not to include the style changes, and I'll add an integration test scenario. I also like rubocop and would like to open another pull request for the style changes.

    :'x-message-ttl' => @opts[:retry_timeout] || 60000,
    :'x-dead-letter-routing-key' => requeue_routing_key
  })
@retry_queue.bind(@retry_exchange, :routing_key => '#')
Contributor


This will change the current behavior, I believe. My understanding of this setting is that if it is unset, the routing key from the originally published message will be used in the retry queue. When the TTL is hit and the message goes to the dead-letter exchange (which then forwards it onto the original queue Sneakers is subscribing to), it will again pass along the original message's routing key. By setting the value, you're overwriting the original message's routing key, potentially causing the retried message to be different from when it first went through the queue.

I'm not an expert in this area of Rabbit, as we don't use routing keys, but I suspect this will cause some configurations to not work. I think a test case for this would be to set up your Sneakers worker to use a queue that binds to a topic exchange with a rule to only get messages that have a routing key of "hello". Then publish a message with that routing key, have your worker fail it so it goes through this retry logic, set the value for this requeue routing key to "world", and see if you get it again in your worker. If you do, a further validation would be to verify that the message you're getting has the routing key value of "hello" and not "world". My suspicion is that you'll never see the retry, and if you do, it will have a routing key value of "world".
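
A rough Bunny sketch of that scenario (exchange and queue names are made up; "hello"/"world" are the example keys from the comment above), showing how x-dead-letter-routing-key rewrites the key when the TTL expires:

    require 'bunny'

    conn = Bunny.new.start
    ch   = conn.create_channel

    topic  = ch.topic('work.topic')
    work_q = ch.queue('work')
    work_q.bind(topic, :routing_key => 'hello')   # the worker only wants 'hello'

    # Retry queue: on expiry, messages dead-letter back to the topic exchange,
    # but with the routing key forced to 'world'.
    retry_q = ch.queue('work-retry', :arguments => {
      'x-dead-letter-exchange'    => 'work.topic',
      'x-dead-letter-routing-key' => 'world',
      'x-message-ttl'             => 1_000
    })

    # Drop a message straight onto the retry queue (the default exchange routes
    # by queue name) and wait for the TTL to expire.
    ch.default_exchange.publish('payload', :routing_key => retry_q.name)
    sleep 2

    # The expired message was republished to work.topic keyed 'world', which
    # does not match the 'hello' binding, so the worker queue stays empty.
    puts work_q.message_count   # => 0

    conn.close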

@justinmills
Contributor

I like the additional configuration options this pull request adds, and I like the direction this is headed, but I fear it will break cases that rely on routing-key-based queue binding. An integration test should surface that, so +1 to that.


  # Finally, bind the worker queue to our requeue exchange
- queue.bind(@requeue_exchange, :routing_key => '#')
+ queue.bind(@requeue_exchange, :routing_key => requeue_routing_key)
Author


Here you get the messages with the "world" routing key, since the worker queue is now bound to the requeue exchange with requeue_routing_key instead of '#'.

@nevesenin
Author

I'll add a test as soon as possible.

@nevesenin nevesenin changed the title WIP: Maxretry more flexible Maxretry more flexible Dec 27, 2015
@nevesenin
Author

Added integration tests.


require 'sneakers/handlers/maxretry'

# This worker ... works
Contributor


I think I would call these workers something more along the lines of how they behave: AlwaysAckWorker, AlwaysRejectWorker, RejectOnceWorker (which is what you have). It may make them more portable across other tests.
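
A rough sketch of what behaviour-named test workers could look like (queue names and options here are placeholders, not the ones used in this pull request's integration tests):

    require 'set'
    require 'sneakers'
    require 'sneakers/handlers/maxretry'

    class AlwaysAckWorker
      include Sneakers::Worker
      from_queue 'integration.always_ack', :handler => Sneakers::Handlers::Maxretry

      def work(msg)
        ack!
      end
    end

    class AlwaysRejectWorker
      include Sneakers::Worker
      from_queue 'integration.always_reject', :handler => Sneakers::Handlers::Maxretry

      def work(msg)
        reject!
      end
    end

    class RejectOnceWorker
      include Sneakers::Worker
      from_queue 'integration.reject_once', :handler => Sneakers::Handlers::Maxretry

      def self.seen
        @seen ||= Set.new
      end

      # Reject the first delivery of a payload, ack every redelivery.
      def work(msg)
        if self.class.seen.add?(msg)
          reject!
        else
          ack!
        end
      end
    end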

Author


Good point. Done.

- Add more integration tests
- Remove comments
@adis-io

adis-io commented Feb 16, 2016

Sorry for the dumb question, but what should I do with messages in the dead queue? I used Sidekiq before, where it's easy to requeue failed jobs.
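
The thread leaves this unanswered; one option is to drain the error queue with Bunny and republish each original payload. A rough sketch, assuming the Sneakers Maxretry convention of an error queue holding a JSON wrapper with a base64-encoded payload; the queue names 'downloads' and 'downloads-error' are made up:

    require 'bunny'
    require 'json'
    require 'base64'

    conn = Bunny.new.start
    ch   = conn.create_channel

    # Adjust names and durability to match your setup.
    error_q = ch.queue('downloads-error', :durable => true)

    loop do
      _delivery_info, _props, body = error_q.pop   # pop auto-acks by default
      break if body.nil?

      wrapper = JSON.parse(body)                   # Maxretry wraps the original message
      payload = Base64.decode64(wrapper['payload'])

      # Republish the original payload to the worker queue via the default exchange.
      ch.default_exchange.publish(payload, :routing_key => 'downloads')
    end

    conn.close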

@nevesenin
Author

I'm closing this pull request in favor of #205.

@nevesenin nevesenin closed this Feb 18, 2016