
Catch the 'bean not found' error. Fixes #172 #180

Merged 1 commit into master from fix_172 on Jan 25, 2016

Conversation

solarkennedy (Contributor)

I investigated why this happens, and I believe it is a race between killing a task in a previous test step and reading it in the next. Here are some correlated logs:

  Scenario: The crossover bounce works
    Given a working paasta cluster
      And a new healthy app to be deployed
      And an old app to be destroyed

     When there are 2 old healthy tasks
      And deploy_service with bounce strategy "crossover" and drain method "noop" is initiated
crossover bounce creating new app with app_id bounce.test1.newapp.confighash
     Then the new app should be running
      And the old app should be running


     When there are 1 new healthy tasks
      And deploy_service with bounce strategy "crossover" and drain method "noop" is initiated
crossover bounce draining 1 old tasks with app_id /bounce.test1.oldapp.confighash
crossover bounce killing drained task bounce.test1.oldapp.confighash.a77520b5-bf9d-11e5-8713-0242ac11001c
     Then the old app should be running
      And the old app should be configured to have 1 instances

     When there are 2 new healthy tasks
      And deploy_service with bounce strategy "crossover" and drain method "noop" is initiated
crossover bounce draining 2 old tasks with app_id /bounce.test1.oldapp.confighash
crossover bounce killing drained task bounce.test1.oldapp.confighash.a77547c6-bf9d-11e5-8713-0242ac11001c
crossover bounce killing drained task bounce.test1.oldapp.confighash.a77520b5-bf9d-11e5-8713-0242ac11001c
      And we wait a bit for the old app to disappear
     Then the old app should be gone

So yea, killing a task that is already dead.

I thought we might fight this by adding a sleep, but I don't want to add a sleep :(
So I think swallowing this error is the next best thing, and letting the bounce system itself work around it eventually (as it already has been, just spewing errors along the way).
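
(For illustration only: a minimal sketch of what swallowing that error in a wrapper around the Marathon client's kill_task could look like. The MarathonHttpError exception type and the 'does not exist' message check are assumptions here, not necessarily what this PR actually ships.)

```python
from marathon.exceptions import MarathonHttpError


def kill_task(client, app_id, task_id, scale):
    """Wrapper around client.kill_task that tolerates already-dead tasks (sketch)."""
    try:
        return client.kill_task(app_id=app_id, task_id=task_id, scale=scale)
    except MarathonHttpError as e:
        # Assumed behaviour: Marathon replies with a 'does not exist' / 'bean not
        # found' style error when the task was already reaped by an earlier bounce.
        if 'does not exist' in str(e):
            return None
        raise
```

The point is just that a task which is already gone gets treated as a no-op instead of an error, which is all the bounce loop needs.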

@solarkennedy (Contributor, Author)

Haha. Different flake this time. Maybe I'll tackle it next:

Then it should show up in marathon_services_running_here    # steps/marathon_steps.py:43 0.007s
      Traceback (most recent call last):
        File "/tmp/local/lib/python2.7/site-packages/behave/model.py", line 1173, in run
          match.run(runner.context)
        File "/tmp/local/lib/python2.7/site-packages/behave/model.py", line 1589, in run
          self.func(context, *args, **kwargs)
        File "/work/paasta_itests/steps/marathon_steps.py", line 46, in marathon_services_running_here_works
          (discovered,) = paasta_tools.marathon_tools.marathon_services_running_here()
      ValueError: need more than 0 values to unpack

      Captured logging:
      INFO:__main__:Connecting to Marathon server at: http://172.17.0.28:8080
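
(Aside: that ValueError is single-element tuple unpacking applied to an empty result; a minimal reproduction under Python 2.7:)

```python
>>> (discovered,) = []   # marathon_services_running_here() returned no tasks
Traceback (most recent call last):
  ...
ValueError: need more than 0 values to unpack
```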



def kill_task(client, app_id, task_id, scale):
    """ Wrapper to the official kill_task method that is idempotent """
Member


Let's not say this is idempotent, since I don't think it actually is.

@EvanKrall (Member)

Other than my comment suggestions and the minor test nit, looks good. Ship plz, I look forward to greener itests.

@solarkennedy solarkennedy merged commit 0ae7cf6 into master Jan 25, 2016
@solarkennedy solarkennedy deleted the fix_172 branch January 25, 2016 22:32