
fix(core) disable balancer upstream passive health check #3305

Closed
wants to merge 1 commit

Conversation

@mengskysama (Contributor) commented Mar 16, 2018

Full changelog

The upstream passive health check can be disabled, as described in the documentation: https://getkong.org/docs/0.12.x/health-checks-circuit-breakers/
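
To illustrate the intent, here is a minimal sketch (not this PR's actual diff; the helper name and guard are assumptions based on the linked docs, which state that setting a check's counters to zero disables it). The core could skip passive reporting when every passive counter in the upstream's healthchecks config is zero:

    -- Hypothetical helper: passive health checks count as disabled when
    -- every passive counter in the upstream's healthchecks config is zero.
    local function passive_checks_disabled(healthchecks)
      local passive = healthchecks and healthchecks.passive
      if not passive then
        return true
      end
      return passive.healthy.successes == 0
         and passive.unhealthy.tcp_failures == 0
         and passive.unhealthy.timeouts == 0
         and passive.unhealthy.http_failures == 0
    end

    -- With this upstream config, no passive reports should be made:
    local healthchecks = {
      passive = {
        healthy   = { successes = 0 },
        unhealthy = { tcp_failures = 0, timeouts = 0, http_failures = 0 },
      },
    }
    print(passive_checks_disabled(healthchecks))  --> true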

Issues resolved

Fix #3304

@hishamhm
Copy link
Contributor

hishamhm commented Mar 19, 2018

Hi @mengskysama! Thank you for reaching out. I'm still trying to understand the issue, because I couldn't reproduce it locally, neither by hand nor by writing a regression test (add this to the "#healthchecks" section of spec/02-integration/05-proxy/09-balancer_spec.lua):

      it("does not perform health checks when disabled", function()

        local timeout = 10
        -- server responds, then fails, then responds again
        local server = http_server(timeout, localhost, PORT, { 30, 30, 30 })

        -- disable all healthchecks
        local api_client = helpers.admin_client()
        assert(api_client:send {
          method = "PATCH",
          path = "/upstreams/" .. upstream.name,
          headers = {
            ["Content-Type"] = "application/json",
          },
          body = {
            healthchecks = healthchecks_config {
              active = {
                healthy = {
                  interval = 0,
                  successes = 0,
                },
                unhealthy = {
                  interval = 0,
                  tcp_failures = 0,
                  timeouts = 0,
                  http_failures = 0,
                },
              },
              passive = {
                healthy = {
                  successes = 0,
                },
                unhealthy = {
                  tcp_failures = 0,
                  timeouts = 0,
                  http_failures = 0,
                },
              },
            }
          },
        })
        api_client:close()

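        -- phase 1: the server is up, so all 30 requests should succeed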
        local oks, fails, last_status = client_requests(30)
        assert.same(30, oks)
        assert.same(0, fails)
        assert.same(200, last_status)

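        -- phase 2: the server is failing; with all health checks disabled,
        -- the target is never marked unhealthy, so every request still
        -- reaches it and fails with 500 (no circuit breaking)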
        oks, fails, last_status = client_requests(30)
        assert.same(0, oks)
        assert.same(30, fails)
        assert.same(500, last_status)

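        -- phase 3: the server recovers; traffic keeps flowing to the same
        -- target without any health checker having to re-enable it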
        oks, fails, last_status = client_requests(30)
        assert.same(30, oks)
        assert.same(0, fails)
        assert.same(200, last_status)

        -- collect server results
        local _, server_oks, server_fails = server:join()
        assert.same(60, server_oks)
        assert.same(30, server_fails)

      end)

This works as expected on 0.12.3 and master: with all health checks disabled, the failing target keeps receiving traffic (the middle 30 requests fail with 500) and recovers on its own. So I'm afraid you're hitting something else, and the behavior you are observing is a side effect.

Could you try to write a regression test similar to the above, with the sequence of steps necessary to trigger the issue?

@hishamhm added the "pending author feedback" label (waiting for the issue author to get back to a maintainer with findings, more details, etc.) Mar 19, 2018
@mengskysama (Contributor, Author) commented
@hishamhm Thanks for your reply. I took some time to confirm the problem and found that I had misunderstood part of the code logic. The problem may occur with cluster events; I will post reproduction code in the issue.

Linked issue: #3304 (bug: Passive healthcheck can't disable)