
Flaky test: "replication_set_should_not_change_if_ring_did_not_change,_when_num_of_instance_per_zone_is_inconsistent" #6513

Open
CharlieTLe opened this issue Jan 16, 2025 · 3 comments · May be fixed by #6535

Comments

@CharlieTLe
Member

Noticed a build failure: https://github.com/cortexproject/cortex/actions/runs/12798522991/job/35682746078?pr=6511

--- FAIL: TestRing_Get_Consistency (488.91s)
    --- FAIL: TestRing_Get_Consistency/replication_set_should_not_change_if_ring_did_not_change,_when_num_of_instance_per_zone_is_inconsistent (61.47s)
        ring_test.go:735: 
            	Error Trace:	/__w/cortex/cortex/pkg/ring/ring_test.go:735
            	Error:      	"1" is not less than or equal to "0"
            	Test:       	TestRing_Get_Consistency/replication_set_should_not_change_if_ring_did_not_change,_when_num_of_instance_per_zone_is_inconsistent
FAIL
FAIL	github.com/cortexproject/cortex/pkg/ring	704.951s
@dsabsay
Contributor

dsabsay commented Jan 18, 2025

After several experiments, I can reliably (8 out of 9 times) get this test to fail by running 4,000,000 iterations of it. I did this a number of ways. The fastest way is launching runtime.NumCPU() goroutines each running the test 4000000 / num_cores times. This takes about 5 minutes on my laptop (utilizing all cores to near maximum).

In one failure I looked at, the "new" replication set contains 2 instances that were NOT in the initial set, and is missing two instances that WERE in the initial set. I haven't looked at enough results to know if that's the normal way it fails.

SPECULATION: After skimming the code under test, I don't see any use of concurrency, which rules out race conditions as a possible cause (unless I'm missing something in the ring code). Other possible causes of flakiness that I'm left pondering:

  1. An off-by-one (or similar) error in ring.Get that is triggered only when just the right key and token boundaries line up. The only things of (seeming) significance that change between test iterations are the tokens used as the lookup key AND the tokens for each ring instance (all randomly generated).

QUESTION: The name of this test is "replication set should not change if ring did not change, when num of instance per zone is inconsistent". I understand the first part, and I understand the role of the ring and the intent of consistent hashing, but I don't understand the "when num of instance per zone is inconsistent" part. Can anyone take a guess at what that means?

@dsabsay
Contributor

dsabsay commented Jan 22, 2025

I observed what happens in ring.Get when this test fails. For the same token, the ringInstanceByToken map returns a different instance.

I believe the culprit is this line in ring.getTokensInfo():

for instanceID, instance := range d.Ingesters {

Map iteration order in Go is deliberately randomized, so if there are overlapping tokens (i.e. a given token is owned by multiple instances), then which instance gets mapped to an overlapping token is non-deterministic. In this test, getTokensInfo() is called between the first and second call to ring.Get.
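The failure mode can be reproduced in isolation with a minimal sketch. The names below (`buildTokenOwners`, `ingesters`) are illustrative, not the actual ring types: when two instances claim the same token, the recorded owner is whichever instance the map iteration happened to visit last.

```go
package main

import "fmt"

// buildTokenOwners mimics the shape of the token-to-instance mapping.
// Go randomizes map iteration order, so for overlapping tokens the
// "last writer wins" and the result varies between runs.
func buildTokenOwners(ingesters map[string][]uint32) map[uint32]string {
	owners := map[uint32]string{}
	for instanceID, tokens := range ingesters { // iteration order is randomized
		for _, tok := range tokens {
			owners[tok] = instanceID // last writer wins for overlapping tokens
		}
	}
	return owners
}

func main() {
	ingesters := map[string][]uint32{
		"ingester-1": {100, 200},
		"ingester-2": {200, 300}, // token 200 overlaps with ingester-1
	}
	// Across runs, the owner of token 200 may be either instance.
	fmt.Println(buildTokenOwners(ingesters)[200])
}
```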

I propose to fix this by iterating over a sorted slice of keys instead of the map directly. I will submit a patch for us to discuss.
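A minimal sketch of that proposal, again with illustrative names rather than the actual cortex patch: iterating instance IDs in sorted order makes the winner for an overlapping token deterministic.

```go
package main

import (
	"fmt"
	"sort"
)

// buildTokenOwnersSorted resolves token ownership by iterating instance
// IDs in sorted order, so overlapping tokens always resolve to the same
// instance (the one latest in sort order wins).
func buildTokenOwnersSorted(ingesters map[string][]uint32) map[uint32]string {
	ids := make([]string, 0, len(ingesters))
	for id := range ingesters {
		ids = append(ids, id)
	}
	sort.Strings(ids)

	owners := map[uint32]string{}
	for _, id := range ids {
		for _, tok := range ingesters[id] {
			owners[tok] = id // deterministic: later IDs in sort order win
		}
	}
	return owners
}

func main() {
	ingesters := map[string][]uint32{
		"ingester-1": {100, 200},
		"ingester-2": {200, 300}, // token 200 overlaps
	}
	fmt.Println(buildTokenOwnersSorted(ingesters)[200]) // always "ingester-2"
}
```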

dsabsay pushed a commit to dsabsay/cortex that referenced this issue Jan 22, 2025
@dsabsay dsabsay linked a pull request Jan 22, 2025 that will close this issue
dsabsay pushed a commit to dsabsay/cortex that referenced this issue Jan 23, 2025
@CharlieTLe
Member Author

Hi @dsabsay, thanks for looking into this!

I spent a couple of hours trying to figure out why this test would fail, but I'm not seeing how ordering the list of ingesters when building the token map in ring.getTokensInfo() would affect the output.

The test case is described below.

"replication set should not change if ring did not change, when num of instance per zone is inconsistent": {
	initialInstances:  11,
	addedInstances:    0,
	removedInstances:  0,
	numZones:          3,
	replicationFactor: 8,
	numDiff:           0,
},

The test seems to be failing because there's an inconsistency between the two Get operations. They should return the same set of ingesters, even when the number of instances per zone is inconsistent. The reason the number of instances per zone is inconsistent is that there are 11 ingesters and 3 zones in the test. 11 / 3 is not a whole number (3.67), which means that one zone will have 3 ingesters while the other two zones will have 4 ingesters each (3 + 4 + 4 = 11). If there were 12 ingesters and 3 zones in the test, then there would be a consistent number of ingesters per zone (4 + 4 + 4).

We know that both sets contain 8 instances, since the test asserts this:

set, err := ring.Get(testValues[i], Write, bufDescs, bufHosts, bufZones)
assert.NoError(t, err)
assert.Equal(t, testData.replicationFactor, len(set.Instances))
newSet, err := ring.Get(testValues[i], Write, bufDescs, bufHosts, bufZones)
assert.NoError(t, err)
assert.Equal(t, testData.replicationFactor, len(newSet.Instances))

In between the two Get calls, some operations appear to have mutated the ring, causing the second call to produce a different set of ingesters:

ring.ringTokens = ringDesc.GetTokens()
ring.ringTokensByZone = ringDesc.getTokensByZone()
ring.ringInstanceByToken = ringDesc.getTokensInfo()
ring.ringZones = getZones(ringDesc.getTokensByZone())

Our documentation indicates that this test case is proposing an unsupported setup.

Minimum number of zones

... It is safe to have more zones than the replication factor, but it cannot be less. Having fewer availability zones than the replication factor causes a replica write to be missed, and in some cases, the write fails if the availability zones count is too low.

We should remove the test cases where the RF is greater than the number of zones. Perhaps we could also add a validation at ring creation time: when zone awareness is enabled, the RF should be <= the number of zones.
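A minimal sketch of that validation, assuming a hypothetical function name and signature (not the actual cortex API):

```go
package main

import (
	"errors"
	"fmt"
)

// validateZoneAwareness rejects configurations where, with zone
// awareness enabled, the replication factor exceeds the number of
// availability zones (the setup our docs describe as unsupported).
func validateZoneAwareness(zoneAwarenessEnabled bool, replicationFactor, numZones int) error {
	if zoneAwarenessEnabled && replicationFactor > numZones {
		return errors.New("replication factor must be <= number of zones when zone awareness is enabled")
	}
	return nil
}

func main() {
	// The flaky test's setup: RF=8 across 3 zones would be rejected.
	fmt.Println(validateZoneAwareness(true, 8, 3))
	// RF equal to the zone count is fine.
	fmt.Println(validateZoneAwareness(true, 3, 3))
}
```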

@alanprot @justinjung04, could you help take a look at this?
