Some thoughts after running and inspecting TiDB #134
Writing down some thoughts after running and inspecting half of the TiDB results, so that everyone can see them.

One possible solution is to randomize the test cases so that the complexity and failure rate of the cases assigned to the two threads are similar. I will try to implement this before starting the next TiDB run, but I am not sure how much it will help. Any further suggestions are welcome.
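The randomization idea could be sketched as follows (a minimal illustration, not Acto's actual API: the function name, the plain-list representation of test cases, and the round-robin split are all assumptions made for this sketch):

```python
import random

def partition_test_cases(test_cases, num_workers=2, seed=42):
    """Shuffle test cases before splitting them across workers, so that
    expensive or failure-prone cases are spread roughly evenly instead of
    clustering on one thread."""
    shuffled = list(test_cases)
    random.Random(seed).shuffle(shuffled)  # fixed seed keeps runs reproducible
    # Round-robin assignment: worker i gets items i, i + num_workers, ...
    return [shuffled[i::num_workers] for i in range(num_workers)]

# Example: six cases split across two workers
groups = partition_test_cases(["t1", "t2", "t3", "t4", "t5", "t6"])
```

A fixed seed is used here so that a "randomized" run is still reproducible when debugging a specific alarm.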
`replicas` is explicitly required in the TiDB CRD. However, `storage` is also required, even though it is not declared in the CRD. This causes more than 100 alarms in Acto, because Acto changes only `replicas`, which results in no change in the system state. TiDB also throws no error in the log indicating that `storage` is required. I have done some experiments running Acto with a seed that explicitly sets both `storage` and `replicas`, and under that setting the change in `replicas` is reflected in the system state. I remember @tylergu said before that implicit requirements are a grey zone and probably worth discussing with prof @tianyin before any conclusion is made.

P.S. It seems that one of TiDB's developers has replied to the PR I submitted previously. However, I am not sure whether his suggestion is correct, since it would only raise an error if `MaxFailoverCount > 0`. But even when `MaxFailoverCount = 0`, we probably want to raise an error as long as `pdDeletedFailureReplicas > 0`. What do you think? @tylergu