The threshold that decides between schema apply diff and apply full is hard-coded at 100. This value can be too tight when users run DDL operations at high frequency. The problem with a small threshold is that once apply full is triggered on a cluster that happens to have a huge number of tables, the apply process can take a long time (in a real-world case, 1 to 2 minutes for 60k tables). And since the schema apply process blocks reads (they cannot fetch the schema version) during the whole applying period, those 1 to 2 minutes can result in many read failures.
We may be able to enhance this in one of the following ways:
Make the global schema version an atomic variable so that reading it does not require the schema lock, and schema apply no longer blocks readers for long. The correctness of this change would need careful verification.
Make the threshold relative to the current schema version (for example, 10% of it), so that on a cluster with a very large number of tables the cheaper apply diff path is taken more eagerly.
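The two ideas above can be sketched together in Go. This is a minimal illustration, not the actual TiDB code: the names (`schemaVersion`, `applyThreshold`, the 10% ratio and the 100 floor) are assumptions chosen to mirror the proposal, and a real change would need the careful correctness verification mentioned above.

```go
package main

import (
	"fmt"
	"sync/atomic"
)

// Idea 1: keep the global schema version in an atomic variable so that
// readers can load it without taking the schema lock, even while a
// diff/full apply is in progress. (Hypothetical name.)
var schemaVersion atomic.Int64

// currentSchemaVersion is a lock-free read of the version.
func currentSchemaVersion() int64 {
	return schemaVersion.Load()
}

// bumpSchemaVersion publishes a new version after an apply completes.
func bumpSchemaVersion(v int64) {
	schemaVersion.Store(v)
}

// Idea 2: derive the diff-vs-full threshold from the current version
// instead of the hard-coded 100, e.g. 10% of the version with the
// existing 100 as a floor, so large clusters stay on the cheaper
// apply-diff path for a wider version gap. (Hypothetical policy.)
func applyThreshold(currentVersion int64) int64 {
	t := currentVersion / 10 // 10% of the current schema version
	if t < 100 {
		t = 100 // never go below the existing hard-coded value
	}
	return t
}

func main() {
	bumpSchemaVersion(60000)
	fmt.Println(currentSchemaVersion(), applyThreshold(currentSchemaVersion()))
}
```

With a schema version of 60000, the sketch yields a threshold of 6000 instead of 100, so a lagging node would fall back to apply full only after a much larger version gap.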