[store/tikv] support batch coprocessor for TiFlash #16030
Conversation
Codecov Report
```
@@            Coverage Diff            @@
##             master    #16030   +/-  ##
===========================================
  Coverage   80.5758%   80.5758%
===========================================
  Files           506        506
  Lines        137077     137077
===========================================
  Hits         110451     110451
  Misses        18133      18133
  Partials       8493       8493
===========================================
```
```go
func (c *CopClient) sendBatch(ctx context.Context, req *kv.Request, vars *kv.Variables) kv.Response {
	if req.KeepOrder || req.Desc {
		return copErrorResponse{errors.New("batch coprocessor cannot prove keep order or desc property")}
```
TiFlash can keep order for a handle scan, but once we enable batch cop requests there is no guarantee that the handle column stays ordered. So we should stop the planner from satisfying order properties through this path.
It's not easy to consider allow_batch in the planner. To be safe and convenient, I will check KeepOrder in executor/builder.go; if it is true, batch_cop is forced to false. This part of the code remains as protection.
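The fallback described above can be sketched as follows. This is a minimal, hypothetical mirror of the check placed in executor/builder.go, not the actual TiDB API: a batch coprocessor request cannot guarantee handle order, so whenever the plan requires KeepOrder (or a descending scan), batch cop is forced off and the regular per-region path is used.

```go
package main

import "fmt"

// copRequest is an illustrative stand-in for the real request struct.
type copRequest struct {
	KeepOrder bool
	Desc      bool
	BatchCop  bool
}

// normalizeBatchCop forces BatchCop off when order must be preserved,
// mirroring the protective check described in the comment above.
func normalizeBatchCop(req *copRequest) {
	if req.BatchCop && (req.KeepOrder || req.Desc) {
		req.BatchCop = false
	}
}

func main() {
	req := &copRequest{KeepOrder: true, BatchCop: true}
	normalizeBatchCop(req)
	fmt.Println(req.BatchCop) // prints false: batch cop disabled for ordered scans
}
```

With this guard in the executor builder, the `sendBatch` check shown earlier only ever fires as a second line of defense.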
OK, that's also another solution.
LGTM, please resolve the conflicts and fix ci.
Rest LGTM.
LGTM
LGTM
/run-all-tests
cherry pick to release-4.0 in PR #16226
What problem does this PR solve?
Send a batch coprocessor request that contains multiple regions to the TiFlash engine.
Problem Summary:
What is changed and how it works?
What's Changed:
Previously, TiDB sent requests to TiFlash one region per request. But most queries sent to TiFlash involve a massive number of regions, which brings huge overhead, and TiFlash cannot decide its runtime concurrency by itself.
How it Works:
Multiple regions are packed into a single coprocessor.BatchRequest and sent to TiFlash together.
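The batching idea can be sketched as follows. The types and function below are hypothetical stand-ins, not the real structures in store/tikv: instead of one coprocessor request per region, per-region tasks are merged so that each destination receives a single batch covering all of its regions.

```go
package main

import "fmt"

// regionTask is an illustrative per-region unit of work.
type regionTask struct {
	RegionID  uint64
	StoreAddr string
}

// batchTask groups every region served by one store into a single request.
type batchTask struct {
	StoreAddr string
	RegionIDs []uint64
}

// buildBatchTasks merges per-region tasks into one task per store,
// preserving first-seen store order for determinism.
func buildBatchTasks(tasks []regionTask) []batchTask {
	byStore := make(map[string]*batchTask)
	order := []string{}
	for _, t := range tasks {
		bt, ok := byStore[t.StoreAddr]
		if !ok {
			bt = &batchTask{StoreAddr: t.StoreAddr}
			byStore[t.StoreAddr] = bt
			order = append(order, t.StoreAddr)
		}
		bt.RegionIDs = append(bt.RegionIDs, t.RegionID)
	}
	out := make([]batchTask, 0, len(order))
	for _, addr := range order {
		out = append(out, *byStore[addr])
	}
	return out
}

func main() {
	tasks := []regionTask{
		{1, "tiflash-1"}, {2, "tiflash-2"}, {3, "tiflash-1"},
	}
	batches := buildBatchTasks(tasks)
	fmt.Println(len(batches)) // prints 2: two batch requests instead of three per-region requests
}
```

Collapsing many small requests into one per destination is what cuts the per-request overhead and lets TiFlash see the full set of regions it must serve, so it can choose its own concurrency.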
.Related changes
We updated kvproto: pingcap/kvproto#586
Side effects
We use a system variable, tidb_allow_batch_cop, to switch batch cop mode on and off. It has no effect on requests sent to TiKV. For TiFlash, it only affects queries such as Aggregation and TopN by default. In conclusion, its risk is very low.
**TPCH 50G Benchmark result**
Release note