diff --git a/agg-distinct-optimization.md b/agg-distinct-optimization.md index 3c137ed3869b9..b6b9186771836 100644 --- a/agg-distinct-optimization.md +++ b/agg-distinct-optimization.md @@ -9,7 +9,7 @@ This document introduces the `distinct` optimization in the TiDB query optimizer ## `DISTINCT` modifier in `SELECT` statements -The `DISTINCT` modifier specifies removal of duplicate rows from the result set. `SELECT DISTINCT` is transformed to `GROUP BY`, for example: +The `DISTINCT` modifier specifies removal of duplicate rows from the result set. `SELECT DISTINCT` is transformed to `GROUP BY`, for example: ```sql mysql> explain SELECT DISTINCT a from t; diff --git a/alert-rules.md b/alert-rules.md index 08324bd5990fe..2c3f4c679ddb5 100644 --- a/alert-rules.md +++ b/alert-rules.md @@ -434,7 +434,7 @@ This section gives the alert rules for the TiKV component. 1. Perform `SELECT VARIABLE_VALUE FROM mysql.tidb WHERE VARIABLE_NAME = "tikv_gc_leader_desc"` to locate the `tidb-server` corresponding to the GC leader; 2. View the log of the `tidb-server`, and grep gc_worker tidb.log; - 3. If you find that the GC worker has been resolving locks (the last log is "start resolve locks") or deleting ranges (the last log is “start delete {number} ranges”) during this time, it means the GC process is running normally. Otherwise, contact [support@pingcap.com](mailto:support@pingcap.com) to resolve this issue. + 3. If you find that the GC worker has been resolving locks (the last log is "start resolve locks") or deleting ranges (the last log is "start delete {number} ranges") during this time, it means the GC process is running normally. Otherwise, contact [support@pingcap.com](mailto:support@pingcap.com) to resolve this issue. ### Critical-level alerts @@ -632,7 +632,7 @@ This section gives the alert rules for the TiKV component. * Alert rule: - `histogram_quantile(0.999, sum(rate(tikv_raftstore_raft_process_duration_secs_bucket{type=’tick’}[1m])) by (le, instance, type)) > 2` + `histogram_quantile(0.999, sum(rate(tikv_raftstore_raft_process_duration_secs_bucket{type='tick'}[1m])) by (le, instance, type)) > 2` * Description: @@ -750,7 +750,7 @@ This section gives the alert rules for the TiKV component. * Solution: - The speed of splitting Regions is slower than the write speed. To alleviate this issue, you’d better update TiDB to a version that supports batch-split (>= 2.1.0-rc1). If it is not possible to update temporarily, you can use `pd-ctl operator add split-region --policy=approximate` to manually split Regions. + The speed of splitting Regions is slower than the write speed. To alleviate this issue, you'd better update TiDB to a version that supports batch-split (>= 2.1.0-rc1). If it is not possible to update temporarily, you can use `pd-ctl operator add split-region --policy=approximate` to manually split Regions. ## TiFlash alert rules diff --git a/benchmark/benchmark-tidb-using-sysbench.md b/benchmark/benchmark-tidb-using-sysbench.md index d408702986679..6a4eec2acbaef 100644 --- a/benchmark/benchmark-tidb-using-sysbench.md +++ b/benchmark/benchmark-tidb-using-sysbench.md @@ -26,7 +26,7 @@ There are multiple Column Families on TiKV cluster which are mainly used to stor Default CF : Write CF = 4 : 1 -Configuring the block cache of RocksDB on TiKV should be based on the machine’s memory size, in order to make full use of the memory. 
To deploy a TiKV cluster on a 40GB virtual machine, it is recommended to configure the block cache as follows: +Configuring the block cache of RocksDB on TiKV should be based on the machine's memory size, in order to make full use of the memory. To deploy a TiKV cluster on a 40GB virtual machine, it is recommended to configure the block cache as follows: ```yaml server_configs: diff --git a/benchmark/online-workloads-and-add-index-operations.md b/benchmark/online-workloads-and-add-index-operations.md index a034cdde819c9..38fe6f911112a 100644 --- a/benchmark/online-workloads-and-add-index-operations.md +++ b/benchmark/online-workloads-and-add-index-operations.md @@ -29,7 +29,7 @@ This test runs in a Kubernetes cluster deployed with 3 TiDB instances, 3 TiKV in | TiKV | `4151dc8878985df191b47851d67ca21365396133` | | PD | `811ce0b9a1335d1b2a049fd97ef9e186f1c9efc1` | -Sysbench version:1.0.17 +Sysbench version: 1.0.17 ### TiDB parameter configuration diff --git a/benchmark/v4.0-performance-benchmarking-with-tpcc.md b/benchmark/v4.0-performance-benchmarking-with-tpcc.md index 6938bc06dfc9e..dd5f116313514 100644 --- a/benchmark/v4.0-performance-benchmarking-with-tpcc.md +++ b/benchmark/v4.0-performance-benchmarking-with-tpcc.md @@ -104,7 +104,7 @@ set global tidb_disable_txn_auto_retry=0; 2. Use BenchmarkSQL to import the TPC-C 5000 Warehouse data. - 1. Compile BenchmarkSQL: + 1. Compile BenchmarkSQL: {{< copyable "bash" >}} diff --git a/benchmark/v5.0-performance-benchmarking-with-tpcc.md b/benchmark/v5.0-performance-benchmarking-with-tpcc.md index 62dcdbddc451d..860260a7e7000 100644 --- a/benchmark/v5.0-performance-benchmarking-with-tpcc.md +++ b/benchmark/v5.0-performance-benchmarking-with-tpcc.md @@ -122,7 +122,7 @@ set global tidb_enable_clustered_index = 1; 2. Use BenchmarkSQL to import the TPC-C 5000 Warehouse data. - 1. Compile BenchmarkSQL: + 1. Compile BenchmarkSQL: {{< copyable "bash" >}} diff --git a/best-practices/java-app-best-practices.md b/best-practices/java-app-best-practices.md index ee05c7301c878..9c9427679c504 100644 --- a/best-practices/java-app-best-practices.md +++ b/best-practices/java-app-best-practices.md @@ -83,7 +83,7 @@ This section introduces parameters related to `Prepare`. ##### `useServerPrepStmts` -`useServerPrepStmts` is set to `false` by default, that is, even if you use the Prepare API, the “prepare” operation will be done only on the client. To avoid the parsing overhead of the server, if the same SQL statement uses the Prepare API multiple times, it is recommended to set this configuration to `true`. +`useServerPrepStmts` is set to `false` by default, that is, even if you use the Prepare API, the "prepare" operation will be done only on the client. To avoid the parsing overhead of the server, if the same SQL statement uses the Prepare API multiple times, it is recommended to set this configuration to `true`. To verify that this setting already takes effect, you can do: @@ -128,7 +128,7 @@ To verify that this setting already takes effect, you can do: While processing batch writes, it is recommended to configure `rewriteBatchedStatements=true`. 
After using `addBatch()` or `executeBatch()`, JDBC still sends SQL one by one by default, for example: ```java -pstmt = prepare(“insert into t (a) values(?)”); +pstmt = prepare("insert into t (a) values(?)"); pstmt.setInt(1, 10); pstmt.addBatch(); pstmt.setInt(1, 11); @@ -197,7 +197,7 @@ In addition, because of a [client bug](https://bugs.mysql.com/bug.php?id=96623), Through monitoring, you might notice that although the application only performs `INSERT` operations to the TiDB cluster, there are a lot of redundant `SELECT` statements. Usually this happens because JDBC sends some SQL statements to query the settings, for example, `select @@session.transaction_read_only`. These SQL statements are useless for TiDB, so it is recommended that you configure `useConfigs=maxPerformance` to avoid extra overhead. -`useConfigs=maxPerformance` configuration includes a group of configurations: +`useConfigs=maxPerformance` configuration includes a group of configurations: ```ini cacheServerConfiguration=true diff --git a/best-practices/pd-scheduling-best-practices.md b/best-practices/pd-scheduling-best-practices.md index 35ccab5d65ee3..ddfbf293cec12 100644 --- a/best-practices/pd-scheduling-best-practices.md +++ b/best-practices/pd-scheduling-best-practices.md @@ -138,8 +138,8 @@ You can use store commands of pd-ctl to query balance status of each store. The **Grafana PD/Statistics - hotspot** page shows the metrics about hot regions, among which: -- Hot write region’s leader/peer distribution: the leader/peer distribution in hot write regions -- Hot read region’s leader distribution: the leader distribution in hot read regions +- Hot write region's leader/peer distribution: the leader/peer distribution in hot write regions +- Hot read region's leader distribution: the leader distribution in hot read regions You can also query the status of hot regions using pd-ctl with the following commands: diff --git a/best-practices/three-nodes-hybrid-deployment.md b/best-practices/three-nodes-hybrid-deployment.md index 61d23578500ad..adcab1edc6ff8 100644 --- a/best-practices/three-nodes-hybrid-deployment.md +++ b/best-practices/three-nodes-hybrid-deployment.md @@ -41,7 +41,7 @@ tikv: gc.max-write-bytes-per-sec: 300K rocksdb.max-background-jobs: 3 rocksdb.max-sub-compactions: 1 - rocksdb.rate-bytes-per-sec: “200M” + rocksdb.rate-bytes-per-sec: "200M" tidb: performance.max-procs: 8 diff --git a/br/br-batch-create-table.md b/br/br-batch-create-table.md index b10e066f6185c..9b380379c527a 100644 --- a/br/br-batch-create-table.md +++ b/br/br-batch-create-table.md @@ -59,7 +59,7 @@ This section describes the test information about the Batch Create Table feature The test result is as follows: ``` -‘[2022/03/12 22:37:49.060 +08:00] [INFO] [collector.go:67] ["Full restore success summary"] [total-ranges=751760] [ranges-succeed=751760] [ranges-failed=0] [split-region=1h33m18.078448449s] [restore-ranges=542693] [total-take=1h41m35.471476438s] [restore-data-size(after-compressed)=8.337TB] [Size=8336694965072] [BackupTS=431773933856882690] [total-kv=148015861383] [total-kv-size=16.16TB] [average-speed=2.661GB/s]’ +'[2022/03/12 22:37:49.060 +08:00] [INFO] [collector.go:67] ["Full restore success summary"] [total-ranges=751760] [ranges-succeed=751760] [ranges-failed=0] [split-region=1h33m18.078448449s] [restore-ranges=542693] [total-take=1h41m35.471476438s] [restore-data-size(after-compressed)=8.337TB] [Size=8336694965072] [BackupTS=431773933856882690] [total-kv=148015861383] [total-kv-size=16.16TB] [average-speed=2.661GB/s]' 
``` From the test result, you can see that the average speed of restoring one TiKV instance is as high as 181.65 MB/s (which equals to `average-speed`/`tikv_count`). \ No newline at end of file diff --git a/br/use-br-command-line-tool.md b/br/use-br-command-line-tool.md index c318af38c5cc2..7caee2c16406b 100644 --- a/br/use-br-command-line-tool.md +++ b/br/use-br-command-line-tool.md @@ -482,7 +482,7 @@ br restore full -f 'mysql.usertable' -s $external_storage_url --ratelimit 128 > Although you can back up system tables (such as `mysql.tidb`) using the BR tool, BR ignores the following system tables even if you use the `--filter` setting to perform the restoration: > > - Statistical information tables (`mysql.stat_*`) -> - System variable tables (`mysql.tidb`,`mysql.global_variables`) +> - System variable tables (`mysql.tidb`, `mysql.global_variables`) > - User information tables (such as `mysql.user` and `mysql.columns_priv`) > - [Other system tables](https://github.com/pingcap/tidb/blob/v5.4.0/br/pkg/restore/systable_restore.go#L31) diff --git a/choose-index.md b/choose-index.md index 66eae9d339c24..6d978e3738f89 100644 --- a/choose-index.md +++ b/choose-index.md @@ -74,7 +74,7 @@ mysql> SHOW WARNINGS; Skyline-pruning is a heuristic filtering rule for indexes, which can reduce the probability of wrong index selection caused by wrong estimation. To judge an index, the following three dimensions are needed: -- How many access conditions are covered by the indexed columns. An “access condition” is a where condition that can be converted to a column range. And the more access conditions an indexed column set covers, the better it is in this dimension. +- How many access conditions are covered by the indexed columns. An "access condition" is a where condition that can be converted to a column range. And the more access conditions an indexed column set covers, the better it is in this dimension. - Whether it needs to retrieve rows from a table when you select the index to access the table (that is, the plan generated by the index is IndexReader operator or IndexLookupReader operator). Indexes that do not retrieve rows from a table are better on this dimension than indexes that do. If both indexes need TiDB to retrieve rows from the table, compare how many filtering conditions are covered by the indexed columns. Filtering conditions mean the `where` condition that can be judged based on the index. If the column set of an index covers more access conditions, the smaller the number of retrieved rows from a table, and the better the index is in this dimension. 
diff --git a/clinic/clinic-data-instruction-for-tiup.md b/clinic/clinic-data-instruction-for-tiup.md index d04b293a747d3..6d19bb99da08d 100644 --- a/clinic/clinic-data-instruction-for-tiup.md +++ b/clinic/clinic-data-instruction-for-tiup.md @@ -65,7 +65,7 @@ This section lists the types of diagnostic data that can be collected by Diag fr | :------ | :------ |:-------- | | Log | `tiflash.log` | `--include=log` | | Error log | `tiflash_stderr.log` | `--include=log` | -| Configuration file | `tiflash-learner.toml`,`tiflash-preprocessed.toml`,`tiflash.toml` | `--include=config` | +| Configuration file | `tiflash-learner.toml`, `tiflash-preprocessed.toml`, `tiflash.toml` | `--include=config` | | Real-time configuration | `config.json` | `--include=config` | | Performance data | `cpu_profile.proto` | `--include=perf` | diff --git a/develop/dev-guide-connection-parameters.md b/develop/dev-guide-connection-parameters.md index d6b7cf855c5c3..9369ca6cd7443 100644 --- a/develop/dev-guide-connection-parameters.md +++ b/develop/dev-guide-connection-parameters.md @@ -149,7 +149,7 @@ This section introduces parameters related to `Prepare`. - **useServerPrepStmts** - **useServerPrepStmts** is set to `false` by default, that is, even if you use the Prepare API, the “prepare” operation will be done only on the client. To avoid the parsing overhead of the server, if the same SQL statement uses the Prepare API multiple times, it is recommended to set this configuration to `true`. + **useServerPrepStmts** is set to `false` by default, that is, even if you use the Prepare API, the "prepare" operation will be done only on the client. To avoid the parsing overhead of the server, if the same SQL statement uses the Prepare API multiple times, it is recommended to set this configuration to `true`. To verify that this setting already takes effect, you can do: @@ -265,7 +265,7 @@ In addition, because of a [client bug](https://bugs.mysql.com/bug.php?id=96623), Through monitoring, you might notice that although the application only performs `INSERT` operations to the TiDB cluster, there are a lot of redundant `SELECT` statements. Usually this happens because JDBC sends some SQL statements to query the settings, for example, `select @@session.transaction_read_only`. These SQL statements are useless for TiDB, so it is recommended that you configure `useConfigs=maxPerformance` to avoid extra overhead. -`useConfigs=maxPerformance` configuration includes a group of configurations: +`useConfigs=maxPerformance` configuration includes a group of configurations: ```ini cacheServerConfiguration=true diff --git a/develop/dev-guide-insert-data.md b/develop/dev-guide-insert-data.md index 830aac4889883..9e89dafc70b01 100644 --- a/develop/dev-guide-insert-data.md +++ b/develop/dev-guide-insert-data.md @@ -20,7 +20,7 @@ Before reading this document, you need to prepare the following: There are two ways to insert multiple rows of data. For example, if you need to insert **3** players' data. -- A **multi-line insertion statement**: +- A **multi-line insertion statement**: {{< copyable "sql" >}} @@ -28,7 +28,7 @@ There are two ways to insert multiple rows of data. 
For example, if you need to INSERT INTO `player` (`id`, `coins`, `goods`) VALUES (1, 1000, 1), (2, 230, 2), (3, 300, 5); ``` -- Multiple **single-line insertion statements**: +- Multiple **single-line insertion statements**: {{< copyable "sql" >}} @@ -160,7 +160,7 @@ In this case, you **cannot** use SQL like the following to insert: INSERT INTO `bookshop`.`users` (`id`, `balance`, `nickname`) VALUES (1, 0.00, 'nicky'); ``` -An error will occur: +An error will occur: ``` ERROR 8216 (HY000): Invalid auto random: Explicit insertion on auto_random column is disabled. Try to set @@allow_auto_random_explicit_insert = true. diff --git a/develop/dev-guide-join-tables.md b/develop/dev-guide-join-tables.md index aeeaf46d8dcf7..78588e7511827 100644 --- a/develop/dev-guide-join-tables.md +++ b/develop/dev-guide-join-tables.md @@ -5,7 +5,7 @@ summary: This document describes how to use multi-table join queries. # Multi-table Join Queries -In many scenarios,you need to use one query to get data from multiple tables. You can use the `JOIN` statement to combine the data from two or more tables. +In many scenarios, you need to use one query to get data from multiple tables. You can use the `JOIN` statement to combine the data from two or more tables. ## Join types diff --git a/develop/dev-guide-sql-development-specification.md b/develop/dev-guide-sql-development-specification.md index 7c364e5ff1262..8770276180325 100644 --- a/develop/dev-guide-sql-development-specification.md +++ b/develop/dev-guide-sql-development-specification.md @@ -29,7 +29,7 @@ This document introduces some general development specifications for using SQL. ```sql SELECT gmt_create FROM ... - WHERE DATE_FORMAT(gmt_create,'%Y%m%d %H:%i:%s') = '20090101 00:00:0' + WHERE DATE_FORMAT(gmt_create, '%Y%m%d %H:%i:%s') = '20090101 00:00:0' ``` Recommended: @@ -37,9 +37,9 @@ This document introduces some general development specifications for using SQL. {{< copyable "sql" >}} ```sql - SELECT DATE_FORMAT(gmt_create,'%Y%m%d %H:%i:%s') + SELECT DATE_FORMAT(gmt_create, '%Y%m%d %H:%i:%s') FROM .. . - WHERE gmt_create = str_to_date('20090101 00:00:00','%Y%m%d %H:%i:s') + WHERE gmt_create = str_to_date('20090101 00:00:00', '%Y%m%d %H:%i:s') ``` ## Other specifications diff --git a/develop/dev-guide-transaction-restraints.md b/develop/dev-guide-transaction-restraints.md index 20d306d29a115..ff086cbadb8bf 100644 --- a/develop/dev-guide-transaction-restraints.md +++ b/develop/dev-guide-transaction-restraints.md @@ -17,7 +17,7 @@ The isolation levels supported by TiDB are **RC (Read Committed)** and **SI (Sna The `SI` isolation level of TiDB can avoid **Phantom Reads**, but the `RR` in ANSI/ISO SQL standard cannot. -The following two examples show what **phantom reads** is. +The following two examples show what **phantom reads** is. - Example 1: **Transaction A** first gets `n` rows according to the query, and then **Transaction B** changes `m` rows other than these `n` rows or adds `m` rows that match the query of **Transaction A**. When **Transaction A** runs the query again, it finds that there are `n+m` rows that match the condition. It is like a phantom, so it is called a **phantom read**. 
@@ -154,7 +154,7 @@ public class EffectWriteSkew { } ``` -SQL log: +SQL log: {{< copyable "sql" >}} diff --git a/develop/dev-guide-transaction-troubleshoot.md b/develop/dev-guide-transaction-troubleshoot.md index c6cf9f68849be..6050d85d40520 100644 --- a/develop/dev-guide-transaction-troubleshoot.md +++ b/develop/dev-guide-transaction-troubleshoot.md @@ -38,7 +38,7 @@ In TiDB pessimistic transaction mode, if two clients execute the following state After client-B encounters a deadlock error, TiDB automatically rolls back the transaction in client-B. Updating `id=2` in client-A will be executed successfully. You can then run `COMMIT` to finish the transaction. -### Solution 1:avoid deadlocks +### Solution 1: avoid deadlocks To get better performance, you can avoid deadlocks at the application level by adjusting the business logic or schema design. In the example above, if client-B also uses the same update order as client-A, that is, they update books with `id=1` first, and then update books with `id=2`. The deadlock can then be avoided: diff --git a/develop/dev-guide-update-data.md b/develop/dev-guide-update-data.md index deba337a0abb3..89617f39dfa7d 100644 --- a/develop/dev-guide-update-data.md +++ b/develop/dev-guide-update-data.md @@ -277,7 +277,7 @@ In each iteration, `SELECT` queries in order of the primary key. It selects prim In Java (JDBC), a bulk-update application might be similar to the following: -**Code:** +**Code:** {{< copyable "" >}} diff --git a/develop/dev-guide-use-common-table-expression.md b/develop/dev-guide-use-common-table-expression.md index 5953fbc115082..da7304ef13b35 100644 --- a/develop/dev-guide-use-common-table-expression.md +++ b/develop/dev-guide-use-common-table-expression.md @@ -5,7 +5,7 @@ summary: Learn the CTE feature of TiDB, which help you write SQL statements more # Common Table Expression -In some transaction scenarios, due to application complexity, you might need to write a single SQL statement of up to 2,000 lines. The statement probably contains a lot of aggregations and multi-level subquery nesting. Maintaining such a long SQL statement can be a developer’s nightmare. +In some transaction scenarios, due to application complexity, you might need to write a single SQL statement of up to 2,000 lines. The statement probably contains a lot of aggregations and multi-level subquery nesting. Maintaining such a long SQL statement can be a developer's nightmare. To avoid such a long SQL statement, you can simplify queries by using [Views](/develop/dev-guide-use-views.md) or cache intermediate query results by using [Temporary tables](/develop/dev-guide-use-temporary-tables.md). @@ -183,7 +183,7 @@ WITH RECURSIVE AS ( SELECT ... 
FROM ; ``` -A classic example is to generate a set of [Fibonacci numbers](https://en.wikipedia.org/wiki/Fibonacci_number) with recursive CTE: +A classic example is to generate a set of [Fibonacci numbers](https://en.wikipedia.org/wiki/Fibonacci_number) with recursive CTE: {{< copyable "sql" >}} diff --git a/dm/dm-export-import-config.md b/dm/dm-export-import-config.md index 5e8d250993b4d..94a669a5b849e 100644 --- a/dm/dm-export-import-config.md +++ b/dm/dm-export-import-config.md @@ -40,7 +40,7 @@ config export [--dir directory] ### Parameter explanation -- `dir`: +- `dir`: - optional - specifies the file path for exporting - the default value is `./configs` @@ -73,7 +73,7 @@ config import [--dir directory] ### Parameter explanation -- `dir`: +- `dir`: - optional - specifies the file path for importing - the default value is `./configs` diff --git a/dm/dm-faq.md b/dm/dm-faq.md index d67570616bfd9..39d2a4e982a39 100644 --- a/dm/dm-faq.md +++ b/dm/dm-faq.md @@ -129,11 +129,11 @@ Since DM v2.0, if you directly run the `start-task` command with the task config This error can be handled by [manually importing DM migration tasks of a DM 1.0 cluster to a DM 2.0 cluster](/dm/manually-upgrade-dm-1.0-to-2.0.md). -## Why does TiUP fail to deploy some versions of DM (for example, v2.0.0-hotfix)? +## Why does TiUP fail to deploy some versions of DM (for example, v2.0.0-hotfix)? You can use the `tiup list dm-master` command to view the DM versions that TiUP supports to deploy. TiUP does not manage DM versions which are not shown by this command. -## How to handle the error `parse mydumper metadata error: EOF` that occurs when DM is replicating data? +## How to handle the error `parse mydumper metadata error: EOF` that occurs when DM is replicating data? You need to check the error message and log files to further analyze this error. The cause might be that the dump unit does not produce the correct metadata file due to a lack of permissions. diff --git a/dm/dm-open-api.md b/dm/dm-open-api.md index 69ac52926b4a9..bc20e1aaaa019 100644 --- a/dm/dm-open-api.md +++ b/dm/dm-open-api.md @@ -15,7 +15,7 @@ To enable OpenAPI, perform one of the following operations: openapi = true ``` -+ If your DM cluster has been deployed using TiUP, add the following configuration to the topology file: ++ If your DM cluster has been deployed using TiUP, add the following configuration to the topology file: ```yaml server_configs: diff --git a/dm/dm-tune-configuration.md b/dm/dm-tune-configuration.md index 09b1bd2d1090a..50df0f4616081 100644 --- a/dm/dm-tune-configuration.md +++ b/dm/dm-tune-configuration.md @@ -25,7 +25,7 @@ During full backup, DM splits the data of each table into multiple chunks accord > > - You cannot update the value of `mydumpers` after the migration task is created. Be sure about the value of each option before creating the task. If you need to update the value, stop the task using dmctl, update the configuration file, and re-create the task. > - `mydumpers`.`threads` can be replaced with the `mydumper-thread` configuration item for simplicity. -> - If `rows` is set,DM ignores the value of `chunk-filesize`. +> - If `rows` is set, DM ignores the value of `chunk-filesize`. 
## Full data import diff --git a/dm/manually-upgrade-dm-1.0-to-2.0.md b/dm/manually-upgrade-dm-1.0-to-2.0.md index e3cb4ff87bf92..bbc68e57dbfde 100644 --- a/dm/manually-upgrade-dm-1.0-to-2.0.md +++ b/dm/manually-upgrade-dm-1.0-to-2.0.md @@ -108,7 +108,7 @@ For [data migration task configuration guide](/dm/dm-task-configuration-guide.md [Use TiUP](/dm/deploy-a-dm-cluster-using-tiup.md) to deploy a new v2.0+ cluster according to the required number of nodes. -## Step 3:Stop the v1.0.x cluster +## Step 3: Stop the v1.0.x cluster If the original v1.0.x cluster is deployed by DM-Ansible, you need to use [DM-Ansible to stop the v1.0.x cluster](https://docs.pingcap.com/tidb-data-migration/v1.0/cluster-operations#stop-a-cluster). diff --git a/dm/quick-start-create-source.md b/dm/quick-start-create-source.md index 0fa67bb57a1e8..34d0be309c322 100644 --- a/dm/quick-start-create-source.md +++ b/dm/quick-start-create-source.md @@ -31,7 +31,7 @@ A data source contains the information for accessing the upstream migration task 2. Write the configuration file of the data source - For each data source, you need an individual configuration file to create it. You can follow the example below to create a data source whose ID is "mysql-01". First create the configuration file `./source-mysql-01.yaml`: + For each data source, you need an individual configuration file to create it. You can follow the example below to create a data source whose ID is "mysql-01". First create the configuration file `./source-mysql-01.yaml`: ```yaml source-id: "mysql-01" # The ID of the data source, you can refer this source-id in the task configuration and dmctl command to associate the corresponding data source. diff --git a/dm/quick-start-create-task.md b/dm/quick-start-create-task.md index abea4c0f6b6ed..57771bf13e5ec 100644 --- a/dm/quick-start-create-task.md +++ b/dm/quick-start-create-task.md @@ -143,7 +143,7 @@ For MySQL2, replace the configuration file in the above command with that of MyS ## Create a data migration task -After importing [prepared data](#prepare-data), there are several sharded tables on both MySQL1 and MySQL2 instances. These tables have identical structure and the same prefix “t” in the table names; the databases where these tables are located are all prefixed with "sharding"; and there is no conflict between the primary keys or the unique keys (in each sharded table, the primary keys or the unique keys are different from those of other tables). +After importing [prepared data](#prepare-data), there are several sharded tables on both MySQL1 and MySQL2 instances. These tables have identical structure and the same prefix "t" in the table names; the databases where these tables are located are all prefixed with "sharding"; and there is no conflict between the primary keys or the unique keys (in each sharded table, the primary keys or the unique keys are different from those of other tables). Now, suppose that you need to migrate these sharded tables to the `db_target.t_target` table in TiDB. The steps are as follows. diff --git a/dm/shard-merge-best-practices.md b/dm/shard-merge-best-practices.md index 79854468d6cdc..f73838883183c 100644 --- a/dm/shard-merge-best-practices.md +++ b/dm/shard-merge-best-practices.md @@ -31,7 +31,7 @@ Instead, you can: Data from multiple sharded tables might cause conflicts between the primary keys or unique indexes. You need to check each primary key or unique index based on the sharding logic of these sharded tables. 
The following are three cases related to primary keys or unique indexes:
 
 - Shard key: Usually, the same shard key only exists in one sharded table, which means no data conflict is caused on shard key.
-- Auto-increment primary key:The auto-increment primary key of each sharded tables counts separately, so their range might overlap. In this case, you need to refer to the next section [Handle conflicts of auto-increment primary key](/dm/shard-merge-best-practices.md#handle-conflicts-of-auto-increment-primary-key) to solve it.
+- Auto-increment primary key: The auto-increment primary key of each sharded table counts separately, so their ranges might overlap. In this case, you need to refer to the next section [Handle conflicts of auto-increment primary key](/dm/shard-merge-best-practices.md#handle-conflicts-of-auto-increment-primary-key) to solve it.
 - Other primary keys or unique indexes: you need to analyze them based on the business logic. If data conflict, you can also refer to the next section [Handle conflicts of auto-increment primary key](/dm/shard-merge-best-practices.md#handle-conflicts-of-auto-increment-primary-key) to solve it.
 
 ## Handle conflicts of auto-increment primary key
diff --git a/dm/table-selector.md b/dm/table-selector.md
index 263afe05521f0..815e4d692d2b0 100644
--- a/dm/table-selector.md
+++ b/dm/table-selector.md
@@ -31,8 +31,8 @@ Table selector uses the following two wildcard characters in `schema-pattern`/`t
 - Matching all schemas and tables that have a `schema_` prefix in the schema name:
 
     ```yaml
-    schema-pattern: "schema_*"
-    table-pattern: ""
+    schema-pattern: "schema_*"
+    table-pattern: ""
     ```
 
 - Matching all tables that have a `schema_` prefix in the schema name and a `table_` prefix in the table name:
diff --git a/dumpling-overview.md b/dumpling-overview.md
index 1d8f4bca85753..dc0546697ef5d 100644
--- a/dumpling-overview.md
+++ b/dumpling-overview.md
@@ -374,7 +374,7 @@ Finally, all the exported data can be imported back to TiDB using [TiDB Lightnin
 | `--cert` | The address of the client certificate file for TLS connection |
 | `--key` | The address of the client private key file for TLS connection |
 | `--csv-delimiter` | Delimiter of character type variables in CSV files | '"' |
-| `--csv-separator` | Separator of each value in CSV files. It is not recommended to use the default ‘,’. It is recommended to use ‘\|+\|’ or other uncommon character combinations| ',' | ',' |
+| `--csv-separator` | Separator of each value in CSV files. It is not recommended to use the default ','. It is recommended to use '\|+\|' or other uncommon character combinations| ',' | ',' |
 | `--csv-null-value` | Representation of null values in CSV files | "\\N" |
 | `--escape-backslash` | Use backslash (`\`) to escape special characters in the export file | true |
 | `--output-filename-template` | The filename templates represented in the format of [golang template](https://golang.org/pkg/text/template/#hdr-Arguments)
Support the `{{.DB}}`, `{{.Table}}`, and `{{.Index}}` arguments
The three arguments represent the database name, table name, and chunk ID of the data file | '{{.DB}}.{{.Table}}.{{.Index}}' | diff --git a/filter-binlog-event.md b/filter-binlog-event.md index d5ae8f81e3108..a2e8b4186515f 100644 --- a/filter-binlog-event.md +++ b/filter-binlog-event.md @@ -50,7 +50,7 @@ filters: | drop index | DDL | Drop index event | | alter table | DDL | Alter table event | -- `sql-pattern`:Filters specified DDL SQL statements. The matching rule supports using a regular expression. +- `sql-pattern`: Filters specified DDL SQL statements. The matching rule supports using a regular expression. - `action`: `Do` or `Ignore` - `Do`: the allow list. A binlog event is replicated if meeting either of the following two conditions: diff --git a/functions-and-operators/aggregate-group-by-functions.md b/functions-and-operators/aggregate-group-by-functions.md index dc2ff5229d6d6..9cb89bfacc16b 100644 --- a/functions-and-operators/aggregate-group-by-functions.md +++ b/functions-and-operators/aggregate-group-by-functions.md @@ -21,7 +21,7 @@ This section describes the supported MySQL `GROUP BY` aggregate functions in TiD | [`MIN()`](https://dev.mysql.com/doc/refman/5.7/en/aggregate-functions.html#function_min) | Return the minimum value | | [`GROUP_CONCAT()`](https://dev.mysql.com/doc/refman/5.7/en/aggregate-functions.html#function_group-concat) | Return a concatenated string | | [`VARIANCE()`, `VAR_POP()`](https://dev.mysql.com/doc/refman/5.7/en/aggregate-functions.html#function_var-pop) | Return the population standard variance| -| [`STD()`,`STDDEV()`,`STDDEV_POP`](https://dev.mysql.com/doc/refman/5.7/en/aggregate-functions.html#function_std) | Return the population standard deviation | +| [`STD()`, `STDDEV()`, `STDDEV_POP`](https://dev.mysql.com/doc/refman/5.7/en/aggregate-functions.html#function_std) | Return the population standard deviation | | [`VAR_SAMP()`](https://dev.mysql.com/doc/refman/5.7/en/aggregate-functions.html#function_var-samp) | Return the sample variance | | [`STDDEV_SAMP()`](https://dev.mysql.com/doc/refman/5.7/en/aggregate-functions.html#function_stddev-samp) | Return the sample standard deviation | | [`JSON_OBJECTAGG(key, value)`](https://dev.mysql.com/doc/refman/5.7/en/aggregate-functions.html#function_json-objectagg) | Return the result set as a single JSON object containing key-value pairs | diff --git a/functions-and-operators/precision-math.md b/functions-and-operators/precision-math.md index 7b7c5dd1c9fe7..c69585839a256 100644 --- a/functions-and-operators/precision-math.md +++ b/functions-and-operators/precision-math.md @@ -106,7 +106,7 @@ The following results are returned in different SQL modes: The result of the `ROUND()` function depends on whether its argument is exact or approximate: -- For exact-value numbers, the `ROUND()` function uses the “round half up” rule. +- For exact-value numbers, the `ROUND()` function uses the "round half up" rule. 
- For approximate-value numbers, the results in TiDB differs from that in MySQL: ```sql diff --git a/grafana-performance-overview-dashboard.md b/grafana-performance-overview-dashboard.md index 36d0aa57fe767..f0b7b57dd13c8 100644 --- a/grafana-performance-overview-dashboard.md +++ b/grafana-performance-overview-dashboard.md @@ -84,7 +84,7 @@ Generally, dividing `tso - cmd` by `tso - request` yields the average batch size - CPU-Avg: Average CPU utilization of all TiKV instances - CPU-Delta: Maximum CPU utilization of all TiKV instances minus minimum CPU utilization of all TiKV instances - CPU-MAX: Maximum CPU utilization among all TiKV instances -- IO-Avg:Average MBps of all TiKV instances +- IO-Avg: Average MBps of all TiKV instances - IO-Delt: Maximum MBps of all TiKV instances minus minimum MBps of all TiKV instances - IO-MAX: Maximum MBps of all TiKV instances diff --git a/grafana-tidb-dashboard.md b/grafana-tidb-dashboard.md index 44ba48b20dc77..1dbd7ee2e90ea 100644 --- a/grafana-tidb-dashboard.md +++ b/grafana-tidb-dashboard.md @@ -26,7 +26,7 @@ To understand the key metrics displayed on the TiDB dashboard, check the followi - QPS: the number of SQL statements executed per second on all TiDB instances, which is counted according to `SELECT`, `INSERT`, `UPDATE`, and other types of statements - CPS By Instance: the command statistics on each TiDB instance, which is classified according to the success or failure of command execution results - Failed Query OPM: the statistics of error types (such as syntax errors and primary key conflicts) according to the errors occurred when executing SQL statements per second on each TiDB instance. It contains the module in which the error occurs and the error code - - Slow query: the statistics of the processing time of slow queries (the time cost of the entire slow query, the time cost of Coprocessor,and the waiting time for Coprocessor scheduling). Slow queries are classified into internal and general SQL statements + - Slow query: the statistics of the processing time of slow queries (the time cost of the entire slow query, the time cost of Coprocessor, and the waiting time for Coprocessor scheduling). Slow queries are classified into internal and general SQL statements - Connection Idle Duration: the duration of idle connections - 999/99/95/80 Duration: the statistics of the execution time for different types of SQL statements (different percentiles) diff --git a/information-schema/information-schema-metrics-summary.md b/information-schema/information-schema-metrics-summary.md index c6bbb19d3ff13..c3af7a807a677 100644 --- a/information-schema/information-schema-metrics-summary.md +++ b/information-schema/information-schema-metrics-summary.md @@ -137,8 +137,8 @@ The second and third rows of the query results above indicate that the `Select` In addition to the example above, you can use the monitoring summary table to quickly find the module with the largest change from the monitoring data by comparing the full link monitoring items of the two time periods, and quickly locate the bottleneck. 
The following example compares all monitoring items in two periods (where t1 is the baseline) and sorts these items according to the greatest difference: -* Period t1:`("2020-03-03 17:08:00", "2020-03-03 17:11:00")` -* Period t2:`("2020-03-03 17:18:00", "2020-03-03 17:21:00")` +* Period t1: `("2020-03-03 17:08:00", "2020-03-03 17:11:00")` +* Period t2: `("2020-03-03 17:18:00", "2020-03-03 17:21:00")` The monitoring items of the two time periods are joined according to `METRICS_NAME` and sorted according to the difference value. `TIME_RANGE` is the hint that specifies the query time. @@ -181,7 +181,7 @@ ORDER BY ratio DESC LIMIT 10; From the query result above, you can get the following information: * `tib_slow_query_cop_process_total_time` (the time consumption of `cop process` in TiDB slow queries) in the period t2 is 5,865 times higher than that in period t1. -* `tidb_distsql_partial_scan_key_total_num` (the number of keys to scan requested by TiDB’s `distsql`) in period t2 is 3,648 times higher than that in period t1. During period t2, `tidb_slow_query_cop_wait_total_time` (the waiting time of Coprocessor requesting to queue up in the TiDB slow query) is 267 times higher than that in period t1. +* `tidb_distsql_partial_scan_key_total_num` (the number of keys to scan requested by TiDB's `distsql`) in period t2 is 3,648 times higher than that in period t1. During period t2, `tidb_slow_query_cop_wait_total_time` (the waiting time of Coprocessor requesting to queue up in the TiDB slow query) is 267 times higher than that in period t1. * `tikv_cop_total_response_size` (the size of the TiKV Coprocessor request result) in period t2 is 192 times higher than that in period t1. * `tikv_cop_scan_details` in period t2 (the scan requested by the TiKV Coprocessor) is 105 times higher than that in period t1. diff --git a/migrate-aurora-to-tidb.md b/migrate-aurora-to-tidb.md index e1aca5a03dd9d..efd2ab67e902c 100644 --- a/migrate-aurora-to-tidb.md +++ b/migrate-aurora-to-tidb.md @@ -228,7 +228,7 @@ block-allow-list: # If the DM version is earlier than v2.0.0 # Configures the data source. mysql-instances: - - source-id: "mysql-01" # Data source ID,i.e., source-id in source1.yaml + - source-id: "mysql-01" # Data source ID, i.e., source-id in source1.yaml block-allow-list: "listA" # References the block-allow-list configuration above. # syncer-config-name: "global" # References the syncers incremental data configuration. meta: # When task-mode is "incremental" and the downstream database does not have a checkpoint, DM uses the binlog position as the starting point. If the downstream database has a checkpoint, DM uses the checkpoint as the starting point. diff --git a/migrate-from-sql-files-to-tidb.md b/migrate-from-sql-files-to-tidb.md index f2dbb8bc7769f..8325589fc13eb 100644 --- a/migrate-from-sql-files-to-tidb.md +++ b/migrate-from-sql-files-to-tidb.md @@ -44,8 +44,8 @@ level = "info" file = "tidb-lightning.log" [tikv-importer] -# "local":Default. The local backend is used to import large volumes of data (around or more than 1 TiB). During the import, the target TiDB cluster cannot provide any service. -# "tidb":The "tidb" backend can also be used to import small volumes of data (less than 1 TiB). During the import, the target TiDB cluster can provide service normally. For the information about backend mode, refer to https://docs.pingcap.com/tidb/stable/tidb-lightning-backends. +# "local": Default. The local backend is used to import large volumes of data (around or more than 1 TiB). 
During the import, the target TiDB cluster cannot provide any service. +# "tidb": The "tidb" backend can also be used to import small volumes of data (less than 1 TiB). During the import, the target TiDB cluster can provide service normally. For the information about backend mode, refer to https://docs.pingcap.com/tidb/stable/tidb-lightning-backends. backend = "local" # Sets the temporary storage directory for the sorted key-value files. The directory must be empty, and the storage space must be greater than the size of the dataset to be imported. For better import performance, it is recommended to use a directory different from `data-source-dir` and use flash storage and exclusive I/O for the directory. diff --git a/migrate-from-tidb-to-tidb.md b/migrate-from-tidb-to-tidb.md index 437bf1473f19b..6c8b355c22f48 100644 --- a/migrate-from-tidb-to-tidb.md +++ b/migrate-from-tidb-to-tidb.md @@ -127,7 +127,7 @@ After setting up the environment, you can use the backup and restore functions o MySQL [test]> SET GLOBAL tidb_gc_enable=FALSE; Query OK, 0 rows affected (0.01 sec) MySQL [test]> SELECT @@global.tidb_gc_enable; - +-------------------------+: + +-------------------------+: | @@global.tidb_gc_enable | +-------------------------+ | 0 | diff --git a/migrate-large-mysql-to-tidb.md b/migrate-large-mysql-to-tidb.md index f29354d47efe1..642cd8dd01a1f 100644 --- a/migrate-large-mysql-to-tidb.md +++ b/migrate-large-mysql-to-tidb.md @@ -211,7 +211,7 @@ If the import fails, refer to [TiDB Lightning FAQ](/tidb-lightning/tidb-lightnin # Configures the data source. mysql-instances: - - source-id: "mysql-01" # Data source ID,i.e., source-id in source1.yaml + - source-id: "mysql-01" # Data source ID, i.e., source-id in source1.yaml block-allow-list: "bw-rule-1" # You can use the block-allow-list configuration above. # syncer-config-name: "global" # You can use the syncers incremental data configuration below. meta: # When task-mode is "incremental" and the target database does not have a checkpoint, DM uses the binlog position as the starting point. If the target database has a checkpoint, DM uses the checkpoint as the starting point. diff --git a/migrate-small-mysql-shards-to-tidb.md b/migrate-small-mysql-shards-to-tidb.md index dc5e3e075cbc9..317a30eb6d852 100644 --- a/migrate-small-mysql-shards-to-tidb.md +++ b/migrate-small-mysql-shards-to-tidb.md @@ -20,7 +20,7 @@ Both MySQL Instance 1 and MySQL Instance 2 contain the following schemas and tab | store_01 | sale_01, sale_02 | | store_02 | sale_01, sale_02 | -Target schemas and tables: +Target schemas and tables: | Schema | Table | |:------|:------| diff --git a/optimistic-transaction.md b/optimistic-transaction.md index 569d5b5d02741..ac408eb06044e 100644 --- a/optimistic-transaction.md +++ b/optimistic-transaction.md @@ -74,7 +74,7 @@ If a write-write conflict occurs during the transaction commit, TiDB automatical # Whether to disable automatic retry. ("on" by default) tidb_disable_txn_auto_retry = OFF # Set the maximum number of the retires. ("10" by default) -# When “tidb_retry_limit = 0”, automatic retry is completely disabled. +# When "tidb_retry_limit = 0", automatic retry is completely disabled. 
tidb_retry_limit = 10 ``` diff --git a/optimizer-hints.md b/optimizer-hints.md index 72367364822d8..858b839c0251c 100644 --- a/optimizer-hints.md +++ b/optimizer-hints.md @@ -91,7 +91,7 @@ The `MERGE_JOIN(t1_name [, tl_name ...])` hint tells the optimizer to use the so {{< copyable "sql" >}} ```sql -select /*+ MERGE_JOIN(t1, t2) */ * from t1,t2 where t1.id = t2.id; +select /*+ MERGE_JOIN(t1, t2) */ * from t1, t2 where t1.id = t2.id; ``` > **Note:** @@ -105,7 +105,7 @@ The `INL_JOIN(t1_name [, tl_name ...])` hint tells the optimizer to use the inde {{< copyable "sql" >}} ```sql -select /*+ INL_JOIN(t1, t2) */ * from t1,t2 where t1.id = t2.id; +select /*+ INL_JOIN(t1, t2) */ * from t1, t2 where t1.id = t2.id; ``` The parameter(s) given in `INL_JOIN()` is the candidate table for the inner table when you create the query plan. For example, `INL_JOIN(t1)` means that TiDB only considers using `t1` as the inner table to create a query plan. If the candidate table has an alias, you must use the alias as the parameter in `INL_JOIN()`; if it does not has an alias, use the table's original name as the parameter. For example, in the `select /*+ INL_JOIN(t1) */ * from t t1, t t2 where t1.a = t2.b;` query, you must use the `t` table's alias `t1` or `t2` rather than `t` as `INL_JOIN()`'s parameter. @@ -125,7 +125,7 @@ The `HASH_JOIN(t1_name [, tl_name ...])` hint tells the optimizer to use the has {{< copyable "sql" >}} ```sql -select /*+ HASH_JOIN(t1, t2) */ * from t1,t2 where t1.id = t2.id; +select /*+ HASH_JOIN(t1, t2) */ * from t1, t2 where t1.id = t2.id; ``` > **Note:** @@ -139,7 +139,7 @@ The `HASH_AGG()` hint tells the optimizer to use the hash aggregation algorithm {{< copyable "sql" >}} ```sql -select /*+ HASH_AGG() */ count(*) from t1,t2 where t1.a > 10 group by t1.id; +select /*+ HASH_AGG() */ count(*) from t1, t2 where t1.a > 10 group by t1.id; ``` ### STREAM_AGG() @@ -149,7 +149,7 @@ The `STREAM_AGG()` hint tells the optimizer to use the stream aggregation algori {{< copyable "sql" >}} ```sql -select /*+ STREAM_AGG() */ count(*) from t1,t2 where t1.a > 10 group by t1.id; +select /*+ STREAM_AGG() */ count(*) from t1, t2 where t1.a > 10 group by t1.id; ``` ### USE_INDEX(t1_name, idx1_name [, idx2_name ...]) diff --git a/partition-pruning.md b/partition-pruning.md index 7206b39752f72..9ec079152b530 100644 --- a/partition-pruning.md +++ b/partition-pruning.md @@ -205,7 +205,7 @@ In the SQL statement above, it can be known from the `x in(1,13)` condition that ##### Scenario two -Partition pruning applies to the query condition of interval comparison,such as `between`, `>`, `<`, `=`, `>=`, `<=`. For example: +Partition pruning applies to the query condition of interval comparison, such as `between`, `>`, `<`, `=`, `>=`, `<=`. For example: {{< copyable "sql" >}} @@ -266,7 +266,7 @@ explain select * from t where id > '2020-04-18'; #### Inapplicable scenario in Range partitioned tables -Because the rule optimization of partition pruning is performed during the generation phase of the query plan, partition pruning is not suitable for scenarios where the filter conditions can be obtained only during the execution phase. For example: +Because the rule optimization of partition pruning is performed during the generation phase of the query plan, partition pruning is not suitable for scenarios where the filter conditions can be obtained only during the execution phase. 
For example:
 
 {{< copyable "sql" >}}
 
diff --git a/pd-control.md b/pd-control.md
index 12553ee99fa9d..0c016b4d41b3a 100644
--- a/pd-control.md
+++ b/pd-control.md
@@ -729,7 +729,7 @@ Description of various types:
 - miss-peer: the Region without enough replicas
 - extra-peer: the Region with extra replicas
 - down-peer: the Region in which some replicas are Down
-- pending-peer:the Region in which some replicas are Pending
+- pending-peer: the Region in which some replicas are Pending
 
 Usage:
 
diff --git a/predicate-push-down.md b/predicate-push-down.md
index a039d43cb62d4..339ddac78fc13 100644
--- a/predicate-push-down.md
+++ b/predicate-push-down.md
@@ -68,7 +68,7 @@ explain select * from t join s on t.a = s.a where t.a < 1;
 
 In this query, the predicate `t.a < 1` is pushed below join to filter in advance, which can reduce the calculation overhead of join.
 
-In addition,This SQL statement has an inner join executed, and the `ON` condition is `t.a = s.a`. The predicate `s.a <1` can be derived from `t.a < 1` and pushed down to `s` table below the join operator. Filtering the `s` table can further reduce the calculation overhead of join.
+In addition, this SQL statement performs an inner join, and the `ON` condition is `t.a = s.a`. The predicate `s.a < 1` can be derived from `t.a < 1` and pushed down to the `s` table below the join operator. Filtering the `s` table can further reduce the calculation overhead of join.
 
 ### Case 4: predicates that are not supported by storage layers cannot be pushed down
 
@@ -107,9 +107,9 @@ explain select * from t left join s on t.a = s.a where s.a is null;
 6 rows in set (0.00 sec)
 ```
 
-In this query,there is a predicate `s.a is null` on the inner table `s`.
+In this query, there is a predicate `s.a is null` on the inner table `s`.
 
-From the `explain` results,we can see that the predicate is not pushed below join operator. This is because the outer join fills the inner table with `NULL` values when the `on` condition isn't satisfied, and the predicate `s.a is null` is used to filter the results after the join. If it is pushed down to the inner table below join, the execution plan is not equivalent to the original one.
+From the `explain` results, we can see that the predicate is not pushed below the join operator. This is because the outer join fills the inner table with `NULL` values when the `on` condition isn't satisfied, and the predicate `s.a is null` is used to filter the results after the join. If it is pushed down to the inner table below join, the execution plan is not equivalent to the original one.
 
 ### Case 6: the predicates which contain user variables cannot be pushed down
 
@@ -127,11 +127,11 @@ explain select * from t where a < @a;
 3 rows in set (0.00 sec)
 ```
 
-In this query,there is a predicate `a < @a` on table `t`. The `@a` of the predicate is a user variable.
+In this query, there is a predicate `a < @a` on table `t`. The `@a` of the predicate is a user variable.
 
 As can be seen from `explain` results, the predicate is not like case 2, which is simplified to `a < 1` and pushed down to TiKV. This is because the value of the user variable `@a` may change during the computation, and TiKV is not aware of the changes. So TiDB does not replace `@a` with `1`, and does not push down it to TiKV.
-An example to help you understand is as follows: +An example to help you understand is as follows: ```sql create table t(id int primary key, a int); diff --git a/privilege-management.md b/privilege-management.md index 7835f22d21ee3..c38bed48a207d 100644 --- a/privilege-management.md +++ b/privilege-management.md @@ -406,7 +406,7 @@ User identity is based on two pieces of information: `Host`, the host that initi When the connection is successful, the request verification process checks whether the operation has the privilege. -For database-related requests (`INSERT`, `UPDATE`), the request verification process first checks the user’s global privileges in the `mysql.user` table. If the privilege is granted, you can access directly. If not, check the `mysql.db` table. +For database-related requests (`INSERT`, `UPDATE`), the request verification process first checks the user's global privileges in the `mysql.user` table. If the privilege is granted, you can access directly. If not, check the `mysql.db` table. The `user` table has global privileges regardless of the default database. For example, the `DELETE` privilege in `user` can apply to any row, table, or database. diff --git a/production-deployment-using-tiup.md b/production-deployment-using-tiup.md index 978f06c099cc5..0e0d99ed65100 100644 --- a/production-deployment-using-tiup.md +++ b/production-deployment-using-tiup.md @@ -70,7 +70,7 @@ Log in to the control machine using a regular user account (take the `tidb` user tiup update --self && tiup update cluster ``` - Expected output includes `“Update successfully!”`. + Expected output includes `"Update successfully!"`. 5. Verify the current version of your TiUP cluster: diff --git a/quick-start-with-tidb.md b/quick-start-with-tidb.md index fc2f321ef93ed..691439f5b11d6 100644 --- a/quick-start-with-tidb.md +++ b/quick-start-with-tidb.md @@ -101,7 +101,7 @@ As a distributed system, a basic TiDB test cluster usually consists of 2 TiDB in > > + Since v5.2.0, TiDB supports running `tiup playground` on the machine that uses the Apple M1 chip. > + For the playground operated in this way, after the test deployment is finished, TiUP will clean up the original cluster data. You will get a new cluster after re-running the command. - > + If you want the data to be persisted on storage,run `tiup --tag playground ...`. For details, refer to [TiUP Reference Guide](/tiup/tiup-reference.md#-t---tag). + > + If you want the data to be persisted on storage, run `tiup --tag playground ...`. For details, refer to [TiUP Reference Guide](/tiup/tiup-reference.md#-t---tag). 4. Start a new session to access TiDB: @@ -220,7 +220,7 @@ As a distributed system, a basic TiDB test cluster usually consists of 2 TiDB in > **Note:** > > For the playground operated in this way, after the test deployment is finished, TiUP will clean up the original cluster data. You will get a new cluster after re-running the command. - > If you want the data to be persisted on storage,run `tiup --tag playground ...`. For details, refer to [TiUP Reference Guide](/tiup/tiup-reference.md#-t---tag). + > If you want the data to be persisted on storage, run `tiup --tag playground ...`. For details, refer to [TiUP Reference Guide](/tiup/tiup-reference.md#-t---tag). 4. 
Start a new session to access TiDB: diff --git a/releases/release-4.0.0-beta.md b/releases/release-4.0.0-beta.md index e4d1e3b28f0dd..6f34a418fcc27 100644 --- a/releases/release-4.0.0-beta.md +++ b/releases/release-4.0.0-beta.md @@ -58,7 +58,7 @@ TiDB Ansible version: 4.0.0-beta - [#12623](https://github.com/pingcap/tidb/pull/12623) [#11989](https://github.com/pingcap/tidb/pull/11989) + Output the detailed `backoff` information of TiKV RPC in the slow log to facilitate troubleshooting [#13770](https://github.com/pingcap/tidb/pull/13770) + Optimize and unify the format of the memory statistics in the expensive log [#12809](https://github.com/pingcap/tidb/pull/12809) -+ Optimize the explicit format of `EXPLAIN` and support outputting information about the operator’s usage of memory and disk [#13914](https://github.com/pingcap/tidb/pull/13914) [#13692](https://github.com/pingcap/tidb/pull/13692) [#13686](https://github.com/pingcap/tidb/pull/13686) [#11415](https://github.com/pingcap/tidb/pull/11415) [#13927](https://github.com/pingcap/tidb/pull/13927) [#13764](https://github.com/pingcap/tidb/pull/13764) [#13720](https://github.com/pingcap/tidb/pull/13720) ++ Optimize the explicit format of `EXPLAIN` and support outputting information about the operator's usage of memory and disk [#13914](https://github.com/pingcap/tidb/pull/13914) [#13692](https://github.com/pingcap/tidb/pull/13692) [#13686](https://github.com/pingcap/tidb/pull/13686) [#11415](https://github.com/pingcap/tidb/pull/11415) [#13927](https://github.com/pingcap/tidb/pull/13927) [#13764](https://github.com/pingcap/tidb/pull/13764) [#13720](https://github.com/pingcap/tidb/pull/13720) + Optimize the check for duplicate values in `LOAD DATA` based on the transaction size and support setting the transaction size by configuring the `tidb_dml_batch_size` parameter [#11132](https://github.com/pingcap/tidb/pull/11132) + Optimize the performance of `LOAD DATA` by separating the data preparing routine and the commit routine and assigning the workload to different Workers [#11533](https://github.com/pingcap/tidb/pull/11533) [#11284](https://github.com/pingcap/tidb/pull/11284) diff --git a/releases/release-5.0.0.md b/releases/release-5.0.0.md index aebaa3b9fcd00..572f188d029c9 100644 --- a/releases/release-5.0.0.md +++ b/releases/release-5.0.0.md @@ -35,7 +35,7 @@ In v5.0, the key new features or improvements are as follows: > > The scope of the variable is changed from session to global, and the default value is changed from `20000` to `0`. If the application relies on the original default value, you need to use the `set global` statement to modify the variable to the original value after the upgrade. -+ Control temporary tables’ syntax compatibility using the [`tidb_enable_noop_functions`](/system-variables.md#tidb_enable_noop_functions-new-in-v40) system variable. When this variable value is `OFF`, the `CREATE TEMPORARY TABLE` syntax returns an error. ++ Control temporary tables' syntax compatibility using the [`tidb_enable_noop_functions`](/system-variables.md#tidb_enable_noop_functions-new-in-v40) system variable. When this variable value is `OFF`, the `CREATE TEMPORARY TABLE` syntax returns an error. 
+ Add the following system variables to directly control the garbage collection-related parameters: - [`tidb_gc_concurrency`](/system-variables.md#tidb_gc_concurrency-new-in-v50) - [`tidb_gc_enable`](/system-variables.md#tidb_gc_enable-new-in-v50) @@ -56,10 +56,10 @@ In v5.0, the key new features or improvements are as follows: ### Configuration file parameters + Add the [`index-limit`](/tidb-configuration-file.md#index-limit-new-in-v50) configuration item for TiDB. Its value defaults to `64` and ranges between `[64,512]`. A MySQL table supports 64 indexes at most. If its value exceeds the default setting and more than 64 indexes are created for a table, when the table schema is re-imported into MySQL, an error will be reported. -+ Add the [`enable-enum-length-limit`](/tidb-configuration-file.md#enable-enum-length-limit-new-in-v50) configuration item for TiDB to be compatible and consistent with MySQL’s ENUM/SET length (ENUM length < 255). The default value is `true`. ++ Add the [`enable-enum-length-limit`](/tidb-configuration-file.md#enable-enum-length-limit-new-in-v50) configuration item for TiDB to be compatible and consistent with MySQL's ENUM/SET length (ENUM length < 255). The default value is `true`. + Replace the `pessimistic-txn.enable` configuration item with the [`tidb_txn_mode`](/system-variables.md#tidb_txn_mode) environment variable. + Replace the `performance.max-memory` configuration item with [`performance.server-memory-quota`](/tidb-configuration-file.md#server-memory-quota-new-in-v409) -+ Replace the `tikv-client.copr-cache.enable` configuration item with [`tikv-client.copr-cache.capacity-mb`](/tidb-configuration-file.md#capacity-mb). If the item’s value is `0.0`, this feature is disabled. If the item’s value is greater than `0.0`, this feature is enabled. Its default value is `1000.0`. ++ Replace the `tikv-client.copr-cache.enable` configuration item with [`tikv-client.copr-cache.capacity-mb`](/tidb-configuration-file.md#capacity-mb). If the item's value is `0.0`, this feature is disabled. If the item's value is greater than `0.0`, this feature is enabled. Its default value is `1000.0`. + Replace the `rocksdb.auto-tuned` configuration item with [`rocksdb.rate-limiter-auto-tuned`](/tikv-configuration-file.md#rate-limiter-auto-tuned-new-in-v50). + Delete the `raftstore.sync-log` configuration item. By default, written data is forcibly spilled to the disk. Before v5.0, you can explicitly disable `raftstore.sync-log`. Since v5.0, the configuration value is forcibly set to `true`. + Change the default value of the `gc.enable-compaction-filter` configuration item from `false` to `true`. @@ -81,7 +81,7 @@ In v5.0, the key new features or improvements are as follows: With the list partitioning feature, you can effectively query and maintain tables with a large amount of data. -With this feature enabled, partitions and how data is distributed among partitions are defined according to the `PARTITION BY LIST(expr) PARTITION part_name VALUES IN (...)` expression. The partitioned tables’ data set supports at most 1024 distinct integer values. You can define the values using the `PARTITION ... VALUES IN (...)` clause. +With this feature enabled, partitions and how data is distributed among partitions are defined according to the `PARTITION BY LIST(expr) PARTITION part_name VALUES IN (...)` expression. The partitioned tables' data set supports at most 1024 distinct integer values. You can define the values using the `PARTITION ... VALUES IN (...)` clause. 
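A minimal sketch of the `PARTITION BY LIST` syntax described above, assuming list partitioning is already enabled as explained in the next sentence; the table, column, and partition names are illustrative only:

```sql
-- Illustrative only: rows are routed to partitions by the listed region_id values.
CREATE TABLE sales (
    id        INT,
    region_id INT
)
PARTITION BY LIST (region_id) (
    PARTITION p_east VALUES IN (1, 2, 3),
    PARTITION p_west VALUES IN (4, 5, 6)
);
```

As with standard `LIST` partitioning, inserting a `region_id` value that is not listed in any partition returns an error.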
To enable list partitioning, set the session variable [`tidb_enable_list_partition`](/system-variables.md#tidb_enable_list_partition-new-in-v50) to `ON`. @@ -123,7 +123,7 @@ This feature is disabled by default. To enable the feature, modify the value of Currently, this feature still has the following incompatibility issues: -+ Transaction’s semantics might change when there are concurrent transactions ++ Transaction's semantics might change when there are concurrent transactions + Known compatibility issue that occurs when using the feature together with TiDB Binlog + Incompatibility with `Change Column` @@ -301,13 +301,13 @@ Before v5.0, to balance the contention for I/O resources between background task You can disable this feature by modifying the `rate-limiter-auto-tuned` configuration item. -#### Enable the GC Compaction Filter feature by default to reduce GC’s consumption of CPU and I/O resources +#### Enable the GC Compaction Filter feature by default to reduce GC's consumption of CPU and I/O resources [User document](/garbage-collection-configuration.md#gc-in-compaction-filter), [#18009](https://github.com/pingcap/tidb/issues/18009) When TiDB performs garbage collection (GC) and data compaction, partitions occupy CPU and I/O resources. Overlapping data exists during the execution of these two tasks. -To reduce GC’s consumption of CPU and I/O resources, the GC Compaction Filter feature combines these two tasks into one and executes them in the same task. This feature is enabled by default. You can disable it by configuring `gc.enable-compaction-filter = false`. +To reduce GC's consumption of CPU and I/O resources, the GC Compaction Filter feature combines these two tasks into one and executes them in the same task. This feature is enabled by default. You can disable it by configuring `gc.enable-compaction-filter = false`. 
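As a sketch only, the switch can also be flipped from a SQL client, assuming your version allows modifying this TiKV configuration item online through `SET CONFIG`; otherwise, set it in the TiKV configuration file as described above:

```sql
-- Sketch only: disable GC in Compaction Filter online (requires a version that
-- supports changing this TiKV configuration item via SET CONFIG).
SET CONFIG tikv `gc.enable-compaction-filter` = false;
```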
#### TiFlash limits the compression and data sorting's use of I/O resources (**experimental feature**) diff --git a/releases/release-5.2.4.md b/releases/release-5.2.4.md index 163455c7fb6f5..039dd7944d8cb 100644 --- a/releases/release-5.2.4.md +++ b/releases/release-5.2.4.md @@ -195,7 +195,7 @@ TiDB version: 5.2.4 + TiDB Lightning - Fix the issue of wrong import result that occurs when TiDB Lightning does not have the privilege to access the `mysql.tidb` table [#31088](https://github.com/pingcap/tidb/issues/31088) - - Fix the checksum error “GC life time is shorter than transaction duration” [#32733](https://github.com/pingcap/tidb/issues/32733) + - Fix the checksum error "GC life time is shorter than transaction duration" [#32733](https://github.com/pingcap/tidb/issues/32733) - Fix a bug that TiDB Lightning may not delete the metadata schema when some import tasks do not contain source files [#28144](https://github.com/pingcap/tidb/issues/28144) - Fix the issue that TiDB Lightning does not report errors when the S3 storage path does not exist [#28031](https://github.com/pingcap/tidb/issues/28031) [#30709](https://github.com/pingcap/tidb/issues/30709) - Fix an error that occurs when iterating more than 1000 keys on GCS [#30377](https://github.com/pingcap/tidb/issues/30377) \ No newline at end of file diff --git a/releases/release-5.4.0.md b/releases/release-5.4.0.md index 4d9d1820e1a0b..5bd1af8bb0154 100644 --- a/releases/release-5.4.0.md +++ b/releases/release-5.4.0.md @@ -4,7 +4,7 @@ title: TiDB 5.4 Release Notes # TiDB 5.4 Release Notes -Release date:February 15, 2022 +Release date: February 15, 2022 TiDB version: 5.4.0 diff --git a/releases/release-5.4.1.md b/releases/release-5.4.1.md index 07fd4b5ff547a..20cf54569ce46 100644 --- a/releases/release-5.4.1.md +++ b/releases/release-5.4.1.md @@ -156,7 +156,7 @@ TiDB v5.4.1 does not introduce any compatibility changes in product design. But + TiDB Lightning - - Fix the checksum error “GC life time is shorter than transaction duration” [#32733](https://github.com/pingcap/tidb/issues/32733) + - Fix the checksum error "GC life time is shorter than transaction duration" [#32733](https://github.com/pingcap/tidb/issues/32733) - Fix the issue that TiDB Lightning gets stuck when it fails to check empty tables [#31797](https://github.com/pingcap/tidb/issues/31797) - Fix a bug that TiDB Lightning may not delete the metadata schema when some import tasks do not contain source files [#28144](https://github.com/pingcap/tidb/issues/28144) - Fix the issue that the precheck does not check local disk resources and cluster availability [#34213](https://github.com/pingcap/tidb/issues/34213) diff --git a/releases/release-6.0.0-dmr.md b/releases/release-6.0.0-dmr.md index 1b74070f71fc8..89d5fc1933d79 100644 --- a/releases/release-6.0.0-dmr.md +++ b/releases/release-6.0.0-dmr.md @@ -25,7 +25,7 @@ In 6.0.0-DMR, the key new features or improvements are as follows: - Provide PingCAP Clinic, an automatic diagnosis service for TiDB clusters (Technical Preview version). - Provide TiDB Enterprise Manager, an enterprise-level database management platform. -Also, as a core component of TiDB’s HTAP solution, TiFlashTM is officially open source in this release. For details, see [TiFlash repository](https://github.com/pingcap/tiflash). +Also, as a core component of TiDB's HTAP solution, TiFlashTM is officially open source in this release. For details, see [TiFlash repository](https://github.com/pingcap/tiflash). 
## Release strategy changes @@ -161,7 +161,7 @@ TiDB v6.0.0 is a DMR, and its version is 6.0.0-DMR. - Enable all I/O checks (Checksum) by default - This feature was introduced in v5.4.0 as experimental. It enhances data accuracy and security without imposing an obvious impact on users’ businesses. + This feature was introduced in v5.4.0 as experimental. It enhances data accuracy and security without imposing an obvious impact on users' businesses. Warning: Newer version of data format cannot be downgraded in place to versions earlier than v5.4.0. During such a downgrade, you need to delete TiFlash replicas and replicate data after the downgrade. Alternatively, you can perform a downgrade by referring to [dttool migrate](/tiflash/tiflash-command-line-flags.md#dttool-migrate). @@ -320,7 +320,7 @@ TiDB v6.0.0 is a DMR, and its version is 6.0.0-DMR. | TiKV | [`quota`](/tikv-configuration-file.md#quota) | Newly added | Add configuration items related to Quota Limiter, which limit the resources occupied by frontend requests. Quota Limiter is an experimental feature and is disabled by default. New quota-related configuration items are `foreground-cpu-time`, `foreground-write-bandwidth`, `foreground-read-bandwidth`, and `max-delay-duration`. | | TiFlash | [`profiles.default.dt_compression_method`](/tiflash/tiflash-configuration.md#configure-the-tiflashtoml-file) | Newly added | Specifies the compression algorithm for TiFlash. The optional values are `LZ4`, `zstd` and `LZ4HC`, all case insensitive. The default value is `LZ4`. | | TiFlash | [`profiles.default.dt_compression_level`](/tiflash/tiflash-configuration.md#configure-the-tiflashtoml-file) | Newly added | Specifies the compression level of TiFlash. The default value is `1`. | -| DM | [`loaders..import-mode`](/dm/task-configuration-file-full.md#task-configuration-file-template-advanced) | Newly added | The import mode during the full import phase. Since v6.0, DM uses TiDB Lightning’s TiDB-backend mode to import data during the full import phase; the previous Loader component is no longer used. This is an internal replacement and has no obvious impact on daily operations.
The default value is set to `sql`, which means using `tidb-backend` mode. In some rare cases, `tidb-backend` might not be fully compatible. You can fall back to Loader mode by configuring this parameter to `loader`. | +| DM | [`loaders..import-mode`](/dm/task-configuration-file-full.md#task-configuration-file-template-advanced) | Newly added | The import mode during the full import phase. Since v6.0, DM uses TiDB Lightning's TiDB-backend mode to import data during the full import phase; the previous Loader component is no longer used. This is an internal replacement and has no obvious impact on daily operations.
The default value is set to `sql`, which means using `tidb-backend` mode. In some rare cases, `tidb-backend` might not be fully compatible. You can fall back to Loader mode by configuring this parameter to `loader`. | | DM | [`loaders..on-duplicate`](/dm/task-configuration-file-full.md#task-configuration-file-template-advanced) | Newly added | Specifies the methods to resolve conflicts during the full import phase. The default value is `replace`, which means using the new data to replace the existing data. | | TiCDC | [`dial-timeout`](/ticdc/manage-ticdc.md#configure-sink-uri-with-kafka) | Newly added | The timeout in establishing a connection with the downstream Kafka. The default value is `10s`. | | TiCDC | [`read-timeout`](/ticdc/manage-ticdc.md#configure-sink-uri-with-kafka) | Newly added | The timeout in getting a response returned by the downstream Kafka. The default value is `10s`. | @@ -574,7 +574,7 @@ TiDB v6.0.0 is a DMR, and its version is 6.0.0-DMR. - Fix a bug that TiDB Lightning may not delete the metadata schema when some import tasks do not contain source files [#28144](https://github.com/pingcap/tidb/issues/28144) - Fix the panic that occurs when the table names in the source file and in the target cluster are different [#31771](https://github.com/pingcap/tidb/issues/31771) - - Fix the checksum error “GC life time is shorter than transaction duration” [#32733](https://github.com/pingcap/tidb/issues/32733) + - Fix the checksum error "GC life time is shorter than transaction duration" [#32733](https://github.com/pingcap/tidb/issues/32733) - Fix the issue that TiDB Lightning gets stuck when it fails to check empty tables [#31797](https://github.com/pingcap/tidb/issues/31797) + Dumpling diff --git a/sql-prepared-plan-cache.md b/sql-prepared-plan-cache.md index 0ff168b1a9aec..7d0a6b1f62592 100644 --- a/sql-prepared-plan-cache.md +++ b/sql-prepared-plan-cache.md @@ -54,7 +54,7 @@ There are several points worth noting about execution plan caching and query per Since v6.1.0 the execution plan cache is enabled by default. You can control prepared plan cache via the system variable [`tidb_enable_prepared_plan_cache`](/system-variables.md#tidb_enable_prepared_plan_cache-new-in-v610). -> **Note:** +> **Note:** > > The execution plan cache feature applies only to `Prepare` / `Execute` queries and does not take effect for normal queries. diff --git a/sql-statements/sql-statement-admin.md b/sql-statements/sql-statement-admin.md index c87c154b52f16..93361e617659d 100644 --- a/sql-statements/sql-statement-admin.md +++ b/sql-statements/sql-statement-admin.md @@ -96,7 +96,7 @@ To overwrite the metadata of the stored table in an untrusted way in extreme cas ADMIN REPAIR TABLE tbl_name CREATE TABLE STATEMENT; ``` -Here “untrusted” means that you need to manually ensure that the metadata of the original table can be covered by the `CREATE TABLE STATEMENT` operation. To use this `REPAIR` statement, enable the [`repair-mode`](/tidb-configuration-file.md#repair-mode) configuration item, and make sure that the tables to be repaired are listed in the [`repair-table-list`](/tidb-configuration-file.md#repair-table-list). +Here "untrusted" means that you need to manually ensure that the metadata of the original table can be covered by the `CREATE TABLE STATEMENT` operation. 
To use this `REPAIR` statement, enable the [`repair-mode`](/tidb-configuration-file.md#repair-mode) configuration item, and make sure that the tables to be repaired are listed in the [`repair-table-list`](/tidb-configuration-file.md#repair-table-list). ## `ADMIN SHOW SLOW` statement diff --git a/sql-statements/sql-statement-create-index.md b/sql-statements/sql-statement-create-index.md index 4494f47965721..87a6ad84a865f 100644 --- a/sql-statements/sql-statement-create-index.md +++ b/sql-statements/sql-statement-create-index.md @@ -221,7 +221,7 @@ If the same expression is included in the aggregate (`GROUP BY`) functions, the {{< copyable "sql" >}} ```sql -SELECT max(lower(col1)) FROM t; +SELECT max(lower(col1)) FROM t; SELECT min(col1) FROM t GROUP BY lower(col1); ``` diff --git a/sql-statements/sql-statement-show-stats-meta.md b/sql-statements/sql-statement-show-stats-meta.md index 6bb693b686d4f..2185f3b8d53c7 100644 --- a/sql-statements/sql-statement-show-stats-meta.md +++ b/sql-statements/sql-statement-show-stats-meta.md @@ -7,7 +7,7 @@ summary: An overview of the usage of SHOW STATS_META for TiDB database. You can use `SHOW STATS_META` to view how many rows are in a table and how many rows are changed in that table. When using this statement, you can filter the needed information by the `ShowLikeOrWhere` clause. -Currently, the `SHOW STATS_META` statement outputs 6 columns: +Currently, the `SHOW STATS_META` statement outputs 6 columns: | Column name | Description | | -------- | ------------- | @@ -18,7 +18,7 @@ Currently, the `SHOW STATS_META` statement outputs 6 columns: | modify_count | The number of rows modified | | row_count | The total row count | -> **注意:** +> **Note:** > > The `update_time` is updated when TiDB updates the `modify_count` and `row_count` fields according to DML statements. So `update_time` is not the last execution time of the `ANALYZE` statement. diff --git a/sync-diff-inspector/sync-diff-inspector-overview.md b/sync-diff-inspector/sync-diff-inspector-overview.md index 2e07f2311be9d..0618e740ab183 100644 --- a/sync-diff-inspector/sync-diff-inspector-overview.md +++ b/sync-diff-inspector/sync-diff-inspector-overview.md @@ -124,7 +124,7 @@ target-table = "t2" # The name of the target table # The downstream database. The value is the unique ID declared by data-sources. target-instance = "tidb0" # The tables of downstream databases to be compared. Each table needs to contain the schema name and the table name, separated by '.' - # Use "?" to match any character and “*” to match characters of any length. + # Use "?" to match any character and "*" to match characters of any length. # For detailed match rules, refer to golang regexp pkg: https://github.com/google/re2/wiki/Syntax. target-check-tables = ["schema*.table*", "!c.*", "test2.t2"] # (optional) Extra configurations for some tables, Config1 is defined in the following table config example. diff --git a/ticdc/ticdc-avro-protocol.md b/ticdc/ticdc-avro-protocol.md index 7f090df7a3756..37d6fd7147abe 100644 --- a/ticdc/ticdc-avro-protocol.md +++ b/ticdc/ticdc-avro-protocol.md @@ -9,7 +9,7 @@ Avro is a data exchange format protocol defined by [Apache Avro™](https://avro ## Use Avro -When using Message Queue (MQ) as a downstream sink, you can specify Avro in `sink-uri`. TiCDC captures TiDB DML events, creates Avro messages from these events, and sends the messages downstream. When Avro detects a schema change, it registers the latest schema with Schema Registry. 
+When using Message Queue (MQ) as a downstream sink, you can specify Avro in `sink-uri`. TiCDC captures TiDB DML events, creates Avro messages from these events, and sends the messages downstream. When Avro detects a schema change, it registers the latest schema with Schema Registry. The following is a configuration example using Avro: diff --git a/tidb-binlog/tidb-binlog-faq.md b/tidb-binlog/tidb-binlog-faq.md index 797962058d489..ae41e3c713bb9 100644 --- a/tidb-binlog/tidb-binlog-faq.md +++ b/tidb-binlog/tidb-binlog-faq.md @@ -35,7 +35,7 @@ To replicate data to the downstream MySQL or TiDB cluster, Drainer must have the 1. Check whether Pump's GC works well: - - Check whether the **gc_tso** time in Pump’s monitoring panel is identical with that of the configuration file. + - Check whether the **gc_tso** time in Pump's monitoring panel is identical with that of the configuration file. 2. If GC works well, perform the following steps to reduce the amount of space required for a single Pump: diff --git a/tidb-binlog/upgrade-tidb-binlog.md b/tidb-binlog/upgrade-tidb-binlog.md index 11677e6413888..f1c28f98201d3 100644 --- a/tidb-binlog/upgrade-tidb-binlog.md +++ b/tidb-binlog/upgrade-tidb-binlog.md @@ -51,7 +51,7 @@ If you want to resume replication from the original checkpoint, perform the foll 4. Reconnect the TiDB cluster to the service. 5. Make sure that the old version of Drainer has replicated the data in the old version of Pump to the downstream completely; - Query the `status` interface of Drainer,command as below: + Query the `status` interface of Drainer, command as below: {{< copyable "shell-regular" >}} diff --git a/tidb-lightning/migrate-from-csv-using-tidb-lightning.md b/tidb-lightning/migrate-from-csv-using-tidb-lightning.md index bbc215de12ea4..07129fefae10e 100644 --- a/tidb-lightning/migrate-from-csv-using-tidb-lightning.md +++ b/tidb-lightning/migrate-from-csv-using-tidb-lightning.md @@ -192,7 +192,7 @@ The default setting is already tuned for CSV following RFC 4180. ```toml [mydumper.csv] -separator = ',' # It is not recommended to use the default ‘,’. It is recommended to use ‘\|+\|‘ or other uncommon character combinations. +separator = ',' # It is not recommended to use the default ','. It is recommended to use '\|+\|' or other uncommon character combinations. delimiter = '"' header = true not-null = false diff --git a/tidb-lightning/tidb-lightning-faq.md b/tidb-lightning/tidb-lightning-faq.md index 3cce04d6572a4..7fc66b7612218 100644 --- a/tidb-lightning/tidb-lightning-faq.md +++ b/tidb-lightning/tidb-lightning-faq.md @@ -146,7 +146,7 @@ upload-speed-limit = "100MB" ## Why TiDB Lightning requires so much free space in the target TiKV cluster? -With the default settings of 3 replicas, the space requirement of the target TiKV cluster is 6 times the size of data source. The extra multiple of “2” is a conservative estimation because the following factors are not reflected in the data source: +With the default settings of 3 replicas, the space requirement of the target TiKV cluster is 6 times the size of data source. 
The extra multiple of "2" is a conservative estimation because the following factors are not reflected in the data source: - The space occupied by indices - Space amplification in RocksDB diff --git a/tidb-lightning/tidb-lightning-requirements.md b/tidb-lightning/tidb-lightning-requirements.md index a62fb0c4b5cbf..3620282672d53 100644 --- a/tidb-lightning/tidb-lightning-requirements.md +++ b/tidb-lightning/tidb-lightning-requirements.md @@ -70,7 +70,7 @@ Based on the import mode and features enabled, downstream database users should Optional - checkpoint.driver = “mysql” + checkpoint.driver = "mysql" checkpoint.schema setting SELECT,INSERT,UPDATE,DELETE,CREATE,DROP Required when checkpoint information is stored in databases, instead of files diff --git a/tidb-monitoring-framework.md b/tidb-monitoring-framework.md index f6a8e0d5dfd32..5f55c2f19a354 100644 --- a/tidb-monitoring-framework.md +++ b/tidb-monitoring-framework.md @@ -45,7 +45,7 @@ Grafana is an open source project for analyzing and visualizing metrics. TiDB us - {TiDB_Cluster_name}-TiKV-Details: Detailed monitoring metrics related to the TiKV server. - {TiDB_Cluster_name}-TiKV-Summary: Monitoring overview related to the TiKV server. - {TiDB_Cluster_name}-TiKV-Trouble-Shooting: Monitoring metrics related to the TiKV error diagnostics. -- {TiDB_Cluster_name}-TiCDC:Detailed monitoring metrics related to TiCDC. +- {TiDB_Cluster_name}-TiCDC: Detailed monitoring metrics related to TiCDC. Each group has multiple panel labels of monitoring metrics, and each panel contains detailed information of multiple monitoring metrics. For example, the **Overview** monitoring group has five panel labels, and each labels corresponds to a monitoring panel. See the following UI: diff --git a/tidb-storage.md b/tidb-storage.md index 65bf036890087..f62328c07719f 100644 --- a/tidb-storage.md +++ b/tidb-storage.md @@ -64,7 +64,7 @@ These two tasks are very important and will be introduced one by one. At the same time, in order to ensure that the upper client can access the needed data, there is a component (PD) in the system to record the distribution of Regions on the node, that is, the exact Region of a Key and the node of that Region placed through any Key. -* For the second task, TiKV replicates data in Regions, which means that data in one Region will have multiple replicas with the name “Replica”. Multiple Replicas of a Region are stored on different nodes to form a Raft Group, which is kept consistent through the Raft algorithm. +* For the second task, TiKV replicates data in Regions, which means that data in one Region will have multiple replicas with the name "Replica". Multiple Replicas of a Region are stored on different nodes to form a Raft Group, which is kept consistent through the Raft algorithm. One of the Replicas serves as the Leader of the Group and other as the Follower. By default, all reads and writes are processed through the Leader, where reads are done and write are replicated to followers. The following diagram shows the whole picture about Region and Raft group. diff --git a/tidb-troubleshooting-map.md b/tidb-troubleshooting-map.md index 753488bd66fd0..fc0bb9d9ccd92 100644 --- a/tidb-troubleshooting-map.md +++ b/tidb-troubleshooting-map.md @@ -55,11 +55,11 @@ Refer to [5 PD issues](#5-pd-issues). - 3.1.2 TiDB DDL job hangs or executes slowly (use `admin show ddl jobs` to check DDL progress) - - Cause 1:Network issue with other components (PD/TiKV). + - Cause 1: Network issue with other components (PD/TiKV). 
- - Cause 2:Early versions of TiDB (earlier than v3.0.8) have heavy internal load because of a lot of goroutine at high concurrency. + - Cause 2: Early versions of TiDB (earlier than v3.0.8) have heavy internal load because of a lot of goroutine at high concurrency. - - Cause 3:In early versions (v2.1.15 & versions < v3.0.0-rc1), PD instances fail to delete TiDB keys, which causes every DDL change to wait for two leases. + - Cause 3: In early versions (v2.1.15 & versions < v3.0.0-rc1), PD instances fail to delete TiDB keys, which causes every DDL change to wait for two leases. - For other unknown causes, [report a bug](https://github.com/pingcap/tidb/issues/new?labels=type%2Fbug&template=bug-report.md). @@ -82,13 +82,13 @@ Refer to [5 PD issues](#5-pd-issues). - Cause 3: The TiDB instance that is currently executing DML statements cannot load the new `schema information` (maybe caused by network issues with PD or TiKV). During this time, many DDL statements are executed (including `lock table`), which causes `schema version` changes to be more than 1024. - - Solution:The first two causes do not impact the application, as the related DML operations retry after failure. For cause 3, you need to check the network between TiDB and TiKV/PD. + - Solution: The first two causes do not impact the application, as the related DML operations retry after failure. For cause 3, you need to check the network between TiDB and TiKV/PD. - Background: The increased number of `schema version` is consistent with the number of `schema state` of each DDL change operation. For example, the `create table` operation has 1 version change, and the `add column` operation has 4 version changes. Therefore, too many column change operations might cause `schema version` to increase fast. For details, refer to [online schema change](https://static.googleusercontent.com/media/research.google.com/zh-CN//pubs/archive/41376.pdf). - 3.1.4 TiDB reports `information schema is out of date` in log - - Cause 1:The TiDB server that is executing the DML statement is stopped by `graceful kill` and prepares to exit. The execution time of the transaction that contains the DML statement exceeds one DDL lease. An error is reported when the transaction is committed. + - Cause 1: The TiDB server that is executing the DML statement is stopped by `graceful kill` and prepares to exit. The execution time of the transaction that contains the DML statement exceeds one DDL lease. An error is reported when the transaction is committed. - Cause 2: The TiDB server cannot connect to PD or TiKV when it is executing the DML statement. As a result, the TiDB server did not load the new schema within one DDL lease (`45s` by default), or the TiDB server disconnects from PD with the `keep alive` setting. @@ -116,12 +116,12 @@ Refer to [5 PD issues](#5-pd-issues). - In v2.1.8 or earlier versions, you can grep `fatal error: stack overflow` in the `tidb_stderr.log`. - - Monitor:The memory usage of tidb-server instances increases sharply in a short period of time. + - Monitor: The memory usage of tidb-server instances increases sharply in a short period of time. - 3.2.2 Locate the SQL statement that causes OOM. (Currently all versions of TiDB cannot locate SQL accurately. You still need to analyze whether OOM is caused by the SQL statement after you locate one.) - - For versions >= v3.0.0, grep “expensive_query” in `tidb.log`. That log message records SQL queries that timed out or exceed memory quota. 
- - For versions < v3.0.0, grep “memory exceeds quota” in `tidb.log` to locate SQL queries that exceed memory quota. + - For versions >= v3.0.0, grep "expensive_query" in `tidb.log`. That log message records SQL queries that timed out or exceed memory quota. + - For versions < v3.0.0, grep "memory exceeds quota" in `tidb.log` to locate SQL queries that exceed memory quota. > **Note:** > @@ -515,7 +515,7 @@ Check the specific cause for busy by viewing the monitor **Grafana** -> **TiKV** - Cause 3: If the data source is generated by the machine and not backed up by [Mydumper](https://docs.pingcap.com/tidb/v4.0/mydumper-overview), ensure it respects the constrains of the table. For example: - - `AUTO_INCREMENT` columns need to be positive, and do not contain the value “0”. + - `AUTO_INCREMENT` columns need to be positive, and do not contain the value "0". - UNIQUE and PRIMARY KEYs must not have duplicate entries. - Solution: See [Troubleshooting Solution](/tidb-lightning/tidb-lightning-faq.md#checksum-failed-checksum-mismatched-remote-vs-local). diff --git a/tispark-overview.md b/tispark-overview.md index 5cd7a88c36f18..390df9d8b21f9 100644 --- a/tispark-overview.md +++ b/tispark-overview.md @@ -37,9 +37,9 @@ The following table lists the compatibility information of the supported TiSpark | TiSpark version | TiDB, TiKV, and PD versions | Spark version | Scala version | | --------------- | -------------------- | ------------- | ------------- | -| 2.4.x-scala_2.11 | 5.x,4.x | 2.3.x,2.4.x | 2.11 | -| 2.4.x-scala_2.12 | 5.x,4.x | 2.4.x | 2.12 | -| 2.5.x | 5.x,4.x | 3.0.x,3.1.x | 2.12 | +| 2.4.x-scala_2.11 | 5.x, 4.x | 2.3.x, 2.4.x | 2.11 | +| 2.4.x-scala_2.12 | 5.x, 4.x | 2.4.x | 2.12 | +| 2.5.x | 5.x, 4.x | 3.0.x, 3.1.x | 2.12 | TiSpark runs in any Spark mode such as YARN, Mesos, and Standalone. @@ -364,7 +364,7 @@ Q: Can I mix Spark with TiKV? A: If TiDB and TiKV are overloaded and run critical online tasks, consider deploying TiSpark separately. You also need to consider using different NICs to ensure that OLTP's network resources are not compromised and affect online business. If the online business requirements are not high or the loading is not large enough, you can consider mixing TiSpark with TiKV deployment. -Q: What can I do if `warning:WARN ObjectStore:568 - Failed to get database` is returned when executing SQL statements using TiSpark? +Q: What can I do if `warning: WARN ObjectStore:568 - Failed to get database` is returned when executing SQL statements using TiSpark? A: You can ignore this warning. It occurs because Spark tries to load two nonexistent databases (`default` and `global_temp`) in its catalog. If you want to mute this warning, modify [log4j](https://github.com/pingcap/tidb-docker-compose/blob/master/tispark/conf/log4j.properties#L43) by adding `log4j.logger.org.apache.hadoop.hive.metastore.ObjectStore=ERROR` to the `log4j` file in `tispark/conf`. You can add the parameter to the `log4j` file of the `config` under Spark. If the suffix is `template`, you can use the `mv` command to change it to `properties`. @@ -378,6 +378,6 @@ A: By default, TiSpark searches for the Hive database by reading the Hive metada If you do not need this default behavior, do not configure the Hive metadata in hive-site. -Q: What can I do if `Error:java.io.InvalidClassException: com.pingcap.tikv.region.TiRegion; local class incompatible: stream classdesc serialVersionUID ...` is returned when TiSpark is executing a Spark task? 
+Q: What can I do if `Error: java.io.InvalidClassException: com.pingcap.tikv.region.TiRegion; local class incompatible: stream classdesc serialVersionUID ...` is returned when TiSpark is executing a Spark task? A: The error message shows a `serialVersionUID` conflict, which occurs because you have used `class` and `TiRegion` of different versions. Because `TiRegion` only exists in TiSpark, multiple versions of TiSpark packages might be used. To fix this error, you need to make sure the version of TiSpark dependency is consistent among all nodes in the cluster. diff --git a/tiup/tiup-cluster-topology-reference.md b/tiup/tiup-cluster-topology-reference.md index 6f2043039ac94..b5f1975663922 100644 --- a/tiup/tiup-cluster-topology-reference.md +++ b/tiup/tiup-cluster-topology-reference.md @@ -495,7 +495,7 @@ drainer_servers: - `deploy_dir`: Specifies the deployment directory. If it is not specified or specified as a relative directory, the directory is generated according to the `deploy_dir` directory configured in `global`. -- `data_dir`:Specifies the data directory. If it is not specified or specified as a relative directory, the directory is generated according to the `data_dir` directory configured in `global`. +- `data_dir`: Specifies the data directory. If it is not specified or specified as a relative directory, the directory is generated according to the `data_dir` directory configured in `global`. - `log_dir`: Specifies the log directory. If it is not specified or specified as a relative directory, the log is generated according to the `log_dir` directory configured in `global`. diff --git a/tiup/tiup-component-cluster-audit.md b/tiup/tiup-component-cluster-audit.md index abefeb0fbc031..44419f2bf9024 100644 --- a/tiup/tiup-component-cluster-audit.md +++ b/tiup/tiup-component-cluster-audit.md @@ -29,6 +29,6 @@ tiup cluster audit [audit-id] [flags] - If `[audit-id]` is not specified, a table with the following fields is output: - ID: the `audit-id` corresponding to the record - Time: the execution time of the command corresponding to the record - - Command:the command corresponding to the record + - Command: the command corresponding to the record [<< Back to the previous page - TiUP Cluster command list](/tiup/tiup-component-cluster.md#command-list) diff --git a/troubleshoot-hot-spot-issues.md b/troubleshoot-hot-spot-issues.md index d46b525c4acf2..a35f2ce280637 100644 --- a/troubleshoot-hot-spot-issues.md +++ b/troubleshoot-hot-spot-issues.md @@ -37,7 +37,7 @@ Value: rowID Index data has two types: the unique index and the non-unique index. -- For unique indexes, you can follow the coding rules above. +- For unique indexes, you can follow the coding rules above. - For non-unique indexes, a unique key cannot be constructed through this encoding, because the `tablePrefix{tableID}_indexPrefixSep{indexID}` of the same index is the same and the `ColumnsValue` of multiple rows might be the same. The encoding rule for non-unique indexes is as follows: ``` @@ -102,8 +102,8 @@ Statement example: {{< copyable "sql" >}} ```sql -CREATE TABLE:CREATE TABLE t (c int) SHARD_ROW_ID_BITS = 4; -ALTER TABLE:ALTER TABLE t SHARD_ROW_ID_BITS = 4; +CREATE TABLE: CREATE TABLE t (c int) SHARD_ROW_ID_BITS = 4; +ALTER TABLE: ALTER TABLE t SHARD_ROW_ID_BITS = 4; ``` The value of `SHARD_ROW_ID_BITS` can be dynamically modified. The modified value only takes effect for newly written data. 
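As a rough way to observe the effect described above, you can inspect the hidden `_tidb_rowid` column; this sketch assumes the table has no integer primary key, so the hidden column exists:

```sql
-- Sketch only: observe how row IDs are allocated once SHARD_ROW_ID_BITS is set.
CREATE TABLE t (c INT) SHARD_ROW_ID_BITS = 4;
INSERT INTO t VALUES (1), (2), (3);
SELECT c, _tidb_rowid FROM t;        -- the high bits of _tidb_rowid carry the shard prefix
ALTER TABLE t SHARD_ROW_ID_BITS = 8; -- affects only row IDs allocated after the change
```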
diff --git a/troubleshoot-lock-conflicts.md b/troubleshoot-lock-conflicts.md index 14bf3ef550597..b06428bb53cdf 100644 --- a/troubleshoot-lock-conflicts.md +++ b/troubleshoot-lock-conflicts.md @@ -4,7 +4,7 @@ summary: Learn to analyze and resolve lock conflicts in TiDB. --- # Troubleshoot Lock Conflicts - + TiDB supports complete distributed transactions. Starting from v3.0, TiDB provides optimistic transaction mode and pessimistic transaction mode. This document introduces how to troubleshoot and resolve lock conflicts in TiDB. ## Optimistic transaction mode @@ -12,20 +12,20 @@ TiDB supports complete distributed transactions. Starting from v3.0, TiDB provid Transactions in TiDB use two-phase commit (2PC) that includes the Prewrite phase and the Commit phase. The procedure is as follows: ![two-phase commit in the optimistic transaction mode](/media/troubleshooting-lock-pic-01.png) - + For details of Percolator and TiDB's algorithm of the transactions, see [Google's Percolator](https://ai.google/research/pubs/pub36726). ### Prewrite phase (optimistic) - + In the Prewrite phase, TiDB adds a primary lock and a secondary lock to target keys. If there are lots of requests for adding locks to the same target key, TiDB prints an error such as write conflict or `keyislocked` to the log and reports it to the client. Specifically, the following errors related to locks might occur in the Prewrite phase. - + #### Read-write conflict (optimistic) - + As the TiDB server receives a read request from a client, it gets a globally unique and increasing timestamp at the physical time as the start_ts of the current transaction. The transaction needs to read the latest data before start_ts, that is, the target key of the latest commit_ts that is smaller than start_ts. When the transaction finds that the target key is locked by another transaction, and it cannot know which phase the other transaction is in, a read-write conflict happens. The diagram is as follows: ![read-write conflict](/media/troubleshooting-lock-pic-04.png) -Txn0 completes the Prewrite phase and enters the Commit phase. At this time, Txn1 requests to read the same target key. Txn1 needs to read the target key of the latest commit_ts that is smaller than its start_ts. Because Txn1’s start_ts is larger than Txn0's lock_ts, Txn1 must wait for the target key's lock to be cleared, but it hasn’t been done. As a result, Txn1 cannot confirm whether Txn0 has been committed or not. Thus, a read-write conflict between Txn1 and Txn0 happens. +Txn0 completes the Prewrite phase and enters the Commit phase. At this time, Txn1 requests to read the same target key. Txn1 needs to read the target key of the latest commit_ts that is smaller than its start_ts. Because Txn1's start_ts is larger than Txn0's lock_ts, Txn1 must wait for the target key's lock to be cleared, but it hasn't been done. As a result, Txn1 cannot confirm whether Txn0 has been committed or not. Thus, a read-write conflict between Txn1 and Txn0 happens. You can detect the read-write conflict in your TiDB cluster by the following ways: @@ -61,11 +61,11 @@ You can detect the read-write conflict in your TiDB cluster by the following way This message indicates that a read-write conflict occurs in TiDB. The target key of the read request has been locked by another transaction. The locks are from the uncommitted optimistic transaction and the uncommitted pessimistic transaction after the prewrite phase. - * primary_lock:Indicates that the target key is locked by the primary lock. 
- * lock_version:The start_ts of the transaction that owns the lock. - * key:The target key that is locked. - * lock_ttl: The lock’s TTL (Time To Live) - * txn_size:The number of keys that are in the Region of the transaction that owns the lock. + * primary_lock: Indicates that the target key is locked by the primary lock. + * lock_version: The start_ts of the transaction that owns the lock. + * key: The target key that is locked. + * lock_ttl: The lock's TTL (Time To Live) + * txn_size: The number of keys that are in the Region of the transaction that owns the lock. Solutions: @@ -75,7 +75,7 @@ Solutions: ```sh ./tidb-ctl decoder -f table_row -k "t\x00\x00\x00\x00\x00\x00\x00\x1c_r\x00\x00\x00\x00\x00\x00\x00\xfa" - + table_id: -9223372036854775780 row_id: -9223372036854775558 ``` @@ -94,7 +94,7 @@ The `KV Errors` panel in the TiDB dashboard has two monitoring metrics `Lock Res Solutions: * If there is a small amount of txnLock in the monitoring, no need to pay too much attention. The backoff and retry is automatically performed in the background. The first time of the retry is 200 ms and the maximum time is 3000 ms for a single retry. -* If there are too many “txnLock” operations in the `KV Backoff OPS`, it is recommended that you analyze the reasons to the write conflicts from the application side. +* If there are too many "txnLock" operations in the `KV Backoff OPS`, it is recommended that you analyze the reasons to the write conflicts from the application side. * If your application is a write-write conflict scenario, it is strongly recommended to use the pessimistic transaction mode. ### Commit phase (optimistic) @@ -125,7 +125,7 @@ You can check whether there is any "LockNotFound" error in the following ways: ```log Error: KV error safe to retry restarts txn: Txn(Mvcc(TxnLockNotFound)) [ERROR [Kv.rs:708] ["KvService::batch_raft send response fail"] [err=RemoteStoped] ``` - + Solutions: * By checking the time interval between start_ts and commit_ts, you can confirm whether the commit time exceeds the TTL time. @@ -148,12 +148,12 @@ Before v3.0.8, TiDB uses the optimistic transaction mode by default. In this mod The commit phase of the pessimistic transaction mode and the optimistic transaction mode in TiDB has the same logic, and both commits are in the 2PC mode. The important adaptation of pessimistic transactions is DML execution. ![TiDB pessimistic transaction commit logic](/media/troubleshooting-lock-pic-05.png) - + The pessimistic transaction adds an `Acquire Pessimistic Lock` phase before 2PC. This phase includes the following steps: -1. (same as the optimistic transaction mode) Receive the `begin` request from the client, and the current timestamp is this transaction’s start_ts. +1. (same as the optimistic transaction mode) Receive the `begin` request from the client, and the current timestamp is this transaction's start_ts. 2. When the TiDB server receives an `update` request from the client, the TiDB server initiates a pessimistic lock request to the TiKV server, and the lock is persisted to the TiKV server. -3. (same as the optimistic transaction mode) When the client sends the commit request, TiDB starts to perform the 2PC similar to the optimistic transaction mode. +3. (same as the optimistic transaction mode) When the client sends the commit request, TiDB starts to perform the 2PC similar to the optimistic transaction mode. 
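A minimal sketch of that flow from a SQL client, with made-up table and column names:

```sql
-- Illustrative pessimistic transaction: each statement maps to a step above.
BEGIN PESSIMISTIC;                                        -- step 1: start_ts is obtained
UPDATE accounts SET balance = balance - 10 WHERE id = 1;  -- step 2: a pessimistic lock is acquired and persisted in TiKV
COMMIT;                                                   -- step 3: the regular two-phase commit runs
```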
![Pessimistic transactions in TiDB](/media/troubleshooting-lock-pic-06.png) @@ -190,7 +190,7 @@ Solutions: * If the above error occurs frequently, it is recommended to adjust from the application side. #### Lock wait timeout exceeded - + In the pessimistic transaction mode, transactions wait for locks of each other. The timeout for waiting a lock is defined by the [innodb_lock_wait_timeout](/pessimistic-transaction.md#behaviors) parameter of TiDB. This is the maximum wait lock time at the SQL statement level, which is the expectation of a SQL statement Locking, but the lock has never been acquired. After this time, TiDB will not try to lock again and will return the corresponding error message to the client. When a wait lock timeout occurs, the following error message will be returned to the client: @@ -198,7 +198,7 @@ When a wait lock timeout occurs, the following error message will be returned to ```log ERROR 1205 (HY000): Lock wait timeout exceeded; try restarting transaction ``` - + Solutions: * If the above error occurs frequently, it is recommended to adjust the application logic. diff --git a/troubleshoot-write-conflicts.md b/troubleshoot-write-conflicts.md index a6f7f58de2ee5..aa182e3609c2f 100644 --- a/troubleshoot-write-conflicts.md +++ b/troubleshoot-write-conflicts.md @@ -59,10 +59,10 @@ If many write conflicts exist in the cluster, it is recommended to find out the The explanation of the log above is as follows: * `[kv:9007]Write conflict`: indicates the write-write conflict. -* `txnStartTS=416617006551793665`:indicates the `start_ts` of the current transaction. You can use the `pd-ctl` tool to convert `start_ts` to physical time. +* `txnStartTS=416617006551793665`: indicates the `start_ts` of the current transaction. You can use the `pd-ctl` tool to convert `start_ts` to physical time. * `conflictStartTS=416617018650001409`: indicates the `start_ts` of the write conflict transaction. * `conflictCommitTS=416617023093080065`: indicates the `commit_ts` of the write conflict transaction. -* `key={tableID=47, indexID=1, indexValues={string, }}`:indicates the write conflict key. `tableID` indicates the ID of the write conflict table. `indexID` indicates the ID of write conflict index. If the write conflict key is a record key, the log prints `handle=x`, indicating which record(row) has a conflict. `indexValues` indicates the value of the index that has a conflict. +* `key={tableID=47, indexID=1, indexValues={string, }}`: indicates the write conflict key. `tableID` indicates the ID of the write conflict table. `indexID` indicates the ID of write conflict index. If the write conflict key is a record key, the log prints `handle=x`, indicating which record(row) has a conflict. `indexValues` indicates the value of the index that has a conflict. * `primary={tableID=47, indexID=1, indexValues={string, }}`: indicates the primary key information of the current transaction. You can use the `pd-ctl` tool to convert the timestamp to readable time: