From 35bde56bee3c494085a796ffcec9d0f75a19d007 Mon Sep 17 00:00:00 2001 From: qiancai Date: Wed, 7 Dec 2022 17:05:02 +0800 Subject: [PATCH 01/83] add v6.5.0 release notes --- releases/release-6.5.0.md | 443 ++++++++++++++++++++++++++++++++++++++ 1 file changed, 443 insertions(+) create mode 100644 releases/release-6.5.0.md diff --git a/releases/release-6.5.0.md b/releases/release-6.5.0.md new file mode 100644 index 0000000000000..da826a04aef2d --- /dev/null +++ b/releases/release-6.5.0.md @@ -0,0 +1,443 @@ +--- +title: TiDB 6.5.0 Release Notes +--- + +# TiDB 6.5.0 Release Notes + +Release date: xx xx, 2022 + +TiDB version: 6.5.0 + +Quick access: [Quick start](https://docs.pingcap.com/tidb/v6.5/quick-start-with-tidb) | [Production deployment](https://docs.pingcap.com/tidb/v6.5/production-deployment-using-tiup) | [Installation packages](https://www.pingcap.com/download/?version=v6.5.0#version-list) + +TiDB 6.5.0 is a Long-Term Support Release (LTS). + +相比于前一个 LTS (即 6.1.0 版本),6.5.0 版本包含 [6.2.0-DMR](/releases/release-6.2.0.md)、[6.3.0-DMR](/releases/release-6.3.0.md)、[6.4.0-DMR](/releases/release-6.4.0.md) 中已发布的新功能、提升改进和错误修复,并引入了以下关键特性: + +- 优化器代价模型 V2 GA +- TiDB 全局内存控制 GA +- 全局 hint 干预视图内查询的计划生成 +- 满足密码合规审计需求 [密码管理](/password-management.md) +- TiDB 添加索引的速度提升为原来的 10 倍 +- Flashback Cluster 功能兼容 TiCDC 和 PiTR +- JSON 抽取函数下推至 TiFlash + +## New features + +### SQL + +* TiDB 添加索引的性能提升为原来的 10 倍 [#35983](https://github.com/pingcap/tidb/issues/35983) @[benjamin2037](https://github.com/benjamin2037) @[tangenta](https://github.com/tangenta) **tw@Oreoxmt** + + TiDB v6.3.0 引入了[添加索引加速](/system-variables.md#tidb_ddl_enable_fast_reorg-从-v630-版本开始引入)作为实验特性,提升了添加索引回填过程的速度。该功能在 v6.5.0 正式 GA 并默认打开,预期大表添加索引的性能提升约为原来的 10 倍。添加索引加速适用于单条 SQL 语句串行添加索引的场景,在多条 SQL 并行添加索引时仅对其中一条添加索引的 SQL 语句生效。 + +* 提供轻量级元数据锁,提升 DDL 变更过程 DML 的成功率 [#37275](https://github.com/pingcap/tidb/issues/37275) @[wjhuang2016](https://github.com/wjhuang2016) **tw@Oreoxmt** + + TiDB v6.3.0 引入了[元数据锁](/metadata-lock.md)作为实验特性,通过协调表元数据变更过程中 DML 语句和 DDL 语句的优先级,让执行中的 DDL 语句等待持有旧版本元数据的 DML 语句提交,尽可能避免 DML 语句的 `Information schema is changed` 错误。该功能在 v6.5.0 正式 GA 并默认打开,适用于各类 DDL 变更场景。 + + 更多信息,请参考[用户文档](/metadata-lock.md)。 + +* 支持通过 `FLASHBACK CLUSTER TO TIMESTAMP` 命令将集群快速回退到特定的时间点 [#37197](https://github.com/pingcap/tidb/issues/37197) [#13303](https://github.com/tikv/tikv/issues/13303) @[Defined2014](https://github.com/Defined2014) @[bb7133](https://github.com/bb7133) @[JmPotato](https://github.com/JmPotato) @[Connor1996](https://github.com/Connor1996) @[HuSharp](https://github.com/HuSharp) @[CalvinNeo](https://github.com/CalvinNeo) **tw@Oreoxmt** + + TiDB v6.4.0 引入了 [`FLASHBACK CLUSTER TO TIMESTAMP`](/sql-statements/sql-statement-flashback-to-timestamp.md) 语句作为实验特性,支持在 Garbage Collection (GC) life time 内快速回退整个集群到指定的时间点。该功能在 v6.5.0 正式 GA,适用于快速撤消 DML 误操作、支持集群分钟级别的快速回退、支持在时间线上多次回退以确定特定数据更改发生的时间,并兼容 PITR 和 TiCDC 等工具。 + + 更多信息,请参考[用户文档](/sql-statements/sql-statement-flashback-to-timestamp.md)。 + +* 完整支持包含 `INSERT`、`REPLACE`、`UPDATE` 和 `DELETE` 的非事务 DML 语句 [#33485](https://github.com/pingcap/tidb/issues/33485) @[ekexium](https://github.com/ekexium) **tw@Oreoxmt** + + 在大批量的数据处理场景,单一大事务 SQL 处理可能对集群稳定性和性能造成影响。非事务 DML 语句将一个 DML 语句拆成多个 SQL 语句在内部执行。拆分后的语句将牺牲事务原子性和隔离性,但是对于集群的稳定性有很大提升。TiDB 从 v6.1.0 开始支持非事务 `DELETE` 语句,v6.5.0 新增对非事务 `INSERT`、`REPLACE` 和 `UPDATE` 语句的支持。 + + 更多信息,请参考[非事务 DML 语句](/non-transactional-dml.md) 和 [BATCH](/sql-statements/sql-statement-batch.md)。 + +* 支持 Time to live (TTL)(实验特性)[#39262](https://github.com/pingcap/tidb/issues/39262) 
@[lcwangchao](https://github.com/lcwangchao) **tw@ran-huang** + + TTL 提供了行级别的生命周期控制策略。在 TiDB 中,设置了 TTL 属性的表会根据配置自动检查并删除过期的行数据。TTL 设计的目标是在不影响在线读写负载的前提下,帮助用户周期性且及时地清理不需要的数据。 + + 更多信息请参考[Time to live(TTL)](/time-to-live.md) + +* TiFlash 支持 `INSERT SELECT` 语句(实验功能) [#37515](https://github.com/pingcap/tidb/issues/37515) @[gengliqi](https://github.com/gengliqi) **tw@qiancai** + + 用户可以指定 TiFlash 执行 `INSERT SELECT` 中的 `SELECT` 子句(分析查询),并将结果在此事务中写回到 TIDB 表中: + + ```sql + insert into t2 select mod(x,y) from t1; + ``` + + 用户可以方便地保存(物化)TiFlash 的计算结果以供下游步骤使用,可以起到结果缓存(物化)的效果。适用于以下场景:使用 TiFlash 做复杂分析,需重复使用计算结果或响应高并发的在线请求,计算性质本身聚合性好(相对输入数据,计算得出的结果集比较小,推荐 100MB 以内)。作为写入对象的 结果表本身没有特别限制,可以任意选择是否添加 TiFlash 副本。 + + 更多信息,请参考[用户文档](/tiflash/tiflash-results-materialization.md)。 + +### Security + +* 支持密码复杂度策略 [#38928](https://github.com/pingcap/tidb/issues/38928) @[CbcWestwolf](https://github.com/CbcWestwolf) **tw@ran-huang** + + TiDB 启用密码复杂度策略功能后,在为用户设置密码时,会检查密码长度、大写/小写字符个数、数字字符个数、特殊字符个数、密码字典、是否与用户名相同,以此确保为用户设置一个安全的密码。 + + TiDB 支持密码强度检查函数 `VALIDATE_PASSWORD_STRENGTH()`,用于判定一个给定密码的强度。 + + 更多信息,请参考[用户文档](/password-management.md#密码复杂度策略)。 + +* 支持密码过期策略 [#38936](https://github.com/pingcap/tidb/issues/38936) @[CbcWestwolf](https://github.com/CbcWestwolf) **tw@ran-huang** + + TiDB 支持密码过期策略,包括:手动密码过期、全局级别自动密码过期、账户级别自动密码过期。启用密码过期策略功能后,用户必须定期修改密码,防止密码长期使用带来的泄露风险,提高密码安全性。 + + 更多信息,请参考[用户文档](/password-management.md#密码过期策略) + +* 支持密码重用策略 [#38937](https://github.com/pingcap/tidb/issues/38937) @[keeplearning20221](https://github.com/keeplearning20221) **tw@ran-huang** + + TiDB 支持密码重用策略,包括:全局级别密码重用策略、账户级别密码重用策略。启用密码重用策略功能后,用户不允许使用最近一段时间使用过的密码,不允许使用最近几次使用过的密码,以此降低密码的重复使用带来的泄漏风险,提高密码安全性。 + + 更多信息,请参考[用户文档](/password-management.md#密码重用策略) + +* 支持密码连续错误限制登录策略 [#38938](https://github.com/pingcap/tidb/issues/38938) @[lastincisor](https://github.com/lastincisor) **tw@ran-huang** + + TiDB 启用密码连续错误限制登录策略功能后,当用户登录时密码连续多次错误,此时该账户将被临时锁定,达到锁定时间后将自动解锁。 + + 更多信息,请参考[用户文档](/password-management.md#密码连续错误限制登录策略) + +### Observability + +* TiDB Dashboard 在 Kubernetes 环境支持独立 Pod 部署 [#1447](https://github.com/pingcap/tidb-dashboard/issues/1447) @[SabaPing](https://github.com/SabaPing) **tw@shichun-0415 + + TiDB v6.5.0 且 TiDB Operator v1.4.0 之后,在 Kubernetes 上支持将 TiDB Dashboard 作为独立的 Pod 部署。在 TiDB Operator 环境,可直接访问该 Pod 的 IP 来打开 TiDB Dashboard。 + + 独立部署 TiDB Dashboard 后,用户将获得这些收益:1. 该组件的计算将不会再对 PD 节点有压力,更好的保障集群运行;2. 如果 PD 节点因异常不可访问,也还可以继续使用 Dashboard 进行集群诊断;3. 
在开放 TiDB Dashboard 到外网时,不用担心 PD 中的特权端口的权限问题,降低集群的安全风险。 + + 具体信息,参考 [TiDB Operator 部署独立的 TiDB Dashboard](https://docs.pingcap.com/zh/tidb-in-kubernetes/dev/get-started#部署独立的-tidb-dashboard) + +### Performance + +* 进一步增强索引合并[INDEX MERGE](/glossary.md#index-merge)功能 [#39333](https://github.com/pingcap/tidb/issues/39333) @[guo-shaoge](https://github.com/guo-shaoge) @[@time-and-fate](https://github.com/time-and-fate) @[hailanwhu](https://github.com/hailanwhu) **tw@TomShawn** + + 新增了对在 WHERE 语句中使用 `AND` 联结的过滤条件的索引合并能力(v6.5 之前的版本只支持 `OR` 连接词的情况),TiDB 的索引合并至此可以覆盖更一般的查询过滤条件组合,不再限定于并集(`OR`)关系。当前版本仅支持优化器自动选择 “OR” 条件下的索引合并,用户须使用 [`USE_INDEX_MERGE`](/optimizer-hints.md#use_index_merget1_name-idx1_name--idx2_name-) Hint 来开启对于 AND 联结的索引合并。 + + 关于“索引合并”功能的介绍请参阅 [v5.4 release note](/release-5.4.0#性能), 以及优化器相关的[用户文档](/explain-index-merge.md) + +* 新增支持下推[JSON 函数](/tiflash/tiflash-supported-pushdown-calculations.md) 至 TiFlash [#39458](https://github.com/pingcap/tidb/issues/39458) @[yibin87](https://github.com/yibin87) **tw@qiancai** + + * `->` + * `->>` + * `JSON_EXTRACT()` + + JSON 格式为应用设计提供了更灵活的建模方式,目前越来越多的应用采用 JSON 格式进行数据交换和数据存储。 把 JSON 函数下推至 TiFlash 可以加速对 JSON 类型数据的分析效率,拓展 TiDB 实时分析的应用场景。TiDB 将持续完善,在未来版本支持更多的 JSON 函数下推至 TiFlash。 + +* 新增支持下推[字符串函数](/tiflash/tiflash-supported-pushdown-calculations.md) 至 TiFlash [#6115](https://github.com/pingcap/tiflash/issues/6115) @[xzhangxian1008](https://github.com/xzhangxian1008) **tw@qiancai** + + * `regexp_like` + * `regexp_instr` + * `regexp_substr` + +* 新增全局 Hint 干预[视图](/views.md)内查询的计划生成 [#37887](https://github.com/pingcap/tidb/issues/37887) @[Reminiscent](https://github.com/Reminiscent) **tw@Oreoxmt** + + 当 SQL 语句中包含对视图的访问时,部分情况下需要用 Hint 对视图内查询的执行计划进行干预,以获得最佳性能。在 v6.5.0 中,TiDB 允许针对视图内的查询块添加全局 Hint,使查询中定义的 Hint 能够在视图内部生效。全局 Hint 由[查询块命名](/optimizer-hints.md#第-1-步使用-qb_name-hint-重命名视图内的查询块)和 [Hint 引用](/optimizer-hints.md#第-2-步添加实际需要的-hint)两部分组成。该特性为包含复杂视图嵌套的 SQL 提供 Hint 的注入手段,增强了执行计划控制能力,进而稳定复杂 SQL 的执行性能。 + + 更多信息,请参考[用户文档](/optimizer-hints.md#全局生效的-Hint)。 + +* [分区表](/partitioned-table.md)的排序操作下推至 TiKV [#26166](https://github.com/pingcap/tidb/issues/26166) @[winoros](https://github.com/winoros) **tw@qiancai** + + [分区表](/partitioned-table.md)在 v6.1.0 正式 GA, TiDB 持续提升分区表相关的性能。 在 v6.5.0 中, 排序操作如 `ORDER BY`, `LIMIT` 能够下推至 TiKV 进行计算和过滤,降低网络 I/O 的开销,提升了使用分区表时 SQL 的性能。 + +* 优化器代价模型 Cost Model Version 2 GA [#35240](https://github.com/pingcap/tidb/issues/35240) @[qw4990](https://github.com/qw4990) **tw@Oreoxmt** + + TiDB v6.2.0 引入了代价模型 [Cost Model Version 2](/cost-model.md#cost-model-version-2) 作为实验特性,通过更准确的代价估算方式,有利于最优执行计划的选择。尤其在部署了 TiFlash 的情况下,Cost Model Version 2 自动选择合理的存储引擎,避免过多的人工介入。经过一段时间真实场景的测试,这个模型在 v6.5.0 正式 GA。新创建的集群将默认使用 Cost Model Version 2。对于升级到 v6.5.0 的集群,由于 Cost Model Version 2 可能会改变原有的执行计划,在经过充分的性能测试之后,你可以通过设置变量 [`tidb_cost_model_version = 2`](/system-variables.md#tidb_cost_model_version-从-v620-版本开始引入) 使用新的代价模型。 + + Cost Model Version 2 的 GA,大幅提升了 TiDB 优化器的整体能力,并切实地向更加强大的 HTAP 数据库演进。 + + 更多信息,请参考[用户文档](/cost-model.md#cost-model-version-2)。 + +* TiFlash 对获取表行数的操作进行针对优化 [#37165](https://github.com/pingcap/tidb/issues/37165) @[elsa0520](https://github.com/elsa0520) + + 在数据分析的场景中,通过无过滤条件的 `count(*)` 获取表的实际行数是一个常见操作。 TiFlash 在新版本中优化了 `count(*)` 的改写,自动选择带有“非空”属性的数据类型最短的列进行计数, 可以有效降低 TiFlash 上发生的 I/O 数量,进而提升获取表行数的执行效率。 + +### Transaction + +* 功能标题 [#issue号](链接) @[贡献者 GitHub ID](链接) + + 功能描述(需要包含这个功能是什么、在什么场景下对用户有什么价值、怎么用) + + 更多信息,请参考[用户文档](链接)。 + +### Stability + +* 功能标题 [#issue号](链接) @[贡献者 GitHub ID](链接) + + 功能描述(需要包含这个功能是什么、在什么场景下对用户有什么价值、怎么用) + + 
更多信息,请参考[用户文档](链接)。 + +* TiDB 全局内存控制 GA [#37816](https://github.com/pingcap/tidb/issues/37816) @[wshwsh12](https://github.com/wshwsh12) **tw@TomShawn** + + 在 v6.5.0 中,TiDB 中主要的内存消耗都已经能被全局内存控制跟踪到, 当全局内存消耗接近 [`tidb_server_memory_limit`](/system-variables.md#tidb_server_memory_limit-从-v640-版本开始引入) 所定义的预设值时,TiDB 会尝试 GC 或取消 SQL 操作等手段限制内存使用,保证 TiDB 的稳定性。 + + 需要注意的是, 会话中事务所消耗的内存 (由配置项 [`txn-total-size-limit`](/tidb-configuration-file.md#txn-total-size-limit) 设置最大值) 如今会被内存管理模块跟踪: 当单个会话的内存消耗达到系统变量 [`tidb_mem_quota_query`](/system-variables.md#tidbmemquotaquery) 所定义的阀值时,将会触发系统变量 [tidb-mem-oom-action](/system-variables.md#tidbmemoomaction-span-classversion-mark从-v610-版本开始引入span) 所定义的行为 (默认为 `CANCEL` ,即取消操作)。 为了保证行为向前兼容,当配置 [`txn-total-size-limit`](/tidb-configuration-file.md#txn-total-size-limit) 为非默认值时, TiDB 仍旧会保证事务使用到这么大的内存而不被取消。 + + 对于运行 v6.5.0 及以上版本的客户,建议移除配置项 [`txn-total-size-limit`](/tidb-configuration-file.md#txn-total-size-limit),取消对事务内存做单独的限制,转而由系统变量 [`tidb_mem_quota_query`](/system-variables.md#tidbmemquotaquery) 和 [`tidb_server_memory_limit`](/system-variables.md#tidb_server_memory_limit-从-v640-版本开始引入) 对全局内存进行管理,从而提高内存的使用效率。 + + 更多信息,请参考[用户文档](/configure-memory-usage.md)。 + +### Ease of use + +* 完善 EXPLAIN ANALYZE 输出的 TiFlash 的 TableFullScan 算子的统计信息 [#5926](https://github.com/pingcap/tiflash/issues/5926) @[hongyunyan](https://github.com/hongyunyan) **tw@qiancai** + + [`EXPLAIN ANALYZE`] 语句可以输出执行计划及运行时的统计信息。现有版本的统计信息中,TiFlash 的 TableFullScan 算子统计信息不完善。v6.5.0 版本对 TableFullScan 算子的统计信息进行完善,补充了 dmfile 相关的执行信息,可以更加清晰的展示 TiFlash 的数据扫描状态信息,方便进行性能分析。 + + 更多信息,请参考[用户文档](sql-statements/sql-statement-explain-analyze.md)。 + +* 执行计划支持 JSON 格式的打印 [#39261](https://github.com/pingcap/tidb/issues/39261) @[fzzf678](https://github.com/fzzf678) **tw@ran-huang** + + 在新版本中,TiDB 扩展了执行计划的打印格式。 通过 `explain format = tidb_json ` 能够将 SQL 的执行计划以 JSON 格式输出。借助这个能力,SQL 调试工具和诊断工具能够更方便准确地解读执行计划,进而提升 SQL 诊断调优的易用性。 + + 更多信息,请参考[用户文档](/sql-statements/sql-statement-explain.md)。 + +### MySQL compatibility + +* 支持高性能、全局单调递增的 `AUTO_INCREMENT` 列属性 [#38442](https://github.com/pingcap/tidb/issues/38442) @[tiancaiamao](https://github.com/tiancaiamao) **tw@Oreoxmt** + + TiDB v6.4.0 引入了 `AUTO_INCREMENT` 的 MySQL 兼容模式作为实验特性,通过中心化分配自增 ID,实现了自增 ID 在所有 TiDB 实例上单调递增。使用该特性能够更容易地实现查询结果按自增 ID 排序。该功能在 v6.5.0 正式 GA。使用该功能的单表写入 TPS 预期超过 2 万,并支持通过弹性扩容提升单表和整个集群的写入吞吐。要使用 MySQL 兼容模式,你需要在建表时将 `AUTO_ID_CACHE` 设置为 `1`。 + + ```sql + CREATE TABLE t(a int AUTO_INCREMENT key) AUTO_ID_CACHE 1; + ``` + + 更多信息,请参考[用户文档](/auto-increment.md#mysql-兼容模式)。 + +### Data migration + +* 支持导出和导入压缩后的 CSV、SQL 文件 [#38514](https://github.com/pingcap/tidb/issues/38514) @[lichunzhu](https://github.com/lichunzhu) **tw@hfxsd** + + Dumpling 支持将数据导出为 SQL、CSV 的压缩文件,支持 gzip/snappy/zstd 三种压缩格式。Lightning 支持导入压缩后的 SQL、CSV 文件,支持gzip/snappy/zstd 三种压缩格式。 + + 之前用户导出数据或者导入数据都需要提供较大的存储空间,用于存储导出或者即将导入的非压缩后的 csv 、sql文件,导致存储成本增加。该功能发布后,通过压缩存储空间,可以大大降低用户的存储成本。 + + 更多信息,请参考[用户文档](https://github.com/pingcap/tidb/issues/38514)。 + +* 优化了 binlog 解析能力 [#无](无) @[gmhdbjd](https://github.com/GMHDBJD) **tw@hfxsd** + + 可将不在迁移任务里的库、表对象的 binlog event 过滤掉不做解析,从而提升解析效率和稳定性。该策略在 6.5 版本默认生效,用户无需额外操作。 + + 原先用户仅迁移少数几张表,也需要解析上游整个 binlog 文件,即仍需要解析该 binlog 文件中不需要迁移的表的 binlog event,效率会比较低,同时如果不在迁移任务里的库表的 binlog event 不支持解析,还会导致任务失败。通过只解析在迁移任务里的库表对象的 binlog event 可以大大提升 binlog 解析效率,提升任务稳定性。 + +* Lightning 支持 disk quota 特性 GA,可避免 Lightning 任务写满本地磁盘 [#无](无) @[buchuitoudegou](https://github.com/buchuitoudegou) **tw@hfxsd** + + 你可以为 TiDB Lightning 配置磁盘配额 (disk quota)。当磁盘配额不足时,TiDB Lightning 会暂停读取源数据以及写入临时文件的过程,优先将已经完成排序的 
key-value 写入到 TiKV,TiDB Lightning 删除本地临时文件后,再继续导入过程。 + + 有这个功能之前,TiDB Lightning 在使用物理模式导入数据时,会在本地磁盘创建大量的临时文件,用来对原始数据进行编码、排序、分割。当用户本地磁盘空间不足时,TiDB Lightning 会由于写入文件失败而报错退出。 + + 更多信息,请参考[用户文档]( https://docs.pingcap.com/tidb/v6.4/tidb-lightning-physical-import-mode-usage#configure-disk-quota-new-in-v620)。 + +* GA DM 增量数据校验的功能 [#4426](https://github.com/pingcap/tiflow/issues/4426) @[D3Hunter](https://github.com/D3Hunter) **tw@hfxsd** + + 在将增量数据从上游迁移到下游数据库的过程中,数据的流转有小概率导致错误或者丢失的情况。对于需要依赖于强数据一致的场景,如信贷、证券等业务,你可以在数据迁移完成之后对数据进行全量校验,确保数据的一致性。然而,在某些增量复制的业务场景下,上游和下游的写入是持续的、不会中断的,因为上下游的数据在不断变化,导致用户难以对表里面的全部数据进行一致性校验。 + + 过去,需要中断业务,做全量数据校验,会影响用户业务。现在推出该功能后,在一些不可中断的业务场景,无需中断业务,通过该功能就可以实现增量数据校验。 + + 更多信息,请参考[用户文档]( https://docs.pingcap.com/tidb/v6.4/dm-continuous-data-validation)。 + +### TiDB data share subscription + +* TiCDC 支持输出 storage sink [tiflow#6797](https://github.com/pingcap/tiflow/issues/6797) @[zhaoxinyu](https://github.com/zhaoxinyu) **tw@shichun-0415** + + TiCDC 支持将 changed log 输出到 S3/Azure Blob Storage/NFS,以及兼容 S3 协议的存储服务中。Cloud Storage 价格便宜,使用方便。对于不希望使用 Kafka 的用户,可以选择使用 storage sink。 TiCDC 将 changed log 保存到文件,然后发送到 storage 中;消费程序定时从 storage 读取新产生的 changed log files 进行处理。 + + Storage sink 支持 changed log 格式位 canal-json/csv,此外 changed log 从 TiCDC 同步到 storage 的延迟可以达到 xx,支持更多信息,请参考[用户文档](https://github.com/pingcap/docs-cn/pull/12151/files)。 + +* TiCDC 性能提升 **tw@shichun-0415 + + 在 TiDB 场景测试验证中, TiCDC 的性能得到了比较大提升,单台 TiCDC 节点能处理的最大行变更吞吐可以达到 30K rows/s,同步延迟降低到 10s,即使在常规的 TiKV/TiCDC 滚动升级场景同步延迟也小于 30s;在容灾场景测试中,打开 TiCDC Redo log 和 Sync point 后,吞吐 xx rows/s 时,容灾复制延迟可以保持在 x s。 + +### 部署及运维 + +* 功能标题 [#issue号](链接) @[贡献者 GitHub ID](链接) + + 功能描述(需要包含这个功能是什么、在什么场景下对用户有什么价值、怎么用) + + 更多信息,请参考[用户文档](链接)。 + +### Backup and restore + +* TiDB 快照备份支持断点续传 [#38647](https://github.com/pingcap/tidb/issues/38647) @[Leavrth](https://github.com/Leavrth) **tw@shichun-0415 + + TiDB 快照备份功能支持断点续传。当 BR 遇到对可恢复的错误时会进行重试,但是超过固定重试次数之后会备份退出。断点续传功能允许对持续更长时间的可恢复故障进行重试恢复,比如几十分钟的的网络故障。 + + 需要注意的是,如果你没有在 BR 退出后一个小时内完成故障恢复,那么还未备份的快照数据可能会被 GC 机制回收,而造成备份失败。更多信息,请参考[用户文档](/br/br-checkpoint.md)。 + +* PITR 性能大幅提升提升 **tw@shichun-0415 + + PITR 恢复的日志恢复阶单台 TiKV 的恢复速度可以达到 xx MB/s,提升了 x 倍,恢复速度可扩展,有效地降低容灾场景的 RTO 指标;容灾场景的 RPO 优化到 5 min,在常规的集群运维,如滚动升级,单 TiKV 故障等场景下,可以达到 RPO = 5 min 目标。 + +* TiKV-BR 工具 GA, 支持 RawKV 的备份和恢复 [#67](https://github.com/tikv/migration/issues/67) @[pingyu](https://github.com/pingyu) @[haojinming](https://github.com/haojinming) **tw@shichun-0415** + + TiKV-BR 是一个 TiKV 集群的备份和恢复工具。TiKV 可以独立于 TiDB,与 PD 构成 KV 数据库,此时的产品形态为 RawKV。TiKV-BR 工具支持对使用 RawKV 的产品进行备份和恢复,也支持将 TiKV 集群中的数据从 `API V1` 备份为 `API V2` 数据, 以实现 TiKV 集群 [`api-version`](https://docs.pingcap.com/zh/tidb/v6.4/tikv-configuration-file#api-version-%E4%BB%8E-v610-%E7%89%88%E6%9C%AC%E5%BC%80%E5%A7%8B%E5%BC%95%E5%85%A5) 的升级。 + + 更多信息,请参考[用户文档]( https://tikv.org/docs/dev/concepts/explore-tikv-features/backup-restore/ )。 + +## Compatibility changes + +### System variables + +| 变量名 | 修改类型(包括新增/修改/删除) | 描述 | +|--------|------------------------------|------| +| [`tidb_cost_model_version`](/system-variables.md#tidb_cost_model_version-从-v620-版本开始引入) | 修改 | 该变量默认值从 `1` 修改为 `2`,表示默认使用 Cost Model Version 2 进行索引选择和算子选择。 | +| [`tidb_enable_metadata_lock`](/system-variables.md#tidb_enable_metadata_lock-从-v630-版本开始引入) | 修改 | 该变量默认值从 `OFF` 修改为 `ON`,表示默认开启元数据锁。 | +| [`tidb_ddl_enable_fast_reorg`](/system-variables.md#tidb_ddl_enable_fast_reorg-从-v630-版本开始引入) | 修改 | 该变量默认值从 `OFF` 修改为 `ON`,表示默认开启创建索引加速功能。 | +| 
[`tidb_server_memory_limit`](/system-variables.md#tidb_server_memory_limit-从-v640-版本开始引入) | 修改 | 该变量默认值由 `0` 修改为 `80%`,表示默认将 TiDB 实例的内存限制设为总内存的 80%。| +| [`default_password_lifetime`](/system-variables.md#default_password_lifetime-从-v650-版本开始引入) | 新增 | 用于设置全局自动密码过期策略,要求用户定期修改密码。默认值为 `0` ,表示禁用全局自动密码过期策略 | +| [`disconnect_on_expired_password`](/system-variables.md#disconnect_on_expired_password-从-v650-版本开始引入) | 新增 | 该变量是一个只读变量,用来显示 TiDB 是否会直接断开密码已过期用户的连接 | +| [`password_history`](/system-variables.md#password_history-从-v650-版本开始引入) | 新增 | 基于密码更改次数的密码重用策略,不允许用户重复使用最近设置次数内使用过的密码。默认值为 `0`,表示禁用基于密码更改次数的密码重用策略 | +| [`password_reuse_interval`](/system-variables.md#password_reuse_interval-从-v650-版本开始引入) | 新增 | 基于经过时间限制的密码重用策略,不允许用户重复使用最近设置天数内使用过的密码。默认值为 `0`,表示禁用基于密码更改次数的密码重用策略 | +| [`tidb_cdc_write_source`](/system-variables.md#tidb_cdc_write_source-从-v650-版本开始引入) | 新增 | 当变量非 `0` 时,该 SESSION 写入的数据将被视为是由 TiCDC 写入的。这个变量仅由 TiCDC 设置,任何时候都不应该手动调整该变量。 | +| [`tidb_index_merge_intersection_concurrency`](/system-variables.md#tidb_index_merge_intersection_concurrency-从-v650-版本开始引入) | 新增 | 这个变量用来设置索引合并进行交集操作时的最大并发度,仅在以动态裁剪模式访问分区表时有效。 | +| [`tidb_mem_quota_query`](/system-variables.md#tidb_mem_quota_query) | 修改 | 在 v6.5.0 之前的版本中,该变量用来设置单条查询的内存使用限制。在 v6.5.0 及之后的版本中,该变量用来设置单个会话整体的内存使用限制。 | +| [`tidb_source_id`](/system-variables.md#tidb_source_id-从-v650-版本开始引入) | 新增 | 设置在[双向复制](/ticdc/ticdc-bidirectional-replication.md)系统内不同集群的 ID。| +| [`tidb_ttl_delete_batch_size`](/system-variables.md#tidb_ttl_delete_batch_size-从-v650-版本开始引入) | 新增 | 这个变量用于设置 TTL 任务中单个删除事务中允许删除的最大行数。| +| [`tidb_ttl_delete_rate_limit`](/system-variables.md#tidb_ttl_delete_rate_limit-从-v650-版本开始引入) | 新增 | 这个变量用来对每个 TiDB 节点的 TTL 删除操作进行限流。其值代表了在 TTL 任务中单个节点每秒允许 `DELETE` 语句执行的最大次数。当此变量设置为 `0` 时,则表示不做限制。| +| [`tidb_ttl_delete_worker_count`](/system-variables.md#tidb_ttl_delete_worker_count-从-v650-版本开始引入) | 新增 | 这个变量用于设置每个 TiDB 节点上 TTL 删除任务的最大并发数。| +| [`tidb_ttl_job_enable`](/system-variables.md#tidb_ttl_job_enable-从-v650-版本开始引入) | 新增 | 这个变量用于控制是否启动 TTL 后台清理任务。如果设置为 `OFF`,所有具有 TTL 属性的表会自动停止清理过期数据。| +| [`tidb_ttl_job_run_interval`](/system-variables.md#tidb_ttl_job_run_interval-从-v650-版本开始引入) | 新增 | 这个变量用于控制 TTL 后台清理任务的调度周期。比如,如果当前值设置成了 `1h0m0s`,则代表每张设置了 TTL 属性的表会每小时清理一次过期数据。| +| [`tidb_ttl_job_schedule_window_start_time`](/system-variables.md#tidb_ttl_job_schedule_window_start_time-从-v650-版本开始引入) | 新增 | 这个变量用于控制 TTL 后台清理任务的调度窗口的起始时间。请谨慎调整此参数,过小的窗口有可能会造成过期数据的清理无法完成。| +| [`tidb_ttl_job_schedule_window_end_time`](/system-variables.md#tidb_ttl_job_schedule_window_end_time-从-v650-版本开始引入) | 新增 | 这个变量用于控制 TTL 后台清理任务的调度窗口的结束时间。请谨慎调整此参数,过小的窗口有可能会造成过期数据的清理无法完成。| +| [`tidb_ttl_scan_batch_size`](/system-variables.md#tidb_ttl_scan_batch_size-从-v650-版本开始引入) | 新增 | 这个变量用于设置 TTL 任务中用来扫描过期数据的每个 `SELECT` 语句的 `LIMIT` 的值。| +| [`tidb_ttl_scan_worker_count`](/system-variables.md#tidb_ttl_scan_worker_count-从-v650-版本开始引入) | 新增 | 这个变量用于设置每个 TiDB 节点 TTL 扫描任务的最大并发数。| + +| [`validate_password.check_user_name`](/system-variables.md#validate_passwordcheck_user_name-从-v650-版本开始引入) | 新增 | 密码复杂度策略检查项,设置的用户密码不允许密码与当前会话账户的用户名部分相同。只有 [`validate_password.enable`](/system-variables.md#validate_passwordenable-从-v650-版本开始引入) 开启时,该变量才生效。默认值为 `ON` | +| [`validate_password.dictionary`](/system-variables.md#validate_passworddictionary-从-v650-版本开始引入) | 新增 | 密码复杂度策略检查项,密码字典功能,设置的用户密码不允许包含字典中的单词。只有 [`validate_password.enable`](/system-variables.md#validate_passwordenable-从-v650-版本开始引入) 开启且 [validate_password.policy](/system-variables.md#validate_passwordpolicy-从-v650-版本开始引入) 设置为 `2` (STRONG) 
时,该变量才生效。默认值为空 | +| [`validate_password.enable`](/system-variables.md#validate_passwordenable-从-v650-版本开始引入) | 新增 | 密码复杂度策略检查的开关,设置为 `ON` 后,TiDB 才进行密码复杂度检查。默认值为 `OFF` | +| [`validate_password.length`](/system-variables.md#validate_passwordlength-从-v650-版本开始引入) | 新增 | 密码复杂度策略检查项,限定了用户密码最小长度。只有 [`validate_password.enable`](/system-variables.md#validate_passwordenable-从-v650-版本开始引入) 开启时,该变量才生效。默认值为 8 | +| [`validate_password.mixed_case_count`](/system-variables.md#validate_passwordmixed_case_count-从-v650-版本开始引入) | 新增 | 密码复杂度策略检查项,限定了用户密码中大写字符和小写字符的最小数量。只有 [`validate_password.enable`](/system-variables.md#validate_passwordenable-从-v650-版本开始引入) 开启且 [validate_password.policy](/system-variables.md#validate_passwordpolicy-从-v650-版本开始引入) 大于或等于 `1` (MEDIUM) 时,该变量才生效。默认值为 1 | +| [`validate_password.number_count`](/system-variables.md#validate_passwordnumber_count-从-v650-版本开始引入) | 新增 | 密码复杂度策略检查项,限定了用户密码中数字字符的最小数量。只有 [`validate_password.enable`](/system-variables.md#validate_passwordenable-从-v650-版本开始引入) 开启且 [validate_password.policy](/system-variables.md#validate_passwordpolicy-从-v650-版本开始引入) 大于或等于 `1` (MEDIUM) 时,该变量才生效。默认值为 1 | +| [`validate_password.policy`](/system-variables.md#validate_passwordpolicy-从-v650-版本开始引入) | 新增 | 密码复杂度策略检查的强度,强度等级分为 `[0, 1, 2]` 。只有 [`validate_password.enable`](/system-variables.md#validate_passwordenable-从-v650-版本开始引入) 开启时,该变量才生效。默认值为 1 | +| [`validate_password.special_char_count`](/system-variables.md#validate_passwordspecial_char_count-从-v650-版本开始引入) | 新增 | 密码复杂度策略检查项,限定了用户密码中特殊字符的最小数量。只有 [`validate_password.enable`](/system-variables.md#validate_passwordenable-从-v650-版本开始引入) 开启且 [validate_password.policy](/system-variables.md#validate_passwordpolicy-从-v650-版本开始引入) 大于或等于 `1` (MEDIUM) 时,该变量才生效。默认值为 1 | +| | | | +| | | | + +### Configuration file parameters + +| 配置文件 | 配置项 | 修改类型 | 描述 | +| -------- | -------- | -------- | -------- | +| TiDB | [`disconnect-on-expired-password`](/tidb-configuration-file.md#disconnect-on-expired-password`-从-v650-版本开始引入) | 新增 | 该配置用于控制 TiDB 服务端是否直接断开密码已过期用户的连接,默认值为 "true" ,表示 TiDB 服务端将直接断开密码已过期用户的连接 | +| TiDB | [`server-memory-quota`](/tidb-configuration-file.md#server-memory-quota-从-v409-版本开始引入) | 废弃 | 自 v6.5.0 起,该配置项被废弃。请使用 [`tidb_server_memory_limit`](/system-variables.md#tidb_server_memory_limit-从-v640-版本开始引入) 系统变量进行设置。 | +| TiKV | [`cdc.min-ts-interval`](/tikv-configuration-file.md#min-ts-interval) | 修改 | 默认值从 `1s` 修改为 `200ms` | +| | | | | +| | | | | + +### Others + +## 废弃功能 + +即将于 v6.6.0 版本废弃 v4.0.7 版本引入的 Amending Transaction 机制,并使用[元数据锁](/metadata-lock.md) 替代。 + +## Improvements + ++ TiDB + + - 对于 `bit` and `char` 类型的列,使 `INFORMATION_SCHEMA.COLUMNS` 的显示结果与 MySQL 一致 [#25472](https://github.com/pingcap/tidb/issues/25472) @[hawkingrei](https://github.com/hawkingrei) + - note [#issue](链接) @[贡献者 GitHub ID](链接) + ++ TiKV + + - note [#issue](链接) @[贡献者 GitHub ID](链接) + - tikv-ctl 支持查询某个 key 范围中包含哪些 Region [#13768](https://github.com/tikv/tikv/pull/13768) [@HuSharp](https://github.com/HuSharp) + - 改进持续对特定行只加锁但不更新情况下的读写性能 [#13694](https://github.com/tikv/tikv/issues/13694) [@sticnarf](https://github.com/sticnarf) ++ PD + + - note [#issue](链接) @[贡献者 GitHub ID](链接) + - note [#issue](链接) @[贡献者 GitHub ID](链接) + ++ TiFlash + + - 提升了 TiFlash 在 SQL 端没有攒批的场景的写入性能 [#6404](https://github.com/pingcap/tiflash/issues/6404) @[lidezhu](https://github.com/lidezhu) + - 增加了 TableFullScan 的输出信息 [#5926](https://github.com/pingcap/tiflash/issues/5926) @[hongyunyan](https://github.com/hongyunyan) + - note [#issue](链接) @[贡献者 GitHub ID](链接) + ++ Tools + + + TiDB 
Dashboard + + - 在慢查询页面新增三个字段 `是否由 prepare 语句生成`,`查询计划是否来自缓存`,`查询计划是否来自绑定` 的描述。 [#1445](https://github.com/pingcap/tidb-dashboard/pull/1445/files) @[shhdgit](https://github.com/shhdgit) + + + Backup & Restore (BR) + + - note [#issue](链接) @[贡献者 GitHub ID](链接) + - note [#issue](链接) @[贡献者 GitHub ID](链接) + + + TiCDC + + - note [#issue](链接) @[贡献者 GitHub ID](链接) + - note [#issue](链接) @[贡献者 GitHub ID](链接) + + + TiDB Data Migration (DM) + + - note [#issue](链接) @[贡献者 GitHub ID](链接) + - note [#issue](链接) @[贡献者 GitHub ID](链接) + + + TiDB Lightning + + - note [#issue](链接) @[贡献者 GitHub ID](链接) + - note [#issue](链接) @[贡献者 GitHub ID](链接) + + + TiUP + + - note [#issue](链接) @[贡献者 GitHub ID](链接) + - note [#issue](链接) @[贡献者 GitHub ID](链接) + +## Bug fixes + ++ TiDB + + - 修复 chunk reuse 功能部分情况下内存 chunk 被错误使用的问题 [#38917](https://github.com/pingcap/tidb/issues/38917) @[keeplearning20221](https://github.com/keeplearning20221) + - 修复 `tidb_constraint_check_in_place_pessimistic` 可能被全局设置影响内部 session 的问题 [#38766](https://github.com/pingcap/tidb/issues/38766) @[ekexium](https://github.com/ekexium) + - 修复了 AUTO_INCREMENT 列无法和 Check 约束一起使用的问题 [#38894](https://github.com/pingcap/tidb/issues/38894) @[YangKeao](https://github.com/YangKeao) + - 修复使用 'insert ignore into' 往 smallint 类型 auto increment 的列插入 string 类型数据会报错的问题 [#38483](https://github.com/pingcap/tidb/issues/38483) @[hawkingrei](https://github.com/hawkingrei) + - 修复了重命名分区表的分区列操作出现空指针报错的问题 [#38932](https://github.com/pingcap/tidb/issues/38932) @[mjonss](https://github.com/mjonss) + - 修复了一个修改分区表的分区列导致 DDL 卡死的问题 [#38530](https://github.com/pingcap/tidb/issues/38530) @[mjonss](https://github.com/mjonss) + - 修复了从 v4.0 升级到 v6.4 后 'admin show job' 操作崩溃的问题 [#38980](https://github.com/pingcap/tidb/issues/38980) @[tangenta](https://github.com/tangenta) + - 修复了 `tidb_decode_key` 函数未正确处理分区表编码的问题 [#39304](https://github.com/pingcap/tidb/issues/39304) @[Defined2014](https://github.com/Defined2014) + - 修复了 log rotate 时,grpc 的错误日志信息未被重定向到正确的日志文件的问题 [#38941](https://github.com/pingcap/tidb/issues/38941) @[xhebox](https://github.com/xhebox) + ++ TiKV + + - note [#issue](链接) @[贡献者 GitHub ID](链接) + - note [#issue](链接) @[贡献者 GitHub ID](链接) + ++ PD + + - note [#issue](链接) @[贡献者 GitHub ID](链接) + - note [#issue](链接) @[贡献者 GitHub ID](链接) + ++ TiFlash + + - 修复 TiFlash 重启不能正确合并小文件的问题 [#6159](https://github.com/pingcap/tiflash/issues/6159) @[lidezhu](https://github.com/lidezhu) + - 修复 TiFlash Open File OPS 过高的问题 [#6345](https://github.com/pingcap/tiflash/issues/6345) @[JaySon-Huang](https://github.com/JaySon-Huang) + - note [#issue](链接) @[贡献者 GitHub ID](链接) + ++ Tools + + + Backup & Restore (BR) + + - note [#issue](链接) @[贡献者 GitHub ID](链接) + - note [#issue](链接) @[贡献者 GitHub ID](链接) + + + TiCDC + + - note [#issue](链接) @[贡献者 GitHub ID](链接) + - note [#issue](链接) @[贡献者 GitHub ID](链接) + + + TiDB Data Migration (DM) + + - note [#issue](链接) @[贡献者 GitHub ID](链接) + - note [#issue](链接) @[贡献者 GitHub ID](链接) + + + TiDB Lightning + + - note [#issue](链接) @[贡献者 GitHub ID](链接) + - note [#issue](链接) @[贡献者 GitHub ID](链接) + + + TiUP + + - note [#issue](链接) @[贡献者 GitHub ID](链接) + - note [#issue](链接) @[贡献者 GitHub ID](链接) + +## Contributors + +We would like to thank the following contributors from the TiDB community: + +- [贡献者 GitHub ID](链接) From 53f423b4798bbdacf3e61a857d00ddb1e9770ec6 Mon Sep 17 00:00:00 2001 From: Ran Date: Wed, 7 Dec 2022 17:38:43 +0800 Subject: [PATCH 02/83] translate --- releases/release-6.5.0.md | 14 +++++++------- 1 file changed, 7 insertions(+), 7 deletions(-) diff --git 
a/releases/release-6.5.0.md b/releases/release-6.5.0.md index da826a04aef2d..d3d12ce491227 100644 --- a/releases/release-6.5.0.md +++ b/releases/release-6.5.0.md @@ -48,11 +48,11 @@ TiDB 6.5.0 is a Long-Term Support Release (LTS). 更多信息,请参考[非事务 DML 语句](/non-transactional-dml.md) 和 [BATCH](/sql-statements/sql-statement-batch.md)。 -* 支持 Time to live (TTL)(实验特性)[#39262](https://github.com/pingcap/tidb/issues/39262) @[lcwangchao](https://github.com/lcwangchao) **tw@ran-huang** +* Support time to live (TTL) (experimental feature) [#39262](https://github.com/pingcap/tidb/issues/39262) @[lcwangchao](https://github.com/lcwangchao) **tw@ran-huang** - TTL 提供了行级别的生命周期控制策略。在 TiDB 中,设置了 TTL 属性的表会根据配置自动检查并删除过期的行数据。TTL 设计的目标是在不影响在线读写负载的前提下,帮助用户周期性且及时地清理不需要的数据。 + TTL provides row-level data lifetime management. In TiDB, a table with the TTL attribute automatically checks data lifetime and deletes expired data at the row level. TTL is designed to help users clean up unnecessary data periodically and in a timely manner without affecting the online read and write workloads. - 更多信息请参考[Time to live(TTL)](/time-to-live.md) + For more information, refer to [user document](/time-to-live.md) * TiFlash 支持 `INSERT SELECT` 语句(实验功能) [#37515](https://github.com/pingcap/tidb/issues/37515) @[gengliqi](https://github.com/gengliqi) **tw@qiancai** @@ -68,13 +68,13 @@ TiDB 6.5.0 is a Long-Term Support Release (LTS). ### Security -* 支持密码复杂度策略 [#38928](https://github.com/pingcap/tidb/issues/38928) @[CbcWestwolf](https://github.com/CbcWestwolf) **tw@ran-huang** +* Support the password complexity policy [#38928](https://github.com/pingcap/tidb/issues/38928) @[CbcWestwolf](https://github.com/CbcWestwolf) **tw@ran-huang** - TiDB 启用密码复杂度策略功能后,在为用户设置密码时,会检查密码长度、大写/小写字符个数、数字字符个数、特殊字符个数、密码字典、是否与用户名相同,以此确保为用户设置一个安全的密码。 + After you enable the password complexity policy for TiDB, when you set a password, TiDB checks the password length, the number of uppercase and lowercase letters, numbers, and special characters, whether the password matches the dictionary, and whether the password matches the username. This ensures that you set a secure password. - TiDB 支持密码强度检查函数 `VALIDATE_PASSWORD_STRENGTH()`,用于判定一个给定密码的强度。 + TiDB provides the SQL function [`VALIDATE_PASSWORD_STRENGTH()`](https://dev.mysql.com/doc/refman/5.7/en/encryption-functions.html#function_validate-password-strength) to validate the password strength. - 更多信息,请参考[用户文档](/password-management.md#密码复杂度策略)。 + For more information, refer to [user document](/password-management.md#password-complexity-policy). * 支持密码过期策略 [#38936](https://github.com/pingcap/tidb/issues/38936) @[CbcWestwolf](https://github.com/CbcWestwolf) **tw@ran-huang** From 9157daf36a8e3affd11ef2bb2f935bc1861cb736 Mon Sep 17 00:00:00 2001 From: Ran Date: Wed, 7 Dec 2022 17:56:18 +0800 Subject: [PATCH 03/83] translate new features --- releases/release-6.5.0.md | 24 ++++++++++++------------ 1 file changed, 12 insertions(+), 12 deletions(-) diff --git a/releases/release-6.5.0.md b/releases/release-6.5.0.md index d3d12ce491227..12b536f904270 100644 --- a/releases/release-6.5.0.md +++ b/releases/release-6.5.0.md @@ -76,23 +76,23 @@ TiDB 6.5.0 is a Long-Term Support Release (LTS). For more information, refer to [user document](/password-management.md#password-complexity-policy). 
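    As an illustration of how the check is used (this is only a sketch; the sample password below is made up), you can switch the check on and then score a candidate password with the SQL function mentioned above:

    ```sql
    -- Enable the password complexity check (it is disabled by default).
    SET GLOBAL validate_password.enable = ON;

    -- Returns a strength score between 0 and 100 for a candidate password.
    SELECT VALIDATE_PASSWORD_STRENGTH('weak-Pass-2022');
    ```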
-* 支持密码过期策略 [#38936](https://github.com/pingcap/tidb/issues/38936) @[CbcWestwolf](https://github.com/CbcWestwolf) **tw@ran-huang** +* Support the password expiration policy [#38936](https://github.com/pingcap/tidb/issues/38936) @[CbcWestwolf](https://github.com/CbcWestwolf) **tw@ran-huang** - TiDB 支持密码过期策略,包括:手动密码过期、全局级别自动密码过期、账户级别自动密码过期。启用密码过期策略功能后,用户必须定期修改密码,防止密码长期使用带来的泄露风险,提高密码安全性。 + TiDB supports the password expiration policy, including manual expiration, global-level automatic expiration, and account-level automatic expiration. After this policy is enabled, you must change your passwords periodically. This reduces the risk of password leakage due to long-term use and improve password security. - 更多信息,请参考[用户文档](/password-management.md#密码过期策略) + For more information, refer to [user document](/password-management.md#password-expiration-policy). -* 支持密码重用策略 [#38937](https://github.com/pingcap/tidb/issues/38937) @[keeplearning20221](https://github.com/keeplearning20221) **tw@ran-huang** +* Support the password reuse policy [#38937](https://github.com/pingcap/tidb/issues/38937) @[keeplearning20221](https://github.com/keeplearning20221) **tw@ran-huang** - TiDB 支持密码重用策略,包括:全局级别密码重用策略、账户级别密码重用策略。启用密码重用策略功能后,用户不允许使用最近一段时间使用过的密码,不允许使用最近几次使用过的密码,以此降低密码的重复使用带来的泄漏风险,提高密码安全性。 + TiDB supports the password reuse policy, including global-level password reuse policy and account-level password reuse policy. After this policy is enabled, you cannot use the passwords that you have used within a period or the most recent several passwords that you have used. This reduces the risk of password leakage due to repeated use of passwords and improves password security. - 更多信息,请参考[用户文档](/password-management.md#密码重用策略) + For more information, refer to [user document](/password-management.md#password-reuse-policy). -* 支持密码连续错误限制登录策略 [#38938](https://github.com/pingcap/tidb/issues/38938) @[lastincisor](https://github.com/lastincisor) **tw@ran-huang** +* Support failed-login tracking and temporary account locking policy [#38938](https://github.com/pingcap/tidb/issues/38938) @[lastincisor](https://github.com/lastincisor) **tw@ran-huang** - TiDB 启用密码连续错误限制登录策略功能后,当用户登录时密码连续多次错误,此时该账户将被临时锁定,达到锁定时间后将自动解锁。 + After this policy is enabled, if you log in to TiDB with incorrect passwords multiple times consecutively, the account is temporarily locked. After the lock time ends, the account is automatically unlocked. - 更多信息,请参考[用户文档](/password-management.md#密码连续错误限制登录策略) + For more information, refer to [user document](/password-management.md#failed-login-tracking-and-temporary-account-locking-policy). ### Observability @@ -182,11 +182,11 @@ TiDB 6.5.0 is a Long-Term Support Release (LTS). 更多信息,请参考[用户文档](sql-statements/sql-statement-explain-analyze.md)。 -* 执行计划支持 JSON 格式的打印 [#39261](https://github.com/pingcap/tidb/issues/39261) @[fzzf678](https://github.com/fzzf678) **tw@ran-huang** +* Support the output of execution plans in JSON format [#39261](https://github.com/pingcap/tidb/issues/39261) @[fzzf678](https://github.com/fzzf678) **tw@ran-huang** - 在新版本中,TiDB 扩展了执行计划的打印格式。 通过 `explain format = tidb_json ` 能够将 SQL 的执行计划以 JSON 格式输出。借助这个能力,SQL 调试工具和诊断工具能够更方便准确地解读执行计划,进而提升 SQL 诊断调优的易用性。 + In v6.5, TiDB extends the output format of the execution plan. By using `EXPLAIN FORMAT=tidb_json `, you can output the SQL execution plan in JSON format. With this capability, SQL debugging tools and diagnostic tools can read the execution plan more conveniently and accurately, thus improving the ease of use of SQL diagnosis and tuning. 
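    A minimal usage sketch (the table `t` and the filter are illustrative only):

    ```sql
    -- Print the execution plan of a query in JSON format.
    EXPLAIN FORMAT = "tidb_json" SELECT * FROM t WHERE a = 1;
    ```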
- 更多信息,请参考[用户文档](/sql-statements/sql-statement-explain.md)。 + For more information, see [user document](/sql-statements/sql-statement-explain.md). ### MySQL compatibility From 79675af491524bd6e8096317d5d818a2f442023c Mon Sep 17 00:00:00 2001 From: Ran Date: Wed, 7 Dec 2022 18:22:12 +0800 Subject: [PATCH 04/83] translate compatibility --- releases/release-6.5.0.md | 49 +++++++++++++++++++-------------------- 1 file changed, 24 insertions(+), 25 deletions(-) diff --git a/releases/release-6.5.0.md b/releases/release-6.5.0.md index 12b536f904270..d858659f9211a 100644 --- a/releases/release-6.5.0.md +++ b/releases/release-6.5.0.md @@ -280,32 +280,31 @@ TiDB 6.5.0 is a Long-Term Support Release (LTS). | [`tidb_enable_metadata_lock`](/system-variables.md#tidb_enable_metadata_lock-从-v630-版本开始引入) | 修改 | 该变量默认值从 `OFF` 修改为 `ON`,表示默认开启元数据锁。 | | [`tidb_ddl_enable_fast_reorg`](/system-variables.md#tidb_ddl_enable_fast_reorg-从-v630-版本开始引入) | 修改 | 该变量默认值从 `OFF` 修改为 `ON`,表示默认开启创建索引加速功能。 | | [`tidb_server_memory_limit`](/system-variables.md#tidb_server_memory_limit-从-v640-版本开始引入) | 修改 | 该变量默认值由 `0` 修改为 `80%`,表示默认将 TiDB 实例的内存限制设为总内存的 80%。| -| [`default_password_lifetime`](/system-variables.md#default_password_lifetime-从-v650-版本开始引入) | 新增 | 用于设置全局自动密码过期策略,要求用户定期修改密码。默认值为 `0` ,表示禁用全局自动密码过期策略 | -| [`disconnect_on_expired_password`](/system-variables.md#disconnect_on_expired_password-从-v650-版本开始引入) | 新增 | 该变量是一个只读变量,用来显示 TiDB 是否会直接断开密码已过期用户的连接 | -| [`password_history`](/system-variables.md#password_history-从-v650-版本开始引入) | 新增 | 基于密码更改次数的密码重用策略,不允许用户重复使用最近设置次数内使用过的密码。默认值为 `0`,表示禁用基于密码更改次数的密码重用策略 | -| [`password_reuse_interval`](/system-variables.md#password_reuse_interval-从-v650-版本开始引入) | 新增 | 基于经过时间限制的密码重用策略,不允许用户重复使用最近设置天数内使用过的密码。默认值为 `0`,表示禁用基于密码更改次数的密码重用策略 | -| [`tidb_cdc_write_source`](/system-variables.md#tidb_cdc_write_source-从-v650-版本开始引入) | 新增 | 当变量非 `0` 时,该 SESSION 写入的数据将被视为是由 TiCDC 写入的。这个变量仅由 TiCDC 设置,任何时候都不应该手动调整该变量。 | +| [`default_password_lifetime`](/system-variables.md#default_password_lifetime-new-in-v650) | Newly added | Sets the global policy for automatic password expiration to require the user to change passwords periodically. The default value `0` indicates that the password never expires. | +| [`disconnect_on_expired_password`](/system-variables.md#disconnect_on_expired_password-new-in-v650) | Newly added | This variable is read-only. It indicates whether to disconnect the client connection when the password is expired.| +| [`password_history`](/system-variables.md#password_history-new-in-v650) | Newly added | This variable is used to establish a password reuse policy that allows TiDB to limit password reuse based on the number of password changes. The default value `0` means disabling the password reuse policy based on the number of password changes. | +| [`password_reuse_interval`](/system-variables.md#password_reuse_interval-new-in-v650) | Newly added | This variable is used to establish a password reuse policy that allows TiDB to limit password reuse based on time elapsed. The default value `0` means disabling the password reuse policy based on time elapsed. | +| [`tidb_cdc_write_source`](/system-variables.md#tidb_cdc_write_source-new-in-v650) | Newly added | When this variable is set to a value other than 0, data written in this session is considered to be written by TiCDC. This variable can only be modified by TiCDC. Do not manually modify this variable in any case. 
| | [`tidb_index_merge_intersection_concurrency`](/system-variables.md#tidb_index_merge_intersection_concurrency-从-v650-版本开始引入) | 新增 | 这个变量用来设置索引合并进行交集操作时的最大并发度,仅在以动态裁剪模式访问分区表时有效。 | | [`tidb_mem_quota_query`](/system-variables.md#tidb_mem_quota_query) | 修改 | 在 v6.5.0 之前的版本中,该变量用来设置单条查询的内存使用限制。在 v6.5.0 及之后的版本中,该变量用来设置单个会话整体的内存使用限制。 | -| [`tidb_source_id`](/system-variables.md#tidb_source_id-从-v650-版本开始引入) | 新增 | 设置在[双向复制](/ticdc/ticdc-bidirectional-replication.md)系统内不同集群的 ID。| -| [`tidb_ttl_delete_batch_size`](/system-variables.md#tidb_ttl_delete_batch_size-从-v650-版本开始引入) | 新增 | 这个变量用于设置 TTL 任务中单个删除事务中允许删除的最大行数。| -| [`tidb_ttl_delete_rate_limit`](/system-variables.md#tidb_ttl_delete_rate_limit-从-v650-版本开始引入) | 新增 | 这个变量用来对每个 TiDB 节点的 TTL 删除操作进行限流。其值代表了在 TTL 任务中单个节点每秒允许 `DELETE` 语句执行的最大次数。当此变量设置为 `0` 时,则表示不做限制。| -| [`tidb_ttl_delete_worker_count`](/system-variables.md#tidb_ttl_delete_worker_count-从-v650-版本开始引入) | 新增 | 这个变量用于设置每个 TiDB 节点上 TTL 删除任务的最大并发数。| -| [`tidb_ttl_job_enable`](/system-variables.md#tidb_ttl_job_enable-从-v650-版本开始引入) | 新增 | 这个变量用于控制是否启动 TTL 后台清理任务。如果设置为 `OFF`,所有具有 TTL 属性的表会自动停止清理过期数据。| -| [`tidb_ttl_job_run_interval`](/system-variables.md#tidb_ttl_job_run_interval-从-v650-版本开始引入) | 新增 | 这个变量用于控制 TTL 后台清理任务的调度周期。比如,如果当前值设置成了 `1h0m0s`,则代表每张设置了 TTL 属性的表会每小时清理一次过期数据。| -| [`tidb_ttl_job_schedule_window_start_time`](/system-variables.md#tidb_ttl_job_schedule_window_start_time-从-v650-版本开始引入) | 新增 | 这个变量用于控制 TTL 后台清理任务的调度窗口的起始时间。请谨慎调整此参数,过小的窗口有可能会造成过期数据的清理无法完成。| -| [`tidb_ttl_job_schedule_window_end_time`](/system-variables.md#tidb_ttl_job_schedule_window_end_time-从-v650-版本开始引入) | 新增 | 这个变量用于控制 TTL 后台清理任务的调度窗口的结束时间。请谨慎调整此参数,过小的窗口有可能会造成过期数据的清理无法完成。| -| [`tidb_ttl_scan_batch_size`](/system-variables.md#tidb_ttl_scan_batch_size-从-v650-版本开始引入) | 新增 | 这个变量用于设置 TTL 任务中用来扫描过期数据的每个 `SELECT` 语句的 `LIMIT` 的值。| -| [`tidb_ttl_scan_worker_count`](/system-variables.md#tidb_ttl_scan_worker_count-从-v650-版本开始引入) | 新增 | 这个变量用于设置每个 TiDB 节点 TTL 扫描任务的最大并发数。| - -| [`validate_password.check_user_name`](/system-variables.md#validate_passwordcheck_user_name-从-v650-版本开始引入) | 新增 | 密码复杂度策略检查项,设置的用户密码不允许密码与当前会话账户的用户名部分相同。只有 [`validate_password.enable`](/system-variables.md#validate_passwordenable-从-v650-版本开始引入) 开启时,该变量才生效。默认值为 `ON` | -| [`validate_password.dictionary`](/system-variables.md#validate_passworddictionary-从-v650-版本开始引入) | 新增 | 密码复杂度策略检查项,密码字典功能,设置的用户密码不允许包含字典中的单词。只有 [`validate_password.enable`](/system-variables.md#validate_passwordenable-从-v650-版本开始引入) 开启且 [validate_password.policy](/system-variables.md#validate_passwordpolicy-从-v650-版本开始引入) 设置为 `2` (STRONG) 时,该变量才生效。默认值为空 | -| [`validate_password.enable`](/system-variables.md#validate_passwordenable-从-v650-版本开始引入) | 新增 | 密码复杂度策略检查的开关,设置为 `ON` 后,TiDB 才进行密码复杂度检查。默认值为 `OFF` | -| [`validate_password.length`](/system-variables.md#validate_passwordlength-从-v650-版本开始引入) | 新增 | 密码复杂度策略检查项,限定了用户密码最小长度。只有 [`validate_password.enable`](/system-variables.md#validate_passwordenable-从-v650-版本开始引入) 开启时,该变量才生效。默认值为 8 | -| [`validate_password.mixed_case_count`](/system-variables.md#validate_passwordmixed_case_count-从-v650-版本开始引入) | 新增 | 密码复杂度策略检查项,限定了用户密码中大写字符和小写字符的最小数量。只有 [`validate_password.enable`](/system-variables.md#validate_passwordenable-从-v650-版本开始引入) 开启且 [validate_password.policy](/system-variables.md#validate_passwordpolicy-从-v650-版本开始引入) 大于或等于 `1` (MEDIUM) 时,该变量才生效。默认值为 1 | -| [`validate_password.number_count`](/system-variables.md#validate_passwordnumber_count-从-v650-版本开始引入) | 新增 | 密码复杂度策略检查项,限定了用户密码中数字字符的最小数量。只有 
[`validate_password.enable`](/system-variables.md#validate_passwordenable-从-v650-版本开始引入) 开启且 [validate_password.policy](/system-variables.md#validate_passwordpolicy-从-v650-版本开始引入) 大于或等于 `1` (MEDIUM) 时,该变量才生效。默认值为 1 | -| [`validate_password.policy`](/system-variables.md#validate_passwordpolicy-从-v650-版本开始引入) | 新增 | 密码复杂度策略检查的强度,强度等级分为 `[0, 1, 2]` 。只有 [`validate_password.enable`](/system-variables.md#validate_passwordenable-从-v650-版本开始引入) 开启时,该变量才生效。默认值为 1 | -| [`validate_password.special_char_count`](/system-variables.md#validate_passwordspecial_char_count-从-v650-版本开始引入) | 新增 | 密码复杂度策略检查项,限定了用户密码中特殊字符的最小数量。只有 [`validate_password.enable`](/system-variables.md#validate_passwordenable-从-v650-版本开始引入) 开启且 [validate_password.policy](/system-variables.md#validate_passwordpolicy-从-v650-版本开始引入) 大于或等于 `1` (MEDIUM) 时,该变量才生效。默认值为 1 | +| [`tidb_source_id`](/system-variables.md#tidb_source_id-new-in-v650) | Newly added | This variable is used to configure the different cluster IDs in a [bi-directional replication](/ticdc/manage-ticdc.md#bi-directional-replication) cluster.| +| [`tidb_ttl_delete_batch_size`](/system-variables.md#tidb_ttl_delete_batch_size-new-in-v650) | Newly added | This variable is used to set the maximum number of rows that can be deleted in a single `DELETE` transaction in a TTL job. | +| [`tidb_ttl_delete_rate_limit`](/system-variables.md#tidb_ttl_delete_rate_limit-new-in-v650) | Newly added | This variable is used to limit the rate of `DELETE` statements in TTL jobs on each TiDB node. The value represents the maximum number of `DELETE` statements allowed per second in a single node in a TTL job. When this variable is set to `0`, no limit is applied. | +| [`tidb_ttl_delete_worker_count`](/system-variables.md#tidb_ttl_delete_worker_count-new-in-v650) | Newly added | This variable is used to set the maximum concurrency of TTL jobs on each TiDB node. | +| [`tidb_ttl_job_enable`](/system-variables.md#tidb_ttl_job_enable-new-in-v650) | Newly added | This variable is used to control whether the TTL job is enabled. If it is set to `OFF`, all tables with TTL attributes automatically stops cleaning up expired data. | +| [`tidb_ttl_job_run_interval`](/system-variables.md#tidb_ttl_job_run_interval-new-in-v650) | Newly added | This variable is used to control the scheduling interval of the TTL job in the background. For example, if the current value is set to `1h0m0s`, each table with TTL attributes will clean up expired data once every hour. | +| [`tidb_ttl_job_schedule_window_start_time`](/system-variables.md#tidb_ttl_job_schedule_window_start_time-new-in-v650) | Newly added | This variable is used to control the start time of the scheduling window of the TTL job in the background. When you modify the value of this variable, be cautious that a small window might cause the cleanup of expired data to fail. | +| [`tidb_ttl_job_schedule_window_end_time`](/system-variables.md#tidb_ttl_job_schedule_window_end_time-new-in-v650) | Newly added | This variable is used to control the end time of the scheduling window of the TTL job in the background. When you modify the value of this variable, be cautious that a small window might cause the cleanup of expired data to fail. | +| [`tidb_ttl_scan_batch_size`](/system-variables.md#tidb_ttl_scan_batch_size-new-in-v650) | Newly added | This variable is used to set the `LIMIT` value of each `SELECT` statement used to scan expired data in a TTL job. 
|
| [`tidb_ttl_scan_worker_count`](/system-variables.md#tidb_ttl_scan_worker_count-new-in-v650) | Newly added | This variable is used to set the maximum concurrency of TTL scan jobs on each TiDB node. |
| [`validate_password.check_user_name`](/system-variables.md#validate_passwordcheck_user_name-new-in-v650) | Newly added | A check item in the password complexity check. It checks whether the password matches the username. This variable takes effect only when [`validate_password.enable`](/system-variables.md#validate_passwordenable-new-in-v650) is enabled. The default value is `ON`. |
| [`validate_password.dictionary`](/system-variables.md#validate_passworddictionary-new-in-v650) | Newly added | A check item in the password complexity check. It checks whether the password matches the dictionary. This variable takes effect only when [`validate_password.enable`](/system-variables.md#validate_passwordenable-new-in-v650) is enabled and [`validate_password.policy`](/system-variables.md#validate_passwordpolicy-new-in-v650) is set to `2` (STRONG). The default value is `""`. |
| [`validate_password.enable`](/system-variables.md#validate_passwordenable-new-in-v650) | Newly added | This variable controls whether to perform the password complexity check. If this variable is set to `ON`, TiDB performs the password complexity check when you set a password. The default value is `OFF`. |
| [`validate_password.length`](/system-variables.md#validate_passwordlength-new-in-v650) | Newly added | A check item in the password complexity check. It checks whether the password length is sufficient. By default, the minimum password length is `8`. This variable takes effect only when [`validate_password.enable`](/system-variables.md#validate_passwordenable-new-in-v650) is enabled. The default value is `8`. |
| [`validate_password.mixed_case_count`](/system-variables.md#validate_passwordmixed_case_count-new-in-v650) | Newly added | A check item in the password complexity check. It checks whether the password contains sufficient uppercase and lowercase letters. This variable takes effect only when [`validate_password.enable`](/system-variables.md#validate_passwordenable-new-in-v650) is enabled and [`validate_password.policy`](/system-variables.md#validate_passwordpolicy-new-in-v650) is set to `1` (MEDIUM) or larger. The default value is `1`. |
| [`validate_password.number_count`](/system-variables.md#validate_passwordnumber_count-new-in-v650) | Newly added | A check item in the password complexity check. It checks whether the password contains sufficient numbers. This variable takes effect only when [`validate_password.enable`](/system-variables.md#validate_passwordenable-new-in-v650) is enabled and [`validate_password.policy`](/system-variables.md#validate_passwordpolicy-new-in-v650) is set to `1` (MEDIUM) or larger. The default value is `1`. |
| [`validate_password.policy`](/system-variables.md#validate_passwordpolicy-new-in-v650) | Newly added | This variable controls the policy for the password complexity check. This variable takes effect only when [`validate_password.enable`](/system-variables.md#validate_passwordenable-new-in-v650) is enabled. The default value is `1`. |
| [`validate_password.special_char_count`](/system-variables.md#validate_passwordspecial_char_count-new-in-v650) | Newly added | A check item in the password complexity check. It checks whether the password contains sufficient special characters.
This variable takes effect only when [`validate_password.enable`](/system-variables.md#password_reuse_interval-new-in-v650) is enabled and [`validate_password.policy`](/system-variables.md#validate_passwordpolicy-new-in-v650) is set to `1` (MEDIUM) or larger. The default value is `1`. | | | | | | | | | @@ -313,7 +312,7 @@ TiDB 6.5.0 is a Long-Term Support Release (LTS). | 配置文件 | 配置项 | 修改类型 | 描述 | | -------- | -------- | -------- | -------- | -| TiDB | [`disconnect-on-expired-password`](/tidb-configuration-file.md#disconnect-on-expired-password`-从-v650-版本开始引入) | 新增 | 该配置用于控制 TiDB 服务端是否直接断开密码已过期用户的连接,默认值为 "true" ,表示 TiDB 服务端将直接断开密码已过期用户的连接 | +| TiDB | [`disconnect-on-expired-password`](/tidb-configuration-file.md#disconnect-on-expired-password-new-in-v650) | Newly added | Determines whether TiDB disconnects the client connection when the password is expired. The default value is `true`, which means the client connection is disconnected when the password is expired. | | TiDB | [`server-memory-quota`](/tidb-configuration-file.md#server-memory-quota-从-v409-版本开始引入) | 废弃 | 自 v6.5.0 起,该配置项被废弃。请使用 [`tidb_server_memory_limit`](/system-variables.md#tidb_server_memory_limit-从-v640-版本开始引入) 系统变量进行设置。 | | TiKV | [`cdc.min-ts-interval`](/tikv-configuration-file.md#min-ts-interval) | 修改 | 默认值从 `1s` 修改为 `200ms` | | | | | | From c1d5d400854e6b383c52f1f9c4fd45a4012f4206 Mon Sep 17 00:00:00 2001 From: qiancai Date: Thu, 8 Dec 2022 22:28:29 +0800 Subject: [PATCH 05/83] align with Chinese changes --- releases/release-6.5.0.md | 18 ++++++++++++------ 1 file changed, 12 insertions(+), 6 deletions(-) diff --git a/releases/release-6.5.0.md b/releases/release-6.5.0.md index d858659f9211a..b7bc2c0188894 100644 --- a/releases/release-6.5.0.md +++ b/releases/release-6.5.0.md @@ -20,7 +20,9 @@ TiDB 6.5.0 is a Long-Term Support Release (LTS). - 满足密码合规审计需求 [密码管理](/password-management.md) - TiDB 添加索引的速度提升为原来的 10 倍 - Flashback Cluster 功能兼容 TiCDC 和 PiTR -- JSON 抽取函数下推至 TiFlash +- 支持通过 `INSERT INTO SELECT` 语句[保存 TiFlash 查询结果](/tiflash/tiflash-results-materialization.md)(实验特性) +- 支持下推 JSON 抽取函数下推至 TiFlash +- 进一步增强索引合并[INDEX MERGE](/glossary.md#index-merge)功能 ## New features @@ -106,7 +108,7 @@ TiDB 6.5.0 is a Long-Term Support Release (LTS). ### Performance -* 进一步增强索引合并[INDEX MERGE](/glossary.md#index-merge)功能 [#39333](https://github.com/pingcap/tidb/issues/39333) @[guo-shaoge](https://github.com/guo-shaoge) @[@time-and-fate](https://github.com/time-and-fate) @[hailanwhu](https://github.com/hailanwhu) **tw@TomShawn** +* 进一步增强索引合并 [INDEX MERGE](/glossary.md#index-merge) 功能 [#39333](https://github.com/pingcap/tidb/issues/39333) @[guo-shaoge](https://github.com/guo-shaoge) @[@time-and-fate](https://github.com/time-and-fate) @[hailanwhu](https://github.com/hailanwhu) **tw@TomShawn** 新增了对在 WHERE 语句中使用 `AND` 联结的过滤条件的索引合并能力(v6.5 之前的版本只支持 `OR` 连接词的情况),TiDB 的索引合并至此可以覆盖更一般的查询过滤条件组合,不再限定于并集(`OR`)关系。当前版本仅支持优化器自动选择 “OR” 条件下的索引合并,用户须使用 [`USE_INDEX_MERGE`](/optimizer-hints.md#use_index_merget1_name-idx1_name--idx2_name-) Hint 来开启对于 AND 联结的索引合并。 @@ -216,9 +218,9 @@ TiDB 6.5.0 is a Long-Term Support Release (LTS). 
原先用户仅迁移少数几张表,也需要解析上游整个 binlog 文件,即仍需要解析该 binlog 文件中不需要迁移的表的 binlog event,效率会比较低,同时如果不在迁移任务里的库表的 binlog event 不支持解析,还会导致任务失败。通过只解析在迁移任务里的库表对象的 binlog event 可以大大提升 binlog 解析效率,提升任务稳定性。 -* Lightning 支持 disk quota 特性 GA,可避免 Lightning 任务写满本地磁盘 [#无](无) @[buchuitoudegou](https://github.com/buchuitoudegou) **tw@hfxsd** +* TiDB Lightning 支持 disk quota 特性 GA,可避免 TiDB Lightning 任务写满本地磁盘 [#无](无) @[buchuitoudegou](https://github.com/buchuitoudegou) **tw@hfxsd** - 你可以为 TiDB Lightning 配置磁盘配额 (disk quota)。当磁盘配额不足时,TiDB Lightning 会暂停读取源数据以及写入临时文件的过程,优先将已经完成排序的 key-value 写入到 TiKV,TiDB Lightning 删除本地临时文件后,再继续导入过程。 + 你可以为 TiDB Lightning 配置磁盘配额 (disk quota)。当磁盘配额不足时,TiDB Lightning 会暂停读取源数据以及写入临时文件的过程,优先将已经完成排序的 key-value 写入到 TiKV。TiDB Lightning 删除本地临时文件后,再继续导入过程。 有这个功能之前,TiDB Lightning 在使用物理模式导入数据时,会在本地磁盘创建大量的临时文件,用来对原始数据进行编码、排序、分割。当用户本地磁盘空间不足时,TiDB Lightning 会由于写入文件失败而报错退出。 @@ -240,6 +242,10 @@ TiDB 6.5.0 is a Long-Term Support Release (LTS). Storage sink 支持 changed log 格式位 canal-json/csv,此外 changed log 从 TiCDC 同步到 storage 的延迟可以达到 xx,支持更多信息,请参考[用户文档](https://github.com/pingcap/docs-cn/pull/12151/files)。 +* TiCDC 支持两个或者多个 TiDB 集群之间相互复制 @[asddongmen](https://github.com/asddongmen) **tw@shichun-0415** + + TiCDC 支持在多个 TiDB 集群之间进行双向复制。 如果业务上需要 TiDB 多活,尤其是异地多活的场景,可以使用该功能作为 TiDB 多活的解决方案。只要为每个 TiDB 集群到其他 TiDB 集群的 TiCDC changefeed 同步任务配置 `bdr-mode = true` 参数,就可以实现多个 TIDB 集群之间的数据相互复制。更多信息,请参考[用户文档](/ticdc/ticdc/ticdc-bidirectional-replication.md). + * TiCDC 性能提升 **tw@shichun-0415 在 TiDB 场景测试验证中, TiCDC 的性能得到了比较大提升,单台 TiCDC 节点能处理的最大行变更吞吐可以达到 30K rows/s,同步延迟降低到 10s,即使在常规的 TiKV/TiCDC 滚动升级场景同步延迟也小于 30s;在容灾场景测试中,打开 TiCDC Redo log 和 Sync point 后,吞吐 xx rows/s 时,容灾复制延迟可以保持在 x s。 @@ -266,9 +272,9 @@ TiDB 6.5.0 is a Long-Term Support Release (LTS). * TiKV-BR 工具 GA, 支持 RawKV 的备份和恢复 [#67](https://github.com/tikv/migration/issues/67) @[pingyu](https://github.com/pingyu) @[haojinming](https://github.com/haojinming) **tw@shichun-0415** - TiKV-BR 是一个 TiKV 集群的备份和恢复工具。TiKV 可以独立于 TiDB,与 PD 构成 KV 数据库,此时的产品形态为 RawKV。TiKV-BR 工具支持对使用 RawKV 的产品进行备份和恢复,也支持将 TiKV 集群中的数据从 `API V1` 备份为 `API V2` 数据, 以实现 TiKV 集群 [`api-version`](https://docs.pingcap.com/zh/tidb/v6.4/tikv-configuration-file#api-version-%E4%BB%8E-v610-%E7%89%88%E6%9C%AC%E5%BC%80%E5%A7%8B%E5%BC%95%E5%85%A5) 的升级。 + TiKV-BR 是一个 TiKV 集群的备份和恢复工具。TiKV 可以独立于 TiDB,与 PD 构成 KV 数据库,此时的产品形态为 RawKV。TiKV-BR 工具支持对使用 RawKV 的产品进行备份和恢复,也支持将 TiKV 集群中的数据从 `API V1` 备份为 `API V2` 数据, 以实现 TiKV 集群 [`api-version`](/tikv-configuration-file.md#api-version-从-v610-版本开始引入) 的升级。 - 更多信息,请参考[用户文档]( https://tikv.org/docs/dev/concepts/explore-tikv-features/backup-restore/ )。 + 更多信息,请参考[用户文档](https://tikv.org/docs/latest/concepts/explore-tikv-features/backup-restore/)。 ## Compatibility changes From 5acb766e59e8b9929304589dd0036156b592fc73 Mon Sep 17 00:00:00 2001 From: qiancai Date: Thu, 8 Dec 2022 23:33:02 +0800 Subject: [PATCH 06/83] add translation for some feature descriptions --- releases/release-6.5.0.md | 30 +++++++++++++++++------------- 1 file changed, 17 insertions(+), 13 deletions(-) diff --git a/releases/release-6.5.0.md b/releases/release-6.5.0.md index b7bc2c0188894..e27083ad015db 100644 --- a/releases/release-6.5.0.md +++ b/releases/release-6.5.0.md @@ -56,17 +56,21 @@ TiDB 6.5.0 is a Long-Term Support Release (LTS). 
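    For example, the following sketch (the table and column names are illustrative only) creates a table whose rows expire three months after they are created, and TiDB cleans up the expired rows in background jobs:

    ```sql
    CREATE TABLE orders (
        id BIGINT PRIMARY KEY,
        created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP
    ) TTL = `created_at` + INTERVAL 3 MONTH;
    ```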
For more information, refer to [user document](/time-to-live.md) -* TiFlash 支持 `INSERT SELECT` 语句(实验功能) [#37515](https://github.com/pingcap/tidb/issues/37515) @[gengliqi](https://github.com/gengliqi) **tw@qiancai** +* Support saving TiFlash query results using the `INSERT INTO SELECT` statement (experimental) [#37515](https://github.com/pingcap/tidb/issues/37515) @[gengliqi](https://github.com/gengliqi) **tw@qiancai** - 用户可以指定 TiFlash 执行 `INSERT SELECT` 中的 `SELECT` 子句(分析查询),并将结果在此事务中写回到 TIDB 表中: + Starting from v6.5.0, TiDB supports pushing down the `SELECT` clause (analysis query) of the `INSERT INTO SELECT` statement to TiFlash. In this way, you can easily save the TiFlash query result to a TiDB table specified by `INSERT INTO` for further analysis, which takes effect as result caching (that is, result materialization). For example: ```sql - insert into t2 select mod(x,y) from t1; + INSERT INTO t2 SELECT Mod(x,y) FROM t1; ``` - 用户可以方便地保存(物化)TiFlash 的计算结果以供下游步骤使用,可以起到结果缓存(物化)的效果。适用于以下场景:使用 TiFlash 做复杂分析,需重复使用计算结果或响应高并发的在线请求,计算性质本身聚合性好(相对输入数据,计算得出的结果集比较小,推荐 100MB 以内)。作为写入对象的 结果表本身没有特别限制,可以任意选择是否添加 TiFlash 副本。 + During the experimental phase, this feature is disabled by default. To enable it, you can set the [`tidb_enable_tiflash_read_for_write_stmt`](/system-variables.md#tidb_enable_tiflash_read_for_write_stmt-new-in-v630) system variable to `ON`. There are no special restrictions on the result table specified by `INSERT INTO` for this feature, and you are free to add a TiFlash replica to that result table or not. Typical usage scenarios of this feature include: - 更多信息,请参考[用户文档](/tiflash/tiflash-results-materialization.md)。 + - Run complex analysis queries using TiFlash + - Reuse TiFlash query results or deal with highly concurrent online requests + - Need a relatively small result set comparing with the input data size, recommended to be within 100MiB. + + For more information, see the [user documentation](/tiflash/tiflash-results-materialization.md). ### Security @@ -114,15 +118,15 @@ TiDB 6.5.0 is a Long-Term Support Release (LTS). 关于“索引合并”功能的介绍请参阅 [v5.4 release note](/release-5.4.0#性能), 以及优化器相关的[用户文档](/explain-index-merge.md) -* 新增支持下推[JSON 函数](/tiflash/tiflash-supported-pushdown-calculations.md) 至 TiFlash [#39458](https://github.com/pingcap/tidb/issues/39458) @[yibin87](https://github.com/yibin87) **tw@qiancai** +* Support pushing down the following [JSON functions](/tiflash/tiflash-supported-pushdown-calculations.md) to TiFlash [#39458](https://github.com/pingcap/tidb/issues/39458) @[yibin87](https://github.com/yibin87) **tw@qiancai** * `->` * `->>` * `JSON_EXTRACT()` - JSON 格式为应用设计提供了更灵活的建模方式,目前越来越多的应用采用 JSON 格式进行数据交换和数据存储。 把 JSON 函数下推至 TiFlash 可以加速对 JSON 类型数据的分析效率,拓展 TiDB 实时分析的应用场景。TiDB 将持续完善,在未来版本支持更多的 JSON 函数下推至 TiFlash。 + The JSON format provides a flexible way to model application design. Therefore, more and more applications are using the JSON format for data exchange and data storage. By pushing down JSON functions to TiFlash, you can improve the efficiency of analyzing data in the JSON type and use TiDB for more real-time analytics scenarios. 
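    As a hedged illustration (the `events` table, its JSON column `payload`, and the JSON paths are assumptions made for this example; the table also needs a TiFlash replica for the pushdown to apply), a query such as the following can now have its JSON extraction evaluated on TiFlash:

    ```sql
    SELECT JSON_EXTRACT(payload, '$.user.id') AS user_id,
           payload->>'$.event_type' AS event_type
    FROM events;
    ```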
-* 新增支持下推[字符串函数](/tiflash/tiflash-supported-pushdown-calculations.md) 至 TiFlash [#6115](https://github.com/pingcap/tiflash/issues/6115) @[xzhangxian1008](https://github.com/xzhangxian1008) **tw@qiancai** +* Support pushing down the following [string functions](/tiflash/tiflash-supported-pushdown-calculations.md) to TiFlash [#6115](https://github.com/pingcap/tiflash/issues/6115) @[xzhangxian1008](https://github.com/xzhangxian1008) **tw@qiancai** * `regexp_like` * `regexp_instr` @@ -134,9 +138,9 @@ TiDB 6.5.0 is a Long-Term Support Release (LTS). 更多信息,请参考[用户文档](/optimizer-hints.md#全局生效的-Hint)。 -* [分区表](/partitioned-table.md)的排序操作下推至 TiKV [#26166](https://github.com/pingcap/tidb/issues/26166) @[winoros](https://github.com/winoros) **tw@qiancai** +* Support pushing down the sorting operation of [partitioned-table](/partitioned-table.md) to TiKV [#26166](https://github.com/pingcap/tidb/issues/26166) @[winoros](https://github.com/winoros) **tw@qiancai** - [分区表](/partitioned-table.md)在 v6.1.0 正式 GA, TiDB 持续提升分区表相关的性能。 在 v6.5.0 中, 排序操作如 `ORDER BY`, `LIMIT` 能够下推至 TiKV 进行计算和过滤,降低网络 I/O 的开销,提升了使用分区表时 SQL 的性能。 + Although [partitioned table](/partitioned-table.md) has been GA since v6.1.0, TiDB is continually improving its performance. In v6.5.0, TiDB supports pushing down sort operations such as `ORDER BY` and `LIMIT` to TiKV for computation and filtering, which reduces network I/O overhead and improves SQL performance when you use partitioned tables. * 优化器代价模型 Cost Model Version 2 GA [#35240](https://github.com/pingcap/tidb/issues/35240) @[qw4990](https://github.com/qw4990) **tw@Oreoxmt** @@ -178,11 +182,11 @@ TiDB 6.5.0 is a Long-Term Support Release (LTS). ### Ease of use -* 完善 EXPLAIN ANALYZE 输出的 TiFlash 的 TableFullScan 算子的统计信息 [#5926](https://github.com/pingcap/tiflash/issues/5926) @[hongyunyan](https://github.com/hongyunyan) **tw@qiancai** +* Refine the execution information of the TiFlash `TableFullScan` operator in the `EXPLAIN ANALYZE` output [#5926](https://github.com/pingcap/tiflash/issues/5926) @[hongyunyan](https://github.com/hongyunyan) **tw@qiancai** - [`EXPLAIN ANALYZE`] 语句可以输出执行计划及运行时的统计信息。现有版本的统计信息中,TiFlash 的 TableFullScan 算子统计信息不完善。v6.5.0 版本对 TableFullScan 算子的统计信息进行完善,补充了 dmfile 相关的执行信息,可以更加清晰的展示 TiFlash 的数据扫描状态信息,方便进行性能分析。 + The `EXPLAIN ANALYZE` statement is used to print execution plans and runtime statistics. In v6.5.0, TiFlash has refined the execution information of the `TableFullScan` operator by adding the DMFile-related execution information. Now the TiFlash data scan status information is presented more intuitively, which helps you analyze TiFlash performance more easily. - 更多信息,请参考[用户文档](sql-statements/sql-statement-explain-analyze.md)。 + For more information, see [user documentation](sql-statements/sql-statement-explain-analyze.md). 
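    A minimal sketch of how to view the refined execution information (the table name is hypothetical, and the exact fields shown depend on your TiFlash version and data layout):

    ```sql
    -- Assuming t1 has a TiFlash replica, the TableFullScan row in the output
    -- now carries DMFile-related details (for example, the number of scanned
    -- packs and rows) in its execution info column.
    EXPLAIN ANALYZE SELECT COUNT(*) FROM t1;
    ```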
* Support the output of execution plans in JSON format [#39261](https://github.com/pingcap/tidb/issues/39261) @[fzzf678](https://github.com/fzzf678) **tw@ran-huang**

From 5515526f00c94b1e42afc3790c88826f520104ed Mon Sep 17 00:00:00 2001
From: Aolin
Date: Fri, 9 Dec 2022 10:32:09 +0800
Subject: [PATCH 07/83] translate new features:

- add index acceleration
- metadata lock
- flashback cluster
- non-transactional DML
- global Hint
- cost model version 2
- auto_increment MySQL

Signed-off-by: Aolin
---
 releases/release-6.5.0.md | 42 +++++++++++++++++++--------------------
 1 file changed, 21 insertions(+), 21 deletions(-)

diff --git a/releases/release-6.5.0.md b/releases/release-6.5.0.md
index e27083ad015db..9d04f5c064a72 100644
--- a/releases/release-6.5.0.md
+++ b/releases/release-6.5.0.md
@@ -28,27 +28,27 @@ TiDB 6.5.0 is a Long-Term Support Release (LTS).

 ### SQL

-* TiDB 添加索引的性能提升为原来的 10 倍 [#35983](https://github.com/pingcap/tidb/issues/35983) @[benjamin2037](https://github.com/benjamin2037) @[tangenta](https://github.com/tangenta) **tw@Oreoxmt**
+* The performance of TiDB adding indexes is improved by 10 times [#35983](https://github.com/pingcap/tidb/issues/35983) @[benjamin2037](https://github.com/benjamin2037) @[tangenta](https://github.com/tangenta) **tw@Oreoxmt**

-  TiDB v6.3.0 引入了[添加索引加速](/system-variables.md#tidb_ddl_enable_fast_reorg-从-v630-版本开始引入)作为实验特性,提升了添加索引回填过程的速度。该功能在 v6.5.0 正式 GA 并默认打开,预期大表添加索引的性能提升约为原来的 10 倍。添加索引加速适用于单条 SQL 语句串行添加索引的场景,在多条 SQL 并行添加索引时仅对其中一条添加索引的 SQL 语句生效。
+  TiDB v6.3.0 introduces the [Add index acceleration](/system-variables.md#tidb_ddl_enable_fast_reorg-new-in-v630) as an experimental feature to improve the speed of backfilling when creating an index. In v6.5.0, this feature becomes GA and is enabled by default, and adding an index to a large table is expected to be about 10 times faster than before. The acceleration feature is suitable for scenarios where a single SQL statement adds an index serially. When multiple SQL statements add indexes in parallel, only one of the SQL statements is accelerated.

-* 提供轻量级元数据锁,提升 DDL 变更过程 DML 的成功率 [#37275](https://github.com/pingcap/tidb/issues/37275) @[wjhuang2016](https://github.com/wjhuang2016) **tw@Oreoxmt**
+* Provide a lightweight metadata lock to improve the DML success rate during DDL changes [#37275](https://github.com/pingcap/tidb/issues/37275) @[wjhuang2016](https://github.com/wjhuang2016) **tw@Oreoxmt**

-  TiDB v6.3.0 引入了[元数据锁](/metadata-lock.md)作为实验特性,通过协调表元数据变更过程中 DML 语句和 DDL 语句的优先级,让执行中的 DDL 语句等待持有旧版本元数据的 DML 语句提交,尽可能避免 DML 语句的 `Information schema is changed` 错误。该功能在 v6.5.0 正式 GA 并默认打开,适用于各类 DDL 变更场景。
+  TiDB v6.3.0 introduces [Metadata lock](/metadata-lock.md) as an experimental feature. To avoid the `Information schema is changed` error caused by DML statements, TiDB coordinates the priority of DML and DDL statements during table metadata changes, and makes an executing DDL statement wait for DML statements that hold the old metadata to commit. In v6.5.0, this feature becomes GA and is enabled by default. It is suitable for all types of DDL change scenarios.

-  更多信息,请参考[用户文档](/metadata-lock.md)。
+  For more information, see [User document](/metadata-lock.md).
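    A hedged sketch of checking the GA behavior on a cluster (the system variable and view below are the ones referenced in the metadata lock documentation; clusters upgraded from earlier versions may need to turn the switch on explicitly):

    ```sql
    -- Metadata lock is controlled by this system variable.
    SET GLOBAL tidb_enable_metadata_lock = ON;

    -- When a DDL statement is waiting for DML statements that hold the old
    -- metadata to commit, the blocking sessions can be inspected through this view.
    SELECT * FROM mysql.tidb_mdl_view;
    ```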
-* 支持通过 `FLASHBACK CLUSTER TO TIMESTAMP` 命令将集群快速回退到特定的时间点 [#37197](https://github.com/pingcap/tidb/issues/37197) [#13303](https://github.com/tikv/tikv/issues/13303) @[Defined2014](https://github.com/Defined2014) @[bb7133](https://github.com/bb7133) @[JmPotato](https://github.com/JmPotato) @[Connor1996](https://github.com/Connor1996) @[HuSharp](https://github.com/HuSharp) @[CalvinNeo](https://github.com/CalvinNeo) **tw@Oreoxmt** +* Support restoring a cluster to a specific point in time by using `FLASHBACK CLUSTER TO TIMESTAMP` [#37197](https://github.com/pingcap/tidb/issues/37197) [#13303](https://github.com/tikv/tikv/issues/13303) @[Defined2014](https://github.com/Defined2014) @[bb7133](https://github.com/bb7133) @[JmPotato](https://github.com/JmPotato) @[Connor1996](https://github.com/Connor1996) @[HuSharp](https://github.com/HuSharp) @[CalvinNeo](https://github.com/CalvinNeo) **tw@Oreoxmt** - TiDB v6.4.0 引入了 [`FLASHBACK CLUSTER TO TIMESTAMP`](/sql-statements/sql-statement-flashback-to-timestamp.md) 语句作为实验特性,支持在 Garbage Collection (GC) life time 内快速回退整个集群到指定的时间点。该功能在 v6.5.0 正式 GA,适用于快速撤消 DML 误操作、支持集群分钟级别的快速回退、支持在时间线上多次回退以确定特定数据更改发生的时间,并兼容 PITR 和 TiCDC 等工具。 + TiDB v6.4.0 introduces the [`FLASHBACK CLUSTER TO TIMESTAMP`](/sql-statements/sql-statement-flashback-to-timestamp.md) statement as an experimental feature. You can use this statement to restore a cluster to a specific point in time within the Garbage Collection (GC) life time. In v6.5.0, this statement becomes GA. This feature helps you to easily undo DML misoperations, restore the original cluster in minutes, rollback data at different time points to determine the exact time when data changes, and it is compatible with PITR and TiCDC. - 更多信息,请参考[用户文档](/sql-statements/sql-statement-flashback-to-timestamp.md)。 + For more information, see [User document](/sql-statements/sql-statement-flashback-to-timestamp.md). -* 完整支持包含 `INSERT`、`REPLACE`、`UPDATE` 和 `DELETE` 的非事务 DML 语句 [#33485](https://github.com/pingcap/tidb/issues/33485) @[ekexium](https://github.com/ekexium) **tw@Oreoxmt** +* Fully support non-transactional DML statements including `INSERT`, `REPLACE`, `UPDATE`, and `DELETE` [#33485](https://github.com/pingcap/tidb/issues/33485) @[ekexium](https://github.com/ekexium) **tw@Oreoxmt** - 在大批量的数据处理场景,单一大事务 SQL 处理可能对集群稳定性和性能造成影响。非事务 DML 语句将一个 DML 语句拆成多个 SQL 语句在内部执行。拆分后的语句将牺牲事务原子性和隔离性,但是对于集群的稳定性有很大提升。TiDB 从 v6.1.0 开始支持非事务 `DELETE` 语句,v6.5.0 新增对非事务 `INSERT`、`REPLACE` 和 `UPDATE` 语句的支持。 + In the scenarios of large data processing, a single SQL statement with a large transaction might have a negative impact on the cluster stability and performance. A non-transactional DML statement is a DML statement split into multiple SQL statements for internal execution. The split statements compromise transaction atomicity and isolation but greatly improve the cluster stability. TiDB supports non-transactional `DELETE` statements since v6.1.0, and v6.5.0 adds support for non-transactional `INSERT`, `REPLACE`, and `UPDATE` statements. - 更多信息,请参考[非事务 DML 语句](/non-transactional-dml.md) 和 [BATCH](/sql-statements/sql-statement-batch.md)。 + For more information, see [Non-Transactional DML statements](/non-transactional-dml.md) and [`BATCH` syntax](/sql-statements/sql-statement-batch.md). * Support time to live (TTL) (experimental feature) [#39262](https://github.com/pingcap/tidb/issues/39262) @[lcwangchao](https://github.com/lcwangchao) **tw@ran-huang** @@ -132,23 +132,23 @@ TiDB 6.5.0 is a Long-Term Support Release (LTS). 
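    As a minimal sketch of the TTL syntax for this feature (the table and column names are hypothetical; see the TTL documentation for the complete option list):

    ```sql
    -- Rows whose created_at value is older than 90 days become eligible for
    -- automatic deletion by the background TTL jobs.
    CREATE TABLE access_log (
        id BIGINT AUTO_INCREMENT PRIMARY KEY,
        created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP
    ) TTL = `created_at` + INTERVAL 90 DAY;
    ```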
* `regexp_instr` * `regexp_substr` -* 新增全局 Hint 干预[视图](/views.md)内查询的计划生成 [#37887](https://github.com/pingcap/tidb/issues/37887) @[Reminiscent](https://github.com/Reminiscent) **tw@Oreoxmt** +* Support the global Hint to interfere with the execution plan generation in [Views](/views.md) [#37887](https://github.com/pingcap/tidb/issues/37887) @[Reminiscent](https://github.com/Reminiscent) **tw@Oreoxmt** - 当 SQL 语句中包含对视图的访问时,部分情况下需要用 Hint 对视图内查询的执行计划进行干预,以获得最佳性能。在 v6.5.0 中,TiDB 允许针对视图内的查询块添加全局 Hint,使查询中定义的 Hint 能够在视图内部生效。全局 Hint 由[查询块命名](/optimizer-hints.md#第-1-步使用-qb_name-hint-重命名视图内的查询块)和 [Hint 引用](/optimizer-hints.md#第-2-步添加实际需要的-hint)两部分组成。该特性为包含复杂视图嵌套的 SQL 提供 Hint 的注入手段,增强了执行计划控制能力,进而稳定复杂 SQL 的执行性能。 + In some view access scenarios, you need to use Hints to interfere with the execution plan of the query in the view to achieve best performances. In v6.5.0, TiDB supports adding global Hints for the query blocks in the view, thus the Hints defined in the query can be effective in the view. This feature provides a way to inject Hints into complex SQL statements that contain nested views, enhances the execution plan control, and stabilizes the performance of complex statements. To use global Hints, you need to [name the query blocks](/optimizer-hints.md#step-1-define-the-query-block-name-of-the-view-using-the-qb_name-hint) and [specify Hint references](/optimizer-hints.md#step-2-add-the-target-hints). - 更多信息,请参考[用户文档](/optimizer-hints.md#全局生效的-Hint)。 + For more information, see [User document](/optimizer-hints.md#hints-that-take-effect-globally). * Support pushing down the sorting operation of [partitioned-table](/partitioned-table.md) to TiKV [#26166](https://github.com/pingcap/tidb/issues/26166) @[winoros](https://github.com/winoros) **tw@qiancai** Although [partitioned table](/partitioned-table.md) has been GA since v6.1.0, TiDB is continually improving its performance. In v6.5.0, TiDB supports pushing down sort operations such as `ORDER BY` and `LIMIT` to TiKV for computation and filtering, which reduces network I/O overhead and improves SQL performance when you use partitioned tables. -* 优化器代价模型 Cost Model Version 2 GA [#35240](https://github.com/pingcap/tidb/issues/35240) @[qw4990](https://github.com/qw4990) **tw@Oreoxmt** +* Optimizer introduces a more accurate Cost Model Version 2 [#35240](https://github.com/pingcap/tidb/issues/35240) @[qw4990](https://github.com/qw4990) **tw@Oreoxmt** - TiDB v6.2.0 引入了代价模型 [Cost Model Version 2](/cost-model.md#cost-model-version-2) 作为实验特性,通过更准确的代价估算方式,有利于最优执行计划的选择。尤其在部署了 TiFlash 的情况下,Cost Model Version 2 自动选择合理的存储引擎,避免过多的人工介入。经过一段时间真实场景的测试,这个模型在 v6.5.0 正式 GA。新创建的集群将默认使用 Cost Model Version 2。对于升级到 v6.5.0 的集群,由于 Cost Model Version 2 可能会改变原有的执行计划,在经过充分的性能测试之后,你可以通过设置变量 [`tidb_cost_model_version = 2`](/system-variables.md#tidb_cost_model_version-从-v620-版本开始引入) 使用新的代价模型。 + TiDB v6.2.0 introduces the [Cost Model Version 2](/cost-model.md#cost-model-version-2) as an experimental feature. This model uses a more accurate cost estimation method to help the optimizer choose the optimal execution plan. Especially when TiFlash is deployed, Cost Model Version 2 automatically chooses the appropriate storage engine and avoids manual intervention. After real scene testing for a period of time, this model becomes GA in v6.5.0. The newly created cluster uses Cost Model Version 2 by default. 
For clusters upgrade to v6.5.0, because Cost Model Version 2 might cause changes to query plans, you can set the [`tidb_cost_model_version = 2`](/system-variables.md#tidb_cost_model_version-new-in-v620) variable to use the new cost model after sufficient performance testing. - Cost Model Version 2 的 GA,大幅提升了 TiDB 优化器的整体能力,并切实地向更加强大的 HTAP 数据库演进。 + Cost Model Version 2 becomes a generally available feature that significantly improves the overall capability of the TiDB optimizer and evolves towards a more powerful HTAP database. - 更多信息,请参考[用户文档](/cost-model.md#cost-model-version-2)。 + For more information, see [User document](/cost-model.md#cost-model-version-2). * TiFlash 对获取表行数的操作进行针对优化 [#37165](https://github.com/pingcap/tidb/issues/37165) @[elsa0520](https://github.com/elsa0520) @@ -196,15 +196,15 @@ TiDB 6.5.0 is a Long-Term Support Release (LTS). ### MySQL compatibility -* 支持高性能、全局单调递增的 `AUTO_INCREMENT` 列属性 [#38442](https://github.com/pingcap/tidb/issues/38442) @[tiancaiamao](https://github.com/tiancaiamao) **tw@Oreoxmt** +* Support a high-performance and globally monotonic `AUTO_INCREMENT` [#38442](https://github.com/pingcap/tidb/issues/38442) @[tiancaiamao](https://github.com/tiancaiamao) **tw@Oreoxmt** - TiDB v6.4.0 引入了 `AUTO_INCREMENT` 的 MySQL 兼容模式作为实验特性,通过中心化分配自增 ID,实现了自增 ID 在所有 TiDB 实例上单调递增。使用该特性能够更容易地实现查询结果按自增 ID 排序。该功能在 v6.5.0 正式 GA。使用该功能的单表写入 TPS 预期超过 2 万,并支持通过弹性扩容提升单表和整个集群的写入吞吐。要使用 MySQL 兼容模式,你需要在建表时将 `AUTO_ID_CACHE` 设置为 `1`。 + TiDB v6.4.0 introduces the `AUTO_INCREMENT` MySQL compatibility mode as an experimental feature. This mode introduces a centralized auto-increment ID allocating service that ensures IDs monotonically increase on all TiDB instances. This feature makes it easier to sort query results by auto-increment IDs. In v6.5.0, this feature becomes GA. The insert TPS of a table using this feature is expected to exceed 20,000, and this feature supports elastic scaling to improve the write throughput of a single table and entire clusters. To use the MySQL compatibility mode, you need to set `AUTO_ID_CACHE` to `1` when creating a table. The following is an example: ```sql CREATE TABLE t(a int AUTO_INCREMENT key) AUTO_ID_CACHE 1; ``` - 更多信息,请参考[用户文档](/auto-increment.md#mysql-兼容模式)。 + For more information, see [User document](/auto-increment.md#mysql-compatibility-mode). ### Data migration From 36671aca3b5d59f1ed31588788303a408becb87d Mon Sep 17 00:00:00 2001 From: qiancai Date: Fri, 9 Dec 2022 14:12:21 +0800 Subject: [PATCH 08/83] synch with Chinese --- releases/release-6.5.0.md | 92 ++++++++++++++++++++++----------------- 1 file changed, 53 insertions(+), 39 deletions(-) diff --git a/releases/release-6.5.0.md b/releases/release-6.5.0.md index 9d04f5c064a72..a77d421a1b5d1 100644 --- a/releases/release-6.5.0.md +++ b/releases/release-6.5.0.md @@ -339,23 +339,40 @@ TiDB 6.5.0 is a Long-Term Support Release (LTS). 
+ TiDB - 对于 `bit` and `char` 类型的列,使 `INFORMATION_SCHEMA.COLUMNS` 的显示结果与 MySQL 一致 [#25472](https://github.com/pingcap/tidb/issues/25472) @[hawkingrei](https://github.com/hawkingrei) - - note [#issue](链接) @[贡献者 GitHub ID](链接) + TiKV - - note [#issue](链接) @[贡献者 GitHub ID](链接) + - `cdc.min-ts-interval` 默认值从 1s 改为 200ms 以降低 CDC 延迟 [#12840](https://github.com/tikv/tikv/issues/12840) @[hicqu](https://github.com/hicqu) + - 引入 witness peer [#12876](https://github.com/tikv/tikv/issues/12876) @[Connor1996](https://github.com/Connor1996) + - 当剩余空间不足时停止 Raft Engine 的写入以避免硬盘空间耗尽 [#13642](https://github.com/tikv/tikv/issues/13642) @[jiayang-zheng](https://github.com/jiayang-zheng) + - 实现 `json_valid` 函数下推 [#13571](https://github.com/tikv/tikv/issues/13571) @[lizhenhuan](https://github.com/lizhenhuan) + - 支持在一个备份请求中同时备份多个范围的数据 [#13701](https://github.com/tikv/tikv/issues/13701) @[Leavrth](https://github.com/Leavrth) + - 更新 rusoto 库以支持备份到 ap-southeast-3 [#13751](https://github.com/tikv/tikv/issues/13751) @[3pointer](https://github.com/3pointer) + - 减少悲观事务冲突 [#13298](https://github.com/tikv/tikv/issues/13298) @[MyonKeminta](https://github.com/MyonKeminta) + - 缓存外部存储对象以提升恢复性能 [#13798](https://github.com/tikv/tikv/issues/13798) @[YuJuncen](https://github.com/YuJuncen) + - 在专用线程中运行 CheckLeader 以缩短 TiCDC 的复制延迟 [#13774](https://github.com/tikv/tikv/issues/13774) @[overvenus](https://github.com/overvenus) + - Checkpoint 支持拉取模式 [#13824](https://github.com/tikv/tikv/issues/13824) @[YuJuncen](https://github.com/YuJuncen) + - 升级 crossbeam-channel 以优化发送端的自旋问题 [#13815](https://github.com/tikv/tikv/issues/13815) @[sticnarf](https://github.com/sticnarf) + - Coprocessor 支持批量处理 [#13849](https://github.com/tikv/tikv/issues/13849) @[cfzjywxk](https://github.com/cfzjywxk) + - 故障恢复时通知 TiKV 唤醒休眠的 region 以减少等待时间 [#13648](https://github.com/tikv/tikv/issues/13648) @[LykxSassinator](https://github.com/LykxSassinator) + - 通过代码优化减少内存申请 [#13836](https://github.com/tikv/tikv/pull/13836) @[BusyJay](https://github.com/BusyJay) + - 引入 raft extension 以提升代码可扩展性 [#13864](https://github.com/tikv/tikv/pull/13864) @[BusyJay](https://github.com/BusyJay) + - 通过引入 `hint_min_ts` 加速 flashback [#13842](https://github.com/tikv/tikv/pull/13842) @[JmPotato](https://github.com/JmPotato) - tikv-ctl 支持查询某个 key 范围中包含哪些 Region [#13768](https://github.com/tikv/tikv/pull/13768) [@HuSharp](https://github.com/HuSharp) - 改进持续对特定行只加锁但不更新情况下的读写性能 [#13694](https://github.com/tikv/tikv/issues/13694) [@sticnarf](https://github.com/sticnarf) + + PD - - note [#issue](链接) @[贡献者 GitHub ID](链接) - - note [#issue](链接) @[贡献者 GitHub ID](链接) + - 优化锁的粒度以减少锁争用,提升高并发下心跳的处理能力 [#5586](https://github.com/tikv/pd/issues/5586) @[rleungx](https://github.com/rleungx) + - 优化调度器在大规模集群下的性能问题,提升调度策略生产速度 [#5473](https://github.com/tikv/pd/issues/5473) @[bufferflies](https://github.com/bufferflies) + - 增加 btree 的泛型性支持 [#5606](https://github.com/tikv/pd/issues/5606) @[rleungx](https://github.com/rleungx) + - 优化心跳处理过程,减少一些不要的开销 [#5648](https://github.com/tikv/pd/issues/5648)@[rleungx](https://github.com/rleungx) + - 增加了自动清理 tombstone store 的功能 [#5348](https://github.com/tikv/pd/issues/5348) @[nolouch](https://github.com/nolouch) + TiFlash - 提升了 TiFlash 在 SQL 端没有攒批的场景的写入性能 [#6404](https://github.com/pingcap/tiflash/issues/6404) @[lidezhu](https://github.com/lidezhu) - 增加了 TableFullScan 的输出信息 [#5926](https://github.com/pingcap/tiflash/issues/5926) @[hongyunyan](https://github.com/hongyunyan) - - note [#issue](链接) @[贡献者 GitHub ID](链接) + Tools @@ -365,28 +382,20 @@ TiDB 6.5.0 is a Long-Term Support 
Release (LTS). + Backup & Restore (BR) - - note [#issue](链接) @[贡献者 GitHub ID](链接) - - note [#issue](链接) @[贡献者 GitHub ID](链接) + - 优化清理备份日志数据是 BR 的内存使用 [#38869](https://github.com/pingcap/tidb/issues/38869) @[Leavrth](https://github.com/Leavrth) + - 提升在恢复时的稳定性,允许 PD leader 切换的情况发生 [#36910](https://github.com/pingcap/tidb/issues/36910) @[MoCuishle28](https://github.com/MoCuishle28) + - 日志备份的 tls 功能使用 openssl 协议,提升 tls 兼容性。[#13867](https://github.com/tikv/tikv/issues/13867) @[YuJuncen](https://github.com/YuJuncen) + TiCDC - - note [#issue](链接) @[贡献者 GitHub ID](链接) - - note [#issue](链接) @[贡献者 GitHub ID](链接) + - 采用并发的方式对数据进行编码,极大提升了同步到 kafka 的吞吐能力 [#7532](https://github.com/pingcap/tiflow/issues/7532) [#7543](https://github.com/pingcap/tiflow/issues/7543) [#7540](https://github.com/pingcap/tiflow/issues/7540) @[3AceShowHand](https://github.com/3AceShowHand) @[sdojjy](https://github.com/sdojjy) + TiDB Data Migration (DM) - - note [#issue](链接) @[贡献者 GitHub ID](链接) - - note [#issue](链接) @[贡献者 GitHub ID](链接) - - + TiDB Lightning - - - note [#issue](链接) @[贡献者 GitHub ID](链接) - - note [#issue](链接) @[贡献者 GitHub ID](链接) - - + TiUP - - - note [#issue](链接) @[贡献者 GitHub ID](链接) - - note [#issue](链接) @[贡献者 GitHub ID](链接) + - 通过不再解析黑名单表的数据提升了 dm 同步数据的性能 [#7622](https://github.com/pingcap/tiflow/pull/7622) @[GMHDBJD](https://github.com/GMHDBJD) + - 通过异步写与批量写的方式提升 dm relay 写数据效率 [#7580](https://github.com/pingcap/tiflow/pull/7580) @[GMHDBJD](https://github.com/GMHDBJD) + - 改进 DM 前置检查的错误提示信息 [#7696](https://github.com/pingcap/tiflow/pull/7696) @[buchuitoudegou](https://github.com/buchuitoudegou) + - 改进 DM 针对老版本 MySQL 使用 `SHOW SLAVE HOSTS` 获取结果时的兼容性 [#7373](https://github.com/pingcap/tiflow/pull/7372) @[lyzx2001](https://github.com/lyzx2001) ## Bug fixes @@ -401,49 +410,54 @@ TiDB 6.5.0 is a Long-Term Support Release (LTS). - 修复了从 v4.0 升级到 v6.4 后 'admin show job' 操作崩溃的问题 [#38980](https://github.com/pingcap/tidb/issues/38980) @[tangenta](https://github.com/tangenta) - 修复了 `tidb_decode_key` 函数未正确处理分区表编码的问题 [#39304](https://github.com/pingcap/tidb/issues/39304) @[Defined2014](https://github.com/Defined2014) - 修复了 log rotate 时,grpc 的错误日志信息未被重定向到正确的日志文件的问题 [#38941](https://github.com/pingcap/tidb/issues/38941) @[xhebox](https://github.com/xhebox) + - 修复了 `begin; select... 
for update;` 点查在 read engines 未配置 TiKV 时生成非预期执行计划的问题 [#39344](https://github.com/pingcap/tidb/issues/39344) @[Yisaer](https://github.com/Yisaer) + - 修复了错误地下推 `StreamAgg` 到 TiFlash 导致结果错误的问题 [#39266](https://github.com/pingcap/tidb/issues/39266) @[fixdb](https://github.com/fixdb) + TiKV - - note [#issue](链接) @[贡献者 GitHub ID](链接) - - note [#issue](链接) @[贡献者 GitHub ID](链接) + - 修复 raft engine ctl 中的错误 [#13108](https://github.com/tikv/tikv/issues/13108) @[tabokie](https://github.com/tabokie) + - 修复 tikv-ctl 中 compact raft 命令的错误 [#13515](https://github.com/tikv/tikv/issues/13515) @[guoxiangCN](https://github.com/guoxiangCN) + - 修复当启用 TLS 时 log backup 无法使用的问题 [#13851](https://github.com/tikv/tikv/issues/13851) @[YuJuncen](https://github.com/YuJuncen) + - 修复对 Geometry 字段类型的支持 [#13651](https://github.com/tikv/tikv/issues/13651) @[dveeden](https://github.com/dveeden) + - 修复当未使用 new collation 时 `like` 无法处理 `_` 中非 ASCII 字符的问题 [#13769](https://github.com/tikv/tikv/issues/13769) @[YangKeao](https://github.com/YangKeao) + - 修复 tikv-ctl 执行 reset-to-version 时出现 segfault 的问题 [#13829](https://github.com/tikv/tikv/issues/13829) @[tabokie](https://github.com/tabokie) + PD - - note [#issue](链接) @[贡献者 GitHub ID](链接) - - note [#issue](链接) @[贡献者 GitHub ID](链接) + - 修复热点调度配置在没有修改的情况下不持久化的问题 [#5701](https://github.com/tikv/pd/issues/5701) @[HunDunDM](https://github.com/HunDunDM) + - 修复 rank-formula-version 在升级过程中没有保持升级前的配置的问题 [#5699](https://github.com/tikv/pd/issues/5698) @[HunDunDM](https://github.com/HunDunDM) + TiFlash - 修复 TiFlash 重启不能正确合并小文件的问题 [#6159](https://github.com/pingcap/tiflash/issues/6159) @[lidezhu](https://github.com/lidezhu) - 修复 TiFlash Open File OPS 过高的问题 [#6345](https://github.com/pingcap/tiflash/issues/6345) @[JaySon-Huang](https://github.com/JaySon-Huang) - - note [#issue](链接) @[贡献者 GitHub ID](链接) + Tools + Backup & Restore (BR) - - note [#issue](链接) @[贡献者 GitHub ID](链接) - - note [#issue](链接) @[贡献者 GitHub ID](链接) + - 修复清理备份日志数据时错误删除数据导致数据丢失的问题 [#38939](https://github.com/pingcap/tidb/issues/38939) @[Leavrth](https://github.com/Leavrth) + - 修复在大于 6.1 版本关闭 new_collation 设置,仍然恢复失败的问题 [#39150](https://github.com/pingcap/tidb/issues/39150) @[MoCuishle28](https://github.com/MoCuishle28) + - 修复因非 s3 存储的不兼容请求导致备份 panic 的问题 [39545](https://github.com/pingcap/tidb/issues/39545) @[3pointer](https://github.com/3pointer) + TiCDC - - note [#issue](链接) @[贡献者 GitHub ID](链接) - - note [#issue](链接) @[贡献者 GitHub ID](链接) + - 修复 PD leader crash时 CDC 卡住的问题 [#7470](https://github.com/pingcap/tiflow/issues/7470) @[zeminzhou](https://github.com/zeminzhou) + - 修复在执行drop table 时用户快速暂停恢复同步任务导致可能的数据丢失问题 [#7682](https://github.com/pingcap/tiflow/issues/7682) @[asddongmen](https://github.com/asddongmen) + - 兼容上游开启 TiFlash 时版本兼容性问题 [#7744](https://github.com/pingcap/tiflow/issues/7744) @[overvenus](https://github.com/overvenus) + - 修复下游网络出现故障导致cdc 卡住的问题 [#7706](https://github.com/pingcap/tiflow/issues/7706) @[hicqu](https://github.com/hicqu) + - 修复用户快速删除、创建同名同步任务可能导致的数据丢失问题 [#7657](https://github.com/pingcap/tiflow/issues/7657) @[overvenus](https://github.com/overvenus) + TiDB Data Migration (DM) - - note [#issue](链接) @[贡献者 GitHub ID](链接) - - note [#issue](链接) @[贡献者 GitHub ID](链接) + - 修复无法在上游开启 gtid mode 且无数据时启动 all mode 任务的错误 [#7037](https://github.com/pingcap/tiflow/issues/7037) @[liumengya94](https://github.com/liumengya94) + - 修复 DM-worker 异常重启可能引起的多 worker 写同一下游同张表的错误 [#7658](https://github.com/pingcap/tiflow/issues/7658) @[GMHDBJD](https://github.com/GMHDBJD) + - 修复上游数据库使用正则匹配授权时 DM 
前置检查不通过的错误[#7645](https://github.com/pingcap/tiflow/issues/7645) @[lance6716](https://github.com/lance6716) + TiDB Lightning - - note [#issue](链接) @[贡献者 GitHub ID](链接) - - note [#issue](链接) @[贡献者 GitHub ID](链接) - - + TiUP - - - note [#issue](链接) @[贡献者 GitHub ID](链接) - - note [#issue](链接) @[贡献者 GitHub ID](链接) + - 修复 TiDB Lightning 导入巨大数据源文件时的内存泄漏问题 [#39331](https://github.com/pingcap/tidb/issues/39331) @[dsdashun](https://github.com/dsdashun) + - 修复 TiDB Lightning 在并行导入冲突检测时无法正确检测的问题 [#39476](https://github.com/pingcap/tidb/issues/39476) @[dsdashun](https://github.com/dsdashun) ## Contributors From b19d37defd405f7410f45a39b426131681b63755 Mon Sep 17 00:00:00 2001 From: xixirangrang Date: Tue, 13 Dec 2022 13:54:43 +0800 Subject: [PATCH 09/83] Apply suggestions from code review --- releases/release-6.5.0.md | 56 +++++++++++++++++++-------------------- 1 file changed, 28 insertions(+), 28 deletions(-) diff --git a/releases/release-6.5.0.md b/releases/release-6.5.0.md index a77d421a1b5d1..f8d18aef0ff8e 100644 --- a/releases/release-6.5.0.md +++ b/releases/release-6.5.0.md @@ -208,35 +208,35 @@ TiDB 6.5.0 is a Long-Term Support Release (LTS). ### Data migration -* 支持导出和导入压缩后的 CSV、SQL 文件 [#38514](https://github.com/pingcap/tidb/issues/38514) @[lichunzhu](https://github.com/lichunzhu) **tw@hfxsd** +* Support exporting and importing SQL and CSV files in the following compression formats: gzip, snappy and zstd [#38514](https://github.com/pingcap/tidb/issues/38514) @[lichunzhu](https://github.com/lichunzhu) **tw@hfxsd** - Dumpling 支持将数据导出为 SQL、CSV 的压缩文件,支持 gzip/snappy/zstd 三种压缩格式。Lightning 支持导入压缩后的 SQL、CSV 文件,支持gzip/snappy/zstd 三种压缩格式。 + Dumpling supports exporting data to compressed SQL and CSV files in the following compression formats: gzip, snappy, and zstd. TiDB Lightning also supports importing compressed files in these formats. - 之前用户导出数据或者导入数据都需要提供较大的存储空间,用于存储导出或者即将导入的非压缩后的 csv 、sql文件,导致存储成本增加。该功能发布后,通过压缩存储空间,可以大大降低用户的存储成本。 + Previously, you had to provide large storage space for exporting or importing data to store the uncompressed CSV and SQL files, resulting in high storage costs. With the release of this feature, you can greatly reduce your storage costs by compressing the storage space. - 更多信息,请参考[用户文档](https://github.com/pingcap/tidb/issues/38514)。 + For more information, see [User document](/dumpling-overview.md#improve-export-efficiency-through-concurrency). -* 优化了 binlog 解析能力 [#无](无) @[gmhdbjd](https://github.com/GMHDBJD) **tw@hfxsd** +* Optimize binlog parsing capability [#924](https://github.com/pingcap/dm/issues/924) @[gmhdbjd](https://github.com/GMHDBJD) **tw@hfxsd** - 可将不在迁移任务里的库、表对象的 binlog event 过滤掉不做解析,从而提升解析效率和稳定性。该策略在 6.5 版本默认生效,用户无需额外操作。 + TiDB can filter out binlog events of the schemas and tables that are not in the migration task, thus improving the parsing efficiency and stability. This policy takes effect by default in v6.5.0. No additional configuration is required. + + Previously, even if only a few tables were migrated, the entire binlog file upstream had to be parsed. The binlog events of the tables in the binlog file that did not need to be migrated still had to be parsed, which was not efficient. Meanwhile, if the binlog events of the schemas and tables that are not in the migration task do not support parsing, the task will fail. By only parsing the binlog events of the tables in the migration task, the binlog parsing efficiency can be greatly improved and the task stability can be enhanced. 
- 原先用户仅迁移少数几张表,也需要解析上游整个 binlog 文件,即仍需要解析该 binlog 文件中不需要迁移的表的 binlog event,效率会比较低,同时如果不在迁移任务里的库表的 binlog event 不支持解析,还会导致任务失败。通过只解析在迁移任务里的库表对象的 binlog event 可以大大提升 binlog 解析效率,提升任务稳定性。 +* The disk quota in TiDB Lightning is GA. It can prevent TiDB Lightning tasks from overwriting local disks [#446](https://github.com/pingcap/tidb-lightning/issues/446) @[buchuitoudegou](https://github.com/buchuitoudegou) **tw@hfxsd** -* TiDB Lightning 支持 disk quota 特性 GA,可避免 TiDB Lightning 任务写满本地磁盘 [#无](无) @[buchuitoudegou](https://github.com/buchuitoudegou) **tw@hfxsd** + You can configure disk quota for TiDB Lightning. When there is not enough disk quota, TiDB Lightning pauses the process of reading the source data and writing temporary files, and writes the sorted key-values to TiKV first, and then continues the import process after TiDB Lightning deletes the local temporary files. - 你可以为 TiDB Lightning 配置磁盘配额 (disk quota)。当磁盘配额不足时,TiDB Lightning 会暂停读取源数据以及写入临时文件的过程,优先将已经完成排序的 key-value 写入到 TiKV。TiDB Lightning 删除本地临时文件后,再继续导入过程。 + Previously, when TiDB Lightning imported data using physical mode, it would create a large number of temporary files on the local disk for encoding, sorting, and splitting the raw data. When your local disk ran out of space, TiDB Lightning would exit with an error due to failing to write to the file. With this feature, TiDB Lightning tasks can avoid overwriting the local disk. - 有这个功能之前,TiDB Lightning 在使用物理模式导入数据时,会在本地磁盘创建大量的临时文件,用来对原始数据进行编码、排序、分割。当用户本地磁盘空间不足时,TiDB Lightning 会由于写入文件失败而报错退出。 + For more information, see [User document](/tidb-lightning/tidb-lightning-physical-import-mode-usage.md#configure-disk-quota-new-in-v620). - 更多信息,请参考[用户文档]( https://docs.pingcap.com/tidb/v6.4/tidb-lightning-physical-import-mode-usage#configure-disk-quota-new-in-v620)。 +* Continuous data validation in DM is GA [#4426](https://github.com/pingcap/tiflow/issues/4426) @[D3Hunter](https://github.com/D3Hunter) **tw@hfxsd** -* GA DM 增量数据校验的功能 [#4426](https://github.com/pingcap/tiflow/issues/4426) @[D3Hunter](https://github.com/D3Hunter) **tw@hfxsd** + In the process of migrating incremental data from upstream to downstream databases, there is a small probability that the flow of data causes errors or data loss. In scenarios that rely on strong data consistency, such as credit and securities businesses, you can perform a full volume checksum on the data after the data migration is complete to ensure data consistency. However, in some scenarios with incremental replication, upstream and downstream writes are continuous and uninterrupted because the upstream and downstream data is constantly changing, making it difficult to perform consistency checks on all the data in the tables. - 在将增量数据从上游迁移到下游数据库的过程中,数据的流转有小概率导致错误或者丢失的情况。对于需要依赖于强数据一致的场景,如信贷、证券等业务,你可以在数据迁移完成之后对数据进行全量校验,确保数据的一致性。然而,在某些增量复制的业务场景下,上游和下游的写入是持续的、不会中断的,因为上下游的数据在不断变化,导致用户难以对表里面的全部数据进行一致性校验。 + Previously, you needed to interrupt the business to do the full data verification, which would affect your business. Now, with this feature, you can perform incremental data verification without interrupting the business. - 过去,需要中断业务,做全量数据校验,会影响用户业务。现在推出该功能后,在一些不可中断的业务场景,无需中断业务,通过该功能就可以实现增量数据校验。 - - 更多信息,请参考[用户文档]( https://docs.pingcap.com/tidb/v6.4/dm-continuous-data-validation)。 + For more information, see [User document](/dm/dm-continuous-data-validation.md). ### TiDB data share subscription @@ -363,22 +363,22 @@ TiDB 6.5.0 is a Long-Term Support Release (LTS). 
+ PD

    - - 优化锁的粒度以减少锁争用,提升高并发下心跳的处理能力 [#5586](https://github.com/tikv/pd/issues/5586) @[rleungx](https://github.com/rleungx)
    - - 优化调度器在大规模集群下的性能问题,提升调度策略生产速度 [#5473](https://github.com/tikv/pd/issues/5473) @[bufferflies](https://github.com/bufferflies)
    - - 增加 btree 的泛型性支持 [#5606](https://github.com/tikv/pd/issues/5606) @[rleungx](https://github.com/rleungx)
    - - 优化心跳处理过程,减少一些不要的开销 [#5648](https://github.com/tikv/pd/issues/5648)@[rleungx](https://github.com/rleungx)
    - - 增加了自动清理 tombstone store 的功能 [#5348](https://github.com/tikv/pd/issues/5348) @[nolouch](https://github.com/nolouch)
    + - Optimize the granularity of locks to reduce lock contention and improve the handling capability of heartbeats under high concurrency [#5586](https://github.com/tikv/pd/issues/5586) @[rleungx](https://github.com/rleungx)
    + - Optimize scheduler performance for large-scale clusters and improve the generation speed of scheduling policies [#5473](https://github.com/tikv/pd/issues/5473) @[bufferflies](https://github.com/bufferflies)
    + - Improve the speed of loading Regions [#5606](https://github.com/tikv/pd/issues/5606) @[rleungx](https://github.com/rleungx)
    + - Improve the performance of handling Region heartbeats [#5648](https://github.com/tikv/pd/issues/5648) @[rleungx](https://github.com/rleungx)
    + - Add the function to automatically GC the tombstone store [#5348](https://github.com/tikv/pd/issues/5348) @[nolouch](https://github.com/nolouch)

+ TiFlash

    - - 提升了 TiFlash 在 SQL 端没有攒批的场景的写入性能 [#6404](https://github.com/pingcap/tiflash/issues/6404) @[lidezhu](https://github.com/lidezhu)
    - - 增加了 TableFullScan 的输出信息 [#5926](https://github.com/pingcap/tiflash/issues/5926) @[hongyunyan](https://github.com/hongyunyan)
    + - Improve write performance in scenarios where there is no batch processing on the SQL side [#6404](https://github.com/pingcap/tiflash/issues/6404) @[lidezhu](https://github.com/lidezhu)
    + - Add more details for TableFullScan in the `explain analyze` output [#5926](https://github.com/pingcap/tiflash/issues/5926) @[hongyunyan](https://github.com/hongyunyan)

+ Tools

    + TiDB Dashboard

        - - 在慢查询页面新增三个字段 `是否由 prepare 语句生成`,`查询计划是否来自缓存`,`查询计划是否来自绑定` 的描述。 [#1445](https://github.com/pingcap/tidb-dashboard/pull/1445/files) @[shhdgit](https://github.com/shhdgit)
        + - Add three new fields to the slow query page: "Is Prepared?", "Is Plan from Cache?", and "Is Plan from Binding?" [#1451](https://github.com/pingcap/tidb-dashboard/issues/1451) @[shhdgit](https://github.com/shhdgit)

    + Backup & Restore (BR)
+ PD - - 修复热点调度配置在没有修改的情况下不持久化的问题 [#5701](https://github.com/tikv/pd/issues/5701) @[HunDunDM](https://github.com/HunDunDM) - - 修复 rank-formula-version 在升级过程中没有保持升级前的配置的问题 [#5699](https://github.com/tikv/pd/issues/5698) @[HunDunDM](https://github.com/HunDunDM) + - Fix the issue that the `balance-hot-region-scheduler` configuration is not persisted if not modified [#5701](https://github.com/tikv/pd/issues/5701) @[HunDunDM](https://github.com/HunDunDM) + - Fix the issue that `rank-formula-version` does not retain the pre-upgrade configuration during the upgrade process [#5698](https://github.com/tikv/pd/issues/5698) @[HunDunDM](https://github.com/HunDunDM) + TiFlash - - 修复 TiFlash 重启不能正确合并小文件的问题 [#6159](https://github.com/pingcap/tiflash/issues/6159) @[lidezhu](https://github.com/lidezhu) - - 修复 TiFlash Open File OPS 过高的问题 [#6345](https://github.com/pingcap/tiflash/issues/6345) @[JaySon-Huang](https://github.com/JaySon-Huang) + - Fix the issue that minor compaction does not work as expected after TiFlash restarts [#6159](https://github.com/pingcap/tiflash/issues/6159) @[lidezhu](https://github.com/lidezhu) + - Fix the issue that TiFlash Open File OPS is too high [#6345](https://github.com/pingcap/tiflash/issues/6345) @[JaySon-Huang](https://github.com/JaySon-Huang) + Tools From 5a50c143133eb9b69cb86629a4ffa62c8a391347 Mon Sep 17 00:00:00 2001 From: Aolin Date: Tue, 13 Dec 2022 15:17:33 +0800 Subject: [PATCH 10/83] translate TiKV improvements 11 --- releases/release-6.5.0.md | 23 +++++++++++------------ 1 file changed, 11 insertions(+), 12 deletions(-) diff --git a/releases/release-6.5.0.md b/releases/release-6.5.0.md index f8d18aef0ff8e..2fd7ea5f2164d 100644 --- a/releases/release-6.5.0.md +++ b/releases/release-6.5.0.md @@ -342,18 +342,17 @@ TiDB 6.5.0 is a Long-Term Support Release (LTS). 
+ TiKV - - `cdc.min-ts-interval` 默认值从 1s 改为 200ms 以降低 CDC 延迟 [#12840](https://github.com/tikv/tikv/issues/12840) @[hicqu](https://github.com/hicqu) - - 引入 witness peer [#12876](https://github.com/tikv/tikv/issues/12876) @[Connor1996](https://github.com/Connor1996) - - 当剩余空间不足时停止 Raft Engine 的写入以避免硬盘空间耗尽 [#13642](https://github.com/tikv/tikv/issues/13642) @[jiayang-zheng](https://github.com/jiayang-zheng) - - 实现 `json_valid` 函数下推 [#13571](https://github.com/tikv/tikv/issues/13571) @[lizhenhuan](https://github.com/lizhenhuan) - - 支持在一个备份请求中同时备份多个范围的数据 [#13701](https://github.com/tikv/tikv/issues/13701) @[Leavrth](https://github.com/Leavrth) - - 更新 rusoto 库以支持备份到 ap-southeast-3 [#13751](https://github.com/tikv/tikv/issues/13751) @[3pointer](https://github.com/3pointer) - - 减少悲观事务冲突 [#13298](https://github.com/tikv/tikv/issues/13298) @[MyonKeminta](https://github.com/MyonKeminta) - - 缓存外部存储对象以提升恢复性能 [#13798](https://github.com/tikv/tikv/issues/13798) @[YuJuncen](https://github.com/YuJuncen) - - 在专用线程中运行 CheckLeader 以缩短 TiCDC 的复制延迟 [#13774](https://github.com/tikv/tikv/issues/13774) @[overvenus](https://github.com/overvenus) - - Checkpoint 支持拉取模式 [#13824](https://github.com/tikv/tikv/issues/13824) @[YuJuncen](https://github.com/YuJuncen) - - 升级 crossbeam-channel 以优化发送端的自旋问题 [#13815](https://github.com/tikv/tikv/issues/13815) @[sticnarf](https://github.com/sticnarf) - - Coprocessor 支持批量处理 [#13849](https://github.com/tikv/tikv/issues/13849) @[cfzjywxk](https://github.com/cfzjywxk) + - The default value of `cdc.min-ts-interval` has been changed from `1s` to `200ms` to reduce CDC latency [#12840](https://github.com/tikv/tikv/issues/12840) @[hicqu](https://github.com/hicqu) + - Stop writing to Raft Engine when there is insufficient space to avoid exhausting disk space [#13642](https://github.com/tikv/tikv/issues/13642) @[jiayang-zheng](https://github.com/jiayang-zheng) + - Support pushing down the `json_valid` function to TiKV [#13571](https://github.com/tikv/tikv/issues/13571) @[lizhenhuan](https://github.com/lizhenhuan) + - Support backing up multiple ranges of data in a single backup request [#13701](https://github.com/tikv/tikv/issues/13701) @[Leavrth](https://github.com/Leavrth) + - Support backing up data to the Asia Pacific region (ap-southeast-3) of AWS by updating the rusoto library [#13751](https://github.com/tikv/tikv/issues/13751) @[3pointer](https://github.com/3pointer) + - Reduce pessimistic transaction conflicts [#13298](https://github.com/tikv/tikv/issues/13298) @[MyonKeminta](https://github.com/MyonKeminta) + - Improve recovery performance by caching external storage objects [#13798](https://github.com/tikv/tikv/issues/13798) @[YuJuncen](https://github.com/YuJuncen) + - The CheckLeader is run in a dedicated thread to reduce TiCDC replication latency [#13774](https://github.com/tikv/tikv/issues/13774) @[overvenus](https://github.com/overvenus) + - Support pull model for Checkpoints [#13824](https://github.com/tikv/tikv/issues/13824) @[YuJuncen](https://github.com/YuJuncen) + - Avoid spinning issues on the sender side by updating crossbeam-channel [#13815](https://github.com/tikv/tikv/issues/13815) @[sticnarf](https://github.com/sticnarf) + - Support batch Coprocessor tasks processing in TiKV [#13849](https://github.com/tikv/tikv/issues/13849) @[cfzjywxk](https://github.com/cfzjywxk) - 故障恢复时通知 TiKV 唤醒休眠的 region 以减少等待时间 [#13648](https://github.com/tikv/tikv/issues/13648) @[LykxSassinator](https://github.com/LykxSassinator) - 通过代码优化减少内存申请 
[#13836](https://github.com/tikv/tikv/pull/13836) @[BusyJay](https://github.com/BusyJay) - 引入 raft extension 以提升代码可扩展性 [#13864](https://github.com/tikv/tikv/pull/13864) @[BusyJay](https://github.com/BusyJay) From 37a33ed6a4e7f20f2e4ba155e6afc34d6303f120 Mon Sep 17 00:00:00 2001 From: Ran Date: Tue, 13 Dec 2022 19:33:50 +0800 Subject: [PATCH 11/83] improvements and bug fixes for lightning/dm/dumpling Signed-off-by: Ran --- releases/release-6.5.0.md | 20 ++++++++++---------- 1 file changed, 10 insertions(+), 10 deletions(-) diff --git a/releases/release-6.5.0.md b/releases/release-6.5.0.md index 2fd7ea5f2164d..46152c3299c19 100644 --- a/releases/release-6.5.0.md +++ b/releases/release-6.5.0.md @@ -219,7 +219,7 @@ TiDB 6.5.0 is a Long-Term Support Release (LTS). * Optimize binlog parsing capability [#924](https://github.com/pingcap/dm/issues/924) @[gmhdbjd](https://github.com/GMHDBJD) **tw@hfxsd** TiDB can filter out binlog events of the schemas and tables that are not in the migration task, thus improving the parsing efficiency and stability. This policy takes effect by default in v6.5.0. No additional configuration is required. - + Previously, even if only a few tables were migrated, the entire binlog file upstream had to be parsed. The binlog events of the tables in the binlog file that did not need to be migrated still had to be parsed, which was not efficient. Meanwhile, if the binlog events of the schemas and tables that are not in the migration task do not support parsing, the task will fail. By only parsing the binlog events of the tables in the migration task, the binlog parsing efficiency can be greatly improved and the task stability can be enhanced. * The disk quota in TiDB Lightning is GA. It can prevent TiDB Lightning tasks from overwriting local disks [#446](https://github.com/pingcap/tidb-lightning/issues/446) @[buchuitoudegou](https://github.com/buchuitoudegou) **tw@hfxsd** @@ -391,10 +391,10 @@ TiDB 6.5.0 is a Long-Term Support Release (LTS). + TiDB Data Migration (DM) - - 通过不再解析黑名单表的数据提升了 dm 同步数据的性能 [#7622](https://github.com/pingcap/tiflow/pull/7622) @[GMHDBJD](https://github.com/GMHDBJD) - - 通过异步写与批量写的方式提升 dm relay 写数据效率 [#7580](https://github.com/pingcap/tiflow/pull/7580) @[GMHDBJD](https://github.com/GMHDBJD) - - 改进 DM 前置检查的错误提示信息 [#7696](https://github.com/pingcap/tiflow/pull/7696) @[buchuitoudegou](https://github.com/buchuitoudegou) - - 改进 DM 针对老版本 MySQL 使用 `SHOW SLAVE HOSTS` 获取结果时的兼容性 [#7373](https://github.com/pingcap/tiflow/pull/7372) @[lyzx2001](https://github.com/lyzx2001) + - Improve the data replication performance for DM by not parsing the data of tables in the block list [#4287](https://github.com/pingcap/tiflow/issues/4287) @[GMHDBJD](https://github.com/GMHDBJD) + - Improve the write efficiency of DM relay by using asynchronous write and batch write [#4287](https://github.com/pingcap/tiflow/issues/4287) @[GMHDBJD](https://github.com/GMHDBJD) + - Optimize the error messages in DM precheck [#7621](https://github.com/pingcap/tiflow/issues/7621) @[buchuitoudegou](https://github.com/buchuitoudegou) + - Improve the compatibility of `SHOW SLAVE HOSTS` for old MySQL versions [#5017](https://github.com/pingcap/tiflow/issues/5017) @[lyzx2001](https://github.com/lyzx2001) ## Bug fixes @@ -449,14 +449,14 @@ TiDB 6.5.0 is a Long-Term Support Release (LTS). 
+ TiDB Data Migration (DM) - - 修复无法在上游开启 gtid mode 且无数据时启动 all mode 任务的错误 [#7037](https://github.com/pingcap/tiflow/issues/7037) @[liumengya94](https://github.com/liumengya94) - - 修复 DM-worker 异常重启可能引起的多 worker 写同一下游同张表的错误 [#7658](https://github.com/pingcap/tiflow/issues/7658) @[GMHDBJD](https://github.com/GMHDBJD) - - 修复上游数据库使用正则匹配授权时 DM 前置检查不通过的错误[#7645](https://github.com/pingcap/tiflow/issues/7645) @[lance6716](https://github.com/lance6716) + - Fix the issue that a `task-mode:all` task cannot be started when the upstream database enables GTID mode but does not have any data [#7037](https://github.com/pingcap/tiflow/issues/7037) @[liumengya94](https://github.com/liumengya94) + - Fix the issue that data is replicated for multiple times when a new DM worker is scheduled before the existing worker exits [#7658](https://github.com/pingcap/tiflow/issues/7658) @[GMHDBJD](https://github.com/GMHDBJD) + - Fix the issue that DM precheck is not passed when the upstream database uses regular expression to grant privileges [#7645](https://github.com/pingcap/tiflow/issues/7645) @[lance6716](https://github.com/lance6716) + TiDB Lightning - - 修复 TiDB Lightning 导入巨大数据源文件时的内存泄漏问题 [#39331](https://github.com/pingcap/tidb/issues/39331) @[dsdashun](https://github.com/dsdashun) - - 修复 TiDB Lightning 在并行导入冲突检测时无法正确检测的问题 [#39476](https://github.com/pingcap/tidb/issues/39476) @[dsdashun](https://github.com/dsdashun) + - Fix the memory leakage issue when TiDB Lightning imports a huge source data file [#39331](https://github.com/pingcap/tidb/issues/39331) @[dsdashun](https://github.com/dsdashun) + - Fix the issue that TiDB Lightning cannot detect conflict correctly when importing data in parallel [#39476](https://github.com/pingcap/tidb/issues/39476) @[dsdashun](https://github.com/dsdashun) ## Contributors From 6663e01451e7dc4c1e8ab37708200e74958ecd31 Mon Sep 17 00:00:00 2001 From: qiancai Date: Tue, 13 Dec 2022 23:24:25 +0800 Subject: [PATCH 12/83] add translations for TiKV improvements and bug fixes --- releases/release-6.5.0.md | 23 +++++++++++------------ 1 file changed, 11 insertions(+), 12 deletions(-) diff --git a/releases/release-6.5.0.md b/releases/release-6.5.0.md index 46152c3299c19..020b70ef2c145 100644 --- a/releases/release-6.5.0.md +++ b/releases/release-6.5.0.md @@ -353,12 +353,11 @@ TiDB 6.5.0 is a Long-Term Support Release (LTS). 
- Support pull model for Checkpoints [#13824](https://github.com/tikv/tikv/issues/13824) @[YuJuncen](https://github.com/YuJuncen) - Avoid spinning issues on the sender side by updating crossbeam-channel [#13815](https://github.com/tikv/tikv/issues/13815) @[sticnarf](https://github.com/sticnarf) - Support batch Coprocessor tasks processing in TiKV [#13849](https://github.com/tikv/tikv/issues/13849) @[cfzjywxk](https://github.com/cfzjywxk) - - 故障恢复时通知 TiKV 唤醒休眠的 region 以减少等待时间 [#13648](https://github.com/tikv/tikv/issues/13648) @[LykxSassinator](https://github.com/LykxSassinator) - - 通过代码优化减少内存申请 [#13836](https://github.com/tikv/tikv/pull/13836) @[BusyJay](https://github.com/BusyJay) - - 引入 raft extension 以提升代码可扩展性 [#13864](https://github.com/tikv/tikv/pull/13864) @[BusyJay](https://github.com/BusyJay) - - 通过引入 `hint_min_ts` 加速 flashback [#13842](https://github.com/tikv/tikv/pull/13842) @[JmPotato](https://github.com/JmPotato) - - tikv-ctl 支持查询某个 key 范围中包含哪些 Region [#13768](https://github.com/tikv/tikv/pull/13768) [@HuSharp](https://github.com/HuSharp) - - 改进持续对特定行只加锁但不更新情况下的读写性能 [#13694](https://github.com/tikv/tikv/issues/13694) [@sticnarf](https://github.com/sticnarf) + - Reduce waiting time on failure recovery by notifying TiKV to wake up Regions [#13648](https://github.com/tikv/tikv/issues/13648) @[LykxSassinator](https://github.com/LykxSassinator) + - Reduce the requested size of memory usage by code optimization [#13827](https://github.com/tikv/tikv/issues/13827) @[BusyJay](https://github.com/BusyJay) + - Introduce the raft extension to improve code extensibility [#13827](https://github.com/tikv/tikv/issues/13827) @[BusyJay](https://github.com/BusyJay) + - Support using tikv-ctl to query which Regions are included in a certain key range [#13760](https://github.com/tikv/tikv/issues/13760) [@HuSharp](https://github.com/HuSharp) + - Improve read and write performance for rows that are not updated but locked continuously [#13694](https://github.com/tikv/tikv/issues/13694) [@sticnarf](https://github.com/sticnarf) + PD @@ -414,12 +413,12 @@ TiDB 6.5.0 is a Long-Term Support Release (LTS). 
+ TiKV - - 修复 raft engine ctl 中的错误 [#13108](https://github.com/tikv/tikv/issues/13108) @[tabokie](https://github.com/tabokie) - - 修复 tikv-ctl 中 compact raft 命令的错误 [#13515](https://github.com/tikv/tikv/issues/13515) @[guoxiangCN](https://github.com/guoxiangCN) - - 修复当启用 TLS 时 log backup 无法使用的问题 [#13851](https://github.com/tikv/tikv/issues/13851) @[YuJuncen](https://github.com/YuJuncen) - - 修复对 Geometry 字段类型的支持 [#13651](https://github.com/tikv/tikv/issues/13651) @[dveeden](https://github.com/dveeden) - - 修复当未使用 new collation 时 `like` 无法处理 `_` 中非 ASCII 字符的问题 [#13769](https://github.com/tikv/tikv/issues/13769) @[YangKeao](https://github.com/YangKeao) - - 修复 tikv-ctl 执行 reset-to-version 时出现 segfault 的问题 [#13829](https://github.com/tikv/tikv/issues/13829) @[tabokie](https://github.com/tabokie) + - Fix an error in Raft Engine ctl [#11119](https://github.com/tikv/tikv/issues/11119) @[tabokie](https://github.com/tabokie) + - Fix the `Get raft db is not allowed` error when executing the `compact raft` command in tikv-ctl [#13515](https://github.com/tikv/tikv/issues/13515) @[guoxiangCN](https://github.com/guoxiangCN) + - Fix the issue that log backup does not work when TLS is enabled [#13867](https://github.com/tikv/tikv/issues/13867) @[YuJuncen](https://github.com/YuJuncen) + - Fix the support issue of the Geometry field type [#13651](https://github.com/tikv/tikv/issues/13651) @[dveeden](https://github.com/dveeden) + - Fix the issue that `_` in the `LIKE` operator cannot match non-ASCII characters when new collation is not enabled [#13769](https://github.com/tikv/tikv/issues/13769) @[YangKeao](https://github.com/YangKeao) + - Fix the issue that tikv-ctl is terminated unexpectedly when executing the `reset-to-version` command [#13829](https://github.com/tikv/tikv/issues/13829) @[tabokie](https://github.com/tabokie) + PD From 1e6b4ad00ccf9905a8af3fc6ac98d199b407faad Mon Sep 17 00:00:00 2001 From: Grace Cai Date: Wed, 14 Dec 2022 14:57:44 +0800 Subject: [PATCH 13/83] add translation for binding history execution plans --- releases/release-6.5.0.md | 7 +++++++ 1 file changed, 7 insertions(+) diff --git a/releases/release-6.5.0.md b/releases/release-6.5.0.md index 020b70ef2c145..b993846c0376f 100644 --- a/releases/release-6.5.0.md +++ b/releases/release-6.5.0.md @@ -72,6 +72,13 @@ TiDB 6.5.0 is a Long-Term Support Release (LTS). For more information, see the [user documentation](/tiflash/tiflash-results-materialization.md). +* Support binding history execution plans (experimental) [#39199](https://github.com/pingcap/tidb/issues/39199) @[fzzf678](https://github.com/fzzf678) **tw@qiancai** + + For a SQL statement, due to various factors during execution, the optimizer might occasionally choose a new execution plan instead of its previous optimal execution plan, and the SQL performance is impacted. In this case, if the optimal execution plan has not been cleared yet, it still exists in the SQL execution history. + + In v6.5.0, TiDB supports binding historical execution plans by extending the binding object in the [`CREATE [GLOBAL | SESSION] BINDING`](/sql-statements/sql-statement-create-binding.md) statement. When the execution plan of a SQL statement changes, you can bind the original execution plan by specifying `plan_digest` in the `CREATE [GLOBAL | SESSION] BINDING` statement to quickly recover SQL performance, as long as the original execution plan is still in the SQL execution history memory table (for example, `statements_summary`). 
This feature can simplify the process of handling execution plan change issues and improve your maintenance efficiency. + + For more information, see [user documentation](/sql-plan-management.md#bind-historical-execution-plans). ### Security * Support the password complexity policy [#38928](https://github.com/pingcap/tidb/issues/38928) @[CbcWestwolf](https://github.com/CbcWestwolf) **tw@ran-huang** From 267cc604575e595a76e3b391cd171e5d2255a09d Mon Sep 17 00:00:00 2001 From: Grace Cai Date: Wed, 14 Dec 2022 15:03:36 +0800 Subject: [PATCH 14/83] Apply suggestions from code review --- releases/release-6.5.0.md | 3 ++- 1 file changed, 2 insertions(+), 1 deletion(-) diff --git a/releases/release-6.5.0.md b/releases/release-6.5.0.md index b993846c0376f..5d9f3cfd8862e 100644 --- a/releases/release-6.5.0.md +++ b/releases/release-6.5.0.md @@ -79,6 +79,7 @@ TiDB 6.5.0 is a Long-Term Support Release (LTS). In v6.5.0, TiDB supports binding historical execution plans by extending the binding object in the [`CREATE [GLOBAL | SESSION] BINDING`](/sql-statements/sql-statement-create-binding.md) statement. When the execution plan of a SQL statement changes, you can bind the original execution plan by specifying `plan_digest` in the `CREATE [GLOBAL | SESSION] BINDING` statement to quickly recover SQL performance, as long as the original execution plan is still in the SQL execution history memory table (for example, `statements_summary`). This feature can simplify the process of handling execution plan change issues and improve your maintenance efficiency. For more information, see [user documentation](/sql-plan-management.md#bind-historical-execution-plans). + ### Security * Support the password complexity policy [#38928](https://github.com/pingcap/tidb/issues/38928) @[CbcWestwolf](https://github.com/CbcWestwolf) **tw@ran-huang** @@ -145,7 +146,7 @@ TiDB 6.5.0 is a Long-Term Support Release (LTS). For more information, see [User document](/optimizer-hints.md#hints-that-take-effect-globally). -* Support pushing down the sorting operation of [partitioned-table](/partitioned-table.md) to TiKV [#26166](https://github.com/pingcap/tidb/issues/26166) @[winoros](https://github.com/winoros) **tw@qiancai** +* Support pushing down sorting operations of [partitioned-table](/partitioned-table.md) to TiKV [#26166](https://github.com/pingcap/tidb/issues/26166) @[winoros](https://github.com/winoros) **tw@qiancai** Although [partitioned table](/partitioned-table.md) has been GA since v6.1.0, TiDB is continually improving its performance. In v6.5.0, TiDB supports pushing down sort operations such as `ORDER BY` and `LIMIT` to TiKV for computation and filtering, which reduces network I/O overhead and improves SQL performance when you use partitioned tables. From 90c29109ef6fde2334dd9d148b3a5898206b1a4f Mon Sep 17 00:00:00 2001 From: TomShawn <41534398+TomShawn@users.noreply.github.com> Date: Wed, 14 Dec 2022 16:09:50 +0800 Subject: [PATCH 15/83] Update releases/release-6.5.0.md --- releases/release-6.5.0.md | 10 +++++----- 1 file changed, 5 insertions(+), 5 deletions(-) diff --git a/releases/release-6.5.0.md b/releases/release-6.5.0.md index 5d9f3cfd8862e..6647530d9353c 100644 --- a/releases/release-6.5.0.md +++ b/releases/release-6.5.0.md @@ -178,15 +178,15 @@ TiDB 6.5.0 is a Long-Term Support Release (LTS). 更多信息,请参考[用户文档](链接)。 -* TiDB 全局内存控制 GA [#37816](https://github.com/pingcap/tidb/issues/37816) @[wshwsh12](https://github.com/wshwsh12) **tw@TomShawn** +* The global memory control feature is now GA. 
[#37816](https://github.com/pingcap/tidb/issues/37816) @[wshwsh12](https://github.com/wshwsh12) **tw@TomShawn** - 在 v6.5.0 中,TiDB 中主要的内存消耗都已经能被全局内存控制跟踪到, 当全局内存消耗接近 [`tidb_server_memory_limit`](/system-variables.md#tidb_server_memory_limit-从-v640-版本开始引入) 所定义的预设值时,TiDB 会尝试 GC 或取消 SQL 操作等手段限制内存使用,保证 TiDB 的稳定性。 + Since v6.5.0, the global memory control feature can track the main memory consumption in TiDB. When the global memory consumption reaches the preset value defined by [`tidb_server_memory_limit`](/system-variables.md#tidb_server_memory_limit-new-in-v640), TiDB tries to limit the memory usage by GC or canceling SQL operations, to ensure stability. - 需要注意的是, 会话中事务所消耗的内存 (由配置项 [`txn-total-size-limit`](/tidb-configuration-file.md#txn-total-size-limit) 设置最大值) 如今会被内存管理模块跟踪: 当单个会话的内存消耗达到系统变量 [`tidb_mem_quota_query`](/system-variables.md#tidbmemquotaquery) 所定义的阀值时,将会触发系统变量 [tidb-mem-oom-action](/system-variables.md#tidbmemoomaction-span-classversion-mark从-v610-版本开始引入span) 所定义的行为 (默认为 `CANCEL` ,即取消操作)。 为了保证行为向前兼容,当配置 [`txn-total-size-limit`](/tidb-configuration-file.md#txn-total-size-limit) 为非默认值时, TiDB 仍旧会保证事务使用到这么大的内存而不被取消。 + Note that the memory consumed by the transaction in a session (the maximum value was previously set by the configuration item [`txn-total-size-limit`](/tidb-configuration-file.md#txn-total-size-limit)) is now tracked by the memory management module: when the memory consumption of a single session reaches the threshold defined by the system variable [`tidb_mem_quota_query`](/system-variables.md#tidb_mem_quota_query), the behavior defined by the system variable [`tidb_mem_oom_action`](/system-variables.md#tidb_mem_oom_action-new-in-v610) will be triggered (the default is `CANCEL`, that is, canceling operations). To ensure forward compatibility, when [`txn-total-size-limit`](/tidb-configuration-file.md#txn-total-size-limit) is configured as a non-default value, TiDB will still ensure that transactions can use the memory size set by `txn-total-size-limit`. - 对于运行 v6.5.0 及以上版本的客户,建议移除配置项 [`txn-total-size-limit`](/tidb-configuration-file.md#txn-total-size-limit),取消对事务内存做单独的限制,转而由系统变量 [`tidb_mem_quota_query`](/system-variables.md#tidbmemquotaquery) 和 [`tidb_server_memory_limit`](/system-variables.md#tidb_server_memory_limit-从-v640-版本开始引入) 对全局内存进行管理,从而提高内存的使用效率。 + If you are running TiDB v6.5.0 or later, it is recommended to remove [`txn-total-size-limit`](/tidb-configuration-file.md#txn-total-size-limit) and not to set a separate limit on the memory usage of transactions. Instead, use the system variables [`tidb_mem_quota_query`](/system-variables.md#tidb_mem_quota_query) and [`tidb_server_memory_limit`](/system-variables.md#tidb_server_memory_limit-new-in-v640) to manage memory globally, which can improve memory efficiency. - 更多信息,请参考[用户文档](/configure-memory-usage.md)。 + For more info, see the [user document](/configure-memory-usage.md). ### Ease of use From e14762783fe5d08b2495c0b7bf3464e6ab3b36ce Mon Sep 17 00:00:00 2001 From: TomShawn <41534398+TomShawn@users.noreply.github.com> Date: Wed, 14 Dec 2022 16:09:59 +0800 Subject: [PATCH 16/83] Update releases/release-6.5.0.md --- releases/release-6.5.0.md | 6 +++--- 1 file changed, 3 insertions(+), 3 deletions(-) diff --git a/releases/release-6.5.0.md b/releases/release-6.5.0.md index 6647530d9353c..cc44102659be1 100644 --- a/releases/release-6.5.0.md +++ b/releases/release-6.5.0.md @@ -120,11 +120,11 @@ TiDB 6.5.0 is a Long-Term Support Release (LTS). 
### Performance -* 进一步增强索引合并 [INDEX MERGE](/glossary.md#index-merge) 功能 [#39333](https://github.com/pingcap/tidb/issues/39333) @[guo-shaoge](https://github.com/guo-shaoge) @[@time-and-fate](https://github.com/time-and-fate) @[hailanwhu](https://github.com/hailanwhu) **tw@TomShawn** +* Further enhance the [INDEX MERGE](/glossary.md#index-merge) feature [#39333](https://github.com/pingcap/tidb/issues/39333) @[guo-shaoge](https://github.com/guo-shaoge) @[time-and-fate](https://github.com/time-and-fate) @[hailanwhu](https://github.com/hailanwhu) **tw@TomShawn** - 新增了对在 WHERE 语句中使用 `AND` 联结的过滤条件的索引合并能力(v6.5 之前的版本只支持 `OR` 连接词的情况),TiDB 的索引合并至此可以覆盖更一般的查询过滤条件组合,不再限定于并集(`OR`)关系。当前版本仅支持优化器自动选择 "OR" 条件下的索引合并,用户须使用 [`USE_INDEX_MERGE`](/optimizer-hints.md#use_index_merget1_name-idx1_name--idx2_name-) Hint 来开启对于 AND 联结的索引合并。 + Before v6.5.0, TiDB only supported using index merge for filter conditions connected by `OR`. Starting from v6.5.0, TiDB also supports using index merge for filter conditions connected by `AND` in the `WHERE` clause. In this way, index merge in TiDB can now cover more general combinations of query filter conditions and is no longer limited to the union (`OR`) relationship. In v6.5.0, the optimizer selects index merge automatically only for `OR` conditions. To enable index merge for `AND` conditions, you need to use the [`USE_INDEX_MERGE`](/optimizer-hints.md#use_index_merget1_name-idx1_name--idx2_name-) hint. - 关于"索引合并"功能的介绍请参阅 [v5.4 release note](/release-5.4.0#性能), 以及优化器相关的[用户文档](/explain-index-merge.md) + For more details about index merge, see [v5.4 release notes](/release-5.4.0#performance) and [Explain Index Merge](/explain-index-merge.md). * Support pushing down the following [JSON functions](/tiflash/tiflash-supported-pushdown-calculations.md) to TiFlash [#39458](https://github.com/pingcap/tidb/issues/39458) @[yibin87](https://github.com/yibin87) **tw@qiancai** From d8fe0a6249b1d6a50c5a36c3ea9f01b6e5891e3f Mon Sep 17 00:00:00 2001 From: TomShawn <41534398+TomShawn@users.noreply.github.com> Date: Wed, 14 Dec 2022 17:22:50 +0800 Subject: [PATCH 17/83] Apply suggestions from code review --- releases/release-6.5.0.md | 24 ++++++++++++------------ 1 file changed, 12 insertions(+), 12 deletions(-) diff --git a/releases/release-6.5.0.md b/releases/release-6.5.0.md index cc44102659be1..b9506bae8268a 100644 --- a/releases/release-6.5.0.md +++ b/releases/release-6.5.0.md @@ -346,7 +346,7 @@ TiDB 6.5.0 is a Long-Term Support Release (LTS). + TiDB - - 对于 `bit` and `char` 类型的列,使 `INFORMATION_SCHEMA.COLUMNS` 的显示结果与 MySQL 一致 [#25472](https://github.com/pingcap/tidb/issues/25472) @[hawkingrei](https://github.com/hawkingrei) + - For `BIT` and `CHAR` columns, make the result of `INFORMATION_SCHEMA.COLUMNS` consistent with MySQL [#25472](https://github.com/pingcap/tidb/issues/25472) @[hawkingrei](https://github.com/hawkingrei) + TiKV @@ -407,17 +407,17 @@ TiDB 6.5.0 is a Long-Term Support Release (LTS).
+ TiDB - - 修复 chunk reuse 功能部分情况下内存 chunk 被错误使用的问题 [#38917](https://github.com/pingcap/tidb/issues/38917) @[keeplearning20221](https://github.com/keeplearning20221) - - 修复 `tidb_constraint_check_in_place_pessimistic` 可能被全局设置影响内部 session 的问题 [#38766](https://github.com/pingcap/tidb/issues/38766) @[ekexium](https://github.com/ekexium) - - 修复了 AUTO_INCREMENT 列无法和 Check 约束一起使用的问题 [#38894](https://github.com/pingcap/tidb/issues/38894) @[YangKeao](https://github.com/YangKeao) - - 修复使用 'insert ignore into' 往 smallint 类型 auto increment 的列插入 string 类型数据会报错的问题 [#38483](https://github.com/pingcap/tidb/issues/38483) @[hawkingrei](https://github.com/hawkingrei) - - 修复了重命名分区表的分区列操作出现空指针报错的问题 [#38932](https://github.com/pingcap/tidb/issues/38932) @[mjonss](https://github.com/mjonss) - - 修复了一个修改分区表的分区列导致 DDL 卡死的问题 [#38530](https://github.com/pingcap/tidb/issues/38530) @[mjonss](https://github.com/mjonss) - - 修复了从 v4.0 升级到 v6.4 后 'admin show job' 操作崩溃的问题 [#38980](https://github.com/pingcap/tidb/issues/38980) @[tangenta](https://github.com/tangenta) - - 修复了 `tidb_decode_key` 函数未正确处理分区表编码的问题 [#39304](https://github.com/pingcap/tidb/issues/39304) @[Defined2014](https://github.com/Defined2014) - - 修复了 log rotate 时,grpc 的错误日志信息未被重定向到正确的日志文件的问题 [#38941](https://github.com/pingcap/tidb/issues/38941) @[xhebox](https://github.com/xhebox) - - 修复了 `begin; select... for update;` 点查在 read engines 未配置 TiKV 时生成非预期执行计划的问题 [#39344](https://github.com/pingcap/tidb/issues/39344) @[Yisaer](https://github.com/Yisaer) - - 修复了错误地下推 `StreamAgg` 到 TiFlash 导致结果错误的问题 [#39266](https://github.com/pingcap/tidb/issues/39266) @[fixdb](https://github.com/fixdb) + - Fix the issue of memory chunk misuse for the chunk reuse feature that occurs in some cases [#38917](https://github.com/pingcap/tidb/issues/38917) @[keeplearning20221](https://github.com/keeplearning20221) + - Fix the issue that the internal sessions of `tidb_constraint_check_in_place_pessimistic` might be affected by the global setting [#38766](https://github.com/pingcap/tidb/issues/38766) @[ekexium](https://github.com/ekexium) + - Fix the issue that the `AUTO_INCREMENT` column cannot be used together with the `Check` constraint [#38894](https://github.com/pingcap/tidb/issues/38894) @[YangKeao](https://github.com/YangKeao) + - Fix the issue that using `INSERT IGNORE INTO` to insert data of the `STRING` type into an auto-increment column of the `SMALLINT` type will raise an error [#38483](https://github.com/pingcap/tidb/issues/38483) @[hawkingrei](https://github.com/hawkingrei) + - Fix the issue that the null pointer error occurs in the operation of renaming the partition column of a partitioned table [#38932](https://github.com/pingcap/tidb/issues/38932) @[mjonss](https://github.com/mjonss) + - Fix the issue that modifying the partition column of a partitioned table causes DDL to hang [#38530](https://github.com/pingcap/tidb/issues/38530) @[mjonss](https://github.com/mjonss) + - Fix the issue that the `ADMIN SHOW JOB` operation panics after upgrading from v4.0 to v6.4 [#38980](https://github.com/pingcap/tidb/issues/38980) @[tangenta](https://github.com/tangenta) + - Fix the issue that the `tidb_decode_key` function fails to correctly parse the encoding of partitioned tables [#39304](https://github.com/pingcap/tidb/issues/39304) @[Defined2014](https://github.com/Defined2014) + - Fixe the issue that gRPC error log messages are not redirected to the correct log file during log rotation [#38941](https://github.com/pingcap/tidb/issues/38941) @[xhebox](https://github.com/xhebox) + - 
Fix the issue that TiDB generates an unexpected query plan for the `BEGIN; SELECT... FOR UPDATE;` point query when TiKV is not configured for the read engine [#39344](https://github.com/pingcap/tidb/issues/39344) @[Yisaer](https://github.com/Yisaer) + - Fix the issue that mistakenly pushing down `StreamAgg` to TiFlash causes wrong result [#39266](https://github.com/pingcap/tidb/issues/39266) @[fixdb](https://github.com/fixdb) + TiKV From 6a3a464d51a55f3e857d74ec0c6de1812d2c4d16 Mon Sep 17 00:00:00 2001 From: Ran Date: Wed, 14 Dec 2022 19:11:30 +0800 Subject: [PATCH 18/83] Apply suggestions from code review Co-authored-by: Grace Cai --- releases/release-6.5.0.md | 20 ++++++++++---------- 1 file changed, 10 insertions(+), 10 deletions(-) diff --git a/releases/release-6.5.0.md b/releases/release-6.5.0.md index b9506bae8268a..02d312203d1e9 100644 --- a/releases/release-6.5.0.md +++ b/releases/release-6.5.0.md @@ -50,7 +50,7 @@ TiDB 6.5.0 is a Long-Term Support Release (LTS). For more information, see [Non-Transactional DML statements](/non-transactional-dml.md) and [`BATCH` syntax](/sql-statements/sql-statement-batch.md). -* Support time to live (TTL) (experimental feature) [#39262](https://github.com/pingcap/tidb/issues/39262) @[lcwangchao](https://github.com/lcwangchao) **tw@ran-huang** +* Support time to live (TTL) (experimental) [#39262](https://github.com/pingcap/tidb/issues/39262) @[lcwangchao](https://github.com/lcwangchao) **tw@ran-huang** TTL provides row-level data lifetime management. In TiDB, a table with the TTL attribute automatically checks data lifetime and deletes expired data at the row level. TTL is designed to help users clean up unnecessary data periodically and in a timely manner without affecting the online read and write workloads. @@ -84,29 +84,29 @@ TiDB 6.5.0 is a Long-Term Support Release (LTS). * Support the password complexity policy [#38928](https://github.com/pingcap/tidb/issues/38928) @[CbcWestwolf](https://github.com/CbcWestwolf) **tw@ran-huang** - After you enable the password complexity policy for TiDB, when you set a password, TiDB checks the password length, the number of uppercase and lowercase letters, numbers, and special characters, whether the password matches the dictionary, and whether the password matches the username. This ensures that you set a secure password. + After this policy is enabled, when you set a password, TiDB checks the password length, whether uppercase and lowercase letters, numbers, and special characters in the password are sufficient, whether the password matches the dictionary, and whether the password matches the username. This ensures that you set a secure password. TiDB provides the SQL function [`VALIDATE_PASSWORD_STRENGTH()`](https://dev.mysql.com/doc/refman/5.7/en/encryption-functions.html#function_validate-password-strength) to validate the password strength. - For more information, refer to [user document](/password-management.md#password-complexity-policy). + For more information, see [User document](/password-management.md#password-complexity-policy). * Support the password expiration policy [#38936](https://github.com/pingcap/tidb/issues/38936) @[CbcWestwolf](https://github.com/CbcWestwolf) **tw@ran-huang** - TiDB supports the password expiration policy, including manual expiration, global-level automatic expiration, and account-level automatic expiration. After this policy is enabled, you must change your passwords periodically. 
This reduces the risk of password leakage due to long-term use and improve password security. + TiDB supports configuring the password expiration policy, including manual expiration, global-level automatic expiration, and account-level automatic expiration. After this policy is enabled, you must change your passwords periodically. This reduces the risk of password leakage due to long-term use and improves password security. - For more information, refer to [user document](/password-management.md#password-expiration-policy). + For more information, see [User document](/password-management.md#password-expiration-policy). * Support the password reuse policy [#38937](https://github.com/pingcap/tidb/issues/38937) @[keeplearning20221](https://github.com/keeplearning20221) **tw@ran-huang** - TiDB supports the password reuse policy, including global-level password reuse policy and account-level password reuse policy. After this policy is enabled, you cannot use the passwords that you have used within a period or the most recent several passwords that you have used. This reduces the risk of password leakage due to repeated use of passwords and improves password security. + TiDB supports configuring the password reuse policy, including global-level password reuse policy and account-level password reuse policy. After this policy is enabled, you cannot use the passwords that you have used within a specified period or the most recent several passwords that you have used. This reduces the risk of password leakage due to repeated use of passwords and improves password security. - For more information, refer to [user document](/password-management.md#password-reuse-policy). + For more information, see [User document](/password-management.md#password-reuse-policy). * Support failed-login tracking and temporary account locking policy [#38938](https://github.com/pingcap/tidb/issues/38938) @[lastincisor](https://github.com/lastincisor) **tw@ran-huang** After this policy is enabled, if you log in to TiDB with incorrect passwords multiple times consecutively, the account is temporarily locked. After the lock time ends, the account is automatically unlocked. - For more information, refer to [user document](/password-management.md#failed-login-tracking-and-temporary-account-locking-policy). + For more information, see [User document](/password-management.md#failed-login-tracking-and-temporary-account-locking-policy). ### Observability @@ -196,9 +196,9 @@ TiDB 6.5.0 is a Long-Term Support Release (LTS). For more information, see [user documentation](sql-statements/sql-statement-explain-analyze.md). -* Support the output of execution plans in JSON format [#39261](https://github.com/pingcap/tidb/issues/39261) @[fzzf678](https://github.com/fzzf678) **tw@ran-huang** +* Support the output of execution plans in the JSON format [#39261](https://github.com/pingcap/tidb/issues/39261) @[fzzf678](https://github.com/fzzf678) **tw@ran-huang** - In v6.5, TiDB extends the output format of the execution plan. By using `EXPLAIN FORMAT=tidb_json `, you can output the SQL execution plan in JSON format. With this capability, SQL debugging tools and diagnostic tools can read the execution plan more conveniently and accurately, thus improving the ease of use of SQL diagnosis and tuning. + In v6.5.0, TiDB extends the output format of execution plans. By using `EXPLAIN FORMAT=tidb_json `, you can output SQL execution plans in the JSON format. 
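    For example, a minimal sketch of this usage (the table `t` and the query below are hypothetical placeholders, not taken from the release itself) looks like this:

    ```sql
    EXPLAIN FORMAT = "tidb_json" SELECT * FROM t WHERE id = 1;
    ```

    The statement returns the execution plan as JSON text instead of the default tabular output.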
With this capability, SQL debugging tools and diagnostic tools can read execution plans more conveniently and accurately, thus improving the ease of use of SQL diagnosis and tuning. For more information, see [user document](/sql-statements/sql-statement-explain.md). From e3fb54a19ddecedf4c9f7d45b2558c38769ee2dc Mon Sep 17 00:00:00 2001 From: Grace Cai Date: Thu, 15 Dec 2022 00:01:27 +0800 Subject: [PATCH 19/83] Apply suggestions from code review Co-authored-by: Ran --- releases/release-6.5.0.md | 16 ++++++++-------- 1 file changed, 8 insertions(+), 8 deletions(-) diff --git a/releases/release-6.5.0.md b/releases/release-6.5.0.md index 02d312203d1e9..550a4565674b3 100644 --- a/releases/release-6.5.0.md +++ b/releases/release-6.5.0.md @@ -58,7 +58,7 @@ TiDB 6.5.0 is a Long-Term Support Release (LTS). * Support saving TiFlash query results using the `INSERT INTO SELECT` statement (experimental) [#37515](https://github.com/pingcap/tidb/issues/37515) @[gengliqi](https://github.com/gengliqi) **tw@qiancai** - Starting from v6.5.0, TiDB supports pushing down the `SELECT` clause (analysis query) of the `INSERT INTO SELECT` statement to TiFlash. In this way, you can easily save the TiFlash query result to a TiDB table specified by `INSERT INTO` for further analysis, which takes effect as result caching (that is, result materialization). For example: + Starting from v6.5.0, TiDB supports pushing down the `SELECT` clause (analytical query) of the `INSERT INTO SELECT` statement to TiFlash. In this way, you can easily save the TiFlash query result to a TiDB table specified by `INSERT INTO` for further analysis, which takes effect as result caching (that is, result materialization). For example: ```sql INSERT INTO t2 SELECT Mod(x,y) FROM t1; @@ -66,9 +66,9 @@ TiDB 6.5.0 is a Long-Term Support Release (LTS). During the experimental phase, this feature is disabled by default. To enable it, you can set the [`tidb_enable_tiflash_read_for_write_stmt`](/system-variables.md#tidb_enable_tiflash_read_for_write_stmt-new-in-v630) system variable to `ON`. There are no special restrictions on the result table specified by `INSERT INTO` for this feature, and you are free to add a TiFlash replica to that result table or not. Typical usage scenarios of this feature include: - - Run complex analysis queries using TiFlash + - Run complex analytical queries using TiFlash - Reuse TiFlash query results or deal with highly concurrent online requests - - Need a relatively small result set comparing with the input data size, recommended to be within 100MiB. + - Need a relatively small result set comparing with the input data size, preferably smaller than 100MiB. For more information, see the [user documentation](/tiflash/tiflash-results-materialization.md). @@ -132,7 +132,7 @@ TiDB 6.5.0 is a Long-Term Support Release (LTS). * `->>` * `JSON_EXTRACT()` - The JSON format provides a flexible way to model application design. Therefore, more and more applications are using the JSON format for data exchange and data storage. By pushing down JSON functions to TiFlash, you can improve the efficiency of analyzing data in the JSON type and use TiDB for more real-time analytics scenarios. + The JSON format provides a flexible way for application data modeling. Therefore, more and more applications are using the JSON format for data exchange and data storage. By pushing down JSON functions to TiFlash, you can improve the efficiency of analyzing data in the JSON type and use TiDB for more real-time analytics scenarios. 
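    As a rough illustration (the `logs` table, its JSON column `payload`, and the TiFlash replica on it are assumed here only for the example), an analytical query such as the following can now have its JSON extraction evaluated in TiFlash:

    ```sql
    SELECT JSON_EXTRACT(payload, '$.status') AS status, COUNT(*) AS cnt
    FROM logs
    GROUP BY status;
    ```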
* Support pushing down the following [string functions](/tiflash/tiflash-supported-pushdown-calculations.md) to TiFlash [#6115](https://github.com/pingcap/tiflash/issues/6115) @[xzhangxian1008](https://github.com/xzhangxian1008) **tw@qiancai** @@ -146,9 +146,9 @@ TiDB 6.5.0 is a Long-Term Support Release (LTS). For more information, see [User document](/optimizer-hints.md#hints-that-take-effect-globally). -* Support pushing down sorting operations of [partitioned-table](/partitioned-table.md) to TiKV [#26166](https://github.com/pingcap/tidb/issues/26166) @[winoros](https://github.com/winoros) **tw@qiancai** +* Support pushing down sorting operations of [partitioned tables](/partitioned-table.md) to TiKV [#26166](https://github.com/pingcap/tidb/issues/26166) @[winoros](https://github.com/winoros) **tw@qiancai** - Although [partitioned table](/partitioned-table.md) has been GA since v6.1.0, TiDB is continually improving its performance. In v6.5.0, TiDB supports pushing down sort operations such as `ORDER BY` and `LIMIT` to TiKV for computation and filtering, which reduces network I/O overhead and improves SQL performance when you use partitioned tables. + Although the [partitioned table](/partitioned-table.md) feature has been GA since v6.1.0, TiDB is continually improving its performance. In v6.5.0, TiDB supports pushing down sorting operations such as `ORDER BY` and `LIMIT` to TiKV for computation and filtering, which reduces network I/O overhead and improves SQL performance when you use partitioned tables. * Optimizer introduces a more accurate Cost Model Version 2 [#35240](https://github.com/pingcap/tidb/issues/35240) @[qw4990](https://github.com/qw4990) **tw@Oreoxmt** @@ -190,7 +190,7 @@ TiDB 6.5.0 is a Long-Term Support Release (LTS). ### Ease of use -* Refine the execution information of the TiFlash `TableFullScan` operator in the `EXPLAIN ANALYZE` output [#5926](https://github.com/pingcap/tiflash/issues/5926) @[hongyunyan](https://github.com/hongyunyan) **tw@qiancai** +* Refine the execution information of the TiFlash `TableFullScan` operator in the `EXPLAIN ANALYZE` output [#5926](https://github.com/pingcap/tiflash/issues/5926) @[hongyunyan](https://github.com/hongyunyan) **tw@qiancai** The `EXPLAIN ANALYZE` statement is used to print execution plans and runtime statistics. In v6.5.0, TiFlash has refined the execution information of the `TableFullScan` operator by adding the DMFile-related execution information. Now the TiFlash data scan status information is presented more intuitively, which helps you analyze TiFlash performance more easily. @@ -363,7 +363,7 @@ TiDB 6.5.0 is a Long-Term Support Release (LTS). 
- Support batch Coprocessor tasks processing in TiKV [#13849](https://github.com/tikv/tikv/issues/13849) @[cfzjywxk](https://github.com/cfzjywxk) - Reduce waiting time on failure recovery by notifying TiKV to wake up Regions [#13648](https://github.com/tikv/tikv/issues/13648) @[LykxSassinator](https://github.com/LykxSassinator) - Reduce the requested size of memory usage by code optimization [#13827](https://github.com/tikv/tikv/issues/13827) @[BusyJay](https://github.com/BusyJay) - - Introduce the raft extension to improve code extensibility [#13827](https://github.com/tikv/tikv/issues/13827) @[BusyJay](https://github.com/BusyJay) + - Introduce the Raft extension to improve code extensibility [#13827](https://github.com/tikv/tikv/issues/13827) @[BusyJay](https://github.com/BusyJay) - Support using tikv-ctl to query which Regions are included in a certain key range [#13760](https://github.com/tikv/tikv/issues/13760) [@HuSharp](https://github.com/HuSharp) - Improve read and write performance for rows that are not updated but locked continuously [#13694](https://github.com/tikv/tikv/issues/13694) [@sticnarf](https://github.com/sticnarf) From b0f7ebb03cd27119f28ed204db0708747c75adcf Mon Sep 17 00:00:00 2001 From: Ran Date: Thu, 15 Dec 2022 11:03:42 +0800 Subject: [PATCH 20/83] Apply suggestions from code review Co-authored-by: Grace Cai --- releases/release-6.5.0.md | 16 +++++++++------- 1 file changed, 9 insertions(+), 7 deletions(-) diff --git a/releases/release-6.5.0.md b/releases/release-6.5.0.md index 550a4565674b3..09e0f5bae07f7 100644 --- a/releases/release-6.5.0.md +++ b/releases/release-6.5.0.md @@ -52,9 +52,9 @@ TiDB 6.5.0 is a Long-Term Support Release (LTS). * Support time to live (TTL) (experimental) [#39262](https://github.com/pingcap/tidb/issues/39262) @[lcwangchao](https://github.com/lcwangchao) **tw@ran-huang** - TTL provides row-level data lifetime management. In TiDB, a table with the TTL attribute automatically checks data lifetime and deletes expired data at the row level. TTL is designed to help users clean up unnecessary data periodically and in a timely manner without affecting the online read and write workloads. + TTL provides row-level data lifetime management. In TiDB, a table with the TTL attribute automatically checks data lifetime and deletes expired data at the row level. TTL is designed to help you clean up unnecessary data periodically and in a timely manner without affecting the online read and write workloads. - For more information, refer to [user document](/time-to-live.md) + For more information, see [User document](/time-to-live.md). * Support saving TiFlash query results using the `INSERT INTO SELECT` statement (experimental) [#37515](https://github.com/pingcap/tidb/issues/37515) @[gengliqi](https://github.com/gengliqi) **tw@qiancai** @@ -254,9 +254,11 @@ TiDB 6.5.0 is a Long-Term Support Release (LTS). Storage sink 支持 changed log 格式位 canal-json/csv,此外 changed log 从 TiCDC 同步到 storage 的延迟可以达到 xx,支持更多信息,请参考[用户文档](https://github.com/pingcap/docs-cn/pull/12151/files)。 -* TiCDC 支持两个或者多个 TiDB 集群之间相互复制 @[asddongmen](https://github.com/asddongmen) **tw@shichun-0415** +* TiCDC supports bidirectional replication across multiple clusters @[asddongmen](https://github.com/asddongmen) **tw@shichun-0415** - TiCDC 支持在多个 TiDB 集群之间进行双向复制。 如果业务上需要 TiDB 多活,尤其是异地多活的场景,可以使用该功能作为 TiDB 多活的解决方案。只要为每个 TiDB 集群到其他 TiDB 集群的 TiCDC changefeed 同步任务配置 `bdr-mode = true` 参数,就可以实现多个 TIDB 集群之间的数据相互复制。更多信息,请参考[用户文档](/ticdc/ticdc/ticdc-bidirectional-replication.md). 
+ TiCDC supports bidirectional replication across multiple TiDB clusters. If you need a multi-master TiDB solution for your application, especially a multi-master solution in multiple regions, you can use this feature to build one. By configuring the `bdr-mode = true` parameter for the TiCDC changefeeds from each TiDB cluster to other TiDB clusters, you can achieve bidirectional data replication across multiple TiDB clusters. + + For more information, refer to [user document](/ticdc/ticdc-bidirectional-replication.md). * TiCDC 性能提升 **tw@shichun-0415 @@ -456,14 +458,14 @@ TiDB 6.5.0 is a Long-Term Support Release (LTS). + TiDB Data Migration (DM) - - Fix the issue that a `task-mode:all` task cannot be started when the upstream database enables GTID mode but does not have any data [#7037](https://github.com/pingcap/tiflow/issues/7037) @[liumengya94](https://github.com/liumengya94) + - Fix the issue that a `task-mode:all` task cannot be started when the upstream database enables the GTID mode but does not have any data [#7037](https://github.com/pingcap/tiflow/issues/7037) @[liumengya94](https://github.com/liumengya94) - Fix the issue that data is replicated for multiple times when a new DM worker is scheduled before the existing worker exits [#7658](https://github.com/pingcap/tiflow/issues/7658) @[GMHDBJD](https://github.com/GMHDBJD) - - Fix the issue that DM precheck is not passed when the upstream database uses regular expression to grant privileges [#7645](https://github.com/pingcap/tiflow/issues/7645) @[lance6716](https://github.com/lance6716) + - Fix the issue that DM precheck is not passed when the upstream database uses regular expressions to grant privileges [#7645](https://github.com/pingcap/tiflow/issues/7645) @[lance6716](https://github.com/lance6716) + TiDB Lightning - Fix the memory leakage issue when TiDB Lightning imports a huge source data file [#39331](https://github.com/pingcap/tidb/issues/39331) @[dsdashun](https://github.com/dsdashun) - - Fix the issue that TiDB Lightning cannot detect conflict correctly when importing data in parallel [#39476](https://github.com/pingcap/tidb/issues/39476) @[dsdashun](https://github.com/dsdashun) + - Fix the issue that TiDB Lightning cannot detect conflicts correctly when importing data in parallel [#39476](https://github.com/pingcap/tidb/issues/39476) @[dsdashun](https://github.com/dsdashun) ## Contributors From 07169066a53f5bb781a482d51b75a61aab4ca9db Mon Sep 17 00:00:00 2001 From: Ran Date: Thu, 15 Dec 2022 11:07:23 +0800 Subject: [PATCH 21/83] Update releases/release-6.5.0.md --- releases/release-6.5.0.md | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/releases/release-6.5.0.md b/releases/release-6.5.0.md index 09e0f5bae07f7..a430a8c006d8d 100644 --- a/releases/release-6.5.0.md +++ b/releases/release-6.5.0.md @@ -400,8 +400,8 @@ TiDB 6.5.0 is a Long-Term Support Release (LTS). 
+ TiDB Data Migration (DM) - - Improve the data replication performance for DM by not parsing the data of tables in the block list [#4287](https://github.com/pingcap/tiflow/issues/4287) @[GMHDBJD](https://github.com/GMHDBJD) - - Improve the write efficiency of DM relay by using asynchronous write and batch write [#4287](https://github.com/pingcap/tiflow/issues/4287) @[GMHDBJD](https://github.com/GMHDBJD) + - Improve the data replication performance for DM by not parsing the data of tables in the block list [#7622](https://github.com/pingcap/tiflow/pull/7622) @[GMHDBJD](https://github.com/GMHDBJD) + - Improve the write efficiency of DM relay by using asynchronous write and batch write [#7580](https://github.com/pingcap/tiflow/pull/7580) @[GMHDBJD](https://github.com/GMHDBJD) - Optimize the error messages in DM precheck [#7621](https://github.com/pingcap/tiflow/issues/7621) @[buchuitoudegou](https://github.com/buchuitoudegou) - Improve the compatibility of `SHOW SLAVE HOSTS` for old MySQL versions [#5017](https://github.com/pingcap/tiflow/issues/5017) @[lyzx2001](https://github.com/lyzx2001) From d0dae7547ede3cf5b34a4cf708c6e8a7a38d03c6 Mon Sep 17 00:00:00 2001 From: Aolin Date: Thu, 15 Dec 2022 14:28:45 +0800 Subject: [PATCH 22/83] Apply suggestions from code review Co-authored-by: TomShawn <41534398+TomShawn@users.noreply.github.com> --- releases/release-6.5.0.md | 22 +++++++++++----------- 1 file changed, 11 insertions(+), 11 deletions(-) diff --git a/releases/release-6.5.0.md b/releases/release-6.5.0.md index a430a8c006d8d..667fef55afd45 100644 --- a/releases/release-6.5.0.md +++ b/releases/release-6.5.0.md @@ -30,23 +30,23 @@ TiDB 6.5.0 is a Long-Term Support Release (LTS). * The performance of TiDB adding indexes is improved by 10 times [#35983](https://github.com/pingcap/tidb/issues/35983) @[benjamin2037](https://github.com/benjamin2037) @[tangenta](https://github.com/tangenta) **tw@Oreoxmt** - TiDB v6.3.0 introduces the [Add index acceleration](/system-variables.md#tidb_ddl_enable_fast_reorg-new-in-v630) as an experimental feature to improve the speed of backfilling when creating an index. In v6.5.0, this feature becomes GA and is enabled by default and the performance improvement is expected to be 10 times faster than before. The acceleration feature is suitable for scenarios where a single SQL statement adds an index serially. When multiple SQL statements add indexes in parallel, only one of the SQL statements will be accelerated. + TiDB v6.3.0 introduces the [Add index acceleration](/system-variables.md#tidb_ddl_enable_fast_reorg-new-in-v630) as an experimental feature to improve the speed of backfilling when creating an index. In v6.5.0, this feature becomes GA and is enabled by default, and the performance on large tables is expected to be 10 times faster. The acceleration feature is suitable for scenarios where a single SQL statement adds an index serially. When multiple SQL statements add indexes in parallel, only one of the SQL statements will be accelerated. * Provide lightweight metadata lock to improve the DML success rate during DDL change [#37275](https://github.com/pingcap/tidb/issues/37275) @[wjhuang2016](https://github.com/wjhuang2016) **tw@Oreoxmt** - TiDB v6.3.0 introduces [Metadata lock](/metadata-lock.md) as an experimental feature. To avoid the `Information schema is changed` error caused by DML statements, TiDB coordinates the priority of DMLs and DDLs during table metadata change, and makes executing DDLs wait for the DMLs with old metadata to commit. 
In v6.5.0, this feature becomes GA and is enabled by default. It is suitable for all types of DDLs change scenarios. + TiDB v6.3.0 introduces [Metadata lock](/metadata-lock.md) as an experimental feature. To avoid the `Information schema is changed` error caused by DML statements, TiDB coordinates the priority of DMLs and DDLs during table metadata change, and makes the ongoing DDLs wait for the DMLs with old metadata to commit. In v6.5.0, this feature becomes GA and is enabled by default. It is suitable for various types of DDLs change scenarios. For more information, see [User document](/metadata-lock.md). * Support restoring a cluster to a specific point in time by using `FLASHBACK CLUSTER TO TIMESTAMP` [#37197](https://github.com/pingcap/tidb/issues/37197) [#13303](https://github.com/tikv/tikv/issues/13303) @[Defined2014](https://github.com/Defined2014) @[bb7133](https://github.com/bb7133) @[JmPotato](https://github.com/JmPotato) @[Connor1996](https://github.com/Connor1996) @[HuSharp](https://github.com/HuSharp) @[CalvinNeo](https://github.com/CalvinNeo) **tw@Oreoxmt** - TiDB v6.4.0 introduces the [`FLASHBACK CLUSTER TO TIMESTAMP`](/sql-statements/sql-statement-flashback-to-timestamp.md) statement as an experimental feature. You can use this statement to restore a cluster to a specific point in time within the Garbage Collection (GC) life time. In v6.5.0, this statement becomes GA. This feature helps you to easily undo DML misoperations, restore the original cluster in minutes, rollback data at different time points to determine the exact time when data changes, and it is compatible with PITR and TiCDC. + TiDB v6.4.0 introduces the [`FLASHBACK CLUSTER TO TIMESTAMP`](/sql-statements/sql-statement-flashback-to-timestamp.md) statement as an experimental feature. You can use this statement to restore a cluster to a specific point in time within the Garbage Collection (GC) life time. In v6.5.0, this statement becomes GA. This feature helps you to easily undo DML misoperations, restore the original cluster in minutes, roll back data at different time points to determine the exact time when data changes, and it is compatible with PITR and TiCDC. - For more information, see [User document](/sql-statements/sql-statement-flashback-to-timestamp.md). + For more information, see [user document](/sql-statements/sql-statement-flashback-to-timestamp.md). * Fully support non-transactional DML statements including `INSERT`, `REPLACE`, `UPDATE`, and `DELETE` [#33485](https://github.com/pingcap/tidb/issues/33485) @[ekexium](https://github.com/ekexium) **tw@Oreoxmt** - In the scenarios of large data processing, a single SQL statement with a large transaction might have a negative impact on the cluster stability and performance. A non-transactional DML statement is a DML statement split into multiple SQL statements for internal execution. The split statements compromise transaction atomicity and isolation but greatly improve the cluster stability. TiDB supports non-transactional `DELETE` statements since v6.1.0, and v6.5.0 adds support for non-transactional `INSERT`, `REPLACE`, and `UPDATE` statements. + In the scenarios of large data processing, a single SQL statement with a large transaction might have a negative impact on the cluster stability and performance. A non-transactional DML statement is a DML statement split into multiple SQL statements for internal execution. The split statements compromise transaction atomicity and isolation but greatly improve the cluster stability. 
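    For example, a sketch of the non-transactional form (the table `t`, its column `id`, and the filter below are hypothetical) uses the `BATCH` keyword to split one large delete into many smaller transactions:

    ```sql
    BATCH ON id LIMIT 1000
    DELETE FROM t WHERE created_at < '2022-01-01';
    ```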
TiDB supports non-transactional `DELETE` statements since v6.1.0, and supports non-transactional `INSERT`, `REPLACE`, and `UPDATE` statements since v6.5.0. For more information, see [Non-Transactional DML statements](/non-transactional-dml.md) and [`BATCH` syntax](/sql-statements/sql-statement-batch.md). @@ -140,9 +140,9 @@ TiDB 6.5.0 is a Long-Term Support Release (LTS). * `regexp_instr` * `regexp_substr` -* Support the global Hint to interfere with the execution plan generation in [Views](/views.md) [#37887](https://github.com/pingcap/tidb/issues/37887) @[Reminiscent](https://github.com/Reminiscent) **tw@Oreoxmt** +* Support the global optimizer hint to interfere with the execution plan generation in [Views](/views.md) [#37887](https://github.com/pingcap/tidb/issues/37887) @[Reminiscent](https://github.com/Reminiscent) **tw@Oreoxmt** - In some view access scenarios, you need to use Hints to interfere with the execution plan of the query in the view to achieve best performances. In v6.5.0, TiDB supports adding global Hints for the query blocks in the view, thus the Hints defined in the query can be effective in the view. This feature provides a way to inject Hints into complex SQL statements that contain nested views, enhances the execution plan control, and stabilizes the performance of complex statements. To use global Hints, you need to [name the query blocks](/optimizer-hints.md#step-1-define-the-query-block-name-of-the-view-using-the-qb_name-hint) and [specify Hint references](/optimizer-hints.md#step-2-add-the-target-hints). + In some view access scenarios, you need to use optimizer hints to interfere with the execution plan of the query in the view to achieve the best performance. Since v6.5.0, TiDB supports adding global hints for the query blocks in the view, thus the hints defined in the query can be effective in the view. This feature provides a way to inject hints into complex SQL statements that contain nested views, enhances the execution plan control, and stabilizes the performance of complex statements. To use global hints, you need to [name the query blocks](/optimizer-hints.md#step-1-define-the-query-block-name-of-the-view-using-the-qb_name-hint) and [specify hint references](/optimizer-hints.md#step-2-add-the-target-hints). For more information, see [User document](/optimizer-hints.md#hints-that-take-effect-globally). @@ -152,9 +152,9 @@ TiDB 6.5.0 is a Long-Term Support Release (LTS). * Optimizer introduces a more accurate Cost Model Version 2 [#35240](https://github.com/pingcap/tidb/issues/35240) @[qw4990](https://github.com/qw4990) **tw@Oreoxmt** - TiDB v6.2.0 introduces the [Cost Model Version 2](/cost-model.md#cost-model-version-2) as an experimental feature. This model uses a more accurate cost estimation method to help the optimizer choose the optimal execution plan. Especially when TiFlash is deployed, Cost Model Version 2 automatically chooses the appropriate storage engine and avoids manual intervention. After real scene testing for a period of time, this model becomes GA in v6.5.0. The newly created cluster uses Cost Model Version 2 by default. For clusters upgrade to v6.5.0, because Cost Model Version 2 might cause changes to query plans, you can set the [`tidb_cost_model_version = 2`](/system-variables.md#tidb_cost_model_version-new-in-v620) variable to use the new cost model after sufficient performance testing. + TiDB v6.2.0 introduces the [Cost Model Version 2](/cost-model.md#cost-model-version-2) as an experimental feature. 
This model uses a more accurate cost estimation method to help the optimizer choose the optimal execution plan. Especially when TiFlash is deployed, Cost Model Version 2 automatically helps choose the appropriate storage engine and avoids much manual intervention. After real-scene testing for a period of time, this model becomes GA in v6.5.0. SInce v6.5.0, newly-created clusters use Cost Model Version 2 by default. For clusters upgrade to v6.5.0, because Cost Model Version 2 might cause changes to query plans, you can set the [`tidb_cost_model_version = 2`](/system-variables.md#tidb_cost_model_version-new-in-v620) variable to use the new cost model after sufficient performance testing. - Cost Model Version 2 becomes a generally available feature that significantly improves the overall capability of the TiDB optimizer and evolves towards a more powerful HTAP database. + Cost Model Version 2 becomes a generally available feature that significantly improves the overall capability of the TiDB optimizer and helps TiDB evolve towards a more powerful HTAP database. For more information, see [User document](/cost-model.md#cost-model-version-2). @@ -212,7 +212,7 @@ TiDB 6.5.0 is a Long-Term Support Release (LTS). CREATE TABLE t(a int AUTO_INCREMENT key) AUTO_ID_CACHE 1; ``` - For more information, see [User document](/auto-increment.md#mysql-compatibility-mode). + For more information, see [user document](/auto-increment.md#mysql-compatibility-mode). ### Data migration @@ -359,7 +359,7 @@ TiDB 6.5.0 is a Long-Term Support Release (LTS). - Support backing up data to the Asia Pacific region (ap-southeast-3) of AWS by updating the rusoto library [#13751](https://github.com/tikv/tikv/issues/13751) @[3pointer](https://github.com/3pointer) - Reduce pessimistic transaction conflicts [#13298](https://github.com/tikv/tikv/issues/13298) @[MyonKeminta](https://github.com/MyonKeminta) - Improve recovery performance by caching external storage objects [#13798](https://github.com/tikv/tikv/issues/13798) @[YuJuncen](https://github.com/YuJuncen) - - The CheckLeader is run in a dedicated thread to reduce TiCDC replication latency [#13774](https://github.com/tikv/tikv/issues/13774) @[overvenus](https://github.com/overvenus) + - Run the CheckLeader in a dedicated thread to reduce TiCDC replication latency [#13774](https://github.com/tikv/tikv/issues/13774) @[overvenus](https://github.com/overvenus) - Support pull model for Checkpoints [#13824](https://github.com/tikv/tikv/issues/13824) @[YuJuncen](https://github.com/YuJuncen) - Avoid spinning issues on the sender side by updating crossbeam-channel [#13815](https://github.com/tikv/tikv/issues/13815) @[sticnarf](https://github.com/sticnarf) - Support batch Coprocessor tasks processing in TiKV [#13849](https://github.com/tikv/tikv/issues/13849) @[cfzjywxk](https://github.com/cfzjywxk) From 56207b629af948e98bc0e7e2f8e300041dced104 Mon Sep 17 00:00:00 2001 From: Grace Cai Date: Thu, 15 Dec 2022 14:30:53 +0800 Subject: [PATCH 23/83] translate compatibility changes Co-authored-by: TomShawn <41534398+TomShawn@users.noreply.github.com> --- releases/release-6.5.0.md | 27 +++++++++++++-------------- 1 file changed, 13 insertions(+), 14 deletions(-) diff --git a/releases/release-6.5.0.md b/releases/release-6.5.0.md index 667fef55afd45..bbbf40bac86e8 100644 --- a/releases/release-6.5.0.md +++ b/releases/release-6.5.0.md @@ -294,19 +294,19 @@ TiDB 6.5.0 is a Long-Term Support Release (LTS). 
### System variables -| 变量名 | 修改类型(包括新增/修改/删除) | 描述 | +| Variable name | Change type | Description | |--------|------------------------------|------| -| [`tidb_cost_model_version`](/system-variables.md#tidb_cost_model_version-从-v620-版本开始引入) | 修改 | 该变量默认值从 `1` 修改为 `2`,表示默认使用 Cost Model Version 2 进行索引选择和算子选择。 | -| [`tidb_enable_metadata_lock`](/system-variables.md#tidb_enable_metadata_lock-从-v630-版本开始引入) | 修改 | 该变量默认值从 `OFF` 修改为 `ON`,表示默认开启元数据锁。 | -| [`tidb_ddl_enable_fast_reorg`](/system-variables.md#tidb_ddl_enable_fast_reorg-从-v630-版本开始引入) | 修改 | 该变量默认值从 `OFF` 修改为 `ON`,表示默认开启创建索引加速功能。 | -| [`tidb_server_memory_limit`](/system-variables.md#tidb_server_memory_limit-从-v640-版本开始引入) | 修改 | 该变量默认值由 `0` 修改为 `80%`,表示默认将 TiDB 实例的内存限制设为总内存的 80%。| +| [`tidb_cost_model_version`](/system-variables.md#tidb_cost_model_version-introduced-new-in-v620) | Modified | Change the default value from `1` to `2`, meaning that Cost Model Version 2 is used for index selection and operator selection by default. | +| [`tidb_enable_metadata_lock`](/system-variables.md#tidb_enable_metadata_lock-new-in-v630) | Modified | Change the default value from `OFF` to `ON`, meaning that the metadata lock feature is enabled by default. | +| [`tidb_ddl_enable_fast_reorg`](/system-variables.md#tidb_ddl_enable_fast_reorg-new-in-v630) | Modified | Change the default value from `OFF` to `ON`, meaning that the acceleration of `ADD INDEX` and `CREATE INDEX` is enabled by default. | +| [`tidb_server_memory_limit`](/system-variables.md#tidb_server_memory_limit-new-in-v640) | Modified | Change the default value from `0` to `80%`, meaning that the memory limit for a TiDB instance is 80% of the total memory by default. | | [`default_password_lifetime`](/system-variables.md#default_password_lifetime-new-in-v650) | Newly added | Sets the global policy for automatic password expiration to require the user to change passwords periodically. The default value `0` indicates that the password never expires. | | [`disconnect_on_expired_password`](/system-variables.md#disconnect_on_expired_password-new-in-v650) | Newly added | This variable is read-only. It indicates whether to disconnect the client connection when the password is expired.| | [`password_history`](/system-variables.md#password_history-new-in-v650) | Newly added | This variable is used to establish a password reuse policy that allows TiDB to limit password reuse based on the number of password changes. The default value `0` means disabling the password reuse policy based on the number of password changes. | | [`password_reuse_interval`](/system-variables.md#password_reuse_interval-new-in-v650) | Newly added | This variable is used to establish a password reuse policy that allows TiDB to limit password reuse based on time elapsed. The default value `0` means disabling the password reuse policy based on time elapsed. | | [`tidb_cdc_write_source`](/system-variables.md#tidb_cdc_write_source-new-in-v650) | Newly added | When this variable is set to a value other than 0, data written in this session is considered to be written by TiCDC. This variable can only be modified by TiCDC. Do not manually modify this variable in any case. 
| -| [`tidb_index_merge_intersection_concurrency`](/system-variables.md#tidb_index_merge_intersection_concurrency-从-v650-版本开始引入) | 新增 | 这个变量用来设置索引合并进行交集操作时的最大并发度,仅在以动态裁剪模式访问分区表时有效。 | -| [`tidb_mem_quota_query`](/system-variables.md#tidb_mem_quota_query) | 修改 | 在 v6.5.0 之前的版本中,该变量用来设置单条查询的内存使用限制。在 v6.5.0 及之后的版本中,该变量用来设置单个会话整体的内存使用限制。 | +| [`tidb_index_merge_intersection_concurrency`](/system-variables.md#tidb_index_merge_intersection_concurrency-new-in-v650) | Newly added | Sets the maximum concurrency for the intersection operations that index merge performs. It is effective only when TiDB accesses partitioned tables in the dynamic pruning mode. | +| [`tidb_mem_quota_query`](/system-variables.md#tidb_mem_quota_query) | Modified | For versions earlier than TiDB v6.5.0, this variable is used to set the threshold value of memory quota for a query. For TiDB v6.5.0 and later versions, this variable is used to set the threshold value of memory quota for a session. | | [`tidb_source_id`](/system-variables.md#tidb_source_id-new-in-v650) | Newly added | This variable is used to configure the different cluster IDs in a [bi-directional replication](/ticdc/manage-ticdc.md#bi-directional-replication) cluster.| | [`tidb_ttl_delete_batch_size`](/system-variables.md#tidb_ttl_delete_batch_size-new-in-v650) | Newly added | This variable is used to set the maximum number of rows that can be deleted in a single `DELETE` transaction in a TTL job. | | [`tidb_ttl_delete_rate_limit`](/system-variables.md#tidb_ttl_delete_rate_limit-new-in-v650) | Newly added | This variable is used to limit the rate of `DELETE` statements in TTL jobs on each TiDB node. The value represents the maximum number of `DELETE` statements allowed per second in a single node in a TTL job. When this variable is set to `0`, no limit is applied. | @@ -325,21 +325,20 @@ TiDB 6.5.0 is a Long-Term Support Release (LTS). | [`validate_password.number_count`](/system-variables.md#validate_passwordnumber_count-new-in-v650) | Newly added | A check item in the password complexity check. It checks whether the password contains sufficient numbers. This variable takes effect only when [`validate_password.enable`](/system-variables.md#password_reuse_interval-new-in-v650) is enabled and [`validate_password.policy`](/system-variables.md#validate_passwordpolicy-new-in-v650) is set to `1` (MEDIUM) or larger. The default value is `1`. | | [`validate_password.policy`](/system-variables.md#validate_passwordpolicy-new-in-v650) | Newly added | This variable controls the policy for the password complexity check. This variable takes effect only when [`validate_password.enable`](/system-variables.md#password_reuse_interval-new-in-v650) is enabled. The default value is `1`. | | [`validate_password.special_char_count`](/system-variables.md#validate_passwordspecial_char_count-new-in-v650) | Newly added | A check item in the password complexity check. It checks whether the password contains sufficient special characters. This variable takes effect only when [`validate_password.enable`](/system-variables.md#password_reuse_interval-new-in-v650) is enabled and [`validate_password.policy`](/system-variables.md#validate_passwordpolicy-new-in-v650) is set to `1` (MEDIUM) or larger. The default value is `1`. 
| -| | | | -| | | | ### Configuration file parameters -| 配置文件 | 配置项 | 修改类型 | 描述 | +| Configuration file | Configuration parameter | Change type | Description | | -------- | -------- | -------- | -------- | | TiDB | [`disconnect-on-expired-password`](/tidb-configuration-file.md#disconnect-on-expired-password-new-in-v650) | Newly added | Determines whether TiDB disconnects the client connection when the password is expired. The default value is `true`, which means the client connection is disconnected when the password is expired. | -| TiDB | [`server-memory-quota`](/tidb-configuration-file.md#server-memory-quota-从-v409-版本开始引入) | 废弃 | 自 v6.5.0 起,该配置项被废弃。请使用 [`tidb_server_memory_limit`](/system-variables.md#tidb_server_memory_limit-从-v640-版本开始引入) 系统变量进行设置。 | -| TiKV | [`cdc.min-ts-interval`](/tikv-configuration-file.md#min-ts-interval) | 修改 | 默认值从 `1s` 修改为 `200ms` | -| | | | | -| | | | | +| TiDB | [`server-memory-quota`](/tidb-configuration-file.md#server-memory-quota-new-in-v409) | Deprecated | Since v6.5.0, this configuration item is deprecated. Instead, use the system variable [`tidb_server_memory_limit`](/system-variables.md#tidb_server_memory_limit-new-in-v640) for setting. | +| TiKV | [`cdc.min-ts-interval`](/tikv-configuration-file.md#min-ts-interval) | Modified | The default value changes from `1s` to `200ms`. | ### Others +- Starting from v6.4.0, the mysql.user table adds two new columns: `Password_reuse_history` and `Password_reuse_time`. +- The [index acceleration](/system-variables.md#tidb_ddl_enable_fast_reorg-new-in-v630) feature is enabled by default and is not compatible with the [PITR (Point-in-time recovery)](/br/br-pitr-guide.md) feature. When using the index acceleration feature, you need to make sure that no PITR backup task is running in the background, otherwise unexpected results might occur. For more information, see [tidb_ddl_enable_fast_reorg](/system-variables.md#tidb_ddl_enable_fast_reorg-new-in-v630). + ## 废弃功能 即将于 v6.6.0 版本废弃 v4.0.7 版本引入的 Amending Transaction 机制,并使用[元数据锁](/metadata-lock.md) 替代。 From 608d19149acccb5bfc63dd0e8ca22d88df187479 Mon Sep 17 00:00:00 2001 From: Grace Cai Date: Thu, 15 Dec 2022 14:58:36 +0800 Subject: [PATCH 24/83] adjust row sequences for compatibility changes and add community contributor IDs --- releases/release-6.5.0.md | 20 ++++++++++++++++---- 1 file changed, 16 insertions(+), 4 deletions(-) diff --git a/releases/release-6.5.0.md b/releases/release-6.5.0.md index bbbf40bac86e8..8886691c12bf7 100644 --- a/releases/release-6.5.0.md +++ b/releases/release-6.5.0.md @@ -298,7 +298,10 @@ TiDB 6.5.0 is a Long-Term Support Release (LTS). |--------|------------------------------|------| | [`tidb_cost_model_version`](/system-variables.md#tidb_cost_model_version-introduced-new-in-v620) | Modified | Change the default value from `1` to `2`, meaning that Cost Model Version 2 is used for index selection and operator selection by default. | | [`tidb_enable_metadata_lock`](/system-variables.md#tidb_enable_metadata_lock-new-in-v630) | Modified | Change the default value from `OFF` to `ON`, meaning that the metadata lock feature is enabled by default. | +| [`tidb_enable_tiflash_read_for_write_stmt`](/system-variables.md#tidb_enable_tiflash_read_for_write_stmt-new-in-v630) | Modified | Takes effect starting from v6.5.0. It controls whether read operations in SQL statements containing `INSERT`, `DELETE`, and `UPDATE` can be pushed down to TiFlash. The default value is `OFF`. 
| | [`tidb_ddl_enable_fast_reorg`](/system-variables.md#tidb_ddl_enable_fast_reorg-new-in-v630) | Modified | Change the default value from `OFF` to `ON`, meaning that the acceleration of `ADD INDEX` and `CREATE INDEX` is enabled by default. | +| [`tidb_mem_quota_query`](/system-variables.md#tidb_mem_quota_query) | Modified | For versions earlier than TiDB v6.5.0, this variable is used to set the threshold value of memory quota for a query. For TiDB v6.5.0 and later versions, this variable is used to set the threshold value of memory quota for a session. | +| [`tidb_replica_read`](/system-variables.md#tidb_replica_read-new-in-v40) | Modified | Starting from v6.5.0, when this variable is set to`closest-adaptive` and the estimated result of a read request is greater than or equal to [`tidb_adaptive_closest _read_threshold`](/system-variables.md#tidb_adaptive_closest_read_threshold-new-in-v630), the number of TiDB nodes whose `closest-adaptive` configuration takes effect is limited in each availability zone, which is always the same as the number of TiDB nodes in the availability zone with the fewest TiDB nodes, and the other TiDB nodes automatically read from the leader replica. | | [`tidb_server_memory_limit`](/system-variables.md#tidb_server_memory_limit-new-in-v640) | Modified | Change the default value from `0` to `80%`, meaning that the memory limit for a TiDB instance is 80% of the total memory by default. | | [`default_password_lifetime`](/system-variables.md#default_password_lifetime-new-in-v650) | Newly added | Sets the global policy for automatic password expiration to require the user to change passwords periodically. The default value `0` indicates that the password never expires. | | [`disconnect_on_expired_password`](/system-variables.md#disconnect_on_expired_password-new-in-v650) | Newly added | This variable is read-only. It indicates whether to disconnect the client connection when the password is expired.| @@ -306,7 +309,6 @@ TiDB 6.5.0 is a Long-Term Support Release (LTS). | [`password_reuse_interval`](/system-variables.md#password_reuse_interval-new-in-v650) | Newly added | This variable is used to establish a password reuse policy that allows TiDB to limit password reuse based on time elapsed. The default value `0` means disabling the password reuse policy based on time elapsed. | | [`tidb_cdc_write_source`](/system-variables.md#tidb_cdc_write_source-new-in-v650) | Newly added | When this variable is set to a value other than 0, data written in this session is considered to be written by TiCDC. This variable can only be modified by TiCDC. Do not manually modify this variable in any case. | | [`tidb_index_merge_intersection_concurrency`](/system-variables.md#tidb_index_merge_intersection_concurrency-new-in-v650) | Newly added | Sets the maximum concurrency for the intersection operations that index merge performs. It is effective only when TiDB accesses partitioned tables in the dynamic pruning mode. | -| [`tidb_mem_quota_query`](/system-variables.md#tidb_mem_quota_query) | Modified | For versions earlier than TiDB v6.5.0, this variable is used to set the threshold value of memory quota for a query. For TiDB v6.5.0 and later versions, this variable is used to set the threshold value of memory quota for a session. 
| | [`tidb_source_id`](/system-variables.md#tidb_source_id-new-in-v650) | Newly added | This variable is used to configure the different cluster IDs in a [bi-directional replication](/ticdc/manage-ticdc.md#bi-directional-replication) cluster.| | [`tidb_ttl_delete_batch_size`](/system-variables.md#tidb_ttl_delete_batch_size-new-in-v650) | Newly added | This variable is used to set the maximum number of rows that can be deleted in a single `DELETE` transaction in a TTL job. | | [`tidb_ttl_delete_rate_limit`](/system-variables.md#tidb_ttl_delete_rate_limit-new-in-v650) | Newly added | This variable is used to limit the rate of `DELETE` statements in TTL jobs on each TiDB node. The value represents the maximum number of `DELETE` statements allowed per second in a single node in a TTL job. When this variable is set to `0`, no limit is applied. | @@ -330,9 +332,11 @@ TiDB 6.5.0 is a Long-Term Support Release (LTS). | Configuration file | Configuration parameter | Change type | Description | | -------- | -------- | -------- | -------- | -| TiDB | [`disconnect-on-expired-password`](/tidb-configuration-file.md#disconnect-on-expired-password-new-in-v650) | Newly added | Determines whether TiDB disconnects the client connection when the password is expired. The default value is `true`, which means the client connection is disconnected when the password is expired. | | TiDB | [`server-memory-quota`](/tidb-configuration-file.md#server-memory-quota-new-in-v409) | Deprecated | Since v6.5.0, this configuration item is deprecated. Instead, use the system variable [`tidb_server_memory_limit`](/system-variables.md#tidb_server_memory_limit-new-in-v640) for setting. | -| TiKV | [`cdc.min-ts-interval`](/tikv-configuration-file.md#min-ts-interval) | Modified | The default value changes from `1s` to `200ms`. | +| TiDB | [`disconnect-on-expired-password`](/tidb-configuration-file.md#disconnect-on-expired-password-new-in-v650) | Newly added | Determines whether TiDB disconnects the client connection when the password is expired. The default value is `true`, which means the client connection is disconnected when the password is expired. | +| TiKV | [`cdc.min-ts-interval`](/tikv-configuration-file.md#min-ts-interval) | Modified | Change the default value from `1s` to `200ms`. | +| TiKV | [`memory-use-ratio`](/tikv-configuration-file.md#memory-use-ratio-introduced-new-in-v650) | Newly added | Indicates the ratio of available memory to total system memory in PITR log recovery. | + ### Others @@ -470,4 +474,12 @@ TiDB 6.5.0 is a Long-Term Support Release (LTS). 
We would like to thank the following contributors from the TiDB community: -- [贡献者 GitHub ID](链接) +- [e1ijah1](https://github.com/e1ijah1) +- [guoxiangCN](https://github.com/guoxiangCN) (First-time contributor) +- [jiayang-zheng](https://github.com/jiayang-zheng) +- [jiyfhust](https://github.com/jiyfhust) +- [mikechengwei](https://github.com/mikechengwei) +- [pingandb](https://github.com/pingandb) +- [sashashura](https://github.com/sashashura) +- [sourcelliu](https://github.com/sourcelliu) +- [wxbty](https://github.com/wxbty) From 7f55badded746b858f68959249f5346af0c446a9 Mon Sep 17 00:00:00 2001 From: shichun-0415 <89768198+shichun-0415@users.noreply.github.com> Date: Thu, 15 Dec 2022 15:23:19 +0800 Subject: [PATCH 25/83] translate new features of observability, ticdc, and br --- releases/release-6.5.0.md | 39 +++++++++++++++++++++------------------ 1 file changed, 21 insertions(+), 18 deletions(-) diff --git a/releases/release-6.5.0.md b/releases/release-6.5.0.md index 8886691c12bf7..3a747ef4d90da 100644 --- a/releases/release-6.5.0.md +++ b/releases/release-6.5.0.md @@ -110,13 +110,17 @@ TiDB 6.5.0 is a Long-Term Support Release (LTS). ### Observability -* TiDB Dashboard 在 Kubernetes 环境支持独立 Pod 部署 [#1447](https://github.com/pingcap/tidb-dashboard/issues/1447) @[SabaPing](https://github.com/SabaPing) **tw@shichun-0415 +* TiDB Dashboard can be deployed on Kubernetes as an independent Pod [#1447](https://github.com/pingcap/tidb-dashboard/issues/1447) @[SabaPing](https://github.com/SabaPing) **tw@shichun-0415 - TiDB v6.5.0 且 TiDB Operator v1.4.0 之后,在 Kubernetes 上支持将 TiDB Dashboard 作为独立的 Pod 部署。在 TiDB Operator 环境,可直接访问该 Pod 的 IP 来打开 TiDB Dashboard。 + TiDB v6.5.0 (and later) and TiDB Operator v1.4.0 (and later) support deploying TiDB Dashboard as an independent Pod on Kubernetes. Using TiDB Operator, you can access the IP address of this Pod to start TiDB Dashboard. - 独立部署 TiDB Dashboard 后,用户将获得这些收益:1. 该组件的计算将不会再对 PD 节点有压力,更好的保障集群运行;2. 如果 PD 节点因异常不可访问,也还可以继续使用 Dashboard 进行集群诊断;3. 在开放 TiDB Dashboard 到外网时,不用担心 PD 中的特权端口的权限问题,降低集群的安全风险。 + Independently deploying TiDB Dashboard provides the following benefits: - 具体信息,参考 [TiDB Operator 部署独立的 TiDB Dashboard](https://docs.pingcap.com/zh/tidb-in-kubernetes/dev/get-started#部署独立的-tidb-dashboard) + - The compute work of TiDB Dashboard does not pose pressure on PD nodes. This ensures more stable cluster operation. + - The user can still access TiDB Dashboard for diagnosis even if the PD node is unavailable. + - Accessing TiDB Dashboard in Internet does not involve the privileged interfaces of PD. Therefore, the security risk of the cluster is mitigated. + + For more information, see [Deploy TiDB Dashboard independently in TiDB Operator](https://docs.pingcap.com/tidb-in-kubernetes/dev/get-started#deploy-tidb-dashboard-independently). ### Performance @@ -248,11 +252,11 @@ TiDB 6.5.0 is a Long-Term Support Release (LTS). 
### TiDB data share subscription -* TiCDC 支持输出 storage sink [tiflow#6797](https://github.com/pingcap/tiflow/issues/6797) @[zhaoxinyu](https://github.com/zhaoxinyu) **tw@shichun-0415** +* TiCDC supports replicating changed logs to storage sinks [tiflow#6797](https://github.com/pingcap/tiflow/issues/6797) @[zhaoxinyu](https://github.com/zhaoxinyu) **tw@shichun-0415** - TiCDC 支持将 changed log 输出到 S3/Azure Blob Storage/NFS,以及兼容 S3 协议的存储服务中。Cloud Storage 价格便宜,使用方便。对于不希望使用 Kafka 的用户,可以选择使用 storage sink。 TiCDC 将 changed log 保存到文件,然后发送到 storage 中;消费程序定时从 storage 读取新产生的 changed log files 进行处理。 + TiCDC supports replicating changed logs to Amazon S3, Azure Blob Storage, NFS, and other S3-compatible storage services. Cloud storage is reasonably priced and easy to use. If you do not want to use Kafka, you can use storage sinks. TiCDC saves the changed logs to a file and then sends it to the storage system. From the storage system, the consumer program reads the newly generated changed log files periodically. - Storage sink 支持 changed log 格式位 canal-json/csv,此外 changed log 从 TiCDC 同步到 storage 的延迟可以达到 xx,支持更多信息,请参考[用户文档](https://github.com/pingcap/docs-cn/pull/12151/files)。 + The storage sink supports changed logs in the canal-json and CSV formats. Noticeably, the latency of replicating changed logs from TiCDC to storage can be xx. For more information, see [User document](/ticdc/ticdc-sink-to-cloud-storage.md). * TiCDC supports bidirectional replication across multiple clusters @[asddongmen](https://github.com/asddongmen) **tw@shichun-0415** @@ -260,9 +264,9 @@ TiDB 6.5.0 is a Long-Term Support Release (LTS). For more information, refer to [user document](/ticdc/ticdc-bidirectional-replication.md). -* TiCDC 性能提升 **tw@shichun-0415 +* TiCDC performance improves significantly **tw@shichun-0415 - 在 TiDB 场景测试验证中, TiCDC 的性能得到了比较大提升,单台 TiCDC 节点能处理的最大行变更吞吐可以达到 30K rows/s,同步延迟降低到 10s,即使在常规的 TiKV/TiCDC 滚动升级场景同步延迟也小于 30s;在容灾场景测试中,打开 TiCDC Redo log 和 Sync point 后,吞吐 xx rows/s 时,容灾复制延迟可以保持在 x s。 + In a test scenario of the TiDB cluster, the performance of TiCDC improved significantly. Specifically, the maximum row changes that a single TiCDC can process reaches 30K rows/s, and the replication latency is reduced to 10s. Even in TiKV and TiCDC rolling upgrade, the replication latency is less than 30s. In a disaster recovery (DR) scenario, when the throughput is xx rows/s, the replication latency in DR can be maintained at x s. ### 部署及运维 @@ -274,21 +278,21 @@ TiDB 6.5.0 is a Long-Term Support Release (LTS). ### Backup and restore -* TiDB 快照备份支持断点续传 [#38647](https://github.com/pingcap/tidb/issues/38647) @[Leavrth](https://github.com/Leavrth) **tw@shichun-0415 +* TiDB Backup & Restore supports snapshot checkpoint backup [#38647](https://github.com/pingcap/tidb/issues/38647) @[Leavrth](https://github.com/Leavrth) **tw@shichun-0415 - TiDB 快照备份功能支持断点续传。当 BR 遇到对可恢复的错误时会进行重试,但是超过固定重试次数之后会备份退出。断点续传功能允许对持续更长时间的可恢复故障进行重试恢复,比如几十分钟的的网络故障。 + TiDB snapshot backup supports resuming backup from a checkpoint. When Backup & Restore (BR) encounters a recoverable error, it retries backup. However, BR exits if the retry fails for several times. The checkpoint backup feature allows for longer recoverable failures to be retried, for example, a network failure of tens of minutes. 
- 需要注意的是,如果你没有在 BR 退出后一个小时内完成故障恢复,那么还未备份的快照数据可能会被 GC 机制回收,而造成备份失败。更多信息,请参考[用户文档](/br/br-checkpoint.md)。 + Note that if you do not recover the system from a failure within one hour after BR exits, the snapshot data to be backed up might be recycled by the GC mechanism, causing the backup to fail. For more information, see [User document](/br/br-checkpoint.md). -* PITR 性能大幅提升提升 **tw@shichun-0415 +* PITR performance improved remarkably **tw@shichun-0415 - PITR 恢复的日志恢复阶单台 TiKV 的恢复速度可以达到 xx MB/s,提升了 x 倍,恢复速度可扩展,有效地降低容灾场景的 RTO 指标;容灾场景的 RPO 优化到 5 min,在常规的集群运维,如滚动升级,单 TiKV 故障等场景下,可以达到 RPO = 5 min 目标。 + In the log restore stage, the restore speed of one TiKV can reach xx MB/s, which is x times faster than before. The restore speed is scalable and the RTO in DR scenarios is reduced greatly. The RPO in DR scenarios can be as short as 5 minutes. In normal cluster operation and maintenance (OM), for example, a rolling upgrade is performed or only one TiKV is down, the RPO can be 5 minutes. -* TiKV-BR 工具 GA, 支持 RawKV 的备份和恢复 [#67](https://github.com/tikv/migration/issues/67) @[pingyu](https://github.com/pingyu) @[haojinming](https://github.com/haojinming) **tw@shichun-0415** +* TiKV-BR GA: Supports backing up and restoring RawKV [#67](https://github.com/tikv/migration/issues/67) @[pingyu](https://github.com/pingyu) @[haojinming](https://github.com/haojinming) **tw@shichun-0415** - TiKV-BR 是一个 TiKV 集群的备份和恢复工具。TiKV 可以独立于 TiDB,与 PD 构成 KV 数据库,此时的产品形态为 RawKV。TiKV-BR 工具支持对使用 RawKV 的产品进行备份和恢复,也支持将 TiKV 集群中的数据从 `API V1` 备份为 `API V2` 数据, 以实现 TiKV 集群 [`api-version`](/tikv-configuration-file.md#api-version-从-v610-版本开始引入) 的升级。 + TiKV-BR is a backup and restore tool used in TiKV clusters. TiKV and PD can constitute a KV database when used without TiDB, which is called RawKV. TiKV-BR supports data backup and restore for products that use RawKV. TiKV-BR can also upgrade the [`api-version`](/tikv-configuration-file.md#api-version-new-in-v610) from `API V1` to `API V2` for TiKV cluster. - 更多信息,请参考[用户文档](https://tikv.org/docs/latest/concepts/explore-tikv-features/backup-restore/)。 + For more information, see [User document](https://tikv.org/docs/latest/concepts/explore-tikv-features/backup-restore/). ## Compatibility changes @@ -337,7 +341,6 @@ TiDB 6.5.0 is a Long-Term Support Release (LTS). | TiKV | [`cdc.min-ts-interval`](/tikv-configuration-file.md#min-ts-interval) | Modified | Change the default value from `1s` to `200ms`. | | TiKV | [`memory-use-ratio`](/tikv-configuration-file.md#memory-use-ratio-introduced-new-in-v650) | Newly added | Indicates the ratio of available memory to total system memory in PITR log recovery. | - ### Others - Starting from v6.4.0, the mysql.user table adds two new columns: `Password_reuse_history` and `Password_reuse_time`. From 47d7473cabbefc67a477912c2ca1902fe99a674b Mon Sep 17 00:00:00 2001 From: shichun-0415 <89768198+shichun-0415@users.noreply.github.com> Date: Thu, 15 Dec 2022 16:05:54 +0800 Subject: [PATCH 26/83] translate br and ticdc improvements and bug fixes --- releases/release-6.5.0.md | 24 ++++++++++++------------ 1 file changed, 12 insertions(+), 12 deletions(-) diff --git a/releases/release-6.5.0.md b/releases/release-6.5.0.md index 3a747ef4d90da..8213395949f72 100644 --- a/releases/release-6.5.0.md +++ b/releases/release-6.5.0.md @@ -396,13 +396,13 @@ TiDB 6.5.0 is a Long-Term Support Release (LTS). 
+ Backup & Restore (BR) - - 优化清理备份日志数据是 BR 的内存使用 [#38869](https://github.com/pingcap/tidb/issues/38869) @[Leavrth](https://github.com/Leavrth) - - 提升在恢复时的稳定性,允许 PD leader 切换的情况发生 [#36910](https://github.com/pingcap/tidb/issues/36910) @[MoCuishle28](https://github.com/MoCuishle28) - - 日志备份的 tls 功能使用 openssl 协议,提升 tls 兼容性。[#13867](https://github.com/tikv/tikv/issues/13867) @[YuJuncen](https://github.com/YuJuncen) + - Optimize BR memory usage during the process of cleaning backup log data [#38869](https://github.com/pingcap/tidb/issues/38869) @[Leavrth](https://github.com/Leavrth) + - (dup) Fix the restoration failure issue caused by PD leader switch during the restoration process [#36910](https://github.com/pingcap/tidb/issues/36910) @[MoCuishle28](https://github.com/MoCuishle28) + - Improve TLS compatibility by using the OpenSSL protocol in log backup [#13867](https://github.com/tikv/tikv/issues/13867) @[YuJuncen](https://github.com/YuJuncen) + TiCDC - - 采用并发的方式对数据进行编码,极大提升了同步到 kafka 的吞吐能力 [#7532](https://github.com/pingcap/tiflow/issues/7532) [#7543](https://github.com/pingcap/tiflow/issues/7543) [#7540](https://github.com/pingcap/tiflow/issues/7540) @[3AceShowHand](https://github.com/3AceShowHand) @[sdojjy](https://github.com/sdojjy) + - (dup) Improve the performance of Kafka protocol encoder [#7540](https://github.com/pingcap/tiflow/issues/7540) [#7532](https://github.com/pingcap/tiflow/issues/7532) [#7543](https://github.com/pingcap/tiflow/issues/7543) @[3AceShowHand](https://github.com/3AceShowHand) @[sdojjy](https://github.com/sdojjy) + TiDB Data Migration (DM) @@ -450,17 +450,17 @@ TiDB 6.5.0 is a Long-Term Support Release (LTS). + Backup & Restore (BR) - - 修复清理备份日志数据时错误删除数据导致数据丢失的问题 [#38939](https://github.com/pingcap/tidb/issues/38939) @[Leavrth](https://github.com/Leavrth) - - 修复在大于 6.1 版本关闭 new_collation 设置,仍然恢复失败的问题 [#39150](https://github.com/pingcap/tidb/issues/39150) @[MoCuishle28](https://github.com/MoCuishle28) - - 修复因非 s3 存储的不兼容请求导致备份 panic 的问题 [39545](https://github.com/pingcap/tidb/issues/39545) @[3pointer](https://github.com/3pointer) + - (dup) Fix the issue that when BR deletes log backup data, it mistakenly deletes data that should not be deleted [#38939](https://github.com/pingcap/tidb/issues/38939) @[Leavrth](https://github.com/Leavrth) + - (dup) Fix the issue that restore tasks fail when using old framework for collations in databases or tables [#39150](https://github.com/pingcap/tidb/issues/39150) @[MoCuishle28](https://github.com/MoCuishle28) + - Fix the issue that backup fails because Alibaba Cloud and Huawei Cloud are not fully compatible with Amazon S3 storage [39545](https://github.com/pingcap/tidb/issues/39545) @[3pointer](https://github.com/3pointer) + TiCDC - - 修复 PD leader crash时 CDC 卡住的问题 [#7470](https://github.com/pingcap/tiflow/issues/7470) @[zeminzhou](https://github.com/zeminzhou) - - 修复在执行drop table 时用户快速暂停恢复同步任务导致可能的数据丢失问题 [#7682](https://github.com/pingcap/tiflow/issues/7682) @[asddongmen](https://github.com/asddongmen) - - 兼容上游开启 TiFlash 时版本兼容性问题 [#7744](https://github.com/pingcap/tiflow/issues/7744) @[overvenus](https://github.com/overvenus) - - 修复下游网络出现故障导致cdc 卡住的问题 [#7706](https://github.com/pingcap/tiflow/issues/7706) @[hicqu](https://github.com/hicqu) - - 修复用户快速删除、创建同名同步任务可能导致的数据丢失问题 [#7657](https://github.com/pingcap/tiflow/issues/7657) @[overvenus](https://github.com/overvenus) + - Fix the issue that TiCDC gets stuck when the PD leader crashes [#7470](https://github.com/pingcap/tiflow/issues/7470) @[zeminzhou](https://github.com/zeminzhou) 
+ - (dup) Fix data loss occurred in the scenario of executing DDL statements first and then pausing and resuming the changefeed [#7682](https://github.com/pingcap/tiflow/issues/7682) @[asddongmen](https://github.com/asddongmen) + - Fix the issue that TiCDC mistakenly reports an error when there is a higher version of TiFlash [#7744](https://github.com/pingcap/tiflow/issues/7744) @[overvenus](https://github.com/overvenus) + - (dup) Fix the issue that the sink component gets stuck if the downstream network is unavailable [#7706](https://github.com/pingcap/tiflow/issues/7706) @[hicqu](https://github.com/hicqu) + - Fix the issue that data is lost when a user quickly deletes a replication task and then creates another one with the same task name [#7657](https://github.com/pingcap/tiflow/issues/7657) @[overvenus](https://github.com/overvenus) + TiDB Data Migration (DM) From 06c45fe92786381dae56fa3699e67eed0658fd4d Mon Sep 17 00:00:00 2001 From: Grace Cai Date: Thu, 15 Dec 2022 16:17:56 +0800 Subject: [PATCH 27/83] Apply suggestions from code review --- releases/release-6.5.0.md | 32 +++++--------------------------- 1 file changed, 5 insertions(+), 27 deletions(-) diff --git a/releases/release-6.5.0.md b/releases/release-6.5.0.md index 8213395949f72..d25da77d786a7 100644 --- a/releases/release-6.5.0.md +++ b/releases/release-6.5.0.md @@ -166,22 +166,8 @@ TiDB 6.5.0 is a Long-Term Support Release (LTS). 在数据分析的场景中,通过无过滤条件的 `count(*)` 获取表的实际行数是一个常见操作。 TiFlash 在新版本中优化了 `count(*)` 的改写,自动选择带有“非空”属性的数据类型最短的列进行计数, 可以有效降低 TiFlash 上发生的 I/O 数量,进而提升获取表行数的执行效率。 -### Transaction - -* 功能标题 [#issue号](链接) @[贡献者 GitHub ID](链接) - - 功能描述(需要包含这个功能是什么、在什么场景下对用户有什么价值、怎么用) - - 更多信息,请参考[用户文档](链接)。 - ### Stability -* 功能标题 [#issue号](链接) @[贡献者 GitHub ID](链接) - - 功能描述(需要包含这个功能是什么、在什么场景下对用户有什么价值、怎么用) - - 更多信息,请参考[用户文档](链接)。 - * The global memory control feature is now GA. [#37816](https://github.com/pingcap/tidb/issues/37816) @[wshwsh12](https://github.com/wshwsh12) **tw@TomShawn** Since v6.5.0, the global memory control feature can track the main memory consumption in TiDB. When the global memory consumption reaches the preset value defined by [`tidb_server_memory_limit`](/system-variables.md#tidb_server_memory_limit-new-in-v640), TiDB tries to limit the memory usage by GC or canceling SQL operations, to ensure stability. @@ -268,14 +254,6 @@ TiDB 6.5.0 is a Long-Term Support Release (LTS). In a test scenario of the TiDB cluster, the performance of TiCDC improved significantly. Specifically, the maximum row changes that a single TiCDC can process reaches 30K rows/s, and the replication latency is reduced to 10s. Even in TiKV and TiCDC rolling upgrade, the replication latency is less than 30s. In a disaster recovery (DR) scenario, when the throughput is xx rows/s, the replication latency in DR can be maintained at x s. -### 部署及运维 - -* 功能标题 [#issue号](链接) @[贡献者 GitHub ID](链接) - - 功能描述(需要包含这个功能是什么、在什么场景下对用户有什么价值、怎么用) - - 更多信息,请参考[用户文档](链接)。 - ### Backup and restore * TiDB Backup & Restore supports snapshot checkpoint backup [#38647](https://github.com/pingcap/tidb/issues/38647) @[Leavrth](https://github.com/Leavrth) **tw@shichun-0415 @@ -300,13 +278,13 @@ TiDB 6.5.0 is a Long-Term Support Release (LTS). 
| Variable name | Change type | Description | |--------|------------------------------|------| -| [`tidb_cost_model_version`](/system-variables.md#tidb_cost_model_version-introduced-new-in-v620) | Modified | Change the default value from `1` to `2`, meaning that Cost Model Version 2 is used for index selection and operator selection by default. | -| [`tidb_enable_metadata_lock`](/system-variables.md#tidb_enable_metadata_lock-new-in-v630) | Modified | Change the default value from `OFF` to `ON`, meaning that the metadata lock feature is enabled by default. | +| [`tidb_cost_model_version`](/system-variables.md#tidb_cost_model_version-introduced-new-in-v620) | Modified | Changes the default value from `1` to `2`, meaning that Cost Model Version 2 is used for index selection and operator selection by default. | +| [`tidb_enable_metadata_lock`](/system-variables.md#tidb_enable_metadata_lock-new-in-v630) | Modified | Changes the default value from `OFF` to `ON`, meaning that the metadata lock feature is enabled by default. | | [`tidb_enable_tiflash_read_for_write_stmt`](/system-variables.md#tidb_enable_tiflash_read_for_write_stmt-new-in-v630) | Modified | Takes effect starting from v6.5.0. It controls whether read operations in SQL statements containing `INSERT`, `DELETE`, and `UPDATE` can be pushed down to TiFlash. The default value is `OFF`. | -| [`tidb_ddl_enable_fast_reorg`](/system-variables.md#tidb_ddl_enable_fast_reorg-new-in-v630) | Modified | Change the default value from `OFF` to `ON`, meaning that the acceleration of `ADD INDEX` and `CREATE INDEX` is enabled by default. | +| [`tidb_ddl_enable_fast_reorg`](/system-variables.md#tidb_ddl_enable_fast_reorg-new-in-v630) | Modified | Changes the default value from `OFF` to `ON`, meaning that the acceleration of `ADD INDEX` and `CREATE INDEX` is enabled by default. | | [`tidb_mem_quota_query`](/system-variables.md#tidb_mem_quota_query) | Modified | For versions earlier than TiDB v6.5.0, this variable is used to set the threshold value of memory quota for a query. For TiDB v6.5.0 and later versions, this variable is used to set the threshold value of memory quota for a session. | -| [`tidb_replica_read`](/system-variables.md#tidb_replica_read-new-in-v40) | Modified | Starting from v6.5.0, when this variable is set to`closest-adaptive` and the estimated result of a read request is greater than or equal to [`tidb_adaptive_closest _read_threshold`](/system-variables.md#tidb_adaptive_closest_read_threshold-new-in-v630), the number of TiDB nodes whose `closest-adaptive` configuration takes effect is limited in each availability zone, which is always the same as the number of TiDB nodes in the availability zone with the fewest TiDB nodes, and the other TiDB nodes automatically read from the leader replica. | -| [`tidb_server_memory_limit`](/system-variables.md#tidb_server_memory_limit-new-in-v640) | Modified | Change the default value from `0` to `80%`, meaning that the memory limit for a TiDB instance is 80% of the total memory by default. 
| +| [`tidb_replica_read`](/system-variables.md#tidb_replica_read-new-in-v40) | Modified | Starting from v6.5.0, when this variable is set to`closest-adaptive` and the estimated result of a read request is greater than or equal to [`tidb_adaptive_closest_read_threshold`](/system-variables.md#tidb_adaptive_closest_read_threshold-new-in-v630), the number of TiDB nodes whose `closest-adaptive` configuration takes effect is limited in each availability zone, which is always the same as the number of TiDB nodes in the availability zone with the fewest TiDB nodes, and the other TiDB nodes automatically read from the leader replica. | +| [`tidb_server_memory_limit`](/system-variables.md#tidb_server_memory_limit-new-in-v640) | Modified | Changes the default value from `0` to `80%`, meaning that the memory limit for a TiDB instance is 80% of the total memory by default. | | [`default_password_lifetime`](/system-variables.md#default_password_lifetime-new-in-v650) | Newly added | Sets the global policy for automatic password expiration to require the user to change passwords periodically. The default value `0` indicates that the password never expires. | | [`disconnect_on_expired_password`](/system-variables.md#disconnect_on_expired_password-new-in-v650) | Newly added | This variable is read-only. It indicates whether to disconnect the client connection when the password is expired.| | [`password_history`](/system-variables.md#password_history-new-in-v650) | Newly added | This variable is used to establish a password reuse policy that allows TiDB to limit password reuse based on the number of password changes. The default value `0` means disabling the password reuse policy based on the number of password changes. | From e6f14bccdfca67bec8eb10eb9f98b1e19353dcb2 Mon Sep 17 00:00:00 2001 From: Ran Date: Thu, 15 Dec 2022 18:02:45 +0800 Subject: [PATCH 28/83] Apply suggestions from code review Co-authored-by: Grace Cai --- releases/release-6.5.0.md | 14 +++++++------- 1 file changed, 7 insertions(+), 7 deletions(-) diff --git a/releases/release-6.5.0.md b/releases/release-6.5.0.md index d25da77d786a7..958e4670612a3 100644 --- a/releases/release-6.5.0.md +++ b/releases/release-6.5.0.md @@ -285,29 +285,29 @@ TiDB 6.5.0 is a Long-Term Support Release (LTS). | [`tidb_mem_quota_query`](/system-variables.md#tidb_mem_quota_query) | Modified | For versions earlier than TiDB v6.5.0, this variable is used to set the threshold value of memory quota for a query. For TiDB v6.5.0 and later versions, this variable is used to set the threshold value of memory quota for a session. | | [`tidb_replica_read`](/system-variables.md#tidb_replica_read-new-in-v40) | Modified | Starting from v6.5.0, when this variable is set to`closest-adaptive` and the estimated result of a read request is greater than or equal to [`tidb_adaptive_closest_read_threshold`](/system-variables.md#tidb_adaptive_closest_read_threshold-new-in-v630), the number of TiDB nodes whose `closest-adaptive` configuration takes effect is limited in each availability zone, which is always the same as the number of TiDB nodes in the availability zone with the fewest TiDB nodes, and the other TiDB nodes automatically read from the leader replica. | | [`tidb_server_memory_limit`](/system-variables.md#tidb_server_memory_limit-new-in-v640) | Modified | Changes the default value from `0` to `80%`, meaning that the memory limit for a TiDB instance is 80% of the total memory by default. 
| -| [`default_password_lifetime`](/system-variables.md#default_password_lifetime-new-in-v650) | Newly added | Sets the global policy for automatic password expiration to require the user to change passwords periodically. The default value `0` indicates that the password never expires. | -| [`disconnect_on_expired_password`](/system-variables.md#disconnect_on_expired_password-new-in-v650) | Newly added | This variable is read-only. It indicates whether to disconnect the client connection when the password is expired.| +| [`default_password_lifetime`](/system-variables.md#default_password_lifetime-new-in-v650) | Newly added | Sets the global policy for automatic password expiration to require users to change passwords periodically. The default value `0` indicates that passwords never expire. | +| [`disconnect_on_expired_password`](/system-variables.md#disconnect_on_expired_password-new-in-v650) | Newly added | Indicates whether TiDB disconnects the client connection when the password is expired. This variable is read-only. | | [`password_history`](/system-variables.md#password_history-new-in-v650) | Newly added | This variable is used to establish a password reuse policy that allows TiDB to limit password reuse based on the number of password changes. The default value `0` means disabling the password reuse policy based on the number of password changes. | | [`password_reuse_interval`](/system-variables.md#password_reuse_interval-new-in-v650) | Newly added | This variable is used to establish a password reuse policy that allows TiDB to limit password reuse based on time elapsed. The default value `0` means disabling the password reuse policy based on time elapsed. | | [`tidb_cdc_write_source`](/system-variables.md#tidb_cdc_write_source-new-in-v650) | Newly added | When this variable is set to a value other than 0, data written in this session is considered to be written by TiCDC. This variable can only be modified by TiCDC. Do not manually modify this variable in any case. | | [`tidb_index_merge_intersection_concurrency`](/system-variables.md#tidb_index_merge_intersection_concurrency-new-in-v650) | Newly added | Sets the maximum concurrency for the intersection operations that index merge performs. It is effective only when TiDB accesses partitioned tables in the dynamic pruning mode. | | [`tidb_source_id`](/system-variables.md#tidb_source_id-new-in-v650) | Newly added | This variable is used to configure the different cluster IDs in a [bi-directional replication](/ticdc/manage-ticdc.md#bi-directional-replication) cluster.| | [`tidb_ttl_delete_batch_size`](/system-variables.md#tidb_ttl_delete_batch_size-new-in-v650) | Newly added | This variable is used to set the maximum number of rows that can be deleted in a single `DELETE` transaction in a TTL job. | -| [`tidb_ttl_delete_rate_limit`](/system-variables.md#tidb_ttl_delete_rate_limit-new-in-v650) | Newly added | This variable is used to limit the rate of `DELETE` statements in TTL jobs on each TiDB node. The value represents the maximum number of `DELETE` statements allowed per second in a single node in a TTL job. When this variable is set to `0`, no limit is applied. | +| [`tidb_ttl_delete_rate_limit`](/system-variables.md#tidb_ttl_delete_rate_limit-new-in-v650) | Newly added | This variable is used to limit the maximum number of `DELETE` statements allowed per second in a single node in a TTL job. When this variable is set to `0`, no limit is applied. 
| | [`tidb_ttl_delete_worker_count`](/system-variables.md#tidb_ttl_delete_worker_count-new-in-v650) | Newly added | This variable is used to set the maximum concurrency of TTL jobs on each TiDB node. | -| [`tidb_ttl_job_enable`](/system-variables.md#tidb_ttl_job_enable-new-in-v650) | Newly added | This variable is used to control whether the TTL job is enabled. If it is set to `OFF`, all tables with TTL attributes automatically stops cleaning up expired data. | +| [`tidb_ttl_job_enable`](/system-variables.md#tidb_ttl_job_enable-new-in-v650) | Newly added | This variable is used to control whether to enable TTL jobs. If it is set to `OFF`, all tables with TTL attributes automatically stop cleaning up expired data. | | [`tidb_ttl_job_run_interval`](/system-variables.md#tidb_ttl_job_run_interval-new-in-v650) | Newly added | This variable is used to control the scheduling interval of the TTL job in the background. For example, if the current value is set to `1h0m0s`, each table with TTL attributes will clean up expired data once every hour. | | [`tidb_ttl_job_schedule_window_start_time`](/system-variables.md#tidb_ttl_job_schedule_window_start_time-new-in-v650) | Newly added | This variable is used to control the start time of the scheduling window of the TTL job in the background. When you modify the value of this variable, be cautious that a small window might cause the cleanup of expired data to fail. | | [`tidb_ttl_job_schedule_window_end_time`](/system-variables.md#tidb_ttl_job_schedule_window_end_time-new-in-v650) | Newly added | This variable is used to control the end time of the scheduling window of the TTL job in the background. When you modify the value of this variable, be cautious that a small window might cause the cleanup of expired data to fail. | | [`tidb_ttl_scan_batch_size`](/system-variables.md#tidb_ttl_scan_batch_size-new-in-v650) | Newly added | This variable is used to set the `LIMIT` value of each `SELECT` statement used to scan expired data in a TTL job. | | [`tidb_ttl_scan_worker_count`](/system-variables.md#tidb_ttl_scan_worker_count-new-in-v650) | Newly added | This variable is used to set the maximum concurrency of TTL scan jobs on each TiDB node. | | [`validate_password.check_user_name`](/system-variables.md#validate_passwordcheck_user_name-new-in-v650) | Newly added | A check item in the password complexity check. It checks whether the password matches the username. This variable takes effect only when [`validate_password.enable`](/system-variables.md#validate_passwordenable-new-in-v650) is enabled. The default value is `ON`. | -| [`validate_password.dictionary`](/system-variables.md#validate_passworddictionary-new-in-v650) | Newly added | A check item in the password complexity check. It checks whether the password matches the dictionary. This variable takes effect only when [`validate_password.enable`](/system-variables.md#validate_passwordenable-new-in-v650) is enabled and [`validate_password.policy`](/system-variables.md#validate_passwordpolicy-new-in-v650) is set to `2` (STRONG). The default value is `""`. | +| [`validate_password.dictionary`](/system-variables.md#validate_passworddictionary-new-in-v650) | Newly added | A check item in the password complexity check. It checks whether the password matches any word in the dictionary. 
This variable takes effect only when [`validate_password.enable`](/system-variables.md#validate_passwordenable-new-in-v650) is enabled and [`validate_password.policy`](/system-variables.md#validate_passwordpolicy-new-in-v650) is set to `2` (STRONG). The default value is `""`. | | [`validate_password.enable`](/system-variables.md#validate_passwordenable-new-in-v650) | Newly added | This variable controls whether to perform password complexity check. If this variable is set to `ON`, TiDB performs the password complexity check when you set a password. The default value is `OFF`. | -| [`validate_password.length`](/system-variables.md#validate_passwordlength-new-in-v650) | Newly added | A check item in the password complexity check. It checks whether the password length is sufficient. By default, the minimum password length is `8`. This variable takes effect only when [`validate_password.enable`](/system-variables.md#validate_passwordenable-new-in-v650) is enabled. The default value is `8`. | +| [`validate_password.length`](/system-variables.md#validate_passwordlength-new-in-v650) | Newly added | A check item in the password complexity check. It checks whether the password length is sufficient. By default, the minimum password length is `8`. This variable takes effect only when [`validate_password.enable`](/system-variables.md#validate_passwordenable-new-in-v650) is enabled. | | [`validate_password.mixed_case_count`](/system-variables.md#validate_passwordmixed_case_count-new-in-v650) | Newly added | A check item in the password complexity check. It checks whether the password contains sufficient uppercase and lowercase letters. This variable takes effect only when [`validate_password.enable`](/system-variables.md#validate_passwordenable-new-in-v650) is enabled and [`validate_password.policy`](/system-variables.md#validate_passwordpolicy-new-in-v650) is set to `1` (MEDIUM) or larger. The default value is `1`. | | [`validate_password.number_count`](/system-variables.md#validate_passwordnumber_count-new-in-v650) | Newly added | A check item in the password complexity check. It checks whether the password contains sufficient numbers. This variable takes effect only when [`validate_password.enable`](/system-variables.md#password_reuse_interval-new-in-v650) is enabled and [`validate_password.policy`](/system-variables.md#validate_passwordpolicy-new-in-v650) is set to `1` (MEDIUM) or larger. The default value is `1`. | -| [`validate_password.policy`](/system-variables.md#validate_passwordpolicy-new-in-v650) | Newly added | This variable controls the policy for the password complexity check. This variable takes effect only when [`validate_password.enable`](/system-variables.md#password_reuse_interval-new-in-v650) is enabled. The default value is `1`. | +| [`validate_password.policy`](/system-variables.md#validate_passwordpolicy-new-in-v650) | Newly added | This variable controls the policy for the password complexity check. The value can be `0`, `1`, or `2` (corresponds to LOW, MEDIUM, or STRONG). This variable takes effect only when [`validate_password.enable`](/system-variables.md#password_reuse_interval-new-in-v650) is enabled. The default value is `1`. | | [`validate_password.special_char_count`](/system-variables.md#validate_passwordspecial_char_count-new-in-v650) | Newly added | A check item in the password complexity check. It checks whether the password contains sufficient special characters. 
This variable takes effect only when [`validate_password.enable`](/system-variables.md#password_reuse_interval-new-in-v650) is enabled and [`validate_password.policy`](/system-variables.md#validate_passwordpolicy-new-in-v650) is set to `1` (MEDIUM) or larger. The default value is `1`. | ### Configuration file parameters From d00204b84438b1458ec2de6b4f10236f486df681 Mon Sep 17 00:00:00 2001 From: Grace Cai Date: Thu, 15 Dec 2022 18:24:57 +0800 Subject: [PATCH 29/83] Apply suggestions from code review Co-authored-by: Ran --- releases/release-6.5.0.md | 12 +++++++----- 1 file changed, 7 insertions(+), 5 deletions(-) diff --git a/releases/release-6.5.0.md b/releases/release-6.5.0.md index 958e4670612a3..346eb6c30a8bc 100644 --- a/releases/release-6.5.0.md +++ b/releases/release-6.5.0.md @@ -278,6 +278,7 @@ TiDB 6.5.0 is a Long-Term Support Release (LTS). | Variable name | Change type | Description | |--------|------------------------------|------| +|[`tidb_enable_amend_pessimistic_txn`](/system-variables.md#tidb_enable_amend_pessimistic_txn-new-in-v407)| Deprecated | Starting from v6.5.0, this variable is deprecated, and TiDB uses the [Metadata Lock](/metadata-lock.md) feature by default to avoid the `Information schema is changed` error. | | [`tidb_cost_model_version`](/system-variables.md#tidb_cost_model_version-introduced-new-in-v620) | Modified | Changes the default value from `1` to `2`, meaning that Cost Model Version 2 is used for index selection and operator selection by default. | | [`tidb_enable_metadata_lock`](/system-variables.md#tidb_enable_metadata_lock-new-in-v630) | Modified | Changes the default value from `OFF` to `ON`, meaning that the metadata lock feature is enabled by default. | | [`tidb_enable_tiflash_read_for_write_stmt`](/system-variables.md#tidb_enable_tiflash_read_for_write_stmt-new-in-v630) | Modified | Takes effect starting from v6.5.0. It controls whether read operations in SQL statements containing `INSERT`, `DELETE`, and `UPDATE` can be pushed down to TiFlash. The default value is `OFF`. | @@ -314,19 +315,20 @@ TiDB 6.5.0 is a Long-Term Support Release (LTS). | Configuration file | Configuration parameter | Change type | Description | | -------- | -------- | -------- | -------- | -| TiDB | [`server-memory-quota`](/tidb-configuration-file.md#server-memory-quota-new-in-v409) | Deprecated | Since v6.5.0, this configuration item is deprecated. Instead, use the system variable [`tidb_server_memory_limit`](/system-variables.md#tidb_server_memory_limit-new-in-v640) for setting. | +| TiDB | [`server-memory-quota`](/tidb-configuration-file.md#server-memory-quota-new-in-v409) | Deprecated | Since v6.5.0, this configuration item is deprecated. Instead, use the system variable [`tidb_server_memory_limit`](/system-variables.md#tidb_server_memory_limit-new-in-v640) to manage memory globally. | | TiDB | [`disconnect-on-expired-password`](/tidb-configuration-file.md#disconnect-on-expired-password-new-in-v650) | Newly added | Determines whether TiDB disconnects the client connection when the password is expired. The default value is `true`, which means the client connection is disconnected when the password is expired. | +| TiKV | `raw-min-ts-outlier-threshold` | Deleted | Since v6.4.0, this configuration item was deprecated. Since v6.5.0, this configuration item is deleted. | | TiKV | [`cdc.min-ts-interval`](/tikv-configuration-file.md#min-ts-interval) | Modified | Change the default value from `1s` to `200ms`. 
| | TiKV | [`memory-use-ratio`](/tikv-configuration-file.md#memory-use-ratio-introduced-new-in-v650) | Newly added | Indicates the ratio of available memory to total system memory in PITR log recovery. | ### Others -- Starting from v6.4.0, the mysql.user table adds two new columns: `Password_reuse_history` and `Password_reuse_time`. -- The [index acceleration](/system-variables.md#tidb_ddl_enable_fast_reorg-new-in-v630) feature is enabled by default and is not compatible with the [PITR (Point-in-time recovery)](/br/br-pitr-guide.md) feature. When using the index acceleration feature, you need to make sure that no PITR backup task is running in the background, otherwise unexpected results might occur. For more information, see [tidb_ddl_enable_fast_reorg](/system-variables.md#tidb_ddl_enable_fast_reorg-new-in-v630). +- Starting from v6.5.0, the `mysql.user` table adds two new columns: `Password_reuse_history` and `Password_reuse_time`. +- The [index acceleration](/system-variables.md#tidb_ddl_enable_fast_reorg-new-in-v630) feature is enabled by default and is not compatible with the [PITR (Point-in-time recovery)](/br/br-pitr-guide.md) feature. When using the index acceleration feature, you need to make sure that no PITR backup task is running in the background; otherwise, unexpected results might occur. For more information, see [tidb_ddl_enable_fast_reorg](/system-variables.md#tidb_ddl_enable_fast_reorg-new-in-v630). -## 废弃功能 +## Deprecated feature -即将于 v6.6.0 版本废弃 v4.0.7 版本引入的 Amending Transaction 机制,并使用[元数据锁](/metadata-lock.md) 替代。 +Starting from v6.5.0, the [`AMEND TRANSACTION`](/system-variables.md#tidb_enable_amend_pessimistic_txn-new-in-v407) mechanism introduced in v4.0.7 is deprecated and replaced by [Metadata Lock](/metadata-lock.md). ## Improvements From 1143cc0a1f9d3642fd2acdf3e49ee069077975d4 Mon Sep 17 00:00:00 2001 From: shichun-0415 <89768198+shichun-0415@users.noreply.github.com> Date: Thu, 15 Dec 2022 18:34:11 +0800 Subject: [PATCH 30/83] Apply suggestions from code review Co-authored-by: xixirangrang --- releases/release-6.5.0.md | 6 +++--- 1 file changed, 3 insertions(+), 3 deletions(-) diff --git a/releases/release-6.5.0.md b/releases/release-6.5.0.md index 346eb6c30a8bc..7913fba935175 100644 --- a/releases/release-6.5.0.md +++ b/releases/release-6.5.0.md @@ -242,7 +242,7 @@ TiDB 6.5.0 is a Long-Term Support Release (LTS). TiCDC supports replicating changed logs to Amazon S3, Azure Blob Storage, NFS, and other S3-compatible storage services. Cloud storage is reasonably priced and easy to use. If you do not want to use Kafka, you can use storage sinks. TiCDC saves the changed logs to a file and then sends it to the storage system. From the storage system, the consumer program reads the newly generated changed log files periodically. - The storage sink supports changed logs in the canal-json and CSV formats. Noticeably, the latency of replicating changed logs from TiCDC to storage can be xx. For more information, see [User document](/ticdc/ticdc-sink-to-cloud-storage.md). + The storage sink supports changed logs in the canal-json and CSV formats. Noticeably, the latency of replicating changed logs from TiCDC to storage can be as short as xx. For more information, see [User document](/ticdc/ticdc-sink-to-cloud-storage.md). * TiCDC supports bidirectional replication across multiple clusters @[asddongmen](https://github.com/asddongmen) **tw@shichun-0415** @@ -252,7 +252,7 @@ TiDB 6.5.0 is a Long-Term Support Release (LTS). 
* TiCDC performance improves significantly **tw@shichun-0415 - In a test scenario of the TiDB cluster, the performance of TiCDC improved significantly. Specifically, the maximum row changes that a single TiCDC can process reaches 30K rows/s, and the replication latency is reduced to 10s. Even in TiKV and TiCDC rolling upgrade, the replication latency is less than 30s. In a disaster recovery (DR) scenario, when the throughput is xx rows/s, the replication latency in DR can be maintained at x s. + In a test scenario of the TiDB cluster, the performance of TiCDC has improved significantly. Specifically, the maximum row changes that a single TiCDC can process reaches 30K rows/s, and the replication latency is reduced to 10s. Even during TiKV and TiCDC rolling upgrade, the replication latency is less than 30s. In a disaster recovery (DR) scenario, when the throughput is xx rows/s, the replication latency in DR can be maintained at x s. ### Backup and restore @@ -438,7 +438,7 @@ Starting from v6.5.0, the [`AMEND TRANSACTION`](/system-variables.md#tidb_enable - Fix the issue that TiCDC gets stuck when the PD leader crashes [#7470](https://github.com/pingcap/tiflow/issues/7470) @[zeminzhou](https://github.com/zeminzhou) - (dup) Fix data loss occurred in the scenario of executing DDL statements first and then pausing and resuming the changefeed [#7682](https://github.com/pingcap/tiflow/issues/7682) @[asddongmen](https://github.com/asddongmen) - - Fix the issue that TiCDC mistakenly reports an error when there is a higher version of TiFlash [#7744](https://github.com/pingcap/tiflow/issues/7744) @[overvenus](https://github.com/overvenus) + - Fix the issue that TiCDC mistakenly reports an error when there is a later version of TiFlash [#7744](https://github.com/pingcap/tiflow/issues/7744) @[overvenus](https://github.com/overvenus) - (dup) Fix the issue that the sink component gets stuck if the downstream network is unavailable [#7706](https://github.com/pingcap/tiflow/issues/7706) @[hicqu](https://github.com/hicqu) - Fix the issue that data is lost when a user quickly deletes a replication task and then creates another one with the same task name [#7657](https://github.com/pingcap/tiflow/issues/7657) @[overvenus](https://github.com/overvenus) From d77d75a9e0bc3af6786e6e9b6bd2a64eaa5062d5 Mon Sep 17 00:00:00 2001 From: xixirangrang Date: Thu, 15 Dec 2022 18:51:46 +0800 Subject: [PATCH 31/83] Apply suggestions from code review Co-authored-by: shichun-0415 <89768198+shichun-0415@users.noreply.github.com> --- releases/release-6.5.0.md | 24 ++++++++++++------------ 1 file changed, 12 insertions(+), 12 deletions(-) diff --git a/releases/release-6.5.0.md b/releases/release-6.5.0.md index 7913fba935175..8d5218bc4e3c2 100644 --- a/releases/release-6.5.0.md +++ b/releases/release-6.5.0.md @@ -206,11 +206,11 @@ TiDB 6.5.0 is a Long-Term Support Release (LTS). ### Data migration -* Support exporting and importing SQL and CSV files in the following compression formats: gzip, snappy and zstd [#38514](https://github.com/pingcap/tidb/issues/38514) @[lichunzhu](https://github.com/lichunzhu) **tw@hfxsd** +* Support exporting and importing SQL and CSV files in gzip, snappy, and zstd compression formats [#38514](https://github.com/pingcap/tidb/issues/38514) @[lichunzhu](https://github.com/lichunzhu) **tw@hfxsd** Dumpling supports exporting data to compressed SQL and CSV files in the following compression formats: gzip, snappy, and zstd. TiDB Lightning also supports importing compressed files in these formats. 
- Previously, you had to provide large storage space for exporting or importing data to store the uncompressed CSV and SQL files, resulting in high storage costs. With the release of this feature, you can greatly reduce your storage costs by compressing the storage space. + Previously, you had to provide large storage space for exporting or importing data to store CSV and SQL files, resulting in high storage costs. With the release of this feature, you can greatly reduce your storage costs by compressing the data files. For more information, see [User document](/dumpling-overview.md#improve-export-efficiency-through-concurrency). @@ -218,11 +218,11 @@ TiDB 6.5.0 is a Long-Term Support Release (LTS). TiDB can filter out binlog events of the schemas and tables that are not in the migration task, thus improving the parsing efficiency and stability. This policy takes effect by default in v6.5.0. No additional configuration is required. - Previously, even if only a few tables were migrated, the entire binlog file upstream had to be parsed. The binlog events of the tables in the binlog file that did not need to be migrated still had to be parsed, which was not efficient. Meanwhile, if the binlog events of the schemas and tables that are not in the migration task do not support parsing, the task will fail. By only parsing the binlog events of the tables in the migration task, the binlog parsing efficiency can be greatly improved and the task stability can be enhanced. + Previously, even if only a few tables were migrated, the entire binlog file upstream had to be parsed. The binlog events of the tables in the binlog file that did not need to be migrated still had to be parsed, which was not efficient. Meanwhile, if these binlog events do not support parsing, the task will fail. By only parsing the binlog events of the tables in the migration task, the binlog parsing efficiency can be greatly improved and the task stability can be enhanced. -* The disk quota in TiDB Lightning is GA. It can prevent TiDB Lightning tasks from overwriting local disks [#446](https://github.com/pingcap/tidb-lightning/issues/446) @[buchuitoudegou](https://github.com/buchuitoudegou) **tw@hfxsd** +* Disk quota in TiDB Lightning is GA [#446](https://github.com/pingcap/tidb-lightning/issues/446) @[buchuitoudegou](https://github.com/buchuitoudegou) **tw@hfxsd** - You can configure disk quota for TiDB Lightning. When there is not enough disk quota, TiDB Lightning pauses the process of reading the source data and writing temporary files, and writes the sorted key-values to TiKV first, and then continues the import process after TiDB Lightning deletes the local temporary files. + You can configure disk quota for TiDB Lightning. When there is not enough disk quota, TiDB Lightning stops reading source data and writing temporary files. Instead, it writes the sorted key-values to TiKV first, and then continues the import process after TiDB Lightning deletes the local temporary files. Previously, when TiDB Lightning imported data using physical mode, it would create a large number of temporary files on the local disk for encoding, sorting, and splitting the raw data. When your local disk ran out of space, TiDB Lightning would exit with an error due to failing to write to the file. With this feature, TiDB Lightning tasks can avoid overwriting the local disk. @@ -230,9 +230,9 @@ TiDB 6.5.0 is a Long-Term Support Release (LTS). 
* Continuous data validation in DM is GA [#4426](https://github.com/pingcap/tiflow/issues/4426) @[D3Hunter](https://github.com/D3Hunter) **tw@hfxsd** - In the process of migrating incremental data from upstream to downstream databases, there is a small probability that the flow of data causes errors or data loss. In scenarios that rely on strong data consistency, such as credit and securities businesses, you can perform a full volume checksum on the data after the data migration is complete to ensure data consistency. However, in some scenarios with incremental replication, upstream and downstream writes are continuous and uninterrupted because the upstream and downstream data is constantly changing, making it difficult to perform consistency checks on all the data in the tables. + In the process of migrating incremental data from upstream to downstream databases, there is a small probability that data flow might cause errors or data loss. In scenarios where strong data consistency is required, such as credit and securities businesses, you can perform a full volume checksum on the data after migration to ensure data consistency. However, in some incremental replication scenarios, upstream and downstream writes are continuous and uninterrupted because the upstream and downstream data is constantly changing, making it difficult to perform consistency checks on all data. - Previously, you needed to interrupt the business to do the full data verification, which would affect your business. Now, with this feature, you can perform incremental data verification without interrupting the business. + Previously, you needed to interrupt the business to validate the full data, which would affect your business. Now, with this feature, you can perform incremental data validation without interrupting the business. For more information, see [User document](/dm/dm-continuous-data-validation.md). 
@@ -358,10 +358,10 @@ Starting from v6.5.0, the [`AMEND TRANSACTION`](/system-variables.md#tidb_enable + PD - Optimize the granularity of locks to reduce lock contention and improve the handling capability of heartbeats under high concurrency [#5586](https://github.com/tikv/pd/issues/5586) @[rleungx](https://github.com/rleungx) - - Optimize scheduler performance for large-scale clusters and improve production speed of the scheduling policy [#5473](https://github.com/tikv/pd/issues/5473) @[bufferflies](https://github.com/bufferflies) + - Optimize scheduler performance for large-scale clusters and accelerate the production of scheduling policies [#5473](https://github.com/tikv/pd/issues/5473) @[bufferflies](https://github.com/bufferflies) - Improve the speed of loading Regions [#5606](https://github.com/tikv/pd/issues/5606) @[rleungx](https://github.com/rleungx) - - Improve the performance of handling Region heartbeats [#5648](https://github.com/tikv/pd/issues/5648)@[rleungx](https://github.com/rleungx) - - Add the function to automatically GC the tombstone store [#5348](https://github.com/tikv/pd/issues/5348) @[nolouch](https://github.com/nolouch) + - Reduce unnecessary overhead by optimized handling of Region heartbeats [#5648](https://github.com/tikv/pd/issues/5648)@[rleungx](https://github.com/rleungx) + - Add the feature of automatically garbage collecting tombstone stores [#5348](https://github.com/tikv/pd/issues/5348) @[nolouch](https://github.com/nolouch) + TiFlash @@ -423,8 +423,8 @@ Starting from v6.5.0, the [`AMEND TRANSACTION`](/system-variables.md#tidb_enable + TiFlash - - Fix the issue that minor compaction does not work as expected after TiFlash restarts [#6159](https://github.com/pingcap/tiflash/issues/6159) @[lidezhu](https://github.com/lidezhu) - - Fix the issue that TiFlash Open File OPS is too high [#6345](https://github.com/pingcap/tiflash/issues/6345) @[JaySon-Huang](https://github.com/JaySon-Huang) + - Fix the issue that column files in the delta layer cannot be compacted after restarting TiFlash [#6159](https://github.com/pingcap/tiflash/issues/6159) @[lidezhu](https://github.com/lidezhu) + - Fix the issue that TiFlash File Open OPS is too high [#6345](https://github.com/pingcap/tiflash/issues/6345) @[JaySon-Huang](https://github.com/JaySon-Huang) + Tools From d97fdd307881e480a2bd9b7fa7b85d6620c8dc33 Mon Sep 17 00:00:00 2001 From: shichun-0415 <89768198+shichun-0415@users.noreply.github.com> Date: Thu, 15 Dec 2022 19:05:06 +0800 Subject: [PATCH 32/83] add the missing # before issue --- releases/release-6.5.0.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/releases/release-6.5.0.md b/releases/release-6.5.0.md index 8d5218bc4e3c2..70aba06160aa7 100644 --- a/releases/release-6.5.0.md +++ b/releases/release-6.5.0.md @@ -432,7 +432,7 @@ Starting from v6.5.0, the [`AMEND TRANSACTION`](/system-variables.md#tidb_enable - (dup) Fix the issue that when BR deletes log backup data, it mistakenly deletes data that should not be deleted [#38939](https://github.com/pingcap/tidb/issues/38939) @[Leavrth](https://github.com/Leavrth) - (dup) Fix the issue that restore tasks fail when using old framework for collations in databases or tables [#39150](https://github.com/pingcap/tidb/issues/39150) @[MoCuishle28](https://github.com/MoCuishle28) - - Fix the issue that backup fails because Alibaba Cloud and Huawei Cloud are not fully compatible with Amazon S3 storage [39545](https://github.com/pingcap/tidb/issues/39545) @[3pointer](https://github.com/3pointer) + - Fix the 
issue that backup fails because Alibaba Cloud and Huawei Cloud are not fully compatible with Amazon S3 storage [#39545](https://github.com/pingcap/tidb/issues/39545) @[3pointer](https://github.com/3pointer) + TiCDC From e298dd8112d766317562d67623a0b0d863d30454 Mon Sep 17 00:00:00 2001 From: Aolin Date: Thu, 15 Dec 2022 20:20:02 +0800 Subject: [PATCH 33/83] Apply suggestions from code review Co-authored-by: TomShawn <41534398+TomShawn@users.noreply.github.com> --- releases/release-6.5.0.md | 1 - 1 file changed, 1 deletion(-) diff --git a/releases/release-6.5.0.md b/releases/release-6.5.0.md index 70aba06160aa7..f99dd0570c55a 100644 --- a/releases/release-6.5.0.md +++ b/releases/release-6.5.0.md @@ -338,7 +338,6 @@ Starting from v6.5.0, the [`AMEND TRANSACTION`](/system-variables.md#tidb_enable + TiKV - - The default value of `cdc.min-ts-interval` has been changed from `1s` to `200ms` to reduce CDC latency [#12840](https://github.com/tikv/tikv/issues/12840) @[hicqu](https://github.com/hicqu) - Stop writing to Raft Engine when there is insufficient space to avoid exhausting disk space [#13642](https://github.com/tikv/tikv/issues/13642) @[jiayang-zheng](https://github.com/jiayang-zheng) - Support pushing down the `json_valid` function to TiKV [#13571](https://github.com/tikv/tikv/issues/13571) @[lizhenhuan](https://github.com/lizhenhuan) - Support backing up multiple ranges of data in a single backup request [#13701](https://github.com/tikv/tikv/issues/13701) @[Leavrth](https://github.com/Leavrth) From aa5283ef12815bf0b031991cb89b19a9a33879b7 Mon Sep 17 00:00:00 2001 From: qiancai Date: Fri, 16 Dec 2022 00:27:52 +0800 Subject: [PATCH 34/83] translate key features --- releases/release-6.5.0.md | 28 +++++++++++++++++----------- 1 file changed, 17 insertions(+), 11 deletions(-) diff --git a/releases/release-6.5.0.md b/releases/release-6.5.0.md index f99dd0570c55a..33b0993172599 100644 --- a/releases/release-6.5.0.md +++ b/releases/release-6.5.0.md @@ -12,17 +12,23 @@ Quick access: [Quick start](https://docs.pingcap.com/tidb/v6.5/quick-start-with- TiDB 6.5.0 is a Long-Term Support Release (LTS). -相比于前一个 LTS (即 6.1.0 版本),6.5.0 版本包含 [6.2.0-DMR](/releases/release-6.2.0.md)、[6.3.0-DMR](/releases/release-6.3.0.md)、[6.4.0-DMR](/releases/release-6.4.0.md) 中已发布的新功能、提升改进和错误修复,并引入了以下关键特性: - -- 优化器代价模型 V2 GA -- TiDB 全局内存控制 GA -- 全局 hint 干预视图内查询的计划生成 -- 满足密码合规审计需求 [密码管理](/password-management.md) -- TiDB 添加索引的速度提升为原来的 10 倍 -- Flashback Cluster 功能兼容 TiCDC 和 PiTR -- 支持通过 `INSERT INTO SELECT` 语句[保存 TiFlash 查询结果](/tiflash/tiflash-results-materialization.md)(实验特性) -- 支持下推 JSON 抽取函数下推至 TiFlash -- 进一步增强索引合并[INDEX MERGE](/glossary.md#index-merge)功能 +Compared with the previous LTS 6.1.0, 6.5.0 not only includes new features, improvements, and bug fixes released in [6.2.0-DMR](/releases/release-6.2.0.md), [6.3.0-DMR](/releases/release-6.3.0.md), [6.4.0-DMR](/releases/release -6.4.0.md), but also introduces the following key features and improvements: + +- Enable [index acceleration](/system-variables.md#tidb_ddl_enable_fast_reorg-new-in-v630) by default, which improves the performance of adding indexes by 10 times compared with v6.1. +- Support TiDB global memory control via [`tidb_server_memory_limit`](/system-variables.md#tidb_server_memory_limit-new-in-v640). +- Support [`FLASHBACK CLUSTER TO TIMESTAMP`](/sql-statements/sql-statement-flashback-to-timestamp.md), compatible with TiCDC and PITR. 
+- Support a high-performance and globally monotonic [`AUTO_INCREMENT`](/auto-increment.md#mysql-compatible-mode) column attribute, compatible with MySQL. +- Enhance the [optimizer cost model](/cost-model.md#cost-model-version-2) and further enhance the [INDEX MERGE](/glossary.md#index-merge) feature. +- Support [pushing down the `JSON_EXTRACT()` function](/tiflash/tiflash-supported-pushdown-calculations.md) to TiFlash. +- TiDB Lightning and Dumpling support [importing](tidb-lightning/tidb-lightning-data-source.md) and [exporting](/dumpling-overview.md#improve-export-efficiency-through-concurrency) compressed SQL and CSV files files. +- TiDB Data Migration (DM) supports [continuous data validation](/dm/dm-continuous-data-validation.md). +- Support [password management](/password-management.md) policies that meet password compliance auditing requirements. +- Provide row-level [Time to live (TTL)](/time-to-live.md) to manage data lifecycle (experimental). +- Supports [non-transactional DML statements](/non-transactional-dml.md) to improve cluster stability. +- TiCDC supports [replicating changed logs to storage services](ticdc/ticdc-sink-to-cloud-storage.md) such as Amazon S3, Azure Blob Storage, and NFS. +- Support [bidirectional replication](/ticdc/ticdc-bidirectional-replication.md) between two or more TiDB clusters. +- Improve the recovery performance of [PITR](/br-pitr-guide.md#carry-pitr) by x times and reduce RPO to x min. +- Improve the TiCDC throughput of [replicating data to Kafka](/replicate-data-to-kafka.md) by x times and reduces replication latency to x seconds. ## New features From bff3a26675eeced29fbf59cd3f40221b4ddd8977 Mon Sep 17 00:00:00 2001 From: Grace Cai Date: Fri, 16 Dec 2022 00:35:07 +0800 Subject: [PATCH 35/83] Apply suggestions from code review --- releases/release-6.5.0.md | 8 ++++---- 1 file changed, 4 insertions(+), 4 deletions(-) diff --git a/releases/release-6.5.0.md b/releases/release-6.5.0.md index 33b0993172599..1879118575226 100644 --- a/releases/release-6.5.0.md +++ b/releases/release-6.5.0.md @@ -12,7 +12,7 @@ Quick access: [Quick start](https://docs.pingcap.com/tidb/v6.5/quick-start-with- TiDB 6.5.0 is a Long-Term Support Release (LTS). -Compared with the previous LTS 6.1.0, 6.5.0 not only includes new features, improvements, and bug fixes released in [6.2.0-DMR](/releases/release-6.2.0.md), [6.3.0-DMR](/releases/release-6.3.0.md), [6.4.0-DMR](/releases/release -6.4.0.md), but also introduces the following key features and improvements: +Compared with the previous LTS 6.1.0, 6.5.0 not only includes new features, improvements, and bug fixes released in [6.2.0-DMR](/releases/release-6.2.0.md), [6.3.0-DMR](/releases/release-6.3.0.md), [6.4.0-DMR](/releases/release-6.4.0.md), but also introduces the following key features and improvements: - Enable [index acceleration](/system-variables.md#tidb_ddl_enable_fast_reorg-new-in-v630) by default, which improves the performance of adding indexes by 10 times compared with v6.1. - Support TiDB global memory control via [`tidb_server_memory_limit`](/system-variables.md#tidb_server_memory_limit-new-in-v640). @@ -20,14 +20,14 @@ Compared with the previous LTS 6.1.0, 6.5.0 not only includes new features, impr - Support a high-performance and globally monotonic [`AUTO_INCREMENT`](/auto-increment.md#mysql-compatible-mode) column attribute, compatible with MySQL. - Enhance the [optimizer cost model](/cost-model.md#cost-model-version-2) and further enhance the [INDEX MERGE](/glossary.md#index-merge) feature. 
- Support [pushing down the `JSON_EXTRACT()` function](/tiflash/tiflash-supported-pushdown-calculations.md) to TiFlash. -- TiDB Lightning and Dumpling support [importing](tidb-lightning/tidb-lightning-data-source.md) and [exporting](/dumpling-overview.md#improve-export-efficiency-through-concurrency) compressed SQL and CSV files files. +- TiDB Lightning and Dumpling support [importing](tidb-lightning/tidb-lightning-data-source.md) and [exporting](/dumpling-overview.md#improve-export-efficiency-through-concurrency) compressed SQL and CSV files. - TiDB Data Migration (DM) supports [continuous data validation](/dm/dm-continuous-data-validation.md). - Support [password management](/password-management.md) policies that meet password compliance auditing requirements. - Provide row-level [Time to live (TTL)](/time-to-live.md) to manage data lifecycle (experimental). -- Supports [non-transactional DML statements](/non-transactional-dml.md) to improve cluster stability. +- Support [non-transactional DML statements](/non-transactional-dml.md) to improve cluster stability. - TiCDC supports [replicating changed logs to storage services](ticdc/ticdc-sink-to-cloud-storage.md) such as Amazon S3, Azure Blob Storage, and NFS. - Support [bidirectional replication](/ticdc/ticdc-bidirectional-replication.md) between two or more TiDB clusters. -- Improve the recovery performance of [PITR](/br-pitr-guide.md#carry-pitr) by x times and reduce RPO to x min. +- Improve the recovery performance of [PITR](/br-pitr-guide.md#carry-pitr) by x times and reduce RPO to x minutes. - Improve the TiCDC throughput of [replicating data to Kafka](/replicate-data-to-kafka.md) by x times and reduces replication latency to x seconds. ## New features From e4296f545f472ad17cb50ef45897ec22c8a09455 Mon Sep 17 00:00:00 2001 From: TomShawn <41534398+TomShawn@users.noreply.github.com> Date: Fri, 16 Dec 2022 09:44:59 +0800 Subject: [PATCH 36/83] Apply suggestions from code review Co-authored-by: Aolin --- releases/release-6.5.0.md | 32 ++++++++++++++++---------------- 1 file changed, 16 insertions(+), 16 deletions(-) diff --git a/releases/release-6.5.0.md b/releases/release-6.5.0.md index 1879118575226..e7f61546db5cd 100644 --- a/releases/release-6.5.0.md +++ b/releases/release-6.5.0.md @@ -130,11 +130,11 @@ Compared with the previous LTS 6.1.0, 6.5.0 not only includes new features, impr ### Performance -* Further enhance the [INDEX MERGE](/glossary.md#index-merge) feature. [#39333](https://github.com/pingcap/tidb/issues/39333) @[guo-shaoge](https://github.com/guo-shaoge) @[time-and-fate](https://github.com/time-and-fate) @[hailanwhu](https://github.com/hailanwhu) **tw@TomShawn** +* [INDEX MERGE](/glossary.md#index-merge) supports conjunctive normal form (expressions connected by `AND`) [#39333](https://github.com/pingcap/tidb/issues/39333) @[guo-shaoge](https://github.com/guo-shaoge) @[time-and-fate](https://github.com/time-and-fate) @[hailanwhu](https://github.com/hailanwhu) **tw@TomShawn** - Before v6.5.0, TiDB only supported using index merge for the filter conditions connected by `OR`. Starting from v6.5.0, TiDB has supported using index merge for filter conditions connected by`AND` in the `WHERE` clause. In this way, the index merge of TiDB can now cover more general combinations of query filter conditions and is no longer limited to Union (`OR`) relationship. The current v6.5.0 version only supports index merge under `OR` conditions as automatically selected by the optimizer. 
To enable index merge for `AND` conditions, you need to use the [`USE_INDEX_MERGE`](/optimizer-hints.md#use_index_merget1_name-idx1_name--idx2_name-) hint. + Before v6.5.0, TiDB only supported using index merge for the filter conditions connected by `OR`. Starting from v6.5.0, TiDB has supported using index merge for filter conditions connected by `AND` in the `WHERE` clause. In this way, the index merge of TiDB can now cover more general combinations of query filter conditions and is no longer limited to union (`OR`) relationship. The current v6.5.0 version only supports index merge under `OR` conditions as automatically selected by the optimizer. To enable index merge for `AND` conditions, you need to use the [`USE_INDEX_MERGE`](/optimizer-hints.md#use_index_merget1_name-idx1_name--idx2_name-) hint. - For more details about index merge, see [v5.4 release notes](/release-5.4.0#performance) and [Explain Index Merge](/explain-index-merge.md). + For more details about index merge, see [v5.4.0 Release Notes](/release-5.4.0#performance) and [Explain Index Merge](/explain-index-merge.md). * Support pushing down the following [JSON functions](/tiflash/tiflash-supported-pushdown-calculations.md) to TiFlash [#39458](https://github.com/pingcap/tidb/issues/39458) @[yibin87](https://github.com/yibin87) **tw@qiancai** @@ -168,21 +168,21 @@ Compared with the previous LTS 6.1.0, 6.5.0 not only includes new features, impr For more information, see [User document](/cost-model.md#cost-model-version-2). -* TiFlash 对获取表行数的操作进行针对优化 [#37165](https://github.com/pingcap/tidb/issues/37165) @[elsa0520](https://github.com/elsa0520) +* TiFlash optimizes the operations of getting the number of table rows [#37165](https://github.com/pingcap/tidb/issues/37165) @[elsa0520](https://github.com/elsa0520) - 在数据分析的场景中,通过无过滤条件的 `count(*)` 获取表的实际行数是一个常见操作。 TiFlash 在新版本中优化了 `count(*)` 的改写,自动选择带有“非空”属性的数据类型最短的列进行计数, 可以有效降低 TiFlash 上发生的 I/O 数量,进而提升获取表行数的执行效率。 + In the scenarios of data analysis, It is a common operation to get the actual number of rows of a table through `COUNT(*)` without filter conditions. In v6.5.0, TiFlash optimizes the rewriting of `COUNT(*)` and automatically selects the not-null columns with the shortest column definition to count the number of rows, which can effectively reduce the number of I/O operations in TiFlash and improve the execution efficiency of getting row count. ### Stability -* The global memory control feature is now GA. [#37816](https://github.com/pingcap/tidb/issues/37816) @[wshwsh12](https://github.com/wshwsh12) **tw@TomShawn** +* The global memory control feature is now GA [#37816](https://github.com/pingcap/tidb/issues/37816) @[wshwsh12](https://github.com/wshwsh12) **tw@TomShawn** - Since v6.5.0, the global memory control feature can track the main memory consumption in TiDB. When the global memory consumption reaches the preset value defined by [`tidb_server_memory_limit`](/system-variables.md#tidb_server_memory_limit-new-in-v640), TiDB tries to limit the memory usage by GC or canceling SQL operations, to ensure stability. + TiDB v6.4.0 introduces global memory control as an experimental feature. Since v6.5.0, the global memory control feature becomes GA and can track the main memory consumption in TiDB. When the global memory consumption reaches the threshold defined by [`tidb_server_memory_limit`](/system-variables.md#tidb_server_memory_limit-new-in-v640), TiDB tries to limit the memory usage by GC or canceling SQL operations, to ensure stability. 
- Note that the memory consumed by the transaction in a session (the maximum value was previously set by the configuration item [`txn-total-size-limit`](/tidb-configuration-file.md#txn-total-size-limit)) is now tracked by the memory management module: when the memory consumption of a single session reaches the threshold defined by the system variable [`tidb_mem_quota_query`](/system-variables.md#tidb_mem_quota_query), the behavior defined by the system variable [`tidb_mem_oom_action`](/system-variables.md#tidb_mem_oom_action-new-in-v610) will be triggered (the default is `CANCEL`, that is, canceling operations). To ensure forward compatibility, when [`txn-total-size-limit`](/tidb-configuration-file.md#txn-total-size-limit) is configured as a non-default value, TiDB will still ensure that transactions can use the memory size set by `txn-total-size-limit`. + Note that the memory consumed by the transaction in a session (the maximum value was previously set by the configuration item [`txn-total-size-limit`](/tidb-configuration-file.md#txn-total-size-limit)) is now tracked by the memory management module: when the memory consumption of a single session reaches the threshold defined by the system variable [`tidb_mem_quota_query`](/system-variables.md#tidb_mem_quota_query), the behavior defined by the system variable [`tidb_mem_oom_action`](/system-variables.md#tidb_mem_oom_action-new-in-v610) will be triggered (the default is `CANCEL`, that is, canceling operations). To ensure forward compatibility, when [`txn-total-size-limit`](/tidb-configuration-file.md#txn-total-size-limit) is configured as a non-default value, TiDB will still ensure that transactions can use the memory set by `txn-total-size-limit`. - If you are running TiDB v6.5.0 or later, it is recommended to remove [`txn-total-size-limit`](/tidb-configuration-file.md#txn-total-size-limit) and not to set a separate limit on the memory usage of transactions. Instead, use the system variables [`tidb_mem_quota_query`](/system-variables.md#tidb_mem_quota_query) and [`tidb_server_memory_limit`](/system-variables.md#tidb_server_memory_limit-new-in-v640) to manage memory globally, which can improve memory efficiency. + If you are using TiDB v6.5.0 or later, it is recommended to remove [`txn-total-size-limit`](/tidb-configuration-file.md#txn-total-size-limit) and not to set a separate limit on the memory usage of transactions. Instead, use the system variables [`tidb_mem_quota_query`](/system-variables.md#tidb_mem_quota_query) and [`tidb_server_memory_limit`](/system-variables.md#tidb_server_memory_limit-new-in-v640) to manage global memory, which can improve the efficiency of memory usage. - For more info, see the [user document](/configure-memory-usage.md). + For more information, see the [user document](/configure-memory-usage.md). 
### Ease of use @@ -402,15 +402,15 @@ Starting from v6.5.0, the [`AMEND TRANSACTION`](/system-variables.md#tidb_enable - Fix the issue of memory chunk misuse for the chunk reuse feature that occurs in some cases [#38917](https://github.com/pingcap/tidb/issues/38917) @[keeplearning20221](https://github.com/keeplearning20221) - Fix the issue that the internal sessions of `tidb_constraint_check_in_place_pessimistic` might be affected by the global setting [#38766](https://github.com/pingcap/tidb/issues/38766) @[ekexium](https://github.com/ekexium) - - Fix the issue that the `AUTO_INCREMENT` column cannot be used together with the `Check` constraint [#38894](https://github.com/pingcap/tidb/issues/38894) @[YangKeao](https://github.com/YangKeao) - - Fix the issue that using `INSERT IGNORE INTO` to insert data of the `STRING` type into an auto-increment column of the `SMALLINT` type will raise an error [#38483](https://github.com/pingcap/tidb/issues/38483) @[hawkingrei](https://github.com/hawkingrei) + - Fix the issue that the `AUTO_INCREMENT` column cannot work with the `CHECK` constraint [#38894](https://github.com/pingcap/tidb/issues/38894) @[YangKeao](https://github.com/YangKeao) + - Fix the issue that using `INSERT IGNORE INTO` to insert data of the `STRING` type into an auto-increment column of the `SMALLINT` type will cause an error [#38483](https://github.com/pingcap/tidb/issues/38483) @[hawkingrei](https://github.com/hawkingrei) - Fix the issue that the null pointer error occurs in the operation of renaming the partition column of a partitioned table [#38932](https://github.com/pingcap/tidb/issues/38932) @[mjonss](https://github.com/mjonss) - Fix the issue that modifying the partition column of a partitioned table causes DDL to hang [#38530](https://github.com/pingcap/tidb/issues/38530) @[mjonss](https://github.com/mjonss) - - Fix the issue that the `ADMIN SHOW JOB` operation panics after upgrading from v4.0 to v6.4 [#38980](https://github.com/pingcap/tidb/issues/38980) @[tangenta](https://github.com/tangenta) + - Fix the issue that the `ADMIN SHOW JOB` operation panics after upgrading from v4.0.16 to v6.4.0 [#38980](https://github.com/pingcap/tidb/issues/38980) @[tangenta](https://github.com/tangenta) - Fix the issue that the `tidb_decode_key` function fails to correctly parse the encoding of partitioned tables [#39304](https://github.com/pingcap/tidb/issues/39304) @[Defined2014](https://github.com/Defined2014) - - Fixe the issue that gRPC error log messages are not redirected to the correct log file during log rotation [#38941](https://github.com/pingcap/tidb/issues/38941) @[xhebox](https://github.com/xhebox) - - Fix the issue that TiDB generates an unexpected query plan for the `BEGIN; SELECT... FOR UPDATE;` point query when TiKV is not configured for the read engine [#39344](https://github.com/pingcap/tidb/issues/39344) @[Yisaer](https://github.com/Yisaer) - - Fix the issue that mistakenly pushing down `StreamAgg` to TiFlash causes wrong result [#39266](https://github.com/pingcap/tidb/issues/39266) @[fixdb](https://github.com/fixdb) + - Fixe the issue that gRPC error logs are not redirected to the correct log file during log rotation [#38941](https://github.com/pingcap/tidb/issues/38941) @[xhebox](https://github.com/xhebox) + - Fix the issue that TiDB generates an unexpected execution plan for the `BEGIN; SELECT... 
FOR UPDATE;` point query when TiKV is not configured as a read engine [#39344](https://github.com/pingcap/tidb/issues/39344) @[Yisaer](https://github.com/Yisaer) + - Fix the issue that mistakenly pushing down `StreamAgg` to TiFlash causes wrong results [#39266](https://github.com/pingcap/tidb/issues/39266) @[fixdb](https://github.com/fixdb) + TiKV From 2d6011f523651169254f38ec15fc2012ee95eeeb Mon Sep 17 00:00:00 2001 From: Grace Cai Date: Fri, 16 Dec 2022 11:52:07 +0800 Subject: [PATCH 37/83] align with Chinese changes --- releases/release-6.5.0.md | 14 ++++++-------- 1 file changed, 6 insertions(+), 8 deletions(-) diff --git a/releases/release-6.5.0.md b/releases/release-6.5.0.md index e7f61546db5cd..7d8a17882d378 100644 --- a/releases/release-6.5.0.md +++ b/releases/release-6.5.0.md @@ -16,19 +16,17 @@ Compared with the previous LTS 6.1.0, 6.5.0 not only includes new features, impr - Enable [index acceleration](/system-variables.md#tidb_ddl_enable_fast_reorg-new-in-v630) by default, which improves the performance of adding indexes by 10 times compared with v6.1. - Support TiDB global memory control via [`tidb_server_memory_limit`](/system-variables.md#tidb_server_memory_limit-new-in-v640). -- Support [`FLASHBACK CLUSTER TO TIMESTAMP`](/sql-statements/sql-statement-flashback-to-timestamp.md), compatible with TiCDC and PITR. - Support a high-performance and globally monotonic [`AUTO_INCREMENT`](/auto-increment.md#mysql-compatible-mode) column attribute, compatible with MySQL. +- Support [`FLASHBACK CLUSTER TO TIMESTAMP`](/sql-statements/sql-statement-flashback-to-timestamp.md), compatible with TiCDC and PITR. - Enhance the [optimizer cost model](/cost-model.md#cost-model-version-2) and further enhance the [INDEX MERGE](/glossary.md#index-merge) feature. -- Support [pushing down the `JSON_EXTRACT()` function](/tiflash/tiflash-supported-pushdown-calculations.md) to TiFlash. +- Support pushing down the `JSON_EXTRACT()` function to TiFlash. +- Support [password management](/password-management.md) policies that meet password compliance auditing requirements. - TiDB Lightning and Dumpling support [importing](tidb-lightning/tidb-lightning-data-source.md) and [exporting](/dumpling-overview.md#improve-export-efficiency-through-concurrency) compressed SQL and CSV files. - TiDB Data Migration (DM) supports [continuous data validation](/dm/dm-continuous-data-validation.md). -- Support [password management](/password-management.md) policies that meet password compliance auditing requirements. -- Provide row-level [Time to live (TTL)](/time-to-live.md) to manage data lifecycle (experimental). -- Support [non-transactional DML statements](/non-transactional-dml.md) to improve cluster stability. -- TiCDC supports [replicating changed logs to storage services](ticdc/ticdc-sink-to-cloud-storage.md) such as Amazon S3, Azure Blob Storage, and NFS. -- Support [bidirectional replication](/ticdc/ticdc-bidirectional-replication.md) between two or more TiDB clusters. -- Improve the recovery performance of [PITR](/br-pitr-guide.md#carry-pitr) by x times and reduce RPO to x minutes. +- TiDB Backup & Restore supports snapshot checkpoint backup, improves the recovery performance of [PITR](/br-pitr-guide.md#carry-pitr) by x times, and reduces RPO to x minutes. - Improve the TiCDC throughput of [replicating data to Kafka](/replicate-data-to-kafka.md) by x times and reduces replication latency to x seconds. +- Provide row-level [Time to live (TTL)](/time-to-live.md) to manage data lifecycle (experimental). 
+- TiCDC supports [replicating changed logs to object storage ](ticdc/ticdc-sink-to-cloud-storage.md) such as Amazon S3, Azure Blob Storage, and NFS (experimental). ## New features From 50b45f343769e7a4d564929ee8053a091bbd3fd7 Mon Sep 17 00:00:00 2001 From: shichun-0415 <89768198+shichun-0415@users.noreply.github.com> Date: Fri, 16 Dec 2022 12:04:32 +0800 Subject: [PATCH 38/83] add experimental to FD-888 Co-authored-by: Grace Cai --- releases/release-6.5.0.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/releases/release-6.5.0.md b/releases/release-6.5.0.md index 7d8a17882d378..56348abe9c650 100644 --- a/releases/release-6.5.0.md +++ b/releases/release-6.5.0.md @@ -242,7 +242,7 @@ Compared with the previous LTS 6.1.0, 6.5.0 not only includes new features, impr ### TiDB data share subscription -* TiCDC supports replicating changed logs to storage sinks [tiflow#6797](https://github.com/pingcap/tiflow/issues/6797) @[zhaoxinyu](https://github.com/zhaoxinyu) **tw@shichun-0415** +* TiCDC supports replicating changed logs to storage sinks (experimental) [tiflow#6797](https://github.com/pingcap/tiflow/issues/6797) @[zhaoxinyu](https://github.com/zhaoxinyu) **tw@shichun-0415** TiCDC supports replicating changed logs to Amazon S3, Azure Blob Storage, NFS, and other S3-compatible storage services. Cloud storage is reasonably priced and easy to use. If you do not want to use Kafka, you can use storage sinks. TiCDC saves the changed logs to a file and then sends it to the storage system. From the storage system, the consumer program reads the newly generated changed log files periodically. From ddbe582884f7f1fd191f1d180ae8e023d6b5e2ed Mon Sep 17 00:00:00 2001 From: shichun-0415 <89768198+shichun-0415@users.noreply.github.com> Date: Fri, 16 Dec 2022 12:09:50 +0800 Subject: [PATCH 39/83] Update releases/release-6.5.0.md Co-authored-by: xixirangrang --- releases/release-6.5.0.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/releases/release-6.5.0.md b/releases/release-6.5.0.md index 56348abe9c650..511b09ca98cee 100644 --- a/releases/release-6.5.0.md +++ b/releases/release-6.5.0.md @@ -250,7 +250,7 @@ Compared with the previous LTS 6.1.0, 6.5.0 not only includes new features, impr * TiCDC supports bidirectional replication across multiple clusters @[asddongmen](https://github.com/asddongmen) **tw@shichun-0415** - TiCDC supports bidirectional replication across multiple TiDB clusters. If you need a multi-master TiDB solution for your application, especially a multi-master solution in multiple regions, you can use this feature to build one. By configuring the `bdr-mode = true` parameter for the TiCDC changefeeds from each TiDB cluster to other TiDB clusters, you can achieve bidirectional data replication across multiple TiDB clusters. + TiCDC supports bidirectional replication across multiple TiDB clusters. If you need a multi-master TiDB solution for your application, especially a multi-master solution across multiple regions, you can use this feature to build one. By configuring the `bdr-mode = true` parameter for the TiCDC changefeeds from each TiDB cluster to the other TiDB clusters, you can achieve bidirectional data replication across multiple TiDB clusters. For more information, refer to [user document](/ticdc/ticdc-bidirectional-replication.md). 
From 40d4bda4117926af5de581724503cc796c09b551 Mon Sep 17 00:00:00 2001 From: shichun-0415 <89768198+shichun-0415@users.noreply.github.com> Date: Fri, 16 Dec 2022 12:16:55 +0800 Subject: [PATCH 40/83] remove dup label --- releases/release-6.5.0.md | 471 -------------------------------------- 1 file changed, 471 deletions(-) diff --git a/releases/release-6.5.0.md b/releases/release-6.5.0.md index 511b09ca98cee..e69de29bb2d1d 100644 --- a/releases/release-6.5.0.md +++ b/releases/release-6.5.0.md @@ -1,471 +0,0 @@ ---- -title: TiDB 6.5.0 Release Notes ---- - -# TiDB 6.5.0 Release Notes - -Release date: xx xx, 2022 - -TiDB version: 6.5.0 - -Quick access: [Quick start](https://docs.pingcap.com/tidb/v6.5/quick-start-with-tidb) | [Production deployment](https://docs.pingcap.com/tidb/v6.5/production-deployment-using-tiup) | [Installation packages](https://www.pingcap.com/download/?version=v6.5.0#version-list) - -TiDB 6.5.0 is a Long-Term Support Release (LTS). - -Compared with the previous LTS 6.1.0, 6.5.0 not only includes new features, improvements, and bug fixes released in [6.2.0-DMR](/releases/release-6.2.0.md), [6.3.0-DMR](/releases/release-6.3.0.md), [6.4.0-DMR](/releases/release-6.4.0.md), but also introduces the following key features and improvements: - -- Enable [index acceleration](/system-variables.md#tidb_ddl_enable_fast_reorg-new-in-v630) by default, which improves the performance of adding indexes by 10 times compared with v6.1. -- Support TiDB global memory control via [`tidb_server_memory_limit`](/system-variables.md#tidb_server_memory_limit-new-in-v640). -- Support a high-performance and globally monotonic [`AUTO_INCREMENT`](/auto-increment.md#mysql-compatible-mode) column attribute, compatible with MySQL. -- Support [`FLASHBACK CLUSTER TO TIMESTAMP`](/sql-statements/sql-statement-flashback-to-timestamp.md), compatible with TiCDC and PITR. -- Enhance the [optimizer cost model](/cost-model.md#cost-model-version-2) and further enhance the [INDEX MERGE](/glossary.md#index-merge) feature. -- Support pushing down the `JSON_EXTRACT()` function to TiFlash. -- Support [password management](/password-management.md) policies that meet password compliance auditing requirements. -- TiDB Lightning and Dumpling support [importing](tidb-lightning/tidb-lightning-data-source.md) and [exporting](/dumpling-overview.md#improve-export-efficiency-through-concurrency) compressed SQL and CSV files. -- TiDB Data Migration (DM) supports [continuous data validation](/dm/dm-continuous-data-validation.md). -- TiDB Backup & Restore supports snapshot checkpoint backup, improves the recovery performance of [PITR](/br-pitr-guide.md#carry-pitr) by x times, and reduces RPO to x minutes. -- Improve the TiCDC throughput of [replicating data to Kafka](/replicate-data-to-kafka.md) by x times and reduces replication latency to x seconds. -- Provide row-level [Time to live (TTL)](/time-to-live.md) to manage data lifecycle (experimental). -- TiCDC supports [replicating changed logs to object storage ](ticdc/ticdc-sink-to-cloud-storage.md) such as Amazon S3, Azure Blob Storage, and NFS (experimental). 
- -## New features - -### SQL - -* The performance of TiDB adding indexes is improved by 10 times [#35983](https://github.com/pingcap/tidb/issues/35983) @[benjamin2037](https://github.com/benjamin2037) @[tangenta](https://github.com/tangenta) **tw@Oreoxmt** - - TiDB v6.3.0 introduces the [Add index acceleration](/system-variables.md#tidb_ddl_enable_fast_reorg-new-in-v630) as an experimental feature to improve the speed of backfilling when creating an index. In v6.5.0, this feature becomes GA and is enabled by default, and the performance on large tables is expected to be 10 times faster. The acceleration feature is suitable for scenarios where a single SQL statement adds an index serially. When multiple SQL statements add indexes in parallel, only one of the SQL statements will be accelerated. - -* Provide lightweight metadata lock to improve the DML success rate during DDL change [#37275](https://github.com/pingcap/tidb/issues/37275) @[wjhuang2016](https://github.com/wjhuang2016) **tw@Oreoxmt** - - TiDB v6.3.0 introduces [Metadata lock](/metadata-lock.md) as an experimental feature. To avoid the `Information schema is changed` error caused by DML statements, TiDB coordinates the priority of DMLs and DDLs during table metadata change, and makes the ongoing DDLs wait for the DMLs with old metadata to commit. In v6.5.0, this feature becomes GA and is enabled by default. It is suitable for various types of DDLs change scenarios. - - For more information, see [User document](/metadata-lock.md). - -* Support restoring a cluster to a specific point in time by using `FLASHBACK CLUSTER TO TIMESTAMP` [#37197](https://github.com/pingcap/tidb/issues/37197) [#13303](https://github.com/tikv/tikv/issues/13303) @[Defined2014](https://github.com/Defined2014) @[bb7133](https://github.com/bb7133) @[JmPotato](https://github.com/JmPotato) @[Connor1996](https://github.com/Connor1996) @[HuSharp](https://github.com/HuSharp) @[CalvinNeo](https://github.com/CalvinNeo) **tw@Oreoxmt** - - TiDB v6.4.0 introduces the [`FLASHBACK CLUSTER TO TIMESTAMP`](/sql-statements/sql-statement-flashback-to-timestamp.md) statement as an experimental feature. You can use this statement to restore a cluster to a specific point in time within the Garbage Collection (GC) life time. In v6.5.0, this statement becomes GA. This feature helps you to easily undo DML misoperations, restore the original cluster in minutes, roll back data at different time points to determine the exact time when data changes, and it is compatible with PITR and TiCDC. - - For more information, see [user document](/sql-statements/sql-statement-flashback-to-timestamp.md). - -* Fully support non-transactional DML statements including `INSERT`, `REPLACE`, `UPDATE`, and `DELETE` [#33485](https://github.com/pingcap/tidb/issues/33485) @[ekexium](https://github.com/ekexium) **tw@Oreoxmt** - - In the scenarios of large data processing, a single SQL statement with a large transaction might have a negative impact on the cluster stability and performance. A non-transactional DML statement is a DML statement split into multiple SQL statements for internal execution. The split statements compromise transaction atomicity and isolation but greatly improve the cluster stability. TiDB supports non-transactional `DELETE` statements since v6.1.0, and supports non-transactional `INSERT`, `REPLACE`, and `UPDATE` statements since v6.5.0. 
- - For more information, see [Non-Transactional DML statements](/non-transactional-dml.md) and [`BATCH` syntax](/sql-statements/sql-statement-batch.md). - -* Support time to live (TTL) (experimental) [#39262](https://github.com/pingcap/tidb/issues/39262) @[lcwangchao](https://github.com/lcwangchao) **tw@ran-huang** - - TTL provides row-level data lifetime management. In TiDB, a table with the TTL attribute automatically checks data lifetime and deletes expired data at the row level. TTL is designed to help you clean up unnecessary data periodically and in a timely manner without affecting the online read and write workloads. - - For more information, see [User document](/time-to-live.md). - -* Support saving TiFlash query results using the `INSERT INTO SELECT` statement (experimental) [#37515](https://github.com/pingcap/tidb/issues/37515) @[gengliqi](https://github.com/gengliqi) **tw@qiancai** - - Starting from v6.5.0, TiDB supports pushing down the `SELECT` clause (analytical query) of the `INSERT INTO SELECT` statement to TiFlash. In this way, you can easily save the TiFlash query result to a TiDB table specified by `INSERT INTO` for further analysis, which takes effect as result caching (that is, result materialization). For example: - - ```sql - INSERT INTO t2 SELECT Mod(x,y) FROM t1; - ``` - - During the experimental phase, this feature is disabled by default. To enable it, you can set the [`tidb_enable_tiflash_read_for_write_stmt`](/system-variables.md#tidb_enable_tiflash_read_for_write_stmt-new-in-v630) system variable to `ON`. There are no special restrictions on the result table specified by `INSERT INTO` for this feature, and you are free to add a TiFlash replica to that result table or not. Typical usage scenarios of this feature include: - - - Run complex analytical queries using TiFlash - - Reuse TiFlash query results or deal with highly concurrent online requests - - Need a relatively small result set comparing with the input data size, preferably smaller than 100MiB. - - For more information, see the [user documentation](/tiflash/tiflash-results-materialization.md). - -* Support binding history execution plans (experimental) [#39199](https://github.com/pingcap/tidb/issues/39199) @[fzzf678](https://github.com/fzzf678) **tw@qiancai** - - For a SQL statement, due to various factors during execution, the optimizer might occasionally choose a new execution plan instead of its previous optimal execution plan, and the SQL performance is impacted. In this case, if the optimal execution plan has not been cleared yet, it still exists in the SQL execution history. - - In v6.5.0, TiDB supports binding historical execution plans by extending the binding object in the [`CREATE [GLOBAL | SESSION] BINDING`](/sql-statements/sql-statement-create-binding.md) statement. When the execution plan of a SQL statement changes, you can bind the original execution plan by specifying `plan_digest` in the `CREATE [GLOBAL | SESSION] BINDING` statement to quickly recover SQL performance, as long as the original execution plan is still in the SQL execution history memory table (for example, `statements_summary`). This feature can simplify the process of handling execution plan change issues and improve your maintenance efficiency. - - For more information, see [user documentation](/sql-plan-management.md#bind-historical-execution-plans). 
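    As a rough sketch of the workflow (the query text and digest value below are placeholders), you can look up the digest of the historical plan in `statements_summary` and then bind it:

    ```sql
    -- Find the digest of the previously good execution plan.
    SELECT query_sample_text, plan_digest
    FROM information_schema.statements_summary
    WHERE query_sample_text LIKE 'SELECT % FROM orders WHERE %';

    -- Bind that historical plan for future executions of the statement.
    CREATE GLOBAL BINDING FROM HISTORY USING PLAN DIGEST '4e3159169cc63c14b139a4e7d72eae1759875c9a9581f94bb2079aae961189cb';
    ```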
- -### Security - -* Support the password complexity policy [#38928](https://github.com/pingcap/tidb/issues/38928) @[CbcWestwolf](https://github.com/CbcWestwolf) **tw@ran-huang** - - After this policy is enabled, when you set a password, TiDB checks the password length, whether uppercase and lowercase letters, numbers, and special characters in the password are sufficient, whether the password matches the dictionary, and whether the password matches the username. This ensures that you set a secure password. - - TiDB provides the SQL function [`VALIDATE_PASSWORD_STRENGTH()`](https://dev.mysql.com/doc/refman/5.7/en/encryption-functions.html#function_validate-password-strength) to validate the password strength. - - For more information, see [User document](/password-management.md#password-complexity-policy). - -* Support the password expiration policy [#38936](https://github.com/pingcap/tidb/issues/38936) @[CbcWestwolf](https://github.com/CbcWestwolf) **tw@ran-huang** - - TiDB supports configuring the password expiration policy, including manual expiration, global-level automatic expiration, and account-level automatic expiration. After this policy is enabled, you must change your passwords periodically. This reduces the risk of password leakage due to long-term use and improves password security. - - For more information, see [User document](/password-management.md#password-expiration-policy). - -* Support the password reuse policy [#38937](https://github.com/pingcap/tidb/issues/38937) @[keeplearning20221](https://github.com/keeplearning20221) **tw@ran-huang** - - TiDB supports configuring the password reuse policy, including global-level password reuse policy and account-level password reuse policy. After this policy is enabled, you cannot use the passwords that you have used within a specified period or the most recent several passwords that you have used. This reduces the risk of password leakage due to repeated use of passwords and improves password security. - - For more information, see [User document](/password-management.md#password-reuse-policy). - -* Support failed-login tracking and temporary account locking policy [#38938](https://github.com/pingcap/tidb/issues/38938) @[lastincisor](https://github.com/lastincisor) **tw@ran-huang** - - After this policy is enabled, if you log in to TiDB with incorrect passwords multiple times consecutively, the account is temporarily locked. After the lock time ends, the account is automatically unlocked. - - For more information, see [User document](/password-management.md#failed-login-tracking-and-temporary-account-locking-policy). - -### Observability - -* TiDB Dashboard can be deployed on Kubernetes as an independent Pod [#1447](https://github.com/pingcap/tidb-dashboard/issues/1447) @[SabaPing](https://github.com/SabaPing) **tw@shichun-0415 - - TiDB v6.5.0 (and later) and TiDB Operator v1.4.0 (and later) support deploying TiDB Dashboard as an independent Pod on Kubernetes. Using TiDB Operator, you can access the IP address of this Pod to start TiDB Dashboard. - - Independently deploying TiDB Dashboard provides the following benefits: - - - The compute work of TiDB Dashboard does not pose pressure on PD nodes. This ensures more stable cluster operation. - - The user can still access TiDB Dashboard for diagnosis even if the PD node is unavailable. - - Accessing TiDB Dashboard in Internet does not involve the privileged interfaces of PD. Therefore, the security risk of the cluster is mitigated. 
    For more information, see [Deploy TiDB Dashboard independently in TiDB Operator](https://docs.pingcap.com/tidb-in-kubernetes/dev/get-started#deploy-tidb-dashboard-independently).

### Performance

* [INDEX MERGE](/glossary.md#index-merge) supports conjunctive normal form (expressions connected by `AND`) [#39333](https://github.com/pingcap/tidb/issues/39333) @[guo-shaoge](https://github.com/guo-shaoge) @[time-and-fate](https://github.com/time-and-fate) @[hailanwhu](https://github.com/hailanwhu) **tw@TomShawn**

    Before v6.5.0, TiDB only supported using index merge for filter conditions connected by `OR`. Starting from v6.5.0, TiDB also supports using index merge for filter conditions connected by `AND` in the `WHERE` clause. In this way, the index merge of TiDB can now cover more general combinations of query filter conditions and is no longer limited to the union (`OR`) relationship. In v6.5.0, the optimizer automatically selects index merge only for `OR` conditions. To enable index merge for `AND` conditions, you need to use the [`USE_INDEX_MERGE`](/optimizer-hints.md#use_index_merget1_name-idx1_name--idx2_name-) hint.

    For more details about index merge, see [v5.4.0 Release Notes](/releases/release-5.4.0.md#performance) and [Explain Index Merge](/explain-index-merge.md).

* Support pushing down the following [JSON functions](/tiflash/tiflash-supported-pushdown-calculations.md) to TiFlash [#39458](https://github.com/pingcap/tidb/issues/39458) @[yibin87](https://github.com/yibin87) **tw@qiancai**

    * `->`
    * `->>`
    * `JSON_EXTRACT()`

    The JSON format provides a flexible way for application data modeling. Therefore, more and more applications are using the JSON format for data exchange and data storage. By pushing down JSON functions to TiFlash, you can improve the efficiency of analyzing data of the JSON type and use TiDB for more real-time analytics scenarios.

* Support pushing down the following [string functions](/tiflash/tiflash-supported-pushdown-calculations.md) to TiFlash [#6115](https://github.com/pingcap/tiflash/issues/6115) @[xzhangxian1008](https://github.com/xzhangxian1008) **tw@qiancai**

    * `regexp_like`
    * `regexp_instr`
    * `regexp_substr`

* Support the global optimizer hint to interfere with the execution plan generation in [Views](/views.md) [#37887](https://github.com/pingcap/tidb/issues/37887) @[Reminiscent](https://github.com/Reminiscent) **tw@Oreoxmt**

    In some view access scenarios, you need to use optimizer hints to interfere with the execution plan of the query in the view to achieve the best performance. Since v6.5.0, TiDB supports adding global hints for the query blocks in a view, so that the hints defined in the query can take effect in the view. This feature provides a way to inject hints into complex SQL statements that contain nested views, enhances the execution plan control, and stabilizes the performance of complex statements. To use global hints, you need to [name the query blocks](/optimizer-hints.md#step-1-define-the-query-block-name-of-the-view-using-the-qb_name-hint) and [specify hint references](/optimizer-hints.md#step-2-add-the-target-hints).

    For more information, see [User document](/optimizer-hints.md#hints-that-take-effect-globally).
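    The following sketch shows the two-step usage on a simple view (the view, table, and query block names are illustrative, and the exact query block list depends on how the view is defined):

    ```sql
    CREATE VIEW v AS SELECT t1.a, t2.b FROM t1 JOIN t2 ON t1.a = t2.a;

    -- Step 1: name the first query block inside view `v`;
    -- Step 2: reference that name in a hint targeting table `t1` inside the view.
    SELECT /*+ QB_NAME(qb_v, v@SEL_1 .@SEL_1), MERGE_JOIN(t1@qb_v) */ * FROM v;
    ```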
* Support pushing down sorting operations of [partitioned tables](/partitioned-table.md) to TiKV [#26166](https://github.com/pingcap/tidb/issues/26166) @[winoros](https://github.com/winoros) **tw@qiancai**

    Although the [partitioned table](/partitioned-table.md) feature has been GA since v6.1.0, TiDB is continually improving its performance. In v6.5.0, TiDB supports pushing down sorting operations such as `ORDER BY` and `LIMIT` to TiKV for computation and filtering, which reduces network I/O overhead and improves SQL performance when you use partitioned tables.

* Optimizer introduces a more accurate Cost Model Version 2 [#35240](https://github.com/pingcap/tidb/issues/35240) @[qw4990](https://github.com/qw4990) **tw@Oreoxmt**

    TiDB v6.2.0 introduces the [Cost Model Version 2](/cost-model.md#cost-model-version-2) as an experimental feature. This model uses a more accurate cost estimation method to help the optimizer choose the optimal execution plan. Especially when TiFlash is deployed, Cost Model Version 2 automatically helps choose the appropriate storage engine and avoids much manual intervention. After real-scene testing for a period of time, this model becomes GA in v6.5.0. Since v6.5.0, newly created clusters use Cost Model Version 2 by default. For clusters upgraded to v6.5.0, because Cost Model Version 2 might cause changes to query plans, you can set the [`tidb_cost_model_version = 2`](/system-variables.md#tidb_cost_model_version-new-in-v620) variable to use the new cost model after sufficient performance testing.

    Cost Model Version 2 becomes a generally available feature that significantly improves the overall capability of the TiDB optimizer and helps TiDB evolve towards a more powerful HTAP database.

    For more information, see [User document](/cost-model.md#cost-model-version-2).

* TiFlash optimizes the operations of getting the number of table rows [#37165](https://github.com/pingcap/tidb/issues/37165) @[elsa0520](https://github.com/elsa0520)

    In data analysis scenarios, getting the actual number of rows of a table through `COUNT(*)` without filter conditions is a common operation. In v6.5.0, TiFlash optimizes the rewriting of `COUNT(*)` and automatically selects the not-null column with the shortest column definition to count the number of rows, which can effectively reduce the number of I/O operations in TiFlash and improve the efficiency of getting the row count.

### Stability

* The global memory control feature is now GA [#37816](https://github.com/pingcap/tidb/issues/37816) @[wshwsh12](https://github.com/wshwsh12) **tw@TomShawn**

    TiDB v6.4.0 introduces global memory control as an experimental feature. Since v6.5.0, the global memory control feature becomes GA and can track the main memory consumption in TiDB. When the global memory consumption reaches the threshold defined by [`tidb_server_memory_limit`](/system-variables.md#tidb_server_memory_limit-new-in-v640), TiDB tries to limit the memory usage by GC or canceling SQL operations, to ensure stability.
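    The following is a minimal illustration of the related variables (the values are arbitrary examples, not recommended settings):

    ```sql
    -- Cap the memory usage of a TiDB instance at 32 GiB.
    SET GLOBAL tidb_server_memory_limit = '32GB';
    -- Cancel a session's SQL statement when it exceeds the per-session quota.
    SET GLOBAL tidb_mem_oom_action = 'CANCEL';
    ```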
    Note that the memory consumed by the transaction in a session (the maximum value was previously set by the configuration item [`txn-total-size-limit`](/tidb-configuration-file.md#txn-total-size-limit)) is now tracked by the memory management module: when the memory consumption of a single session reaches the threshold defined by the system variable [`tidb_mem_quota_query`](/system-variables.md#tidb_mem_quota_query), the behavior defined by the system variable [`tidb_mem_oom_action`](/system-variables.md#tidb_mem_oom_action-new-in-v610) will be triggered (the default is `CANCEL`, that is, canceling operations). To ensure forward compatibility, when [`txn-total-size-limit`](/tidb-configuration-file.md#txn-total-size-limit) is configured as a non-default value, TiDB will still ensure that transactions can use the memory set by `txn-total-size-limit`.

    If you are using TiDB v6.5.0 or later, it is recommended to remove [`txn-total-size-limit`](/tidb-configuration-file.md#txn-total-size-limit) and not to set a separate limit on the memory usage of transactions. Instead, use the system variables [`tidb_mem_quota_query`](/system-variables.md#tidb_mem_quota_query) and [`tidb_server_memory_limit`](/system-variables.md#tidb_server_memory_limit-new-in-v640) to manage global memory, which can improve the efficiency of memory usage.

    For more information, see the [user document](/configure-memory-usage.md).

### Ease of use

* Refine the execution information of the TiFlash `TableFullScan` operator in the `EXPLAIN ANALYZE` output [#5926](https://github.com/pingcap/tiflash/issues/5926) @[hongyunyan](https://github.com/hongyunyan) **tw@qiancai**

    The `EXPLAIN ANALYZE` statement is used to print execution plans and runtime statistics. In v6.5.0, TiFlash has refined the execution information of the `TableFullScan` operator by adding the DMFile-related execution information. Now the TiFlash data scan status information is presented more intuitively, which helps you analyze TiFlash performance more easily.

    For more information, see [user documentation](/sql-statements/sql-statement-explain-analyze.md).

* Support the output of execution plans in the JSON format [#39261](https://github.com/pingcap/tidb/issues/39261) @[fzzf678](https://github.com/fzzf678) **tw@ran-huang**

    In v6.5.0, TiDB extends the output format of execution plans. By using `EXPLAIN FORMAT=tidb_json`, you can output SQL execution plans in the JSON format. With this capability, SQL debugging tools and diagnostic tools can read execution plans more conveniently and accurately, thus improving the ease of use of SQL diagnosis and tuning.

    For more information, see [user document](/sql-statements/sql-statement-explain.md).

### MySQL compatibility

* Support a high-performance and globally monotonic `AUTO_INCREMENT` [#38442](https://github.com/pingcap/tidb/issues/38442) @[tiancaiamao](https://github.com/tiancaiamao) **tw@Oreoxmt**

    TiDB v6.4.0 introduces the `AUTO_INCREMENT` MySQL compatibility mode as an experimental feature. This mode introduces a centralized auto-increment ID allocating service that ensures IDs monotonically increase on all TiDB instances. This feature makes it easier to sort query results by auto-increment IDs. In v6.5.0, this feature becomes GA. The insert TPS of a table using this feature is expected to exceed 20,000, and this feature supports elastic scaling to improve the write throughput of a single table and entire clusters.
To use the MySQL compatibility mode, you need to set `AUTO_ID_CACHE` to `1` when creating a table. The following is an example: - - ```sql - CREATE TABLE t(a int AUTO_INCREMENT key) AUTO_ID_CACHE 1; - ``` - - For more information, see [user document](/auto-increment.md#mysql-compatibility-mode). - -### Data migration - -* Support exporting and importing SQL and CSV files in gzip, snappy, and zstd compression formats [#38514](https://github.com/pingcap/tidb/issues/38514) @[lichunzhu](https://github.com/lichunzhu) **tw@hfxsd** - - Dumpling supports exporting data to compressed SQL and CSV files in the following compression formats: gzip, snappy, and zstd. TiDB Lightning also supports importing compressed files in these formats. - - Previously, you had to provide large storage space for exporting or importing data to store CSV and SQL files, resulting in high storage costs. With the release of this feature, you can greatly reduce your storage costs by compressing the data files. - - For more information, see [User document](/dumpling-overview.md#improve-export-efficiency-through-concurrency). - -* Optimize binlog parsing capability [#924](https://github.com/pingcap/dm/issues/924) @[gmhdbjd](https://github.com/GMHDBJD) **tw@hfxsd** - - TiDB can filter out binlog events of the schemas and tables that are not in the migration task, thus improving the parsing efficiency and stability. This policy takes effect by default in v6.5.0. No additional configuration is required. - - Previously, even if only a few tables were migrated, the entire binlog file upstream had to be parsed. The binlog events of the tables in the binlog file that did not need to be migrated still had to be parsed, which was not efficient. Meanwhile, if these binlog events do not support parsing, the task will fail. By only parsing the binlog events of the tables in the migration task, the binlog parsing efficiency can be greatly improved and the task stability can be enhanced. - -* Disk quota in TiDB Lightning is GA [#446](https://github.com/pingcap/tidb-lightning/issues/446) @[buchuitoudegou](https://github.com/buchuitoudegou) **tw@hfxsd** - - You can configure disk quota for TiDB Lightning. When there is not enough disk quota, TiDB Lightning stops reading source data and writing temporary files. Instead, it writes the sorted key-values to TiKV first, and then continues the import process after TiDB Lightning deletes the local temporary files. - - Previously, when TiDB Lightning imported data using physical mode, it would create a large number of temporary files on the local disk for encoding, sorting, and splitting the raw data. When your local disk ran out of space, TiDB Lightning would exit with an error due to failing to write to the file. With this feature, TiDB Lightning tasks can avoid overwriting the local disk. - - For more information, see [User document](/tidb-lightning/tidb-lightning-physical-import-mode-usage.md#configure-disk-quota-new-in-v620). - -* Continuous data validation in DM is GA [#4426](https://github.com/pingcap/tiflow/issues/4426) @[D3Hunter](https://github.com/D3Hunter) **tw@hfxsd** - - In the process of migrating incremental data from upstream to downstream databases, there is a small probability that data flow might cause errors or data loss. In scenarios where strong data consistency is required, such as credit and securities businesses, you can perform a full volume checksum on the data after migration to ensure data consistency. 
However, in some incremental replication scenarios, upstream and downstream writes are continuous and uninterrupted because the upstream and downstream data is constantly changing, making it difficult to perform consistency checks on all data. - - Previously, you needed to interrupt the business to validate the full data, which would affect your business. Now, with this feature, you can perform incremental data validation without interrupting the business. - - For more information, see [User document](/dm/dm-continuous-data-validation.md). - -### TiDB data share subscription - -* TiCDC supports replicating changed logs to storage sinks (experimental) [tiflow#6797](https://github.com/pingcap/tiflow/issues/6797) @[zhaoxinyu](https://github.com/zhaoxinyu) **tw@shichun-0415** - - TiCDC supports replicating changed logs to Amazon S3, Azure Blob Storage, NFS, and other S3-compatible storage services. Cloud storage is reasonably priced and easy to use. If you do not want to use Kafka, you can use storage sinks. TiCDC saves the changed logs to a file and then sends it to the storage system. From the storage system, the consumer program reads the newly generated changed log files periodically. - - The storage sink supports changed logs in the canal-json and CSV formats. Noticeably, the latency of replicating changed logs from TiCDC to storage can be as short as xx. For more information, see [User document](/ticdc/ticdc-sink-to-cloud-storage.md). - -* TiCDC supports bidirectional replication across multiple clusters @[asddongmen](https://github.com/asddongmen) **tw@shichun-0415** - - TiCDC supports bidirectional replication across multiple TiDB clusters. If you need a multi-master TiDB solution for your application, especially a multi-master solution across multiple regions, you can use this feature to build one. By configuring the `bdr-mode = true` parameter for the TiCDC changefeeds from each TiDB cluster to the other TiDB clusters, you can achieve bidirectional data replication across multiple TiDB clusters. - - For more information, refer to [user document](/ticdc/ticdc-bidirectional-replication.md). - -* TiCDC performance improves significantly **tw@shichun-0415 - - In a test scenario of the TiDB cluster, the performance of TiCDC has improved significantly. Specifically, the maximum row changes that a single TiCDC can process reaches 30K rows/s, and the replication latency is reduced to 10s. Even during TiKV and TiCDC rolling upgrade, the replication latency is less than 30s. In a disaster recovery (DR) scenario, when the throughput is xx rows/s, the replication latency in DR can be maintained at x s. - -### Backup and restore - -* TiDB Backup & Restore supports snapshot checkpoint backup [#38647](https://github.com/pingcap/tidb/issues/38647) @[Leavrth](https://github.com/Leavrth) **tw@shichun-0415 - - TiDB snapshot backup supports resuming backup from a checkpoint. When Backup & Restore (BR) encounters a recoverable error, it retries backup. However, BR exits if the retry fails for several times. The checkpoint backup feature allows for longer recoverable failures to be retried, for example, a network failure of tens of minutes. - - Note that if you do not recover the system from a failure within one hour after BR exits, the snapshot data to be backed up might be recycled by the GC mechanism, causing the backup to fail. For more information, see [User document](/br/br-checkpoint.md). 
- -* PITR performance improved remarkably **tw@shichun-0415 - - In the log restore stage, the restore speed of one TiKV can reach xx MB/s, which is x times faster than before. The restore speed is scalable and the RTO in DR scenarios is reduced greatly. The RPO in DR scenarios can be as short as 5 minutes. In normal cluster operation and maintenance (OM), for example, a rolling upgrade is performed or only one TiKV is down, the RPO can be 5 minutes. - -* TiKV-BR GA: Supports backing up and restoring RawKV [#67](https://github.com/tikv/migration/issues/67) @[pingyu](https://github.com/pingyu) @[haojinming](https://github.com/haojinming) **tw@shichun-0415** - - TiKV-BR is a backup and restore tool used in TiKV clusters. TiKV and PD can constitute a KV database when used without TiDB, which is called RawKV. TiKV-BR supports data backup and restore for products that use RawKV. TiKV-BR can also upgrade the [`api-version`](/tikv-configuration-file.md#api-version-new-in-v610) from `API V1` to `API V2` for TiKV cluster. - - For more information, see [User document](https://tikv.org/docs/latest/concepts/explore-tikv-features/backup-restore/). - -## Compatibility changes - -### System variables - -| Variable name | Change type | Description | -|--------|------------------------------|------| -|[`tidb_enable_amend_pessimistic_txn`](/system-variables.md#tidb_enable_amend_pessimistic_txn-new-in-v407)| Deprecated | Starting from v6.5.0, this variable is deprecated, and TiDB uses the [Metadata Lock](/metadata-lock.md) feature by default to avoid the `Information schema is changed` error. | -| [`tidb_cost_model_version`](/system-variables.md#tidb_cost_model_version-introduced-new-in-v620) | Modified | Changes the default value from `1` to `2`, meaning that Cost Model Version 2 is used for index selection and operator selection by default. | -| [`tidb_enable_metadata_lock`](/system-variables.md#tidb_enable_metadata_lock-new-in-v630) | Modified | Changes the default value from `OFF` to `ON`, meaning that the metadata lock feature is enabled by default. | -| [`tidb_enable_tiflash_read_for_write_stmt`](/system-variables.md#tidb_enable_tiflash_read_for_write_stmt-new-in-v630) | Modified | Takes effect starting from v6.5.0. It controls whether read operations in SQL statements containing `INSERT`, `DELETE`, and `UPDATE` can be pushed down to TiFlash. The default value is `OFF`. | -| [`tidb_ddl_enable_fast_reorg`](/system-variables.md#tidb_ddl_enable_fast_reorg-new-in-v630) | Modified | Changes the default value from `OFF` to `ON`, meaning that the acceleration of `ADD INDEX` and `CREATE INDEX` is enabled by default. | -| [`tidb_mem_quota_query`](/system-variables.md#tidb_mem_quota_query) | Modified | For versions earlier than TiDB v6.5.0, this variable is used to set the threshold value of memory quota for a query. For TiDB v6.5.0 and later versions, this variable is used to set the threshold value of memory quota for a session. 
| -| [`tidb_replica_read`](/system-variables.md#tidb_replica_read-new-in-v40) | Modified | Starting from v6.5.0, when this variable is set to`closest-adaptive` and the estimated result of a read request is greater than or equal to [`tidb_adaptive_closest_read_threshold`](/system-variables.md#tidb_adaptive_closest_read_threshold-new-in-v630), the number of TiDB nodes whose `closest-adaptive` configuration takes effect is limited in each availability zone, which is always the same as the number of TiDB nodes in the availability zone with the fewest TiDB nodes, and the other TiDB nodes automatically read from the leader replica. | -| [`tidb_server_memory_limit`](/system-variables.md#tidb_server_memory_limit-new-in-v640) | Modified | Changes the default value from `0` to `80%`, meaning that the memory limit for a TiDB instance is 80% of the total memory by default. | -| [`default_password_lifetime`](/system-variables.md#default_password_lifetime-new-in-v650) | Newly added | Sets the global policy for automatic password expiration to require users to change passwords periodically. The default value `0` indicates that passwords never expire. | -| [`disconnect_on_expired_password`](/system-variables.md#disconnect_on_expired_password-new-in-v650) | Newly added | Indicates whether TiDB disconnects the client connection when the password is expired. This variable is read-only. | -| [`password_history`](/system-variables.md#password_history-new-in-v650) | Newly added | This variable is used to establish a password reuse policy that allows TiDB to limit password reuse based on the number of password changes. The default value `0` means disabling the password reuse policy based on the number of password changes. | -| [`password_reuse_interval`](/system-variables.md#password_reuse_interval-new-in-v650) | Newly added | This variable is used to establish a password reuse policy that allows TiDB to limit password reuse based on time elapsed. The default value `0` means disabling the password reuse policy based on time elapsed. | -| [`tidb_cdc_write_source`](/system-variables.md#tidb_cdc_write_source-new-in-v650) | Newly added | When this variable is set to a value other than 0, data written in this session is considered to be written by TiCDC. This variable can only be modified by TiCDC. Do not manually modify this variable in any case. | -| [`tidb_index_merge_intersection_concurrency`](/system-variables.md#tidb_index_merge_intersection_concurrency-new-in-v650) | Newly added | Sets the maximum concurrency for the intersection operations that index merge performs. It is effective only when TiDB accesses partitioned tables in the dynamic pruning mode. | -| [`tidb_source_id`](/system-variables.md#tidb_source_id-new-in-v650) | Newly added | This variable is used to configure the different cluster IDs in a [bi-directional replication](/ticdc/manage-ticdc.md#bi-directional-replication) cluster.| -| [`tidb_ttl_delete_batch_size`](/system-variables.md#tidb_ttl_delete_batch_size-new-in-v650) | Newly added | This variable is used to set the maximum number of rows that can be deleted in a single `DELETE` transaction in a TTL job. | -| [`tidb_ttl_delete_rate_limit`](/system-variables.md#tidb_ttl_delete_rate_limit-new-in-v650) | Newly added | This variable is used to limit the maximum number of `DELETE` statements allowed per second in a single node in a TTL job. When this variable is set to `0`, no limit is applied. 
| -| [`tidb_ttl_delete_worker_count`](/system-variables.md#tidb_ttl_delete_worker_count-new-in-v650) | Newly added | This variable is used to set the maximum concurrency of TTL jobs on each TiDB node. | -| [`tidb_ttl_job_enable`](/system-variables.md#tidb_ttl_job_enable-new-in-v650) | Newly added | This variable is used to control whether to enable TTL jobs. If it is set to `OFF`, all tables with TTL attributes automatically stop cleaning up expired data. | -| [`tidb_ttl_job_run_interval`](/system-variables.md#tidb_ttl_job_run_interval-new-in-v650) | Newly added | This variable is used to control the scheduling interval of the TTL job in the background. For example, if the current value is set to `1h0m0s`, each table with TTL attributes will clean up expired data once every hour. | -| [`tidb_ttl_job_schedule_window_start_time`](/system-variables.md#tidb_ttl_job_schedule_window_start_time-new-in-v650) | Newly added | This variable is used to control the start time of the scheduling window of the TTL job in the background. When you modify the value of this variable, be cautious that a small window might cause the cleanup of expired data to fail. | -| [`tidb_ttl_job_schedule_window_end_time`](/system-variables.md#tidb_ttl_job_schedule_window_end_time-new-in-v650) | Newly added | This variable is used to control the end time of the scheduling window of the TTL job in the background. When you modify the value of this variable, be cautious that a small window might cause the cleanup of expired data to fail. | -| [`tidb_ttl_scan_batch_size`](/system-variables.md#tidb_ttl_scan_batch_size-new-in-v650) | Newly added | This variable is used to set the `LIMIT` value of each `SELECT` statement used to scan expired data in a TTL job. | -| [`tidb_ttl_scan_worker_count`](/system-variables.md#tidb_ttl_scan_worker_count-new-in-v650) | Newly added | This variable is used to set the maximum concurrency of TTL scan jobs on each TiDB node. | -| [`validate_password.check_user_name`](/system-variables.md#validate_passwordcheck_user_name-new-in-v650) | Newly added | A check item in the password complexity check. It checks whether the password matches the username. This variable takes effect only when [`validate_password.enable`](/system-variables.md#validate_passwordenable-new-in-v650) is enabled. The default value is `ON`. | -| [`validate_password.dictionary`](/system-variables.md#validate_passworddictionary-new-in-v650) | Newly added | A check item in the password complexity check. It checks whether the password matches any word in the dictionary. This variable takes effect only when [`validate_password.enable`](/system-variables.md#validate_passwordenable-new-in-v650) is enabled and [`validate_password.policy`](/system-variables.md#validate_passwordpolicy-new-in-v650) is set to `2` (STRONG). The default value is `""`. | -| [`validate_password.enable`](/system-variables.md#validate_passwordenable-new-in-v650) | Newly added | This variable controls whether to perform password complexity check. If this variable is set to `ON`, TiDB performs the password complexity check when you set a password. The default value is `OFF`. | -| [`validate_password.length`](/system-variables.md#validate_passwordlength-new-in-v650) | Newly added | A check item in the password complexity check. It checks whether the password length is sufficient. By default, the minimum password length is `8`. This variable takes effect only when [`validate_password.enable`](/system-variables.md#validate_passwordenable-new-in-v650) is enabled. 
| -| [`validate_password.mixed_case_count`](/system-variables.md#validate_passwordmixed_case_count-new-in-v650) | Newly added | A check item in the password complexity check. It checks whether the password contains sufficient uppercase and lowercase letters. This variable takes effect only when [`validate_password.enable`](/system-variables.md#validate_passwordenable-new-in-v650) is enabled and [`validate_password.policy`](/system-variables.md#validate_passwordpolicy-new-in-v650) is set to `1` (MEDIUM) or larger. The default value is `1`. | -| [`validate_password.number_count`](/system-variables.md#validate_passwordnumber_count-new-in-v650) | Newly added | A check item in the password complexity check. It checks whether the password contains sufficient numbers. This variable takes effect only when [`validate_password.enable`](/system-variables.md#password_reuse_interval-new-in-v650) is enabled and [`validate_password.policy`](/system-variables.md#validate_passwordpolicy-new-in-v650) is set to `1` (MEDIUM) or larger. The default value is `1`. | -| [`validate_password.policy`](/system-variables.md#validate_passwordpolicy-new-in-v650) | Newly added | This variable controls the policy for the password complexity check. The value can be `0`, `1`, or `2` (corresponds to LOW, MEDIUM, or STRONG). This variable takes effect only when [`validate_password.enable`](/system-variables.md#password_reuse_interval-new-in-v650) is enabled. The default value is `1`. | -| [`validate_password.special_char_count`](/system-variables.md#validate_passwordspecial_char_count-new-in-v650) | Newly added | A check item in the password complexity check. It checks whether the password contains sufficient special characters. This variable takes effect only when [`validate_password.enable`](/system-variables.md#password_reuse_interval-new-in-v650) is enabled and [`validate_password.policy`](/system-variables.md#validate_passwordpolicy-new-in-v650) is set to `1` (MEDIUM) or larger. The default value is `1`. | - -### Configuration file parameters - -| Configuration file | Configuration parameter | Change type | Description | -| -------- | -------- | -------- | -------- | -| TiDB | [`server-memory-quota`](/tidb-configuration-file.md#server-memory-quota-new-in-v409) | Deprecated | Since v6.5.0, this configuration item is deprecated. Instead, use the system variable [`tidb_server_memory_limit`](/system-variables.md#tidb_server_memory_limit-new-in-v640) to manage memory globally. | -| TiDB | [`disconnect-on-expired-password`](/tidb-configuration-file.md#disconnect-on-expired-password-new-in-v650) | Newly added | Determines whether TiDB disconnects the client connection when the password is expired. The default value is `true`, which means the client connection is disconnected when the password is expired. | -| TiKV | `raw-min-ts-outlier-threshold` | Deleted | Since v6.4.0, this configuration item was deprecated. Since v6.5.0, this configuration item is deleted. | -| TiKV | [`cdc.min-ts-interval`](/tikv-configuration-file.md#min-ts-interval) | Modified | Change the default value from `1s` to `200ms`. | -| TiKV | [`memory-use-ratio`](/tikv-configuration-file.md#memory-use-ratio-introduced-new-in-v650) | Newly added | Indicates the ratio of available memory to total system memory in PITR log recovery. | - -### Others - -- Starting from v6.5.0, the `mysql.user` table adds two new columns: `Password_reuse_history` and `Password_reuse_time`. 
-- The [index acceleration](/system-variables.md#tidb_ddl_enable_fast_reorg-new-in-v630) feature is enabled by default and is not compatible with the [PITR (Point-in-time recovery)](/br/br-pitr-guide.md) feature. When using the index acceleration feature, you need to make sure that no PITR backup task is running in the background; otherwise, unexpected results might occur. For more information, see [tidb_ddl_enable_fast_reorg](/system-variables.md#tidb_ddl_enable_fast_reorg-new-in-v630). - -## Deprecated feature - -Starting from v6.5.0, the [`AMEND TRANSACTION`](/system-variables.md#tidb_enable_amend_pessimistic_txn-new-in-v407) mechanism introduced in v4.0.7 is deprecated and replaced by [Metadata Lock](/metadata-lock.md). - -## Improvements - -+ TiDB - - - For `BIT` and `CHAR` columns, make the result of `INFORMATION_SCHEMA.COLUMNS` consistent with MySQL [#25472](https://github.com/pingcap/tidb/issues/25472) @[hawkingrei](https://github.com/hawkingrei) - -+ TiKV - - - Stop writing to Raft Engine when there is insufficient space to avoid exhausting disk space [#13642](https://github.com/tikv/tikv/issues/13642) @[jiayang-zheng](https://github.com/jiayang-zheng) - - Support pushing down the `json_valid` function to TiKV [#13571](https://github.com/tikv/tikv/issues/13571) @[lizhenhuan](https://github.com/lizhenhuan) - - Support backing up multiple ranges of data in a single backup request [#13701](https://github.com/tikv/tikv/issues/13701) @[Leavrth](https://github.com/Leavrth) - - Support backing up data to the Asia Pacific region (ap-southeast-3) of AWS by updating the rusoto library [#13751](https://github.com/tikv/tikv/issues/13751) @[3pointer](https://github.com/3pointer) - - Reduce pessimistic transaction conflicts [#13298](https://github.com/tikv/tikv/issues/13298) @[MyonKeminta](https://github.com/MyonKeminta) - - Improve recovery performance by caching external storage objects [#13798](https://github.com/tikv/tikv/issues/13798) @[YuJuncen](https://github.com/YuJuncen) - - Run the CheckLeader in a dedicated thread to reduce TiCDC replication latency [#13774](https://github.com/tikv/tikv/issues/13774) @[overvenus](https://github.com/overvenus) - - Support pull model for Checkpoints [#13824](https://github.com/tikv/tikv/issues/13824) @[YuJuncen](https://github.com/YuJuncen) - - Avoid spinning issues on the sender side by updating crossbeam-channel [#13815](https://github.com/tikv/tikv/issues/13815) @[sticnarf](https://github.com/sticnarf) - - Support batch Coprocessor tasks processing in TiKV [#13849](https://github.com/tikv/tikv/issues/13849) @[cfzjywxk](https://github.com/cfzjywxk) - - Reduce waiting time on failure recovery by notifying TiKV to wake up Regions [#13648](https://github.com/tikv/tikv/issues/13648) @[LykxSassinator](https://github.com/LykxSassinator) - - Reduce the requested size of memory usage by code optimization [#13827](https://github.com/tikv/tikv/issues/13827) @[BusyJay](https://github.com/BusyJay) - - Introduce the Raft extension to improve code extensibility [#13827](https://github.com/tikv/tikv/issues/13827) @[BusyJay](https://github.com/BusyJay) - - Support using tikv-ctl to query which Regions are included in a certain key range [#13760](https://github.com/tikv/tikv/issues/13760) [@HuSharp](https://github.com/HuSharp) - - Improve read and write performance for rows that are not updated but locked continuously [#13694](https://github.com/tikv/tikv/issues/13694) [@sticnarf](https://github.com/sticnarf) - -+ PD - - - Optimize the granularity of locks to reduce 
lock contention and improve the handling capability of heartbeats under high concurrency [#5586](https://github.com/tikv/pd/issues/5586) @[rleungx](https://github.com/rleungx) - - Optimize scheduler performance for large-scale clusters and accelerate the production of scheduling policies [#5473](https://github.com/tikv/pd/issues/5473) @[bufferflies](https://github.com/bufferflies) - - Improve the speed of loading Regions [#5606](https://github.com/tikv/pd/issues/5606) @[rleungx](https://github.com/rleungx) - - Reduce unnecessary overhead by optimized handling of Region heartbeats [#5648](https://github.com/tikv/pd/issues/5648)@[rleungx](https://github.com/rleungx) - - Add the feature of automatically garbage collecting tombstone stores [#5348](https://github.com/tikv/pd/issues/5348) @[nolouch](https://github.com/nolouch) - -+ TiFlash - - - Improve write performance in scenarios where there is no batch processing on the SQL side [#6404](https://github.com/pingcap/tiflash/issues/6404) @[lidezhu](https://github.com/lidezhu) - - Add more details for TableFullScan in the `explain analyze` output [#5926](https://github.com/pingcap/tiflash/issues/5926) @[hongyunyan](https://github.com/hongyunyan) - -+ Tools - - + TiDB Dashboard - - - Add three new fields to the slow query page: "Is Prepared?","Is Plan from Cache?","Is Plan from Binding?" [#1451](https://github.com/pingcap/tidb-dashboard/issues/1451) @[shhdgit](https://github.com/shhdgit) - - + Backup & Restore (BR) - - - Optimize BR memory usage during the process of cleaning backup log data [#38869](https://github.com/pingcap/tidb/issues/38869) @[Leavrth](https://github.com/Leavrth) - - (dup) Fix the restoration failure issue caused by PD leader switch during the restoration process [#36910](https://github.com/pingcap/tidb/issues/36910) @[MoCuishle28](https://github.com/MoCuishle28) - - Improve TLS compatibility by using the OpenSSL protocol in log backup [#13867](https://github.com/tikv/tikv/issues/13867) @[YuJuncen](https://github.com/YuJuncen) - - + TiCDC - - - (dup) Improve the performance of Kafka protocol encoder [#7540](https://github.com/pingcap/tiflow/issues/7540) [#7532](https://github.com/pingcap/tiflow/issues/7532) [#7543](https://github.com/pingcap/tiflow/issues/7543) @[3AceShowHand](https://github.com/3AceShowHand) @[sdojjy](https://github.com/sdojjy) - - + TiDB Data Migration (DM) - - - Improve the data replication performance for DM by not parsing the data of tables in the block list [#7622](https://github.com/pingcap/tiflow/pull/7622) @[GMHDBJD](https://github.com/GMHDBJD) - - Improve the write efficiency of DM relay by using asynchronous write and batch write [#7580](https://github.com/pingcap/tiflow/pull/7580) @[GMHDBJD](https://github.com/GMHDBJD) - - Optimize the error messages in DM precheck [#7621](https://github.com/pingcap/tiflow/issues/7621) @[buchuitoudegou](https://github.com/buchuitoudegou) - - Improve the compatibility of `SHOW SLAVE HOSTS` for old MySQL versions [#5017](https://github.com/pingcap/tiflow/issues/5017) @[lyzx2001](https://github.com/lyzx2001) - -## Bug fixes - -+ TiDB - - - Fix the issue of memory chunk misuse for the chunk reuse feature that occurs in some cases [#38917](https://github.com/pingcap/tidb/issues/38917) @[keeplearning20221](https://github.com/keeplearning20221) - - Fix the issue that the internal sessions of `tidb_constraint_check_in_place_pessimistic` might be affected by the global setting [#38766](https://github.com/pingcap/tidb/issues/38766) @[ekexium](https://github.com/ekexium) - - 
Fix the issue that the `AUTO_INCREMENT` column cannot work with the `CHECK` constraint [#38894](https://github.com/pingcap/tidb/issues/38894) @[YangKeao](https://github.com/YangKeao) - - Fix the issue that using `INSERT IGNORE INTO` to insert data of the `STRING` type into an auto-increment column of the `SMALLINT` type will cause an error [#38483](https://github.com/pingcap/tidb/issues/38483) @[hawkingrei](https://github.com/hawkingrei) - - Fix the issue that the null pointer error occurs in the operation of renaming the partition column of a partitioned table [#38932](https://github.com/pingcap/tidb/issues/38932) @[mjonss](https://github.com/mjonss) - - Fix the issue that modifying the partition column of a partitioned table causes DDL to hang [#38530](https://github.com/pingcap/tidb/issues/38530) @[mjonss](https://github.com/mjonss) - - Fix the issue that the `ADMIN SHOW JOB` operation panics after upgrading from v4.0.16 to v6.4.0 [#38980](https://github.com/pingcap/tidb/issues/38980) @[tangenta](https://github.com/tangenta) - - Fix the issue that the `tidb_decode_key` function fails to correctly parse the encoding of partitioned tables [#39304](https://github.com/pingcap/tidb/issues/39304) @[Defined2014](https://github.com/Defined2014) - - Fixe the issue that gRPC error logs are not redirected to the correct log file during log rotation [#38941](https://github.com/pingcap/tidb/issues/38941) @[xhebox](https://github.com/xhebox) - - Fix the issue that TiDB generates an unexpected execution plan for the `BEGIN; SELECT... FOR UPDATE;` point query when TiKV is not configured as a read engine [#39344](https://github.com/pingcap/tidb/issues/39344) @[Yisaer](https://github.com/Yisaer) - - Fix the issue that mistakenly pushing down `StreamAgg` to TiFlash causes wrong results [#39266](https://github.com/pingcap/tidb/issues/39266) @[fixdb](https://github.com/fixdb) - -+ TiKV - - - Fix an error in Raft Engine ctl [#11119](https://github.com/tikv/tikv/issues/11119) @[tabokie](https://github.com/tabokie) - - Fix the `Get raft db is not allowed` error when executing the `compact raft` command in tikv-ctl [#13515](https://github.com/tikv/tikv/issues/13515) @[guoxiangCN](https://github.com/guoxiangCN) - - Fix the issue that log backup does not work when TLS is enabled [#13867](https://github.com/tikv/tikv/issues/13867) @[YuJuncen](https://github.com/YuJuncen) - - Fix the support issue of the Geometry field type [#13651](https://github.com/tikv/tikv/issues/13651) @[dveeden](https://github.com/dveeden) - - Fix the issue that `_` in the `LIKE` operator cannot match non-ASCII characters when new collation is not enabled [#13769](https://github.com/tikv/tikv/issues/13769) @[YangKeao](https://github.com/YangKeao) - - Fix the issue that tikv-ctl is terminated unexpectedly when executing the `reset-to-version` command [#13829](https://github.com/tikv/tikv/issues/13829) @[tabokie](https://github.com/tabokie) - -+ PD - - - Fix the issue that the `balance-hot-region-scheduler` configuration is not persisted if not modified [#5701](https://github.com/tikv/pd/issues/5701) @[HunDunDM](https://github.com/HunDunDM) - - Fix the issue that `rank-formula-version` does not retain the pre-upgrade configuration during the upgrade process [#5698](https://github.com/tikv/pd/issues/5698) @[HunDunDM](https://github.com/HunDunDM) - -+ TiFlash - - - Fix the issue that column files in the delta layer cannot be compacted after restarting TiFlash [#6159](https://github.com/pingcap/tiflash/issues/6159) 
@[lidezhu](https://github.com/lidezhu) - - Fix the issue that TiFlash File Open OPS is too high [#6345](https://github.com/pingcap/tiflash/issues/6345) @[JaySon-Huang](https://github.com/JaySon-Huang) - -+ Tools - - + Backup & Restore (BR) - - - (dup) Fix the issue that when BR deletes log backup data, it mistakenly deletes data that should not be deleted [#38939](https://github.com/pingcap/tidb/issues/38939) @[Leavrth](https://github.com/Leavrth) - - (dup) Fix the issue that restore tasks fail when using old framework for collations in databases or tables [#39150](https://github.com/pingcap/tidb/issues/39150) @[MoCuishle28](https://github.com/MoCuishle28) - - Fix the issue that backup fails because Alibaba Cloud and Huawei Cloud are not fully compatible with Amazon S3 storage [#39545](https://github.com/pingcap/tidb/issues/39545) @[3pointer](https://github.com/3pointer) - - + TiCDC - - - Fix the issue that TiCDC gets stuck when the PD leader crashes [#7470](https://github.com/pingcap/tiflow/issues/7470) @[zeminzhou](https://github.com/zeminzhou) - - (dup) Fix data loss occurred in the scenario of executing DDL statements first and then pausing and resuming the changefeed [#7682](https://github.com/pingcap/tiflow/issues/7682) @[asddongmen](https://github.com/asddongmen) - - Fix the issue that TiCDC mistakenly reports an error when there is a later version of TiFlash [#7744](https://github.com/pingcap/tiflow/issues/7744) @[overvenus](https://github.com/overvenus) - - (dup) Fix the issue that the sink component gets stuck if the downstream network is unavailable [#7706](https://github.com/pingcap/tiflow/issues/7706) @[hicqu](https://github.com/hicqu) - - Fix the issue that data is lost when a user quickly deletes a replication task and then creates another one with the same task name [#7657](https://github.com/pingcap/tiflow/issues/7657) @[overvenus](https://github.com/overvenus) - - + TiDB Data Migration (DM) - - - Fix the issue that a `task-mode:all` task cannot be started when the upstream database enables the GTID mode but does not have any data [#7037](https://github.com/pingcap/tiflow/issues/7037) @[liumengya94](https://github.com/liumengya94) - - Fix the issue that data is replicated for multiple times when a new DM worker is scheduled before the existing worker exits [#7658](https://github.com/pingcap/tiflow/issues/7658) @[GMHDBJD](https://github.com/GMHDBJD) - - Fix the issue that DM precheck is not passed when the upstream database uses regular expressions to grant privileges [#7645](https://github.com/pingcap/tiflow/issues/7645) @[lance6716](https://github.com/lance6716) - - + TiDB Lightning - - - Fix the memory leakage issue when TiDB Lightning imports a huge source data file [#39331](https://github.com/pingcap/tidb/issues/39331) @[dsdashun](https://github.com/dsdashun) - - Fix the issue that TiDB Lightning cannot detect conflicts correctly when importing data in parallel [#39476](https://github.com/pingcap/tidb/issues/39476) @[dsdashun](https://github.com/dsdashun) - -## Contributors - -We would like to thank the following contributors from the TiDB community: - -- [e1ijah1](https://github.com/e1ijah1) -- [guoxiangCN](https://github.com/guoxiangCN) (First-time contributor) -- [jiayang-zheng](https://github.com/jiayang-zheng) -- [jiyfhust](https://github.com/jiyfhust) -- [mikechengwei](https://github.com/mikechengwei) -- [pingandb](https://github.com/pingandb) -- [sashashura](https://github.com/sashashura) -- [sourcelliu](https://github.com/sourcelliu) -- 
[wxbty](https://github.com/wxbty)

From b4eccb963e95bcffa0ad91e445d8bf2f3fcd4a42 Mon Sep 17 00:00:00 2001
From: Ran
Date: Fri, 16 Dec 2022 13:18:24 +0800
Subject: [PATCH 41/83] Revert "remove dup label"

This reverts commit 60655c72f8710bc3ba17ad811044df152bfcf4e5.

---
 releases/release-6.5.0.md | 471 ++++++++++++++++++++++++++++++++++++++
 1 file changed, 471 insertions(+)

diff --git a/releases/release-6.5.0.md b/releases/release-6.5.0.md
index e69de29bb2d1d..511b09ca98cee 100644
--- a/releases/release-6.5.0.md
+++ b/releases/release-6.5.0.md
@@ -0,0 +1,471 @@
+---
+title: TiDB 6.5.0 Release Notes
+---
+
+# TiDB 6.5.0 Release Notes
+
+Release date: xx xx, 2022
+
+TiDB version: 6.5.0
+
+Quick access: [Quick start](https://docs.pingcap.com/tidb/v6.5/quick-start-with-tidb) | [Production deployment](https://docs.pingcap.com/tidb/v6.5/production-deployment-using-tiup) | [Installation packages](https://www.pingcap.com/download/?version=v6.5.0#version-list)
+
+TiDB 6.5.0 is a Long-Term Support Release (LTS).
+
+Compared with the previous LTS 6.1.0, 6.5.0 not only includes new features, improvements, and bug fixes released in [6.2.0-DMR](/releases/release-6.2.0.md), [6.3.0-DMR](/releases/release-6.3.0.md), and [6.4.0-DMR](/releases/release-6.4.0.md), but also introduces the following key features and improvements:
+
+- Enable [index acceleration](/system-variables.md#tidb_ddl_enable_fast_reorg-new-in-v630) by default, which improves the performance of adding indexes by 10 times compared with v6.1.
+- Support TiDB global memory control via [`tidb_server_memory_limit`](/system-variables.md#tidb_server_memory_limit-new-in-v640).
+- Support a high-performance and globally monotonic [`AUTO_INCREMENT`](/auto-increment.md#mysql-compatibility-mode) column attribute, compatible with MySQL.
+- Support [`FLASHBACK CLUSTER TO TIMESTAMP`](/sql-statements/sql-statement-flashback-to-timestamp.md), compatible with TiCDC and PITR.
+- Enhance the [optimizer cost model](/cost-model.md#cost-model-version-2) and further improve the [INDEX MERGE](/glossary.md#index-merge) feature.
+- Support pushing down the `JSON_EXTRACT()` function to TiFlash.
+- Support [password management](/password-management.md) policies that meet password compliance auditing requirements.
+- TiDB Lightning and Dumpling support [importing](/tidb-lightning/tidb-lightning-data-source.md) and [exporting](/dumpling-overview.md#improve-export-efficiency-through-concurrency) compressed SQL and CSV files.
+- TiDB Data Migration (DM) supports [continuous data validation](/dm/dm-continuous-data-validation.md).
+- TiDB Backup & Restore supports snapshot checkpoint backup, improves the recovery performance of [PITR](/br/br-pitr-guide.md) by x times, and reduces RPO to x minutes.
+- Improve the TiCDC throughput of [replicating data to Kafka](/replicate-data-to-kafka.md) by x times and reduce replication latency to x seconds.
+- Provide row-level [Time to live (TTL)](/time-to-live.md) to manage data lifecycle (experimental).
+- TiCDC supports [replicating change logs to object storage](/ticdc/ticdc-sink-to-cloud-storage.md) such as Amazon S3, Azure Blob Storage, and NFS (experimental).
+
+## New features
+
+### SQL
+
+* The performance of TiDB adding indexes is improved by 10 times [#35983](https://github.com/pingcap/tidb/issues/35983) @[benjamin2037](https://github.com/benjamin2037) @[tangenta](https://github.com/tangenta) **tw@Oreoxmt**
+
+    TiDB v6.3.0 introduces [add index acceleration](/system-variables.md#tidb_ddl_enable_fast_reorg-new-in-v630) as an experimental feature to improve the speed of backfilling when creating an index. In v6.5.0, this feature becomes GA and is enabled by default, and the performance on large tables is expected to be 10 times faster. The acceleration feature is suitable for scenarios where a single SQL statement adds an index serially. When multiple SQL statements add indexes in parallel, only one of the SQL statements is accelerated.
+
+* Provide a lightweight metadata lock to improve the DML success rate during DDL changes [#37275](https://github.com/pingcap/tidb/issues/37275) @[wjhuang2016](https://github.com/wjhuang2016) **tw@Oreoxmt**
+
+    TiDB v6.3.0 introduces [Metadata lock](/metadata-lock.md) as an experimental feature. To avoid the `Information schema is changed` error reported by DML statements, TiDB coordinates the priority of DML and DDL statements during table metadata changes, and makes ongoing DDL statements wait for DML statements that hold the old metadata to commit. In v6.5.0, this feature becomes GA and is enabled by default. It is suitable for various DDL change scenarios.
+
+    For more information, see [User document](/metadata-lock.md).
+
+* Support restoring a cluster to a specific point in time by using `FLASHBACK CLUSTER TO TIMESTAMP` [#37197](https://github.com/pingcap/tidb/issues/37197) [#13303](https://github.com/tikv/tikv/issues/13303) @[Defined2014](https://github.com/Defined2014) @[bb7133](https://github.com/bb7133) @[JmPotato](https://github.com/JmPotato) @[Connor1996](https://github.com/Connor1996) @[HuSharp](https://github.com/HuSharp) @[CalvinNeo](https://github.com/CalvinNeo) **tw@Oreoxmt**
+
+    TiDB v6.4.0 introduces the [`FLASHBACK CLUSTER TO TIMESTAMP`](/sql-statements/sql-statement-flashback-to-timestamp.md) statement as an experimental feature. You can use this statement to restore a cluster to a specific point in time within the Garbage Collection (GC) lifetime. In v6.5.0, this statement becomes GA. This feature helps you easily undo DML misoperations, restore a cluster in minutes, and roll back data at different points in time to determine when specific data changes occurred. It is also compatible with PITR and TiCDC.
+
+    For more information, see [user document](/sql-statements/sql-statement-flashback-to-timestamp.md).
+
+* Fully support non-transactional DML statements including `INSERT`, `REPLACE`, `UPDATE`, and `DELETE` [#33485](https://github.com/pingcap/tidb/issues/33485) @[ekexium](https://github.com/ekexium) **tw@Oreoxmt**
+
+    In large batch data processing scenarios, a single SQL statement that runs in one large transaction might negatively affect cluster stability and performance. A non-transactional DML statement is a DML statement split into multiple SQL statements for internal execution. The split statements compromise transaction atomicity and isolation but greatly improve cluster stability. TiDB supports non-transactional `DELETE` statements starting from v6.1.0, and supports non-transactional `INSERT`, `REPLACE`, and `UPDATE` statements starting from v6.5.0.
+
+    For more information, see [Non-Transactional DML statements](/non-transactional-dml.md) and [`BATCH` syntax](/sql-statements/sql-statement-batch.md).
+
+* Support time to live (TTL) (experimental) [#39262](https://github.com/pingcap/tidb/issues/39262) @[lcwangchao](https://github.com/lcwangchao) **tw@ran-huang**
+
+    TTL provides row-level data lifetime management. In TiDB, a table with the TTL attribute automatically checks data lifetime and deletes expired data at the row level. TTL is designed to help you clean up unnecessary data periodically and in a timely manner without affecting online read and write workloads.
+
+    For more information, see [User document](/time-to-live.md).
+
+* Support saving TiFlash query results using the `INSERT INTO SELECT` statement (experimental) [#37515](https://github.com/pingcap/tidb/issues/37515) @[gengliqi](https://github.com/gengliqi) **tw@qiancai**
+
+    Starting from v6.5.0, TiDB supports pushing down the `SELECT` clause (analytical query) of the `INSERT INTO SELECT` statement to TiFlash. In this way, you can easily save the TiFlash query result to a TiDB table specified by `INSERT INTO` for further analysis, which serves as a form of result caching (that is, result materialization). For example:
+
+    ```sql
+    INSERT INTO t2 SELECT Mod(x,y) FROM t1;
+    ```
+
+    During the experimental phase, this feature is disabled by default. To enable it, you can set the [`tidb_enable_tiflash_read_for_write_stmt`](/system-variables.md#tidb_enable_tiflash_read_for_write_stmt-new-in-v630) system variable to `ON`. There are no special restrictions on the result table specified by `INSERT INTO`, and you are free to decide whether to add a TiFlash replica to that result table. Typical usage scenarios of this feature include:
+
+    - Run complex analytical queries using TiFlash
+    - Reuse TiFlash query results or deal with highly concurrent online requests
+    - Produce a result set that is relatively small compared with the input data, preferably smaller than 100 MiB
+
+    For more information, see the [user documentation](/tiflash/tiflash-results-materialization.md).
+
+* Support binding historical execution plans (experimental) [#39199](https://github.com/pingcap/tidb/issues/39199) @[fzzf678](https://github.com/fzzf678) **tw@qiancai**
+
+    Due to various factors during execution, the optimizer might occasionally choose a new execution plan for a SQL statement instead of the previously optimal one, which degrades SQL performance. In this case, the optimal execution plan still exists in the SQL execution history as long as it has not been cleared.
+
+    In v6.5.0, TiDB supports binding historical execution plans by extending the binding object in the [`CREATE [GLOBAL | SESSION] BINDING`](/sql-statements/sql-statement-create-binding.md) statement. When the execution plan of a SQL statement changes, you can bind the original execution plan by specifying `plan_digest` in the `CREATE [GLOBAL | SESSION] BINDING` statement to quickly recover SQL performance, as long as the original execution plan is still in a SQL execution history memory table (for example, `statements_summary`). This feature can simplify the process of handling execution plan changes and improve your maintenance efficiency.
+
+    For more information, see [user documentation](/sql-plan-management.md#bind-historical-execution-plans).
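+
+    The following is a minimal sketch of this workflow. The query pattern in the `LIKE` clause is hypothetical, and `'<plan_digest>'` is a placeholder for the digest value that you copy from the result of the first query:
+
+    ```sql
+    -- Look up the digest of the historical plan in the statement summary table.
+    SELECT query_sample_text, plan_digest
+    FROM information_schema.statements_summary
+    WHERE query_sample_text LIKE 'SELECT %FROM t WHERE%';
+
+    -- Bind that plan by its digest ('<plan_digest>' is a placeholder, not a real value).
+    CREATE GLOBAL BINDING FROM HISTORY USING PLAN DIGEST '<plan_digest>';
+    ```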
+
+### Security
+
+* Support the password complexity policy [#38928](https://github.com/pingcap/tidb/issues/38928) @[CbcWestwolf](https://github.com/CbcWestwolf) **tw@ran-huang**
+
+    After this policy is enabled, when you set a password, TiDB checks the password length, whether the password contains sufficient uppercase letters, lowercase letters, digits, and special characters, whether the password matches any word in the dictionary, and whether the password matches the username. This ensures that you set a secure password.
+
+    TiDB provides the SQL function [`VALIDATE_PASSWORD_STRENGTH()`](https://dev.mysql.com/doc/refman/5.7/en/encryption-functions.html#function_validate-password-strength) to validate the password strength.
+
+    For more information, see [User document](/password-management.md#password-complexity-policy).
+
+* Support the password expiration policy [#38936](https://github.com/pingcap/tidb/issues/38936) @[CbcWestwolf](https://github.com/CbcWestwolf) **tw@ran-huang**
+
+    TiDB supports configuring the password expiration policy, including manual expiration, global-level automatic expiration, and account-level automatic expiration. After this policy is enabled, you must change your passwords periodically. This reduces the risk of password leakage due to long-term use and improves password security.
+
+    For more information, see [User document](/password-management.md#password-expiration-policy).
+
+* Support the password reuse policy [#38937](https://github.com/pingcap/tidb/issues/38937) @[keeplearning20221](https://github.com/keeplearning20221) **tw@ran-huang**
+
+    TiDB supports configuring the password reuse policy, including the global-level password reuse policy and the account-level password reuse policy. After this policy is enabled, you cannot use the passwords that you have used within a specified period or the most recent several passwords that you have used. This reduces the risk of password leakage caused by repeated use of passwords and improves password security.
+
+    For more information, see [User document](/password-management.md#password-reuse-policy).
+
+* Support failed-login tracking and temporary account locking policy [#38938](https://github.com/pingcap/tidb/issues/38938) @[lastincisor](https://github.com/lastincisor) **tw@ran-huang**
+
+    After this policy is enabled, if you log in to TiDB with incorrect passwords multiple times consecutively, the account is temporarily locked. After the lock time ends, the account is automatically unlocked.
+
+    For more information, see [User document](/password-management.md#failed-login-tracking-and-temporary-account-locking-policy).
+
+### Observability
+
+* TiDB Dashboard can be deployed on Kubernetes as an independent Pod [#1447](https://github.com/pingcap/tidb-dashboard/issues/1447) @[SabaPing](https://github.com/SabaPing) **tw@shichun-0415**
+
+    TiDB v6.5.0 (and later) and TiDB Operator v1.4.0 (and later) support deploying TiDB Dashboard as an independent Pod on Kubernetes. Using TiDB Operator, you can access the IP address of this Pod to start TiDB Dashboard.
+
+    Independently deploying TiDB Dashboard provides the following benefits:
+
+    - The computation of TiDB Dashboard no longer puts pressure on PD nodes, which ensures more stable cluster operation.
+    - You can still access TiDB Dashboard for cluster diagnostics even if a PD node is unavailable.
+    - Accessing TiDB Dashboard over the Internet does not involve the privileged interfaces of PD, which mitigates the security risk of the cluster.
+
+    For more information, see [Deploy TiDB Dashboard independently in TiDB Operator](https://docs.pingcap.com/tidb-in-kubernetes/dev/get-started#deploy-tidb-dashboard-independently).
+
+### Performance
+
+* [INDEX MERGE](/glossary.md#index-merge) supports conjunctive normal form (expressions connected by `AND`) [#39333](https://github.com/pingcap/tidb/issues/39333) @[guo-shaoge](https://github.com/guo-shaoge) @[time-and-fate](https://github.com/time-and-fate) @[hailanwhu](https://github.com/hailanwhu) **tw@TomShawn**
+
+    Before v6.5.0, TiDB only supported using index merge for filter conditions connected by `OR`. Starting from v6.5.0, TiDB supports using index merge for filter conditions connected by `AND` in the `WHERE` clause. In this way, index merge in TiDB can cover more general combinations of query filter conditions and is no longer limited to the union (`OR`) relationship. In v6.5.0, the optimizer automatically selects index merge only for `OR` conditions. To enable index merge for `AND` conditions, you need to use the [`USE_INDEX_MERGE`](/optimizer-hints.md#use_index_merget1_name-idx1_name--idx2_name-) hint.
+
+    For more details about index merge, see [v5.4.0 Release Notes](/releases/release-5.4.0.md#performance) and [Explain Index Merge](/explain-index-merge.md).
+
+* Support pushing down the following [JSON functions](/tiflash/tiflash-supported-pushdown-calculations.md) to TiFlash [#39458](https://github.com/pingcap/tidb/issues/39458) @[yibin87](https://github.com/yibin87) **tw@qiancai**
+
+    * `->`
+    * `->>`
+    * `JSON_EXTRACT()`
+
+    The JSON format provides a flexible way for application data modeling, so more and more applications use the JSON format for data exchange and data storage. By pushing down JSON functions to TiFlash, you can improve the efficiency of analyzing JSON data and use TiDB in more real-time analytics scenarios.
+
+* Support pushing down the following [string functions](/tiflash/tiflash-supported-pushdown-calculations.md) to TiFlash [#6115](https://github.com/pingcap/tiflash/issues/6115) @[xzhangxian1008](https://github.com/xzhangxian1008) **tw@qiancai**
+
+    * `regexp_like`
+    * `regexp_instr`
+    * `regexp_substr`
+
+* Support using global optimizer hints to interfere with execution plan generation in [Views](/views.md) [#37887](https://github.com/pingcap/tidb/issues/37887) @[Reminiscent](https://github.com/Reminiscent) **tw@Oreoxmt**
+
+    In some view access scenarios, you need to use optimizer hints to interfere with the execution plan of the query in the view to achieve the best performance. Since v6.5.0, TiDB supports adding global hints for query blocks in the view, so that the hints defined in the query can take effect in the view. This feature provides a way to inject hints into complex SQL statements that contain nested views, enhances control over execution plans, and stabilizes the performance of complex statements. To use global hints, you need to [name the query blocks](/optimizer-hints.md#step-1-define-the-query-block-name-of-the-view-using-the-qb_name-hint) and [specify hint references](/optimizer-hints.md#step-2-add-the-target-hints).
+
+    For more information, see [User document](/optimizer-hints.md#hints-that-take-effect-globally).
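+
+    As a rough illustration only (the view `v`, the table `t` referenced inside it, and the `MERGE_JOIN` choice are hypothetical; the exact `QB_NAME` path syntax is described in the documents linked above), a global hint for a query block inside a view might look like this:
+
+    ```sql
+    -- Name the first query block inside view v as qb_v, then target table t in that block.
+    -- v, t, and the choice of MERGE_JOIN are examples only.
+    SELECT /*+ QB_NAME(qb_v, v@sel_1 .@sel_1), MERGE_JOIN(t@qb_v) */ * FROM v;
+    ```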
+
+* Support pushing down sorting operations of [partitioned tables](/partitioned-table.md) to TiKV [#26166](https://github.com/pingcap/tidb/issues/26166) @[winoros](https://github.com/winoros) **tw@qiancai**
+
+    Although the [partitioned table](/partitioned-table.md) feature has been GA since v6.1.0, TiDB is continually improving its performance. In v6.5.0, TiDB supports pushing down sorting operations such as `ORDER BY` and `LIMIT` to TiKV for computation and filtering, which reduces network I/O overhead and improves SQL performance when you use partitioned tables.
+
+* Optimizer introduces a more accurate Cost Model Version 2 [#35240](https://github.com/pingcap/tidb/issues/35240) @[qw4990](https://github.com/qw4990) **tw@Oreoxmt**
+
+    TiDB v6.2.0 introduces [Cost Model Version 2](/cost-model.md#cost-model-version-2) as an experimental feature. This model uses a more accurate cost estimation method to help the optimizer choose the optimal execution plan. Especially when TiFlash is deployed, Cost Model Version 2 automatically helps choose the appropriate storage engine and avoids much manual intervention. After a period of testing in real-world scenarios, this model becomes GA in v6.5.0. Since v6.5.0, newly created clusters use Cost Model Version 2 by default. For clusters upgraded to v6.5.0, because Cost Model Version 2 might cause changes to query plans, you can set the [`tidb_cost_model_version = 2`](/system-variables.md#tidb_cost_model_version-new-in-v620) variable to use the new cost model after sufficient performance testing.
+
+    Cost Model Version 2 becomes a generally available feature that significantly improves the overall capability of the TiDB optimizer and helps TiDB evolve towards a more powerful HTAP database.
+
+    For more information, see [User document](/cost-model.md#cost-model-version-2).
+
+* TiFlash optimizes the operation of getting the number of table rows [#37165](https://github.com/pingcap/tidb/issues/37165) @[elsa0520](https://github.com/elsa0520)
+
+    In data analysis scenarios, it is a common operation to get the actual number of rows of a table through `COUNT(*)` without filter conditions. In v6.5.0, TiFlash optimizes the rewriting of `COUNT(*)` and automatically selects the not-null column with the shortest column definition to count the number of rows, which effectively reduces the number of I/O operations in TiFlash and improves the execution efficiency of getting the row count.
+
+### Stability
+
+* The global memory control feature is now GA [#37816](https://github.com/pingcap/tidb/issues/37816) @[wshwsh12](https://github.com/wshwsh12) **tw@TomShawn**
+
+    TiDB v6.4.0 introduces global memory control as an experimental feature. Since v6.5.0, the global memory control feature becomes GA and can track the main memory consumption in TiDB. When the global memory consumption reaches the threshold defined by [`tidb_server_memory_limit`](/system-variables.md#tidb_server_memory_limit-new-in-v640), TiDB tries to limit the memory usage by GC or by canceling SQL operations, to ensure stability.
+
+    Note that the memory consumed by transactions in a session (the maximum value was previously set by the configuration item [`txn-total-size-limit`](/tidb-configuration-file.md#txn-total-size-limit)) is now tracked by the memory management module: when the memory consumption of a single session reaches the threshold defined by the system variable [`tidb_mem_quota_query`](/system-variables.md#tidb_mem_quota_query), the behavior defined by the system variable [`tidb_mem_oom_action`](/system-variables.md#tidb_mem_oom_action-new-in-v610) is triggered (the default is `CANCEL`, that is, canceling operations). To ensure forward compatibility, when [`txn-total-size-limit`](/tidb-configuration-file.md#txn-total-size-limit) is configured as a non-default value, TiDB still ensures that transactions can use the amount of memory set by `txn-total-size-limit`.
+
+    If you are using TiDB v6.5.0 or later, it is recommended to remove [`txn-total-size-limit`](/tidb-configuration-file.md#txn-total-size-limit) and not to set a separate limit on the memory usage of transactions. Instead, use the system variables [`tidb_mem_quota_query`](/system-variables.md#tidb_mem_quota_query) and [`tidb_server_memory_limit`](/system-variables.md#tidb_server_memory_limit-new-in-v640) to manage global memory, which can improve the efficiency of memory usage.
+
+    For more information, see the [user document](/configure-memory-usage.md).
+
+### Ease of use
+
+* Refine the execution information of the TiFlash `TableFullScan` operator in the `EXPLAIN ANALYZE` output [#5926](https://github.com/pingcap/tiflash/issues/5926) @[hongyunyan](https://github.com/hongyunyan) **tw@qiancai**
+
+    The `EXPLAIN ANALYZE` statement is used to print execution plans and runtime statistics. In v6.5.0, TiFlash refines the execution information of the `TableFullScan` operator by adding DMFile-related execution information. Now the TiFlash data scan status information is presented more intuitively, which helps you analyze TiFlash performance more easily.
+
+    For more information, see [user documentation](/sql-statements/sql-statement-explain-analyze.md).
+
+* Support the output of execution plans in the JSON format [#39261](https://github.com/pingcap/tidb/issues/39261) @[fzzf678](https://github.com/fzzf678) **tw@ran-huang**
+
+    In v6.5.0, TiDB extends the output format of execution plans. By using `EXPLAIN FORMAT=tidb_json`, you can output SQL execution plans in the JSON format. With this capability, SQL debugging tools and diagnostic tools can read execution plans more conveniently and accurately, which improves the usability of SQL diagnosis and tuning.
+
+    For more information, see [user document](/sql-statements/sql-statement-explain.md).
+
+### MySQL compatibility
+
+* Support a high-performance and globally monotonic `AUTO_INCREMENT` [#38442](https://github.com/pingcap/tidb/issues/38442) @[tiancaiamao](https://github.com/tiancaiamao) **tw@Oreoxmt**
+
+    TiDB v6.4.0 introduces the `AUTO_INCREMENT` MySQL compatibility mode as an experimental feature. This mode introduces a centralized auto-increment ID allocation service that ensures IDs increase monotonically on all TiDB instances, which makes it easier to sort query results by auto-increment IDs. In v6.5.0, this feature becomes GA. The insert TPS of a table using this feature is expected to exceed 20,000, and this feature supports elastic scaling to improve the write throughput of a single table and entire clusters. To use the MySQL compatibility mode, you need to set `AUTO_ID_CACHE` to `1` when creating a table. The following is an example:
+
+    ```sql
+    CREATE TABLE t(a int AUTO_INCREMENT key) AUTO_ID_CACHE 1;
+    ```
+
+    For more information, see [user document](/auto-increment.md#mysql-compatibility-mode).
+
+### Data migration
+
+* Support exporting and importing SQL and CSV files in gzip, snappy, and zstd compression formats [#38514](https://github.com/pingcap/tidb/issues/38514) @[lichunzhu](https://github.com/lichunzhu) **tw@hfxsd**
+
+    Dumpling supports exporting data to compressed SQL and CSV files in the following compression formats: gzip, snappy, and zstd. TiDB Lightning also supports importing compressed files in these formats.
+
+    Previously, you had to provide large storage space to store the exported or imported CSV and SQL files, resulting in high storage costs. With the release of this feature, you can greatly reduce storage costs by compressing the data files.
+
+    For more information, see [User document](/dumpling-overview.md#improve-export-efficiency-through-concurrency).
+
+* Optimize the binlog parsing capability [#924](https://github.com/pingcap/dm/issues/924) @[gmhdbjd](https://github.com/GMHDBJD) **tw@hfxsd**
+
+    DM can filter out binlog events of the schemas and tables that are not in the migration task, which improves parsing efficiency and stability. This policy takes effect by default in v6.5.0. No additional configuration is required.
+
+    Previously, even if only a few tables were migrated, the entire upstream binlog file had to be parsed. The binlog events in that file that belonged to tables not in the migration task still had to be parsed, which was not efficient. In addition, if such binlog events could not be parsed, the task would fail. By parsing only the binlog events of the tables in the migration task, the binlog parsing efficiency is greatly improved and the task stability is enhanced.
+
+* Disk quota in TiDB Lightning is GA [#446](https://github.com/pingcap/tidb-lightning/issues/446) @[buchuitoudegou](https://github.com/buchuitoudegou) **tw@hfxsd**
+
+    You can configure a disk quota for TiDB Lightning. When there is not enough disk quota, TiDB Lightning stops reading source data and writing temporary files. Instead, it writes the sorted key-value pairs to TiKV first, and continues the import process after TiDB Lightning deletes the local temporary files.
+
+    Previously, when TiDB Lightning imported data using the physical import mode, it would create a large number of temporary files on the local disk for encoding, sorting, and splitting the raw data. When the local disk ran out of space, TiDB Lightning would exit with an error because it failed to write these files. With this feature, TiDB Lightning tasks can avoid filling up the local disk.
+
+    For more information, see [User document](/tidb-lightning/tidb-lightning-physical-import-mode-usage.md#configure-disk-quota-new-in-v620).
+
+* Continuous data validation in DM is GA [#4426](https://github.com/pingcap/tiflow/issues/4426) @[D3Hunter](https://github.com/D3Hunter) **tw@hfxsd**
+
+    In the process of migrating incremental data from upstream to downstream databases, there is a small probability that the data flow introduces errors or data loss. In scenarios where strong data consistency is required, such as credit and securities businesses, you can perform a full data checksum after migration to ensure data consistency. However, in some incremental replication scenarios, upstream and downstream writes are continuous and never interrupted, and the data keeps changing, which makes it difficult to perform consistency checks on all the data.
+
+    Previously, you needed to interrupt the business to validate the full data, which affected your business. Now, with this feature, you can perform incremental data validation without interrupting the business.
+
+    For more information, see [User document](/dm/dm-continuous-data-validation.md).
+
+### TiDB data share subscription
+
+* TiCDC supports replicating change logs to storage sinks (experimental) [tiflow#6797](https://github.com/pingcap/tiflow/issues/6797) @[zhaoxinyu](https://github.com/zhaoxinyu) **tw@shichun-0415**
+
+    TiCDC supports replicating change logs to Amazon S3, Azure Blob Storage, NFS, and other S3-compatible storage services. Cloud storage is reasonably priced and easy to use. If you do not want to use Kafka, you can use storage sinks. TiCDC saves the change logs to files and then sends them to the storage system. From the storage system, your consumer program periodically reads the newly generated change log files.
+
+    The storage sink supports change logs in the canal-json and CSV formats. Notably, the latency of replicating change logs from TiCDC to storage can be as short as xx. For more information, see [User document](/ticdc/ticdc-sink-to-cloud-storage.md).
+
+* TiCDC supports bidirectional replication across multiple clusters @[asddongmen](https://github.com/asddongmen) **tw@shichun-0415**
+
+    TiCDC supports bidirectional replication across multiple TiDB clusters. If you need a multi-master TiDB solution for your application, especially a multi-master solution across multiple regions, you can use this feature to build one. By configuring the `bdr-mode = true` parameter for the TiCDC changefeeds from each TiDB cluster to the other TiDB clusters, you can achieve bidirectional data replication across multiple TiDB clusters.
+
+    For more information, refer to [user document](/ticdc/ticdc-bidirectional-replication.md).
+
+* TiCDC performance improves significantly **tw@shichun-0415**
+
+    In a TiDB cluster test scenario, the performance of TiCDC has improved significantly. Specifically, a single TiCDC node can process a maximum of 30K rows of changes per second, and the replication latency is reduced to 10s. Even during TiKV and TiCDC rolling upgrades, the replication latency is less than 30s. In a disaster recovery (DR) scenario, when the throughput is xx rows/s, the replication latency in DR can be maintained at x s.
+
+### Backup and restore
+
+* TiDB Backup & Restore supports snapshot checkpoint backup [#38647](https://github.com/pingcap/tidb/issues/38647) @[Leavrth](https://github.com/Leavrth) **tw@shichun-0415**
+
+    TiDB snapshot backup supports resuming backup from a checkpoint. When Backup & Restore (BR) encounters a recoverable error, it retries the backup. However, BR exits if the retry fails several times. The checkpoint backup feature allows BR to tolerate longer recoverable failures and retry after them, for example, a network failure that lasts tens of minutes.
+
+    Note that if you do not recover the system from a failure within one hour after BR exits, the snapshot data to be backed up might be recycled by the GC mechanism, causing the backup to fail. For more information, see [User document](/br/br-checkpoint.md).
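+
+    For reference, a snapshot backup is typically started with a command like the following (the PD address and the storage URI are placeholders, and S3 credentials are assumed to be configured in the environment). If such a task exits because of a recoverable failure, rerunning the same command within the GC lifetime is expected to resume from the recorded checkpoint instead of starting over:
+
+    ```shell
+    tiup br backup full --pd "${PD_IP}:2379" --storage "s3://backup-bucket/snapshot-2022-12-07/"
+    ```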
+
+* PITR performance improves remarkably **tw@shichun-0415**
+
+    In the log restore stage, the restore speed of one TiKV node can reach xx MB/s, which is x times faster than before. The restore speed is scalable, and the RTO in DR scenarios is reduced greatly. The RPO in DR scenarios can be as short as 5 minutes. In normal cluster operation and maintenance (OM) scenarios, for example, when a rolling upgrade is performed or only one TiKV node is down, the RPO can be 5 minutes.
+
+* TiKV-BR GA: Supports backing up and restoring RawKV [#67](https://github.com/tikv/migration/issues/67) @[pingyu](https://github.com/pingyu) @[haojinming](https://github.com/haojinming) **tw@shichun-0415**
+
+    TiKV-BR is a backup and restore tool used in TiKV clusters. TiKV and PD can constitute a KV database when used without TiDB, which is called RawKV. TiKV-BR supports data backup and restore for products that use RawKV. TiKV-BR can also upgrade the [`api-version`](/tikv-configuration-file.md#api-version-new-in-v610) from `API V1` to `API V2` for TiKV clusters.
+
+    For more information, see [User document](https://tikv.org/docs/latest/concepts/explore-tikv-features/backup-restore/).
+
+## Compatibility changes
+
+### System variables
+
+| Variable name | Change type | Description |
+|--------|------------------------------|------|
+| [`tidb_enable_amend_pessimistic_txn`](/system-variables.md#tidb_enable_amend_pessimistic_txn-new-in-v407) | Deprecated | Starting from v6.5.0, this variable is deprecated, and TiDB uses the [Metadata Lock](/metadata-lock.md) feature by default to avoid the `Information schema is changed` error. |
+| [`tidb_cost_model_version`](/system-variables.md#tidb_cost_model_version-new-in-v620) | Modified | Changes the default value from `1` to `2`, meaning that Cost Model Version 2 is used for index selection and operator selection by default. |
+| [`tidb_enable_metadata_lock`](/system-variables.md#tidb_enable_metadata_lock-new-in-v630) | Modified | Changes the default value from `OFF` to `ON`, meaning that the metadata lock feature is enabled by default. |
+| [`tidb_enable_tiflash_read_for_write_stmt`](/system-variables.md#tidb_enable_tiflash_read_for_write_stmt-new-in-v630) | Modified | Takes effect starting from v6.5.0. It controls whether read operations in SQL statements containing `INSERT`, `DELETE`, and `UPDATE` can be pushed down to TiFlash. The default value is `OFF`. |
+| [`tidb_ddl_enable_fast_reorg`](/system-variables.md#tidb_ddl_enable_fast_reorg-new-in-v630) | Modified | Changes the default value from `OFF` to `ON`, meaning that the acceleration of `ADD INDEX` and `CREATE INDEX` is enabled by default. |
+| [`tidb_mem_quota_query`](/system-variables.md#tidb_mem_quota_query) | Modified | For versions earlier than TiDB v6.5.0, this variable is used to set the threshold value of memory quota for a query. For TiDB v6.5.0 and later versions, this variable is used to set the threshold value of memory quota for a session. |
+| [`tidb_replica_read`](/system-variables.md#tidb_replica_read-new-in-v40) | Modified | Starting from v6.5.0, when this variable is set to `closest-adaptive` and the estimated result of a read request is greater than or equal to [`tidb_adaptive_closest_read_threshold`](/system-variables.md#tidb_adaptive_closest_read_threshold-new-in-v630), the number of TiDB nodes whose `closest-adaptive` configuration takes effect is limited in each availability zone, and this number is always the same as the number of TiDB nodes in the availability zone with the fewest TiDB nodes; the other TiDB nodes automatically read from the leader replica. |
+| [`tidb_server_memory_limit`](/system-variables.md#tidb_server_memory_limit-new-in-v640) | Modified | Changes the default value from `0` to `80%`, meaning that the memory limit for a TiDB instance is 80% of the total memory by default. |
+| [`default_password_lifetime`](/system-variables.md#default_password_lifetime-new-in-v650) | Newly added | Sets the global policy for automatic password expiration to require users to change passwords periodically. The default value `0` indicates that passwords never expire. |
+| [`disconnect_on_expired_password`](/system-variables.md#disconnect_on_expired_password-new-in-v650) | Newly added | Indicates whether TiDB disconnects the client connection when the password is expired. This variable is read-only. |
+| [`password_history`](/system-variables.md#password_history-new-in-v650) | Newly added | This variable is used to establish a password reuse policy that allows TiDB to limit password reuse based on the number of password changes. The default value `0` means disabling the password reuse policy based on the number of password changes. |
+| [`password_reuse_interval`](/system-variables.md#password_reuse_interval-new-in-v650) | Newly added | This variable is used to establish a password reuse policy that allows TiDB to limit password reuse based on time elapsed. The default value `0` means disabling the password reuse policy based on time elapsed. |
+| [`tidb_cdc_write_source`](/system-variables.md#tidb_cdc_write_source-new-in-v650) | Newly added | When this variable is set to a value other than 0, data written in this session is considered to be written by TiCDC. This variable can only be modified by TiCDC. Do not manually modify this variable in any case. |
+| [`tidb_index_merge_intersection_concurrency`](/system-variables.md#tidb_index_merge_intersection_concurrency-new-in-v650) | Newly added | Sets the maximum concurrency for the intersection operations that index merge performs. It is effective only when TiDB accesses partitioned tables in the dynamic pruning mode. |
+| [`tidb_source_id`](/system-variables.md#tidb_source_id-new-in-v650) | Newly added | This variable is used to configure the different cluster IDs in a [bi-directional replication](/ticdc/manage-ticdc.md#bi-directional-replication) cluster. |
+| [`tidb_ttl_delete_batch_size`](/system-variables.md#tidb_ttl_delete_batch_size-new-in-v650) | Newly added | This variable is used to set the maximum number of rows that can be deleted in a single `DELETE` transaction in a TTL job. |
+| [`tidb_ttl_delete_rate_limit`](/system-variables.md#tidb_ttl_delete_rate_limit-new-in-v650) | Newly added | This variable is used to limit the maximum number of `DELETE` statements allowed per second on a single node in a TTL job. When this variable is set to `0`, no limit is applied. |
+| [`tidb_ttl_delete_worker_count`](/system-variables.md#tidb_ttl_delete_worker_count-new-in-v650) | Newly added | This variable is used to set the maximum concurrency of TTL delete jobs on each TiDB node. |
+| [`tidb_ttl_job_enable`](/system-variables.md#tidb_ttl_job_enable-new-in-v650) | Newly added | This variable is used to control whether to enable TTL jobs. If it is set to `OFF`, all tables with TTL attributes automatically stop cleaning up expired data. |
+| [`tidb_ttl_job_run_interval`](/system-variables.md#tidb_ttl_job_run_interval-new-in-v650) | Newly added | This variable is used to control the scheduling interval of TTL jobs in the background. For example, if the current value is set to `1h0m0s`, each table with TTL attributes cleans up expired data once every hour. |
+| [`tidb_ttl_job_schedule_window_start_time`](/system-variables.md#tidb_ttl_job_schedule_window_start_time-new-in-v650) | Newly added | This variable is used to control the start time of the scheduling window of TTL jobs in the background. When you modify the value of this variable, be cautious that a small window might cause the cleanup of expired data to fail. |
+| [`tidb_ttl_job_schedule_window_end_time`](/system-variables.md#tidb_ttl_job_schedule_window_end_time-new-in-v650) | Newly added | This variable is used to control the end time of the scheduling window of TTL jobs in the background. When you modify the value of this variable, be cautious that a small window might cause the cleanup of expired data to fail. |
+| [`tidb_ttl_scan_batch_size`](/system-variables.md#tidb_ttl_scan_batch_size-new-in-v650) | Newly added | This variable is used to set the `LIMIT` value of each `SELECT` statement used to scan expired data in a TTL job. |
+| [`tidb_ttl_scan_worker_count`](/system-variables.md#tidb_ttl_scan_worker_count-new-in-v650) | Newly added | This variable is used to set the maximum concurrency of TTL scan jobs on each TiDB node. |
+| [`validate_password.check_user_name`](/system-variables.md#validate_passwordcheck_user_name-new-in-v650) | Newly added | A check item in the password complexity check. It checks whether the password matches the username. This variable takes effect only when [`validate_password.enable`](/system-variables.md#validate_passwordenable-new-in-v650) is enabled. The default value is `ON`. |
+| [`validate_password.dictionary`](/system-variables.md#validate_passworddictionary-new-in-v650) | Newly added | A check item in the password complexity check. It checks whether the password matches any word in the dictionary. This variable takes effect only when [`validate_password.enable`](/system-variables.md#validate_passwordenable-new-in-v650) is enabled and [`validate_password.policy`](/system-variables.md#validate_passwordpolicy-new-in-v650) is set to `2` (STRONG). The default value is `""`. |
+| [`validate_password.enable`](/system-variables.md#validate_passwordenable-new-in-v650) | Newly added | This variable controls whether to perform the password complexity check. If this variable is set to `ON`, TiDB performs the password complexity check when you set a password. The default value is `OFF`. |
+| [`validate_password.length`](/system-variables.md#validate_passwordlength-new-in-v650) | Newly added | A check item in the password complexity check. It checks whether the password length is sufficient. By default, the minimum password length is `8`. This variable takes effect only when [`validate_password.enable`](/system-variables.md#validate_passwordenable-new-in-v650) is enabled. |
+| [`validate_password.mixed_case_count`](/system-variables.md#validate_passwordmixed_case_count-new-in-v650) | Newly added | A check item in the password complexity check. It checks whether the password contains sufficient uppercase and lowercase letters. This variable takes effect only when [`validate_password.enable`](/system-variables.md#validate_passwordenable-new-in-v650) is enabled and [`validate_password.policy`](/system-variables.md#validate_passwordpolicy-new-in-v650) is set to `1` (MEDIUM) or larger. The default value is `1`. |
+| [`validate_password.number_count`](/system-variables.md#validate_passwordnumber_count-new-in-v650) | Newly added | A check item in the password complexity check. It checks whether the password contains sufficient numbers. This variable takes effect only when [`validate_password.enable`](/system-variables.md#validate_passwordenable-new-in-v650) is enabled and [`validate_password.policy`](/system-variables.md#validate_passwordpolicy-new-in-v650) is set to `1` (MEDIUM) or larger. The default value is `1`. |
+| [`validate_password.policy`](/system-variables.md#validate_passwordpolicy-new-in-v650) | Newly added | This variable controls the policy for the password complexity check. The value can be `0`, `1`, or `2` (corresponding to LOW, MEDIUM, or STRONG). This variable takes effect only when [`validate_password.enable`](/system-variables.md#validate_passwordenable-new-in-v650) is enabled. The default value is `1`. |
+| [`validate_password.special_char_count`](/system-variables.md#validate_passwordspecial_char_count-new-in-v650) | Newly added | A check item in the password complexity check. It checks whether the password contains sufficient special characters. This variable takes effect only when [`validate_password.enable`](/system-variables.md#validate_passwordenable-new-in-v650) is enabled and [`validate_password.policy`](/system-variables.md#validate_passwordpolicy-new-in-v650) is set to `1` (MEDIUM) or larger. The default value is `1`. |
+
+### Configuration file parameters
+
+| Configuration file | Configuration parameter | Change type | Description |
+| -------- | -------- | -------- | -------- |
+| TiDB | [`server-memory-quota`](/tidb-configuration-file.md#server-memory-quota-new-in-v409) | Deprecated | Since v6.5.0, this configuration item is deprecated. Instead, use the system variable [`tidb_server_memory_limit`](/system-variables.md#tidb_server_memory_limit-new-in-v640) to manage memory globally. |
+| TiDB | [`disconnect-on-expired-password`](/tidb-configuration-file.md#disconnect-on-expired-password-new-in-v650) | Newly added | Determines whether TiDB disconnects the client connection when the password is expired. The default value is `true`, which means the client connection is disconnected when the password is expired. |
+| TiKV | `raw-min-ts-outlier-threshold` | Deleted | This configuration item was deprecated in v6.4.0 and is deleted in v6.5.0. |
+| TiKV | [`cdc.min-ts-interval`](/tikv-configuration-file.md#min-ts-interval) | Modified | Changes the default value from `1s` to `200ms`. |
+| TiKV | [`memory-use-ratio`](/tikv-configuration-file.md#memory-use-ratio-new-in-v650) | Newly added | Indicates the ratio of available memory to total system memory in PITR log recovery. |
+
+### Others
+
+- Starting from v6.5.0, the `mysql.user` table adds two new columns: `Password_reuse_history` and `Password_reuse_time`.
+- The [index acceleration](/system-variables.md#tidb_ddl_enable_fast_reorg-new-in-v630) feature is enabled by default and is not compatible with the [PITR (Point-in-time recovery)](/br/br-pitr-guide.md) feature. When using the index acceleration feature, you need to make sure that no PITR backup task is running in the background; otherwise, unexpected results might occur. For more information, see [tidb_ddl_enable_fast_reorg](/system-variables.md#tidb_ddl_enable_fast_reorg-new-in-v630). + +## Deprecated feature + +Starting from v6.5.0, the [`AMEND TRANSACTION`](/system-variables.md#tidb_enable_amend_pessimistic_txn-new-in-v407) mechanism introduced in v4.0.7 is deprecated and replaced by [Metadata Lock](/metadata-lock.md). + +## Improvements + ++ TiDB + + - For `BIT` and `CHAR` columns, make the result of `INFORMATION_SCHEMA.COLUMNS` consistent with MySQL [#25472](https://github.com/pingcap/tidb/issues/25472) @[hawkingrei](https://github.com/hawkingrei) + ++ TiKV + + - Stop writing to Raft Engine when there is insufficient space to avoid exhausting disk space [#13642](https://github.com/tikv/tikv/issues/13642) @[jiayang-zheng](https://github.com/jiayang-zheng) + - Support pushing down the `json_valid` function to TiKV [#13571](https://github.com/tikv/tikv/issues/13571) @[lizhenhuan](https://github.com/lizhenhuan) + - Support backing up multiple ranges of data in a single backup request [#13701](https://github.com/tikv/tikv/issues/13701) @[Leavrth](https://github.com/Leavrth) + - Support backing up data to the Asia Pacific region (ap-southeast-3) of AWS by updating the rusoto library [#13751](https://github.com/tikv/tikv/issues/13751) @[3pointer](https://github.com/3pointer) + - Reduce pessimistic transaction conflicts [#13298](https://github.com/tikv/tikv/issues/13298) @[MyonKeminta](https://github.com/MyonKeminta) + - Improve recovery performance by caching external storage objects [#13798](https://github.com/tikv/tikv/issues/13798) @[YuJuncen](https://github.com/YuJuncen) + - Run the CheckLeader in a dedicated thread to reduce TiCDC replication latency [#13774](https://github.com/tikv/tikv/issues/13774) @[overvenus](https://github.com/overvenus) + - Support pull model for Checkpoints [#13824](https://github.com/tikv/tikv/issues/13824) @[YuJuncen](https://github.com/YuJuncen) + - Avoid spinning issues on the sender side by updating crossbeam-channel [#13815](https://github.com/tikv/tikv/issues/13815) @[sticnarf](https://github.com/sticnarf) + - Support batch Coprocessor tasks processing in TiKV [#13849](https://github.com/tikv/tikv/issues/13849) @[cfzjywxk](https://github.com/cfzjywxk) + - Reduce waiting time on failure recovery by notifying TiKV to wake up Regions [#13648](https://github.com/tikv/tikv/issues/13648) @[LykxSassinator](https://github.com/LykxSassinator) + - Reduce the requested size of memory usage by code optimization [#13827](https://github.com/tikv/tikv/issues/13827) @[BusyJay](https://github.com/BusyJay) + - Introduce the Raft extension to improve code extensibility [#13827](https://github.com/tikv/tikv/issues/13827) @[BusyJay](https://github.com/BusyJay) + - Support using tikv-ctl to query which Regions are included in a certain key range [#13760](https://github.com/tikv/tikv/issues/13760) [@HuSharp](https://github.com/HuSharp) + - Improve read and write performance for rows that are not updated but locked continuously [#13694](https://github.com/tikv/tikv/issues/13694) [@sticnarf](https://github.com/sticnarf) + ++ PD + + - Optimize the granularity of locks to reduce 
lock contention and improve the handling capability of heartbeats under high concurrency [#5586](https://github.com/tikv/pd/issues/5586) @[rleungx](https://github.com/rleungx) + - Optimize scheduler performance for large-scale clusters and accelerate the production of scheduling policies [#5473](https://github.com/tikv/pd/issues/5473) @[bufferflies](https://github.com/bufferflies) + - Improve the speed of loading Regions [#5606](https://github.com/tikv/pd/issues/5606) @[rleungx](https://github.com/rleungx) + - Reduce unnecessary overhead by optimized handling of Region heartbeats [#5648](https://github.com/tikv/pd/issues/5648)@[rleungx](https://github.com/rleungx) + - Add the feature of automatically garbage collecting tombstone stores [#5348](https://github.com/tikv/pd/issues/5348) @[nolouch](https://github.com/nolouch) + ++ TiFlash + + - Improve write performance in scenarios where there is no batch processing on the SQL side [#6404](https://github.com/pingcap/tiflash/issues/6404) @[lidezhu](https://github.com/lidezhu) + - Add more details for TableFullScan in the `explain analyze` output [#5926](https://github.com/pingcap/tiflash/issues/5926) @[hongyunyan](https://github.com/hongyunyan) + ++ Tools + + + TiDB Dashboard + + - Add three new fields to the slow query page: "Is Prepared?","Is Plan from Cache?","Is Plan from Binding?" [#1451](https://github.com/pingcap/tidb-dashboard/issues/1451) @[shhdgit](https://github.com/shhdgit) + + + Backup & Restore (BR) + + - Optimize BR memory usage during the process of cleaning backup log data [#38869](https://github.com/pingcap/tidb/issues/38869) @[Leavrth](https://github.com/Leavrth) + - (dup) Fix the restoration failure issue caused by PD leader switch during the restoration process [#36910](https://github.com/pingcap/tidb/issues/36910) @[MoCuishle28](https://github.com/MoCuishle28) + - Improve TLS compatibility by using the OpenSSL protocol in log backup [#13867](https://github.com/tikv/tikv/issues/13867) @[YuJuncen](https://github.com/YuJuncen) + + + TiCDC + + - (dup) Improve the performance of Kafka protocol encoder [#7540](https://github.com/pingcap/tiflow/issues/7540) [#7532](https://github.com/pingcap/tiflow/issues/7532) [#7543](https://github.com/pingcap/tiflow/issues/7543) @[3AceShowHand](https://github.com/3AceShowHand) @[sdojjy](https://github.com/sdojjy) + + + TiDB Data Migration (DM) + + - Improve the data replication performance for DM by not parsing the data of tables in the block list [#7622](https://github.com/pingcap/tiflow/pull/7622) @[GMHDBJD](https://github.com/GMHDBJD) + - Improve the write efficiency of DM relay by using asynchronous write and batch write [#7580](https://github.com/pingcap/tiflow/pull/7580) @[GMHDBJD](https://github.com/GMHDBJD) + - Optimize the error messages in DM precheck [#7621](https://github.com/pingcap/tiflow/issues/7621) @[buchuitoudegou](https://github.com/buchuitoudegou) + - Improve the compatibility of `SHOW SLAVE HOSTS` for old MySQL versions [#5017](https://github.com/pingcap/tiflow/issues/5017) @[lyzx2001](https://github.com/lyzx2001) + +## Bug fixes + ++ TiDB + + - Fix the issue of memory chunk misuse for the chunk reuse feature that occurs in some cases [#38917](https://github.com/pingcap/tidb/issues/38917) @[keeplearning20221](https://github.com/keeplearning20221) + - Fix the issue that the internal sessions of `tidb_constraint_check_in_place_pessimistic` might be affected by the global setting [#38766](https://github.com/pingcap/tidb/issues/38766) @[ekexium](https://github.com/ekexium) + - 
Fix the issue that the `AUTO_INCREMENT` column cannot work with the `CHECK` constraint [#38894](https://github.com/pingcap/tidb/issues/38894) @[YangKeao](https://github.com/YangKeao)
+    - Fix the issue that using `INSERT IGNORE INTO` to insert data of the `STRING` type into an auto-increment column of the `SMALLINT` type causes an error [#38483](https://github.com/pingcap/tidb/issues/38483) @[hawkingrei](https://github.com/hawkingrei)
+    - Fix the null pointer error that occurs when renaming the partition column of a partitioned table [#38932](https://github.com/pingcap/tidb/issues/38932) @[mjonss](https://github.com/mjonss)
+    - Fix the issue that modifying the partition column of a partitioned table causes DDL to hang [#38530](https://github.com/pingcap/tidb/issues/38530) @[mjonss](https://github.com/mjonss)
+    - Fix the issue that the `ADMIN SHOW JOB` operation panics after upgrading from v4.0.16 to v6.4.0 [#38980](https://github.com/pingcap/tidb/issues/38980) @[tangenta](https://github.com/tangenta)
+    - Fix the issue that the `tidb_decode_key` function fails to correctly parse the encoding of partitioned tables [#39304](https://github.com/pingcap/tidb/issues/39304) @[Defined2014](https://github.com/Defined2014)
+    - Fix the issue that gRPC error logs are not redirected to the correct log file during log rotation [#38941](https://github.com/pingcap/tidb/issues/38941) @[xhebox](https://github.com/xhebox)
+    - Fix the issue that TiDB generates an unexpected execution plan for the `BEGIN; SELECT... FOR UPDATE;` point query when TiKV is not configured as a read engine [#39344](https://github.com/pingcap/tidb/issues/39344) @[Yisaer](https://github.com/Yisaer)
+    - Fix the issue that mistakenly pushing down `StreamAgg` to TiFlash causes wrong results [#39266](https://github.com/pingcap/tidb/issues/39266) @[fixdb](https://github.com/fixdb)
+
++ TiKV
+
+    - Fix an error in Raft Engine ctl [#11119](https://github.com/tikv/tikv/issues/11119) @[tabokie](https://github.com/tabokie)
+    - Fix the `Get raft db is not allowed` error when executing the `compact raft` command in tikv-ctl [#13515](https://github.com/tikv/tikv/issues/13515) @[guoxiangCN](https://github.com/guoxiangCN)
+    - Fix the issue that log backup does not work when TLS is enabled [#13867](https://github.com/tikv/tikv/issues/13867) @[YuJuncen](https://github.com/YuJuncen)
+    - Fix the support issue of the Geometry field type [#13651](https://github.com/tikv/tikv/issues/13651) @[dveeden](https://github.com/dveeden)
+    - Fix the issue that `_` in the `LIKE` operator cannot match non-ASCII characters when new collation is not enabled [#13769](https://github.com/tikv/tikv/issues/13769) @[YangKeao](https://github.com/YangKeao)
+    - Fix the issue that tikv-ctl is terminated unexpectedly when executing the `reset-to-version` command [#13829](https://github.com/tikv/tikv/issues/13829) @[tabokie](https://github.com/tabokie)
+
++ PD
+
+    - Fix the issue that the `balance-hot-region-scheduler` configuration is not persisted if not modified [#5701](https://github.com/tikv/pd/issues/5701) @[HunDunDM](https://github.com/HunDunDM)
+    - Fix the issue that `rank-formula-version` does not retain the pre-upgrade configuration during the upgrade process [#5698](https://github.com/tikv/pd/issues/5698) @[HunDunDM](https://github.com/HunDunDM)
+
++ TiFlash
+
+    - Fix the issue that column files in the delta layer cannot be compacted after restarting TiFlash [#6159](https://github.com/pingcap/tiflash/issues/6159) 
@[lidezhu](https://github.com/lidezhu) + - Fix the issue that TiFlash File Open OPS is too high [#6345](https://github.com/pingcap/tiflash/issues/6345) @[JaySon-Huang](https://github.com/JaySon-Huang) + ++ Tools + + + Backup & Restore (BR) + + - (dup) Fix the issue that when BR deletes log backup data, it mistakenly deletes data that should not be deleted [#38939](https://github.com/pingcap/tidb/issues/38939) @[Leavrth](https://github.com/Leavrth) + - (dup) Fix the issue that restore tasks fail when using old framework for collations in databases or tables [#39150](https://github.com/pingcap/tidb/issues/39150) @[MoCuishle28](https://github.com/MoCuishle28) + - Fix the issue that backup fails because Alibaba Cloud and Huawei Cloud are not fully compatible with Amazon S3 storage [#39545](https://github.com/pingcap/tidb/issues/39545) @[3pointer](https://github.com/3pointer) + + + TiCDC + + - Fix the issue that TiCDC gets stuck when the PD leader crashes [#7470](https://github.com/pingcap/tiflow/issues/7470) @[zeminzhou](https://github.com/zeminzhou) + - (dup) Fix data loss occurred in the scenario of executing DDL statements first and then pausing and resuming the changefeed [#7682](https://github.com/pingcap/tiflow/issues/7682) @[asddongmen](https://github.com/asddongmen) + - Fix the issue that TiCDC mistakenly reports an error when there is a later version of TiFlash [#7744](https://github.com/pingcap/tiflow/issues/7744) @[overvenus](https://github.com/overvenus) + - (dup) Fix the issue that the sink component gets stuck if the downstream network is unavailable [#7706](https://github.com/pingcap/tiflow/issues/7706) @[hicqu](https://github.com/hicqu) + - Fix the issue that data is lost when a user quickly deletes a replication task and then creates another one with the same task name [#7657](https://github.com/pingcap/tiflow/issues/7657) @[overvenus](https://github.com/overvenus) + + + TiDB Data Migration (DM) + + - Fix the issue that a `task-mode:all` task cannot be started when the upstream database enables the GTID mode but does not have any data [#7037](https://github.com/pingcap/tiflow/issues/7037) @[liumengya94](https://github.com/liumengya94) + - Fix the issue that data is replicated for multiple times when a new DM worker is scheduled before the existing worker exits [#7658](https://github.com/pingcap/tiflow/issues/7658) @[GMHDBJD](https://github.com/GMHDBJD) + - Fix the issue that DM precheck is not passed when the upstream database uses regular expressions to grant privileges [#7645](https://github.com/pingcap/tiflow/issues/7645) @[lance6716](https://github.com/lance6716) + + + TiDB Lightning + + - Fix the memory leakage issue when TiDB Lightning imports a huge source data file [#39331](https://github.com/pingcap/tidb/issues/39331) @[dsdashun](https://github.com/dsdashun) + - Fix the issue that TiDB Lightning cannot detect conflicts correctly when importing data in parallel [#39476](https://github.com/pingcap/tidb/issues/39476) @[dsdashun](https://github.com/dsdashun) + +## Contributors + +We would like to thank the following contributors from the TiDB community: + +- [e1ijah1](https://github.com/e1ijah1) +- [guoxiangCN](https://github.com/guoxiangCN) (First-time contributor) +- [jiayang-zheng](https://github.com/jiayang-zheng) +- [jiyfhust](https://github.com/jiyfhust) +- [mikechengwei](https://github.com/mikechengwei) +- [pingandb](https://github.com/pingandb) +- [sashashura](https://github.com/sashashura) +- [sourcelliu](https://github.com/sourcelliu) +- 
[wxbty](https://github.com/wxbty) From 9d63fd86f4554c75743ca3781cf4479c7df738b1 Mon Sep 17 00:00:00 2001 From: shichun-0415 <89768198+shichun-0415@users.noreply.github.com> Date: Fri, 16 Dec 2022 13:27:55 +0800 Subject: [PATCH 42/83] remove the dup label for br and ticdc --- releases/release-6.5.0.md | 12 ++++++------ 1 file changed, 6 insertions(+), 6 deletions(-) diff --git a/releases/release-6.5.0.md b/releases/release-6.5.0.md index 511b09ca98cee..26809c577959e 100644 --- a/releases/release-6.5.0.md +++ b/releases/release-6.5.0.md @@ -380,12 +380,12 @@ Starting from v6.5.0, the [`AMEND TRANSACTION`](/system-variables.md#tidb_enable + Backup & Restore (BR) - Optimize BR memory usage during the process of cleaning backup log data [#38869](https://github.com/pingcap/tidb/issues/38869) @[Leavrth](https://github.com/Leavrth) - - (dup) Fix the restoration failure issue caused by PD leader switch during the restoration process [#36910](https://github.com/pingcap/tidb/issues/36910) @[MoCuishle28](https://github.com/MoCuishle28) + - Fix the restoration failure issue caused by PD leader switch during the restoration process [#36910](https://github.com/pingcap/tidb/issues/36910) @[MoCuishle28](https://github.com/MoCuishle28) - Improve TLS compatibility by using the OpenSSL protocol in log backup [#13867](https://github.com/tikv/tikv/issues/13867) @[YuJuncen](https://github.com/YuJuncen) + TiCDC - - (dup) Improve the performance of Kafka protocol encoder [#7540](https://github.com/pingcap/tiflow/issues/7540) [#7532](https://github.com/pingcap/tiflow/issues/7532) [#7543](https://github.com/pingcap/tiflow/issues/7543) @[3AceShowHand](https://github.com/3AceShowHand) @[sdojjy](https://github.com/sdojjy) + - Improve the performance of Kafka protocol encoder [#7540](https://github.com/pingcap/tiflow/issues/7540) [#7532](https://github.com/pingcap/tiflow/issues/7532) [#7543](https://github.com/pingcap/tiflow/issues/7543) @[3AceShowHand](https://github.com/3AceShowHand) @[sdojjy](https://github.com/sdojjy) + TiDB Data Migration (DM) @@ -433,16 +433,16 @@ Starting from v6.5.0, the [`AMEND TRANSACTION`](/system-variables.md#tidb_enable + Backup & Restore (BR) - - (dup) Fix the issue that when BR deletes log backup data, it mistakenly deletes data that should not be deleted [#38939](https://github.com/pingcap/tidb/issues/38939) @[Leavrth](https://github.com/Leavrth) - - (dup) Fix the issue that restore tasks fail when using old framework for collations in databases or tables [#39150](https://github.com/pingcap/tidb/issues/39150) @[MoCuishle28](https://github.com/MoCuishle28) + - Fix the issue that when BR deletes log backup data, it mistakenly deletes data that should not be deleted [#38939](https://github.com/pingcap/tidb/issues/38939) @[Leavrth](https://github.com/Leavrth) + - Fix the issue that restore tasks fail when using old framework for collations in databases or tables [#39150](https://github.com/pingcap/tidb/issues/39150) @[MoCuishle28](https://github.com/MoCuishle28) - Fix the issue that backup fails because Alibaba Cloud and Huawei Cloud are not fully compatible with Amazon S3 storage [#39545](https://github.com/pingcap/tidb/issues/39545) @[3pointer](https://github.com/3pointer) + TiCDC - Fix the issue that TiCDC gets stuck when the PD leader crashes [#7470](https://github.com/pingcap/tiflow/issues/7470) @[zeminzhou](https://github.com/zeminzhou) - - (dup) Fix data loss occurred in the scenario of executing DDL statements first and then pausing and resuming the changefeed 
[#7682](https://github.com/pingcap/tiflow/issues/7682) @[asddongmen](https://github.com/asddongmen) + - Fix data loss occurred in the scenario of executing DDL statements first and then pausing and resuming the changefeed [#7682](https://github.com/pingcap/tiflow/issues/7682) @[asddongmen](https://github.com/asddongmen) - Fix the issue that TiCDC mistakenly reports an error when there is a later version of TiFlash [#7744](https://github.com/pingcap/tiflow/issues/7744) @[overvenus](https://github.com/overvenus) - - (dup) Fix the issue that the sink component gets stuck if the downstream network is unavailable [#7706](https://github.com/pingcap/tiflow/issues/7706) @[hicqu](https://github.com/hicqu) + - Fix the issue that the sink component gets stuck if the downstream network is unavailable [#7706](https://github.com/pingcap/tiflow/issues/7706) @[hicqu](https://github.com/hicqu) - Fix the issue that data is lost when a user quickly deletes a replication task and then creates another one with the same task name [#7657](https://github.com/pingcap/tiflow/issues/7657) @[overvenus](https://github.com/overvenus) + TiDB Data Migration (DM) From c85e2c0b1b688501721b57aff96c3edcd97e8bbc Mon Sep 17 00:00:00 2001 From: xixirangrang Date: Fri, 16 Dec 2022 14:11:55 +0800 Subject: [PATCH 43/83] Update releases/release-6.5.0.md --- releases/release-6.5.0.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/releases/release-6.5.0.md b/releases/release-6.5.0.md index 26809c577959e..2b2d8e952181f 100644 --- a/releases/release-6.5.0.md +++ b/releases/release-6.5.0.md @@ -22,7 +22,7 @@ Compared with the previous LTS 6.1.0, 6.5.0 not only includes new features, impr - Support pushing down the `JSON_EXTRACT()` function to TiFlash. - Support [password management](/password-management.md) policies that meet password compliance auditing requirements. - TiDB Lightning and Dumpling support [importing](tidb-lightning/tidb-lightning-data-source.md) and [exporting](/dumpling-overview.md#improve-export-efficiency-through-concurrency) compressed SQL and CSV files. -- TiDB Data Migration (DM) supports [continuous data validation](/dm/dm-continuous-data-validation.md). +- TiDB Data Migration (DM) [continuous data validation](/dm/dm-continuous-data-validation.md) is now in General Availability (GA). - TiDB Backup & Restore supports snapshot checkpoint backup, improves the recovery performance of [PITR](/br-pitr-guide.md#carry-pitr) by x times, and reduces RPO to x minutes. - Improve the TiCDC throughput of [replicating data to Kafka](/replicate-data-to-kafka.md) by x times and reduces replication latency to x seconds. - Provide row-level [Time to live (TTL)](/time-to-live.md) to manage data lifecycle (experimental). From f955849c838fb1dd16545e83deaa3f1be0486644 Mon Sep 17 00:00:00 2001 From: Aolin Date: Fri, 16 Dec 2022 14:13:41 +0800 Subject: [PATCH 44/83] Apply suggestions from code review --- releases/release-6.5.0.md | 6 +++--- 1 file changed, 3 insertions(+), 3 deletions(-) diff --git a/releases/release-6.5.0.md b/releases/release-6.5.0.md index 2b2d8e952181f..5cd29330e0c49 100644 --- a/releases/release-6.5.0.md +++ b/releases/release-6.5.0.md @@ -14,11 +14,11 @@ TiDB 6.5.0 is a Long-Term Support Release (LTS). 
Compared with the previous LTS 6.1.0, 6.5.0 not only includes new features, improvements, and bug fixes released in [6.2.0-DMR](/releases/release-6.2.0.md), [6.3.0-DMR](/releases/release-6.3.0.md), [6.4.0-DMR](/releases/release-6.4.0.md), but also introduces the following key features and improvements: -- Enable [index acceleration](/system-variables.md#tidb_ddl_enable_fast_reorg-new-in-v630) by default, which improves the performance of adding indexes by 10 times compared with v6.1. +- Enable [index acceleration](/system-variables.md#tidb_ddl_enable_fast_reorg-new-in-v630) by default, which improves the performance of adding indexes by about 10 times compared with v6.1.0. - Support TiDB global memory control via [`tidb_server_memory_limit`](/system-variables.md#tidb_server_memory_limit-new-in-v640). - Support a high-performance and globally monotonic [`AUTO_INCREMENT`](/auto-increment.md#mysql-compatible-mode) column attribute, compatible with MySQL. -- Support [`FLASHBACK CLUSTER TO TIMESTAMP`](/sql-statements/sql-statement-flashback-to-timestamp.md), compatible with TiCDC and PITR. -- Enhance the [optimizer cost model](/cost-model.md#cost-model-version-2) and further enhance the [INDEX MERGE](/glossary.md#index-merge) feature. +- Support restoring a cluster to a specific point in time by using [`FLASHBACK CLUSTER TO TIMESTAMP`](/sql-statements/sql-statement-flashback-to-timestamp.md), compatible with TiCDC and PITR. +- Optimizer further enhances the more accurate [Cost Model] version 2](/cost-model.md#cost-model-version-2) and further enhance the [INDEX MERGE](/glossary.md#index-merge) feature to support conjunctive normal form. - Support pushing down the `JSON_EXTRACT()` function to TiFlash. - Support [password management](/password-management.md) policies that meet password compliance auditing requirements. - TiDB Lightning and Dumpling support [importing](tidb-lightning/tidb-lightning-data-source.md) and [exporting](/dumpling-overview.md#improve-export-efficiency-through-concurrency) compressed SQL and CSV files. From 3c98b9f5f4d07e6da9643ee0c4e65f48463c278a Mon Sep 17 00:00:00 2001 From: Aolin Date: Fri, 16 Dec 2022 14:15:36 +0800 Subject: [PATCH 45/83] Apply suggestions from code review --- releases/release-6.5.0.md | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/releases/release-6.5.0.md b/releases/release-6.5.0.md index 5cd29330e0c49..1529b87561174 100644 --- a/releases/release-6.5.0.md +++ b/releases/release-6.5.0.md @@ -16,8 +16,8 @@ Compared with the previous LTS 6.1.0, 6.5.0 not only includes new features, impr - Enable [index acceleration](/system-variables.md#tidb_ddl_enable_fast_reorg-new-in-v630) by default, which improves the performance of adding indexes by about 10 times compared with v6.1.0. - Support TiDB global memory control via [`tidb_server_memory_limit`](/system-variables.md#tidb_server_memory_limit-new-in-v640). -- Support a high-performance and globally monotonic [`AUTO_INCREMENT`](/auto-increment.md#mysql-compatible-mode) column attribute, compatible with MySQL. -- Support restoring a cluster to a specific point in time by using [`FLASHBACK CLUSTER TO TIMESTAMP`](/sql-statements/sql-statement-flashback-to-timestamp.md), compatible with TiCDC and PITR. +- Support a high-performance and globally monotonic [`AUTO_INCREMENT`](/auto-increment.md#mysql-compatible-mode) column attribute, which is compatible with MySQL. 
+- Support restoring a cluster to a specific point in time by using [`FLASHBACK CLUSTER TO TIMESTAMP`](/sql-statements/sql-statement-flashback-to-timestamp.md), which is compatible with TiCDC and PITR. - Optimizer further enhances the more accurate [Cost Model] version 2](/cost-model.md#cost-model-version-2) and further enhance the [INDEX MERGE](/glossary.md#index-merge) feature to support conjunctive normal form. - Support pushing down the `JSON_EXTRACT()` function to TiFlash. - Support [password management](/password-management.md) policies that meet password compliance auditing requirements. From 7dbc90631fceb6745ea234170e1501ae49c15c16 Mon Sep 17 00:00:00 2001 From: Grace Cai Date: Mon, 19 Dec 2022 16:17:13 +0800 Subject: [PATCH 46/83] Update releases/release-6.5.0.md Co-authored-by: TomShawn <41534398+TomShawn@users.noreply.github.com> --- releases/release-6.5.0.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/releases/release-6.5.0.md b/releases/release-6.5.0.md index 1529b87561174..8c2f89b32d0d5 100644 --- a/releases/release-6.5.0.md +++ b/releases/release-6.5.0.md @@ -18,7 +18,7 @@ Compared with the previous LTS 6.1.0, 6.5.0 not only includes new features, impr - Support TiDB global memory control via [`tidb_server_memory_limit`](/system-variables.md#tidb_server_memory_limit-new-in-v640). - Support a high-performance and globally monotonic [`AUTO_INCREMENT`](/auto-increment.md#mysql-compatible-mode) column attribute, which is compatible with MySQL. - Support restoring a cluster to a specific point in time by using [`FLASHBACK CLUSTER TO TIMESTAMP`](/sql-statements/sql-statement-flashback-to-timestamp.md), which is compatible with TiCDC and PITR. -- Optimizer further enhances the more accurate [Cost Model] version 2](/cost-model.md#cost-model-version-2) and further enhance the [INDEX MERGE](/glossary.md#index-merge) feature to support conjunctive normal form. +- Optimizer further enhances the more accurate [Cost Model] version 2](/cost-model.md#cost-model-version-2) and further enhance the [INDEX MERGE](/glossary.md#index-merge) feature to support the expressions connected by `AND`. - Support pushing down the `JSON_EXTRACT()` function to TiFlash. - Support [password management](/password-management.md) policies that meet password compliance auditing requirements. - TiDB Lightning and Dumpling support [importing](tidb-lightning/tidb-lightning-data-source.md) and [exporting](/dumpling-overview.md#improve-export-efficiency-through-concurrency) compressed SQL and CSV files. From 4866fbddbcb57c87cdb236016fbadf2ea871514f Mon Sep 17 00:00:00 2001 From: Grace Cai Date: Mon, 19 Dec 2022 16:25:44 +0800 Subject: [PATCH 47/83] Apply suggestions from code review Co-authored-by: Aolin --- releases/release-6.5.0.md | 8 ++++---- 1 file changed, 4 insertions(+), 4 deletions(-) diff --git a/releases/release-6.5.0.md b/releases/release-6.5.0.md index 8c2f89b32d0d5..3677633cb8422 100644 --- a/releases/release-6.5.0.md +++ b/releases/release-6.5.0.md @@ -14,10 +14,10 @@ TiDB 6.5.0 is a Long-Term Support Release (LTS). 
Compared with the previous LTS 6.1.0, 6.5.0 not only includes new features, improvements, and bug fixes released in [6.2.0-DMR](/releases/release-6.2.0.md), [6.3.0-DMR](/releases/release-6.3.0.md), [6.4.0-DMR](/releases/release-6.4.0.md), but also introduces the following key features and improvements: -- Enable [index acceleration](/system-variables.md#tidb_ddl_enable_fast_reorg-new-in-v630) by default, which improves the performance of adding indexes by about 10 times compared with v6.1.0. -- Support TiDB global memory control via [`tidb_server_memory_limit`](/system-variables.md#tidb_server_memory_limit-new-in-v640). -- Support a high-performance and globally monotonic [`AUTO_INCREMENT`](/auto-increment.md#mysql-compatible-mode) column attribute, which is compatible with MySQL. -- Support restoring a cluster to a specific point in time by using [`FLASHBACK CLUSTER TO TIMESTAMP`](/sql-statements/sql-statement-flashback-to-timestamp.md), which is compatible with TiCDC and PITR. +- The [index acceleration](/system-variables.md#tidb_ddl_enable_fast_reorg-new-in-v630) feature becomes generally available (GA), which improves the performance of adding indexes by about 10 times compared with v6.1.0. +- The TiDB global memory control becomes GA, and you can control the memory consumption threshold via [`tidb_server_memory_limit`](/system-variables.md#tidb_server_memory_limit-new-in-v640). +- The high-performance and globally monotonic [`AUTO_INCREMENT`](/auto-increment.md#mysql-compatible-mode) column attribute becomes GA, which is compatible with MySQL. +- Support restoring a cluster to a specific point in time by using [`FLASHBACK CLUSTER TO TIMESTAMP`](/sql-statements/sql-statement-flashback-to-timestamp.md) (GA), which is compatible with TiCDC and PITR. - Optimizer further enhances the more accurate [Cost Model] version 2](/cost-model.md#cost-model-version-2) and further enhance the [INDEX MERGE](/glossary.md#index-merge) feature to support the expressions connected by `AND`. - Support pushing down the `JSON_EXTRACT()` function to TiFlash. - Support [password management](/password-management.md) policies that meet password compliance auditing requirements. From eff9dd9336a128bbff29be1739e5e54c059898a7 Mon Sep 17 00:00:00 2001 From: Grace Cai Date: Mon, 19 Dec 2022 16:35:41 +0800 Subject: [PATCH 48/83] Apply suggestions from code review Co-authored-by: Aolin Co-authored-by: TomShawn <41534398+TomShawn@users.noreply.github.com> --- releases/release-6.5.0.md | 28 ++++++++++++++-------------- 1 file changed, 14 insertions(+), 14 deletions(-) diff --git a/releases/release-6.5.0.md b/releases/release-6.5.0.md index 3677633cb8422..413c6a436f913 100644 --- a/releases/release-6.5.0.md +++ b/releases/release-6.5.0.md @@ -32,17 +32,17 @@ Compared with the previous LTS 6.1.0, 6.5.0 not only includes new features, impr ### SQL -* The performance of TiDB adding indexes is improved by 10 times [#35983](https://github.com/pingcap/tidb/issues/35983) @[benjamin2037](https://github.com/benjamin2037) @[tangenta](https://github.com/tangenta) **tw@Oreoxmt** +* The performance of TiDB adding indexes is improved by 10 times (GA) [#35983](https://github.com/pingcap/tidb/issues/35983) @[benjamin2037](https://github.com/benjamin2037) @[tangenta](https://github.com/tangenta) **tw@Oreoxmt** TiDB v6.3.0 introduces the [Add index acceleration](/system-variables.md#tidb_ddl_enable_fast_reorg-new-in-v630) as an experimental feature to improve the speed of backfilling when creating an index. 
In v6.5.0, this feature becomes GA and is enabled by default, and the performance on large tables is expected to be 10 times faster. The acceleration feature is suitable for scenarios where a single SQL statement adds an index serially. When multiple SQL statements add indexes in parallel, only one of the SQL statements will be accelerated. -* Provide lightweight metadata lock to improve the DML success rate during DDL change [#37275](https://github.com/pingcap/tidb/issues/37275) @[wjhuang2016](https://github.com/wjhuang2016) **tw@Oreoxmt** +* Provide lightweight metadata lock to improve the DML success rate during DDL change (GA) [#37275](https://github.com/pingcap/tidb/issues/37275) @[wjhuang2016](https://github.com/wjhuang2016) **tw@Oreoxmt** TiDB v6.3.0 introduces [Metadata lock](/metadata-lock.md) as an experimental feature. To avoid the `Information schema is changed` error caused by DML statements, TiDB coordinates the priority of DMLs and DDLs during table metadata change, and makes the ongoing DDLs wait for the DMLs with old metadata to commit. In v6.5.0, this feature becomes GA and is enabled by default. It is suitable for various types of DDLs change scenarios. For more information, see [User document](/metadata-lock.md). -* Support restoring a cluster to a specific point in time by using `FLASHBACK CLUSTER TO TIMESTAMP` [#37197](https://github.com/pingcap/tidb/issues/37197) [#13303](https://github.com/tikv/tikv/issues/13303) @[Defined2014](https://github.com/Defined2014) @[bb7133](https://github.com/bb7133) @[JmPotato](https://github.com/JmPotato) @[Connor1996](https://github.com/Connor1996) @[HuSharp](https://github.com/HuSharp) @[CalvinNeo](https://github.com/CalvinNeo) **tw@Oreoxmt** +* Support restoring a cluster to a specific point in time by using `FLASHBACK CLUSTER TO TIMESTAMP` (GA) [#37197](https://github.com/pingcap/tidb/issues/37197) [#13303](https://github.com/tikv/tikv/issues/13303) @[Defined2014](https://github.com/Defined2014) @[bb7133](https://github.com/bb7133) @[JmPotato](https://github.com/JmPotato) @[Connor1996](https://github.com/Connor1996) @[HuSharp](https://github.com/HuSharp) @[CalvinNeo](https://github.com/CalvinNeo) **tw@Oreoxmt** TiDB v6.4.0 introduces the [`FLASHBACK CLUSTER TO TIMESTAMP`](/sql-statements/sql-statement-flashback-to-timestamp.md) statement as an experimental feature. You can use this statement to restore a cluster to a specific point in time within the Garbage Collection (GC) life time. In v6.5.0, this statement becomes GA. This feature helps you to easily undo DML misoperations, restore the original cluster in minutes, roll back data at different time points to determine the exact time when data changes, and it is compatible with PITR and TiCDC. @@ -54,7 +54,7 @@ Compared with the previous LTS 6.1.0, 6.5.0 not only includes new features, impr For more information, see [Non-Transactional DML statements](/non-transactional-dml.md) and [`BATCH` syntax](/sql-statements/sql-statement-batch.md). -* Support time to live (TTL) (experimental) [#39262](https://github.com/pingcap/tidb/issues/39262) @[lcwangchao](https://github.com/lcwangchao) **tw@ran-huang** +* Support time to live (TTL) (experimental) [#39262](https://github.com/pingcap/tidb/issues/39262) @[lcwangchao](https://github.com/lcwangchao) **tw@ran-huang** TTL provides row-level data lifetime management. In TiDB, a table with the TTL attribute automatically checks data lifetime and deletes expired data at the row level. 
TTL is designed to help you clean up unnecessary data periodically and in a timely manner without affecting the online read and write workloads. @@ -74,7 +74,7 @@ Compared with the previous LTS 6.1.0, 6.5.0 not only includes new features, impr - Reuse TiFlash query results or deal with highly concurrent online requests - Need a relatively small result set comparing with the input data size, preferably smaller than 100MiB. - For more information, see the [user documentation](/tiflash/tiflash-results-materialization.md). + For more information, see the [user documentation](/tiflash/tiflash-results-materialization.md). * Support binding history execution plans (experimental) [#39199](https://github.com/pingcap/tidb/issues/39199) @[fzzf678](https://github.com/fzzf678) **tw@qiancai** @@ -124,11 +124,11 @@ Compared with the previous LTS 6.1.0, 6.5.0 not only includes new features, impr - The user can still access TiDB Dashboard for diagnosis even if the PD node is unavailable. - Accessing TiDB Dashboard in Internet does not involve the privileged interfaces of PD. Therefore, the security risk of the cluster is mitigated. - For more information, see [Deploy TiDB Dashboard independently in TiDB Operator](https://docs.pingcap.com/tidb-in-kubernetes/dev/get-started#deploy-tidb-dashboard-independently). + For more information, see [Deploy TiDB Dashboard independently in TiDB Operator](https://docs.pingcap.com/tidb-in-kubernetes/dev/get-started#deploy-tidb-dashboard-independently). ### Performance -* [INDEX MERGE](/glossary.md#index-merge) supports conjunctive normal form (expressions connected by `AND`) [#39333](https://github.com/pingcap/tidb/issues/39333) @[guo-shaoge](https://github.com/guo-shaoge) @[time-and-fate](https://github.com/time-and-fate) @[hailanwhu](https://github.com/hailanwhu) **tw@TomShawn** +* [INDEX MERGE](/glossary.md#index-merge) supports expressions connected by `AND` [#39333](https://github.com/pingcap/tidb/issues/39333) @[guo-shaoge](https://github.com/guo-shaoge) @[time-and-fate](https://github.com/time-and-fate) @[hailanwhu](https://github.com/hailanwhu) **tw@TomShawn** Before v6.5.0, TiDB only supported using index merge for the filter conditions connected by `OR`. Starting from v6.5.0, TiDB has supported using index merge for filter conditions connected by `AND` in the `WHERE` clause. In this way, the index merge of TiDB can now cover more general combinations of query filter conditions and is no longer limited to union (`OR`) relationship. The current v6.5.0 version only supports index merge under `OR` conditions as automatically selected by the optimizer. To enable index merge for `AND` conditions, you need to use the [`USE_INDEX_MERGE`](/optimizer-hints.md#use_index_merget1_name-idx1_name--idx2_name-) hint. @@ -140,7 +140,7 @@ Compared with the previous LTS 6.1.0, 6.5.0 not only includes new features, impr * `->>` * `JSON_EXTRACT()` - The JSON format provides a flexible way for application data modeling. Therefore, more and more applications are using the JSON format for data exchange and data storage. By pushing down JSON functions to TiFlash, you can improve the efficiency of analyzing data in the JSON type and use TiDB for more real-time analytics scenarios. + The JSON format provides a flexible way for application data modeling. Therefore, more and more applications are using the JSON format for data exchange and data storage. 
By pushing down JSON functions to TiFlash, you can improve the efficiency of analyzing data in the JSON type and use TiDB for more real-time analytics scenarios. * Support pushing down the following [string functions](/tiflash/tiflash-supported-pushdown-calculations.md) to TiFlash [#6115](https://github.com/pingcap/tiflash/issues/6115) @[xzhangxian1008](https://github.com/xzhangxian1008) **tw@qiancai** @@ -156,7 +156,7 @@ Compared with the previous LTS 6.1.0, 6.5.0 not only includes new features, impr * Support pushing down sorting operations of [partitioned tables](/partitioned-table.md) to TiKV [#26166](https://github.com/pingcap/tidb/issues/26166) @[winoros](https://github.com/winoros) **tw@qiancai** - Although the [partitioned table](/partitioned-table.md) feature has been GA since v6.1.0, TiDB is continually improving its performance. In v6.5.0, TiDB supports pushing down sorting operations such as `ORDER BY` and `LIMIT` to TiKV for computation and filtering, which reduces network I/O overhead and improves SQL performance when you use partitioned tables. + Although the [partitioned table](/partitioned-table.md) feature has been GA since v6.1.0, TiDB is continually improving its performance. In v6.5.0, TiDB supports pushing down sorting operations such as `ORDER BY` and `LIMIT` to TiKV for computation and filtering, which reduces network I/O overhead and improves SQL performance when you use partitioned tables. * Optimizer introduces a more accurate Cost Model Version 2 [#35240](https://github.com/pingcap/tidb/issues/35240) @[qw4990](https://github.com/qw4990) **tw@Oreoxmt** @@ -198,7 +198,7 @@ Compared with the previous LTS 6.1.0, 6.5.0 not only includes new features, impr ### MySQL compatibility -* Support a high-performance and globally monotonic `AUTO_INCREMENT` [#38442](https://github.com/pingcap/tidb/issues/38442) @[tiancaiamao](https://github.com/tiancaiamao) **tw@Oreoxmt** +* Support a high-performance and globally monotonic `AUTO_INCREMENT` (GA) [#38442](https://github.com/pingcap/tidb/issues/38442) @[tiancaiamao](https://github.com/tiancaiamao) **tw@Oreoxmt** TiDB v6.4.0 introduces the `AUTO_INCREMENT` MySQL compatibility mode as an experimental feature. This mode introduces a centralized auto-increment ID allocating service that ensures IDs monotonically increase on all TiDB instances. This feature makes it easier to sort query results by auto-increment IDs. In v6.5.0, this feature becomes GA. The insert TPS of a table using this feature is expected to exceed 20,000, and this feature supports elastic scaling to improve the write throughput of a single table and entire clusters. To use the MySQL compatibility mode, you need to set `AUTO_ID_CACHE` to `1` when creating a table. The following is an example: @@ -268,7 +268,7 @@ Compared with the previous LTS 6.1.0, 6.5.0 not only includes new features, impr * PITR performance improved remarkably **tw@shichun-0415 - In the log restore stage, the restore speed of one TiKV can reach xx MB/s, which is x times faster than before. The restore speed is scalable and the RTO in DR scenarios is reduced greatly. The RPO in DR scenarios can be as short as 5 minutes. In normal cluster operation and maintenance (OM), for example, a rolling upgrade is performed or only one TiKV is down, the RPO can be 5 minutes. + In the log restore stage, the restore speed of one TiKV can reach xx MB/s, which is x times faster than before. The restore speed is scalable and the RTO in DR scenarios is reduced greatly. 
The RPO in DR scenarios can be as short as 5 minutes. In normal cluster operation and maintenance (OM), for example, a rolling upgrade is performed or only one TiKV is down, the RPO can be 5 minutes. * TiKV-BR GA: Supports backing up and restoring RawKV [#67](https://github.com/tikv/migration/issues/67) @[pingyu](https://github.com/pingyu) @[haojinming](https://github.com/haojinming) **tw@shichun-0415** @@ -416,12 +416,12 @@ Starting from v6.5.0, the [`AMEND TRANSACTION`](/system-variables.md#tidb_enable - Fix the `Get raft db is not allowed` error when executing the `compact raft` command in tikv-ctl [#13515](https://github.com/tikv/tikv/issues/13515) @[guoxiangCN](https://github.com/guoxiangCN) - Fix the issue that log backup does not work when TLS is enabled [#13867](https://github.com/tikv/tikv/issues/13867) @[YuJuncen](https://github.com/YuJuncen) - Fix the support issue of the Geometry field type [#13651](https://github.com/tikv/tikv/issues/13651) @[dveeden](https://github.com/dveeden) - - Fix the issue that `_` in the `LIKE` operator cannot match non-ASCII characters when new collation is not enabled [#13769](https://github.com/tikv/tikv/issues/13769) @[YangKeao](https://github.com/YangKeao) + - Fix the issue that `_` in the `LIKE` operator cannot match non-ASCII characters when new collation is not enabled [#13769](https://github.com/tikv/tikv/issues/13769) @[YangKeao](https://github.com/YangKeao) - Fix the issue that tikv-ctl is terminated unexpectedly when executing the `reset-to-version` command [#13829](https://github.com/tikv/tikv/issues/13829) @[tabokie](https://github.com/tabokie) + PD - - Fix the issue that the `balance-hot-region-scheduler` configuration is not persisted if not modified [#5701](https://github.com/tikv/pd/issues/5701) @[HunDunDM](https://github.com/HunDunDM) + - Fix the issue that the `balance-hot-region-scheduler` configuration is not persisted if not modified [#5701](https://github.com/tikv/pd/issues/5701) @[HunDunDM](https://github.com/HunDunDM) - Fix the issue that `rank-formula-version` does not retain the pre-upgrade configuration during the upgrade process [#5698](https://github.com/tikv/pd/issues/5698) @[HunDunDM](https://github.com/HunDunDM) + TiFlash @@ -433,7 +433,7 @@ Starting from v6.5.0, the [`AMEND TRANSACTION`](/system-variables.md#tidb_enable + Backup & Restore (BR) - - Fix the issue that when BR deletes log backup data, it mistakenly deletes data that should not be deleted [#38939](https://github.com/pingcap/tidb/issues/38939) @[Leavrth](https://github.com/Leavrth) + - Fix the issue that when BR deletes log backup data, it mistakenly deletes data that should not be deleted [#38939](https://github.com/pingcap/tidb/issues/38939) @[Leavrth](https://github.com/Leavrth) - Fix the issue that restore tasks fail when using old framework for collations in databases or tables [#39150](https://github.com/pingcap/tidb/issues/39150) @[MoCuishle28](https://github.com/MoCuishle28) - Fix the issue that backup fails because Alibaba Cloud and Huawei Cloud are not fully compatible with Amazon S3 storage [#39545](https://github.com/pingcap/tidb/issues/39545) @[3pointer](https://github.com/3pointer) From da6626f1b5e152737dd7d0d521cd433e9146a446 Mon Sep 17 00:00:00 2001 From: Grace Cai Date: Mon, 19 Dec 2022 17:05:14 +0800 Subject: [PATCH 49/83] Apply suggestions from code review --- releases/release-6.5.0.md | 6 +++--- 1 file changed, 3 insertions(+), 3 deletions(-) diff --git a/releases/release-6.5.0.md b/releases/release-6.5.0.md index 
413c6a436f913..07c4b339ea279 100644 --- a/releases/release-6.5.0.md +++ b/releases/release-6.5.0.md @@ -18,7 +18,7 @@ Compared with the previous LTS 6.1.0, 6.5.0 not only includes new features, impr - The TiDB global memory control becomes GA, and you can control the memory consumption threshold via [`tidb_server_memory_limit`](/system-variables.md#tidb_server_memory_limit-new-in-v640). - The high-performance and globally monotonic [`AUTO_INCREMENT`](/auto-increment.md#mysql-compatible-mode) column attribute becomes GA, which is compatible with MySQL. - Support restoring a cluster to a specific point in time by using [`FLASHBACK CLUSTER TO TIMESTAMP`](/sql-statements/sql-statement-flashback-to-timestamp.md) (GA), which is compatible with TiCDC and PITR. -- Optimizer further enhances the more accurate [Cost Model] version 2](/cost-model.md#cost-model-version-2) and further enhance the [INDEX MERGE](/glossary.md#index-merge) feature to support the expressions connected by `AND`. +- Enhance TiDB optimizer by making the more accurate [Cost Model version 2](/cost-model.md#cost-model-version-2) generally available and supporting expressions connected by `AND` for [INDEX MERGE](/explain-index-merge.md). - Support pushing down the `JSON_EXTRACT()` function to TiFlash. - Support [password management](/password-management.md) policies that meet password compliance auditing requirements. - TiDB Lightning and Dumpling support [importing](tidb-lightning/tidb-lightning-data-source.md) and [exporting](/dumpling-overview.md#improve-export-efficiency-through-concurrency) compressed SQL and CSV files. @@ -134,7 +134,7 @@ Compared with the previous LTS 6.1.0, 6.5.0 not only includes new features, impr For more details about index merge, see [v5.4.0 Release Notes](/release-5.4.0#performance) and [Explain Index Merge](/explain-index-merge.md). -* Support pushing down the following [JSON functions](/tiflash/tiflash-supported-pushdown-calculations.md) to TiFlash [#39458](https://github.com/pingcap/tidb/issues/39458) @[yibin87](https://github.com/yibin87) **tw@qiancai** +* Support pushing down the following JSON functions to TiFlash [#39458](https://github.com/pingcap/tidb/issues/39458) @[yibin87](https://github.com/yibin87) **tw@qiancai** * `->` * `->>` @@ -142,7 +142,7 @@ Compared with the previous LTS 6.1.0, 6.5.0 not only includes new features, impr The JSON format provides a flexible way for application data modeling. Therefore, more and more applications are using the JSON format for data exchange and data storage. By pushing down JSON functions to TiFlash, you can improve the efficiency of analyzing data in the JSON type and use TiDB for more real-time analytics scenarios. 
-* Support pushing down the following [string functions](/tiflash/tiflash-supported-pushdown-calculations.md) to TiFlash [#6115](https://github.com/pingcap/tiflash/issues/6115) @[xzhangxian1008](https://github.com/xzhangxian1008) **tw@qiancai** +* Support pushing down the following string functions to TiFlash [#6115](https://github.com/pingcap/tiflash/issues/6115) @[xzhangxian1008](https://github.com/xzhangxian1008) **tw@qiancai** * `regexp_like` * `regexp_instr` From 18fa5d9000fa43e9909e1e54b15ea6a8a0f8c944 Mon Sep 17 00:00:00 2001 From: Grace Cai Date: Mon, 19 Dec 2022 17:08:52 +0800 Subject: [PATCH 50/83] Update releases/release-6.5.0.md --- releases/release-6.5.0.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/releases/release-6.5.0.md b/releases/release-6.5.0.md index 07c4b339ea279..d86c50504fbaf 100644 --- a/releases/release-6.5.0.md +++ b/releases/release-6.5.0.md @@ -22,7 +22,7 @@ Compared with the previous LTS 6.1.0, 6.5.0 not only includes new features, impr - Support pushing down the `JSON_EXTRACT()` function to TiFlash. - Support [password management](/password-management.md) policies that meet password compliance auditing requirements. - TiDB Lightning and Dumpling support [importing](tidb-lightning/tidb-lightning-data-source.md) and [exporting](/dumpling-overview.md#improve-export-efficiency-through-concurrency) compressed SQL and CSV files. -- TiDB Data Migration (DM) [continuous data validation](/dm/dm-continuous-data-validation.md) is now in General Availability (GA). +- TiDB Data Migration (DM) [continuous data validation](/dm/dm-continuous-data-validation.md) becomes GA. - TiDB Backup & Restore supports snapshot checkpoint backup, improves the recovery performance of [PITR](/br-pitr-guide.md#carry-pitr) by x times, and reduces RPO to x minutes. - Improve the TiCDC throughput of [replicating data to Kafka](/replicate-data-to-kafka.md) by x times and reduces replication latency to x seconds. - Provide row-level [Time to live (TTL)](/time-to-live.md) to manage data lifecycle (experimental). From f0c0b9b4c3fc48af2e31b875727eefa5f2064469 Mon Sep 17 00:00:00 2001 From: shichun-0415 <89768198+shichun-0415@users.noreply.github.com> Date: Tue, 20 Dec 2022 14:26:43 +0800 Subject: [PATCH 51/83] Apply suggestions from code review Co-authored-by: Aolin --- releases/release-6.5.0.md | 8 ++++++-- 1 file changed, 6 insertions(+), 2 deletions(-) diff --git a/releases/release-6.5.0.md b/releases/release-6.5.0.md index d86c50504fbaf..82dbae3b4d416 100644 --- a/releases/release-6.5.0.md +++ b/releases/release-6.5.0.md @@ -248,11 +248,15 @@ Compared with the previous LTS 6.1.0, 6.5.0 not only includes new features, impr The storage sink supports changed logs in the canal-json and CSV formats. Noticeably, the latency of replicating changed logs from TiCDC to storage can be as short as xx. For more information, see [User document](/ticdc/ticdc-sink-to-cloud-storage.md). -* TiCDC supports bidirectional replication across multiple clusters @[asddongmen](https://github.com/asddongmen) **tw@shichun-0415** +* TiCDC supports bidirectional replication across multiple clusters @[asddongmen](https://github.com/asddongmen) **tw@shichun-0415** TiCDC supports bidirectional replication across multiple TiDB clusters. If you need a multi-master TiDB solution for your application, especially a multi-master solution across multiple regions, you can use this feature to build one. 
By configuring the `bdr-mode = true` parameter for the TiCDC changefeeds from each TiDB cluster to the other TiDB clusters, you can achieve bidirectional data replication across multiple TiDB clusters. - For more information, refer to [user document](/ticdc/ticdc-bidirectional-replication.md). + For more information, see [user document](/ticdc/ticdc-bidirectional-replication.md). + +* TiCDC supports updating TLS online [#7908](https://github.com/pingcap/tiflow/issues/7908) @[CharlesCheung96](https://github.com/CharlesCheung96) **tw@shichun-0415** + + TiCDC supports online updates of TLS certificates. To keep data secure, you will set an expiration policy for the certificate used by the system. After the expiration period, the system uses a new certificate. TiCDC v6.5.0 supports online updates of TLS certificates. Without interrupting the replication tasks, TiCDC can automatically detect and update the certificate, without the need for manual intervention. * TiCDC performance improves significantly **tw@shichun-0415 From ac825f6e453e9f089b9745cb96c827eb36902fba Mon Sep 17 00:00:00 2001 From: qiancai Date: Tue, 20 Dec 2022 16:13:45 +0800 Subject: [PATCH 52/83] add tiflash/cdc performance overview description --- releases/release-6.5.0.md | 11 +++++++++++ 1 file changed, 11 insertions(+) diff --git a/releases/release-6.5.0.md b/releases/release-6.5.0.md index 82dbae3b4d416..eb80eb8b1c3c1 100644 --- a/releases/release-6.5.0.md +++ b/releases/release-6.5.0.md @@ -126,6 +126,17 @@ Compared with the previous LTS 6.1.0, 6.5.0 not only includes new features, impr For more information, see [Deploy TiDB Dashboard independently in TiDB Operator](https://docs.pingcap.com/tidb-in-kubernetes/dev/get-started#deploy-tidb-dashboard-independently). +* Performance Overview dashboard adds TiFlash and CDC (Change Data Capture) panels + + Since v6.1.0, TiDB has introduced the Performance Overview dashboard in Grafana, which provides a system-level entry for overall performance diagnosis of TiDB, TiKV, and PD. In v6.5.0, the Performance Overview dashboard adds TiFlash and CDC (Change Data Capture) panels. With these panels, starting from v6.5.0, you can use the Performance Overview dashboard to analyze the performance of all components in a TiDB cluster. + + The TiFlash and CDC panels reorganize with TiFlash and TiCDC monitoring related information, which can help you grealty improve the efficiency of performance analysis and troubleshooting for TiFlash and TiCDC. + + - On the [TiFlash panels](/grafana-performance-overview-dashboard.md#tiflash), you can easily view the request types, latency analysis, and resource usage overview of your TiFlash cluster. + - On the [CDC panels](/grafana-performance-overview-dashboard.md#cdc), you can easily view the health, synchronization latency, data flow, and downstream write latency of your TiCDC cluster. + + For more information, see [user document](performance-tuning-method.md/). 
+ ### Performance * [INDEX MERGE](/glossary.md#index-merge) supports expressions connected by `AND` [#39333](https://github.com/pingcap/tidb/issues/39333) @[guo-shaoge](https://github.com/guo-shaoge) @[time-and-fate](https://github.com/time-and-fate) @[hailanwhu](https://github.com/hailanwhu) **tw@TomShawn** From 53fc0b263989205635bd243a3308aadeac441ef2 Mon Sep 17 00:00:00 2001 From: Grace Cai Date: Tue, 20 Dec 2022 16:20:14 +0800 Subject: [PATCH 53/83] Apply suggestions from code review --- releases/release-6.5.0.md | 8 ++++---- 1 file changed, 4 insertions(+), 4 deletions(-) diff --git a/releases/release-6.5.0.md b/releases/release-6.5.0.md index eb80eb8b1c3c1..e1ecf2d61e279 100644 --- a/releases/release-6.5.0.md +++ b/releases/release-6.5.0.md @@ -126,16 +126,16 @@ Compared with the previous LTS 6.1.0, 6.5.0 not only includes new features, impr For more information, see [Deploy TiDB Dashboard independently in TiDB Operator](https://docs.pingcap.com/tidb-in-kubernetes/dev/get-started#deploy-tidb-dashboard-independently). -* Performance Overview dashboard adds TiFlash and CDC (Change Data Capture) panels +* Performance Overview dashboard adds TiFlash and CDC (Change Data Capture) panels **tw@qiancai** - Since v6.1.0, TiDB has introduced the Performance Overview dashboard in Grafana, which provides a system-level entry for overall performance diagnosis of TiDB, TiKV, and PD. In v6.5.0, the Performance Overview dashboard adds TiFlash and CDC (Change Data Capture) panels. With these panels, starting from v6.5.0, you can use the Performance Overview dashboard to analyze the performance of all components in a TiDB cluster. + Since v6.1.0, TiDB has introduced the Performance Overview dashboard in Grafana, which provides a system-level entry for overall performance diagnosis of TiDB, TiKV, and PD. In v6.5.0, the Performance Overview dashboard adds TiFlash and CDC panels. With these panels, starting from v6.5.0, you can use the Performance Overview dashboard to analyze the performance of all components in a TiDB cluster. - The TiFlash and CDC panels reorganize with TiFlash and TiCDC monitoring related information, which can help you grealty improve the efficiency of performance analysis and troubleshooting for TiFlash and TiCDC. + The TiFlash and CDC panels reorganize with TiFlash and TiCDC monitoring information, which can help you greatly improve the efficiency of performance analysis and troubleshooting for TiFlash and TiCDC. - On the [TiFlash panels](/grafana-performance-overview-dashboard.md#tiflash), you can easily view the request types, latency analysis, and resource usage overview of your TiFlash cluster. - On the [CDC panels](/grafana-performance-overview-dashboard.md#cdc), you can easily view the health, synchronization latency, data flow, and downstream write latency of your TiCDC cluster. - For more information, see [user document](performance-tuning-method.md/). + For more information, see [user document](performance-tuning-method.md). 
### Performance From 87a370487156d7b5f88e3b63538adc0114036246 Mon Sep 17 00:00:00 2001 From: shichun-0415 <89768198+shichun-0415@users.noreply.github.com> Date: Tue, 20 Dec 2022 16:36:28 +0800 Subject: [PATCH 54/83] Update releases/release-6.5.0.md Co-authored-by: xixirangrang --- releases/release-6.5.0.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/releases/release-6.5.0.md b/releases/release-6.5.0.md index e1ecf2d61e279..2eac574df224d 100644 --- a/releases/release-6.5.0.md +++ b/releases/release-6.5.0.md @@ -267,7 +267,7 @@ Compared with the previous LTS 6.1.0, 6.5.0 not only includes new features, impr * TiCDC supports updating TLS online [#7908](https://github.com/pingcap/tiflow/issues/7908) @[CharlesCheung96](https://github.com/CharlesCheung96) **tw@shichun-0415** - TiCDC supports online updates of TLS certificates. To keep data secure, you will set an expiration policy for the certificate used by the system. After the expiration period, the system uses a new certificate. TiCDC v6.5.0 supports online updates of TLS certificates. Without interrupting the replication tasks, TiCDC can automatically detect and update the certificate, without the need for manual intervention. + To keep data secure, you need to set an expiration policy for the certificate used by the system. After the expiration period, the system needs a new certificate. TiCDC v6.5.0 supports online updates of TLS certificates. Without interrupting the replication tasks, TiCDC can automatically detect and update the certificate, without the need for manual intervention. * TiCDC performance improves significantly **tw@shichun-0415 From 648b98a9cfc00dad9049000ee80b605d6cb770f8 Mon Sep 17 00:00:00 2001 From: Grace Cai Date: Tue, 20 Dec 2022 16:38:46 +0800 Subject: [PATCH 55/83] Apply suggestions from code review --- releases/release-6.5.0.md | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/releases/release-6.5.0.md b/releases/release-6.5.0.md index 2eac574df224d..835a534167240 100644 --- a/releases/release-6.5.0.md +++ b/releases/release-6.5.0.md @@ -126,11 +126,11 @@ Compared with the previous LTS 6.1.0, 6.5.0 not only includes new features, impr For more information, see [Deploy TiDB Dashboard independently in TiDB Operator](https://docs.pingcap.com/tidb-in-kubernetes/dev/get-started#deploy-tidb-dashboard-independently). -* Performance Overview dashboard adds TiFlash and CDC (Change Data Capture) panels **tw@qiancai** +* Performance Overview dashboard adds TiFlash and CDC (Change Data Capture) panels [#39230](https://github.com/pingcap/tidb/issues/39230) @[dbsid](https://github.com/dbsid) **tw@qiancai** Since v6.1.0, TiDB has introduced the Performance Overview dashboard in Grafana, which provides a system-level entry for overall performance diagnosis of TiDB, TiKV, and PD. In v6.5.0, the Performance Overview dashboard adds TiFlash and CDC panels. With these panels, starting from v6.5.0, you can use the Performance Overview dashboard to analyze the performance of all components in a TiDB cluster. - The TiFlash and CDC panels reorganize with TiFlash and TiCDC monitoring information, which can help you greatly improve the efficiency of performance analysis and troubleshooting for TiFlash and TiCDC. + The TiFlash and CDC panels reorganize with TiFlash and TiCDC monitoring information, which can help you greatly improve the efficiency of analyzing and troubleshooting TiFlash and TiCDC performance issues. 
- On the [TiFlash panels](/grafana-performance-overview-dashboard.md#tiflash), you can easily view the request types, latency analysis, and resource usage overview of your TiFlash cluster. - On the [CDC panels](/grafana-performance-overview-dashboard.md#cdc), you can easily view the health, synchronization latency, data flow, and downstream write latency of your TiCDC cluster. From 5f7aac5c634b05beab2b50e1866ed17433554215 Mon Sep 17 00:00:00 2001 From: Grace Cai Date: Wed, 21 Dec 2022 13:46:34 +0800 Subject: [PATCH 56/83] Apply suggestions from code review --- releases/release-6.5.0.md | 2 ++ 1 file changed, 2 insertions(+) diff --git a/releases/release-6.5.0.md b/releases/release-6.5.0.md index 835a534167240..98cf563a58c55 100644 --- a/releases/release-6.5.0.md +++ b/releases/release-6.5.0.md @@ -298,6 +298,7 @@ Compared with the previous LTS 6.1.0, 6.5.0 not only includes new features, impr | Variable name | Change type | Description | |--------|------------------------------|------| |[`tidb_enable_amend_pessimistic_txn`](/system-variables.md#tidb_enable_amend_pessimistic_txn-new-in-v407)| Deprecated | Starting from v6.5.0, this variable is deprecated, and TiDB uses the [Metadata Lock](/metadata-lock.md) feature by default to avoid the `Information schema is changed` error. | +| [`tidb_enable_outer_join_reorder`](/system-variables.md#tidb_enable_outer_join_reorder-new-in-v610) | Modified | Changes the default value from `OFF` to `ON`, meaning that the support of Outer Join for the [Join Reorder](/join-reorder.md) algorithm is enabled by default. | | [`tidb_cost_model_version`](/system-variables.md#tidb_cost_model_version-introduced-new-in-v620) | Modified | Changes the default value from `1` to `2`, meaning that Cost Model Version 2 is used for index selection and operator selection by default. | | [`tidb_enable_metadata_lock`](/system-variables.md#tidb_enable_metadata_lock-new-in-v630) | Modified | Changes the default value from `OFF` to `ON`, meaning that the metadata lock feature is enabled by default. | | [`tidb_enable_tiflash_read_for_write_stmt`](/system-variables.md#tidb_enable_tiflash_read_for_write_stmt-new-in-v630) | Modified | Takes effect starting from v6.5.0. It controls whether read operations in SQL statements containing `INSERT`, `DELETE`, and `UPDATE` can be pushed down to TiFlash. The default value is `OFF`. 
| @@ -354,6 +355,7 @@ Starting from v6.5.0, the [`AMEND TRANSACTION`](/system-variables.md#tidb_enable + TiDB - For `BIT` and `CHAR` columns, make the result of `INFORMATION_SCHEMA.COLUMNS` consistent with MySQL [#25472](https://github.com/pingcap/tidb/issues/25472) @[hawkingrei](https://github.com/hawkingrei) + - Optimize the TiDB probing mechanism for TiFlash nodes in the TiFlash MPP mode to mitigate the performance impact when nodes are abnormal [#39686](https://github.com/pingcap/tidb/issues/39686) @[hackersean](https://github.com/hackersean) + TiKV From 58b2e58fd3cecc5a4357ea26e719484920ab4ffb Mon Sep 17 00:00:00 2001 From: Grace Cai Date: Wed, 21 Dec 2022 18:35:33 +0800 Subject: [PATCH 57/83] Apply suggestions from code review Co-authored-by: Ran --- releases/release-6.5.0.md | 6 +++--- 1 file changed, 3 insertions(+), 3 deletions(-) diff --git a/releases/release-6.5.0.md b/releases/release-6.5.0.md index 98cf563a58c55..fd476ed5fb058 100644 --- a/releases/release-6.5.0.md +++ b/releases/release-6.5.0.md @@ -130,12 +130,12 @@ Compared with the previous LTS 6.1.0, 6.5.0 not only includes new features, impr Since v6.1.0, TiDB has introduced the Performance Overview dashboard in Grafana, which provides a system-level entry for overall performance diagnosis of TiDB, TiKV, and PD. In v6.5.0, the Performance Overview dashboard adds TiFlash and CDC panels. With these panels, starting from v6.5.0, you can use the Performance Overview dashboard to analyze the performance of all components in a TiDB cluster. - The TiFlash and CDC panels reorganize with TiFlash and TiCDC monitoring information, which can help you greatly improve the efficiency of analyzing and troubleshooting TiFlash and TiCDC performance issues. + The TiFlash and CDC panels reorganize the TiFlash and TiCDC monitoring information, which can help you greatly improve the efficiency of analyzing and troubleshooting TiFlash and TiCDC performance issues. - On the [TiFlash panels](/grafana-performance-overview-dashboard.md#tiflash), you can easily view the request types, latency analysis, and resource usage overview of your TiFlash cluster. - - On the [CDC panels](/grafana-performance-overview-dashboard.md#cdc), you can easily view the health, synchronization latency, data flow, and downstream write latency of your TiCDC cluster. + - On the [CDC panels](/grafana-performance-overview-dashboard.md#cdc), you can easily view the health, replication latency, data flow, and downstream write latency of your TiCDC cluster. - For more information, see [user document](performance-tuning-method.md). + For more information, see [user document](/performance-tuning-method.md). ### Performance From e4698e2276f914d50674f07c0f8872883d470389 Mon Sep 17 00:00:00 2001 From: Aolin Date: Thu, 22 Dec 2022 09:35:48 +0800 Subject: [PATCH 58/83] Apply suggestions from code review Co-authored-by: TomShawn <41534398+TomShawn@users.noreply.github.com> --- releases/release-6.5.0.md | 6 +++--- 1 file changed, 3 insertions(+), 3 deletions(-) diff --git a/releases/release-6.5.0.md b/releases/release-6.5.0.md index fd476ed5fb058..ad313c673afd9 100644 --- a/releases/release-6.5.0.md +++ b/releases/release-6.5.0.md @@ -40,7 +40,7 @@ Compared with the previous LTS 6.1.0, 6.5.0 not only includes new features, impr TiDB v6.3.0 introduces [Metadata lock](/metadata-lock.md) as an experimental feature. 
To avoid the `Information schema is changed` error caused by DML statements, TiDB coordinates the priority of DMLs and DDLs during table metadata change, and makes the ongoing DDLs wait for the DMLs with old metadata to commit. In v6.5.0, this feature becomes GA and is enabled by default. It is suitable for various types of DDLs change scenarios. - For more information, see [User document](/metadata-lock.md). + For more information, see [user document](/metadata-lock.md). * Support restoring a cluster to a specific point in time by using `FLASHBACK CLUSTER TO TIMESTAMP` (GA) [#37197](https://github.com/pingcap/tidb/issues/37197) [#13303](https://github.com/tikv/tikv/issues/13303) @[Defined2014](https://github.com/Defined2014) @[bb7133](https://github.com/bb7133) @[JmPotato](https://github.com/JmPotato) @[Connor1996](https://github.com/Connor1996) @[HuSharp](https://github.com/HuSharp) @[CalvinNeo](https://github.com/CalvinNeo) **tw@Oreoxmt** @@ -163,7 +163,7 @@ Compared with the previous LTS 6.1.0, 6.5.0 not only includes new features, impr In some view access scenarios, you need to use optimizer hints to interfere with the execution plan of the query in the view to achieve the best performance. Since v6.5.0, TiDB supports adding global hints for the query blocks in the view, thus the hints defined in the query can be effective in the view. This feature provides a way to inject hints into complex SQL statements that contain nested views, enhances the execution plan control, and stabilizes the performance of complex statements. To use global hints, you need to [name the query blocks](/optimizer-hints.md#step-1-define-the-query-block-name-of-the-view-using-the-qb_name-hint) and [specify hint references](/optimizer-hints.md#step-2-add-the-target-hints). - For more information, see [User document](/optimizer-hints.md#hints-that-take-effect-globally). + For more information, see [user document](/optimizer-hints.md#hints-that-take-effect-globally). * Support pushing down sorting operations of [partitioned tables](/partitioned-table.md) to TiKV [#26166](https://github.com/pingcap/tidb/issues/26166) @[winoros](https://github.com/winoros) **tw@qiancai** @@ -171,7 +171,7 @@ Compared with the previous LTS 6.1.0, 6.5.0 not only includes new features, impr * Optimizer introduces a more accurate Cost Model Version 2 [#35240](https://github.com/pingcap/tidb/issues/35240) @[qw4990](https://github.com/qw4990) **tw@Oreoxmt** - TiDB v6.2.0 introduces the [Cost Model Version 2](/cost-model.md#cost-model-version-2) as an experimental feature. This model uses a more accurate cost estimation method to help the optimizer choose the optimal execution plan. Especially when TiFlash is deployed, Cost Model Version 2 automatically helps choose the appropriate storage engine and avoids much manual intervention. After real-scene testing for a period of time, this model becomes GA in v6.5.0. SInce v6.5.0, newly-created clusters use Cost Model Version 2 by default. For clusters upgrade to v6.5.0, because Cost Model Version 2 might cause changes to query plans, you can set the [`tidb_cost_model_version = 2`](/system-variables.md#tidb_cost_model_version-new-in-v620) variable to use the new cost model after sufficient performance testing. + TiDB v6.2.0 introduces the [Cost Model Version 2](/cost-model.md#cost-model-version-2) as an experimental feature. This model uses a more accurate cost estimation method to help the optimizer choose the optimal execution plan. 
Especially when TiFlash is deployed, Cost Model Version 2 automatically helps choose the appropriate storage engine and avoids much manual intervention. After real-scene testing for a period of time, this model becomes GA in v6.5.0. Since v6.5.0, newly-created clusters use Cost Model Version 2 by default. For clusters upgrade to v6.5.0, because Cost Model Version 2 might cause changes to query plans, you can set the [`tidb_cost_model_version = 2`](/system-variables.md#tidb_cost_model_version-new-in-v620) variable to use the new cost model after sufficient performance testing. Cost Model Version 2 becomes a generally available feature that significantly improves the overall capability of the TiDB optimizer and helps TiDB evolve towards a more powerful HTAP database. From bd4d80b82450587bccb84f20bca8c560e818eb26 Mon Sep 17 00:00:00 2001 From: Aolin Date: Thu, 22 Dec 2022 14:26:26 +0800 Subject: [PATCH 59/83] add the reason for changing the default value --- releases/release-6.5.0.md | 8 ++++---- 1 file changed, 4 insertions(+), 4 deletions(-) diff --git a/releases/release-6.5.0.md b/releases/release-6.5.0.md index ad313c673afd9..61d95e391e6d6 100644 --- a/releases/release-6.5.0.md +++ b/releases/release-6.5.0.md @@ -299,10 +299,10 @@ Compared with the previous LTS 6.1.0, 6.5.0 not only includes new features, impr |--------|------------------------------|------| |[`tidb_enable_amend_pessimistic_txn`](/system-variables.md#tidb_enable_amend_pessimistic_txn-new-in-v407)| Deprecated | Starting from v6.5.0, this variable is deprecated, and TiDB uses the [Metadata Lock](/metadata-lock.md) feature by default to avoid the `Information schema is changed` error. | | [`tidb_enable_outer_join_reorder`](/system-variables.md#tidb_enable_outer_join_reorder-new-in-v610) | Modified | Changes the default value from `OFF` to `ON`, meaning that the support of Outer Join for the [Join Reorder](/join-reorder.md) algorithm is enabled by default. | -| [`tidb_cost_model_version`](/system-variables.md#tidb_cost_model_version-introduced-new-in-v620) | Modified | Changes the default value from `1` to `2`, meaning that Cost Model Version 2 is used for index selection and operator selection by default. | -| [`tidb_enable_metadata_lock`](/system-variables.md#tidb_enable_metadata_lock-new-in-v630) | Modified | Changes the default value from `OFF` to `ON`, meaning that the metadata lock feature is enabled by default. | +| [`tidb_cost_model_version`](/system-variables.md#tidb_cost_model_version-introduced-new-in-v620) | Modified | Changes the default value from `1` to `2` after further tests, meaning that Cost Model Version 2 is used for index selection and operator selection by default. | +| [`tidb_enable_metadata_lock`](/system-variables.md#tidb_enable_metadata_lock-new-in-v630) | Modified | Changes the default value from `OFF` to `ON` after further tests, meaning that the metadata lock feature is enabled by default. | | [`tidb_enable_tiflash_read_for_write_stmt`](/system-variables.md#tidb_enable_tiflash_read_for_write_stmt-new-in-v630) | Modified | Takes effect starting from v6.5.0. It controls whether read operations in SQL statements containing `INSERT`, `DELETE`, and `UPDATE` can be pushed down to TiFlash. The default value is `OFF`. | -| [`tidb_ddl_enable_fast_reorg`](/system-variables.md#tidb_ddl_enable_fast_reorg-new-in-v630) | Modified | Changes the default value from `OFF` to `ON`, meaning that the acceleration of `ADD INDEX` and `CREATE INDEX` is enabled by default. 
| +| [`tidb_ddl_enable_fast_reorg`](/system-variables.md#tidb_ddl_enable_fast_reorg-new-in-v630) | Modified | Changes the default value from `OFF` to `ON` after further tests, meaning that the acceleration of `ADD INDEX` and `CREATE INDEX` is enabled by default. | | [`tidb_mem_quota_query`](/system-variables.md#tidb_mem_quota_query) | Modified | For versions earlier than TiDB v6.5.0, this variable is used to set the threshold value of memory quota for a query. For TiDB v6.5.0 and later versions, this variable is used to set the threshold value of memory quota for a session. | | [`tidb_replica_read`](/system-variables.md#tidb_replica_read-new-in-v40) | Modified | Starting from v6.5.0, when this variable is set to`closest-adaptive` and the estimated result of a read request is greater than or equal to [`tidb_adaptive_closest_read_threshold`](/system-variables.md#tidb_adaptive_closest_read_threshold-new-in-v630), the number of TiDB nodes whose `closest-adaptive` configuration takes effect is limited in each availability zone, which is always the same as the number of TiDB nodes in the availability zone with the fewest TiDB nodes, and the other TiDB nodes automatically read from the leader replica. | | [`tidb_server_memory_limit`](/system-variables.md#tidb_server_memory_limit-new-in-v640) | Modified | Changes the default value from `0` to `80%`, meaning that the memory limit for a TiDB instance is 80% of the total memory by default. | @@ -338,7 +338,7 @@ Compared with the previous LTS 6.1.0, 6.5.0 not only includes new features, impr | TiDB | [`server-memory-quota`](/tidb-configuration-file.md#server-memory-quota-new-in-v409) | Deprecated | Since v6.5.0, this configuration item is deprecated. Instead, use the system variable [`tidb_server_memory_limit`](/system-variables.md#tidb_server_memory_limit-new-in-v640) to manage memory globally. | | TiDB | [`disconnect-on-expired-password`](/tidb-configuration-file.md#disconnect-on-expired-password-new-in-v650) | Newly added | Determines whether TiDB disconnects the client connection when the password is expired. The default value is `true`, which means the client connection is disconnected when the password is expired. | | TiKV | `raw-min-ts-outlier-threshold` | Deleted | Since v6.4.0, this configuration item was deprecated. Since v6.5.0, this configuration item is deleted. | -| TiKV | [`cdc.min-ts-interval`](/tikv-configuration-file.md#min-ts-interval) | Modified | Change the default value from `1s` to `200ms`. | +| TiKV | [`cdc.min-ts-interval`](/tikv-configuration-file.md#min-ts-interval) | Modified | To reduce CDC latency, the default value is changed from `1s` to `200ms`. | | TiKV | [`memory-use-ratio`](/tikv-configuration-file.md#memory-use-ratio-introduced-new-in-v650) | Newly added | Indicates the ratio of available memory to total system memory in PITR log recovery. 
| ### Others From 7eaf23c975011ec0bd639983beeb73c4546f9452 Mon Sep 17 00:00:00 2001 From: Grace Cai Date: Thu, 22 Dec 2022 14:44:28 +0800 Subject: [PATCH 60/83] Apply suggestions from code review Co-authored-by: TomShawn <41534398+TomShawn@users.noreply.github.com> --- releases/release-6.5.0.md | 20 ++++++++++---------- 1 file changed, 10 insertions(+), 10 deletions(-) diff --git a/releases/release-6.5.0.md b/releases/release-6.5.0.md index 61d95e391e6d6..8ba5a6061a49e 100644 --- a/releases/release-6.5.0.md +++ b/releases/release-6.5.0.md @@ -68,11 +68,11 @@ Compared with the previous LTS 6.1.0, 6.5.0 not only includes new features, impr INSERT INTO t2 SELECT Mod(x,y) FROM t1; ``` - During the experimental phase, this feature is disabled by default. To enable it, you can set the [`tidb_enable_tiflash_read_for_write_stmt`](/system-variables.md#tidb_enable_tiflash_read_for_write_stmt-new-in-v630) system variable to `ON`. There are no special restrictions on the result table specified by `INSERT INTO` for this feature, and you are free to add a TiFlash replica to that result table or not. Typical usage scenarios of this feature include: + During the experimental phase, this feature is disabled by default. To enable it, you can set the [`tidb_enable_tiflash_read_for_write_stmt`](/system-variables.md#tidb_enable_tiflash_read_for_write_stmt-new-in-v630) system variable to `ON`. There are no special restrictions on the result table specified by `INSERT INTO` for this feature, and you are free to add or not add a TiFlash replica to that result table. Typical usage scenarios of this feature include: - Run complex analytical queries using TiFlash - Reuse TiFlash query results or deal with highly concurrent online requests - - Need a relatively small result set comparing with the input data size, preferably smaller than 100MiB. + - Need a relatively small result set compared with the input data size, preferably smaller than 100 MiB. For more information, see the [user documentation](/tiflash/tiflash-results-materialization.md). @@ -82,7 +82,7 @@ Compared with the previous LTS 6.1.0, 6.5.0 not only includes new features, impr In v6.5.0, TiDB supports binding historical execution plans by extending the binding object in the [`CREATE [GLOBAL | SESSION] BINDING`](/sql-statements/sql-statement-create-binding.md) statement. When the execution plan of a SQL statement changes, you can bind the original execution plan by specifying `plan_digest` in the `CREATE [GLOBAL | SESSION] BINDING` statement to quickly recover SQL performance, as long as the original execution plan is still in the SQL execution history memory table (for example, `statements_summary`). This feature can simplify the process of handling execution plan change issues and improve your maintenance efficiency. - For more information, see [user documentation](/sql-plan-management.md#bind-historical-execution-plans). + For more information, see [user document](/sql-plan-management.md#bind-historical-execution-plans). ### Security @@ -92,25 +92,25 @@ Compared with the previous LTS 6.1.0, 6.5.0 not only includes new features, impr TiDB provides the SQL function [`VALIDATE_PASSWORD_STRENGTH()`](https://dev.mysql.com/doc/refman/5.7/en/encryption-functions.html#function_validate-password-strength) to validate the password strength. - For more information, see [User document](/password-management.md#password-complexity-policy). + For more information, see [user document](/password-management.md#password-complexity-policy). 
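As a quick illustration of the password strength check described above, the following sketch calls `VALIDATE_PASSWORD_STRENGTH()` on a few sample passwords. The sample values are arbitrary; the function returns a score from 0 (weak) to 100 (strong), and the exact score also depends on the current password complexity settings.

```sql
-- Weak passwords score low; longer mixed-character passwords score higher.
SELECT VALIDATE_PASSWORD_STRENGTH('abc');
SELECT VALIDATE_PASSWORD_STRENGTH('password123');
SELECT VALIDATE_PASSWORD_STRENGTH('N0t_So_Weak_P@ssw0rd!');
```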
* Support the password expiration policy [#38936](https://github.com/pingcap/tidb/issues/38936) @[CbcWestwolf](https://github.com/CbcWestwolf) **tw@ran-huang** TiDB supports configuring the password expiration policy, including manual expiration, global-level automatic expiration, and account-level automatic expiration. After this policy is enabled, you must change your passwords periodically. This reduces the risk of password leakage due to long-term use and improves password security. - For more information, see [User document](/password-management.md#password-expiration-policy). + For more information, see [user document](/password-management.md#password-expiration-policy). * Support the password reuse policy [#38937](https://github.com/pingcap/tidb/issues/38937) @[keeplearning20221](https://github.com/keeplearning20221) **tw@ran-huang** TiDB supports configuring the password reuse policy, including global-level password reuse policy and account-level password reuse policy. After this policy is enabled, you cannot use the passwords that you have used within a specified period or the most recent several passwords that you have used. This reduces the risk of password leakage due to repeated use of passwords and improves password security. - For more information, see [User document](/password-management.md#password-reuse-policy). + For more information, see [user document](/password-management.md#password-reuse-policy). * Support failed-login tracking and temporary account locking policy [#38938](https://github.com/pingcap/tidb/issues/38938) @[lastincisor](https://github.com/lastincisor) **tw@ran-huang** After this policy is enabled, if you log in to TiDB with incorrect passwords multiple times consecutively, the account is temporarily locked. After the lock time ends, the account is automatically unlocked. - For more information, see [User document](/password-management.md#failed-login-tracking-and-temporary-account-locking-policy). + For more information, see [user document](/password-management.md#failed-login-tracking-and-temporary-account-locking-policy). ### Observability @@ -122,7 +122,7 @@ Compared with the previous LTS 6.1.0, 6.5.0 not only includes new features, impr - The compute work of TiDB Dashboard does not pose pressure on PD nodes. This ensures more stable cluster operation. - The user can still access TiDB Dashboard for diagnosis even if the PD node is unavailable. - - Accessing TiDB Dashboard in Internet does not involve the privileged interfaces of PD. Therefore, the security risk of the cluster is mitigated. + - Accessing TiDB Dashboard in Internet does not involve the privileged interfaces of PD. Therefore, the security risk of the cluster is reduced. For more information, see [Deploy TiDB Dashboard independently in TiDB Operator](https://docs.pingcap.com/tidb-in-kubernetes/dev/get-started#deploy-tidb-dashboard-independently). @@ -298,13 +298,13 @@ Compared with the previous LTS 6.1.0, 6.5.0 not only includes new features, impr | Variable name | Change type | Description | |--------|------------------------------|------| |[`tidb_enable_amend_pessimistic_txn`](/system-variables.md#tidb_enable_amend_pessimistic_txn-new-in-v407)| Deprecated | Starting from v6.5.0, this variable is deprecated, and TiDB uses the [Metadata Lock](/metadata-lock.md) feature by default to avoid the `Information schema is changed` error. 
| -| [`tidb_enable_outer_join_reorder`](/system-variables.md#tidb_enable_outer_join_reorder-new-in-v610) | Modified | Changes the default value from `OFF` to `ON`, meaning that the support of Outer Join for the [Join Reorder](/join-reorder.md) algorithm is enabled by default. | +| [`tidb_enable_outer_join_reorder`](/system-variables.md#tidb_enable_outer_join_reorder-new-in-v610) | Modified | Changes the default value from `OFF` to `ON` after further tests, meaning that the support of Outer Join for the [Join Reorder](/join-reorder.md) algorithm is enabled by default. | | [`tidb_cost_model_version`](/system-variables.md#tidb_cost_model_version-introduced-new-in-v620) | Modified | Changes the default value from `1` to `2` after further tests, meaning that Cost Model Version 2 is used for index selection and operator selection by default. | | [`tidb_enable_metadata_lock`](/system-variables.md#tidb_enable_metadata_lock-new-in-v630) | Modified | Changes the default value from `OFF` to `ON` after further tests, meaning that the metadata lock feature is enabled by default. | | [`tidb_enable_tiflash_read_for_write_stmt`](/system-variables.md#tidb_enable_tiflash_read_for_write_stmt-new-in-v630) | Modified | Takes effect starting from v6.5.0. It controls whether read operations in SQL statements containing `INSERT`, `DELETE`, and `UPDATE` can be pushed down to TiFlash. The default value is `OFF`. | | [`tidb_ddl_enable_fast_reorg`](/system-variables.md#tidb_ddl_enable_fast_reorg-new-in-v630) | Modified | Changes the default value from `OFF` to `ON` after further tests, meaning that the acceleration of `ADD INDEX` and `CREATE INDEX` is enabled by default. | | [`tidb_mem_quota_query`](/system-variables.md#tidb_mem_quota_query) | Modified | For versions earlier than TiDB v6.5.0, this variable is used to set the threshold value of memory quota for a query. For TiDB v6.5.0 and later versions, this variable is used to set the threshold value of memory quota for a session. | -| [`tidb_replica_read`](/system-variables.md#tidb_replica_read-new-in-v40) | Modified | Starting from v6.5.0, when this variable is set to`closest-adaptive` and the estimated result of a read request is greater than or equal to [`tidb_adaptive_closest_read_threshold`](/system-variables.md#tidb_adaptive_closest_read_threshold-new-in-v630), the number of TiDB nodes whose `closest-adaptive` configuration takes effect is limited in each availability zone, which is always the same as the number of TiDB nodes in the availability zone with the fewest TiDB nodes, and the other TiDB nodes automatically read from the leader replica. | +| [`tidb_replica_read`](/system-variables.md#tidb_replica_read-new-in-v40) | Modified | Starting from v6.5.0, to optimize load balancing across TiDB nodes, when this variable is set to`closest-adaptive` and the estimated result of a read request is greater than or equal to [`tidb_adaptive_closest_read_threshold`](/system-variables.md#tidb_adaptive_closest_read_threshold-new-in-v630), the number of TiDB nodes whose `closest-adaptive` configuration takes effect is limited in each availability zone, which is always the same as the number of TiDB nodes in the availability zone with the fewest TiDB nodes, and the other TiDB nodes automatically read from the leader replica. | | [`tidb_server_memory_limit`](/system-variables.md#tidb_server_memory_limit-new-in-v640) | Modified | Changes the default value from `0` to `80%`, meaning that the memory limit for a TiDB instance is 80% of the total memory by default. 
| | [`default_password_lifetime`](/system-variables.md#default_password_lifetime-new-in-v650) | Newly added | Sets the global policy for automatic password expiration to require users to change passwords periodically. The default value `0` indicates that passwords never expire. | | [`disconnect_on_expired_password`](/system-variables.md#disconnect_on_expired_password-new-in-v650) | Newly added | Indicates whether TiDB disconnects the client connection when the password is expired. This variable is read-only. | From 7fd9bc94bc05ae3bb33926aba54175ee619b5d49 Mon Sep 17 00:00:00 2001 From: Grace Cai Date: Thu, 22 Dec 2022 15:49:41 +0800 Subject: [PATCH 61/83] Apply suggestions from code review Co-authored-by: TomShawn <41534398+TomShawn@users.noreply.github.com> Co-authored-by: xixirangrang --- releases/release-6.5.0.md | 6 +++--- 1 file changed, 3 insertions(+), 3 deletions(-) diff --git a/releases/release-6.5.0.md b/releases/release-6.5.0.md index 8ba5a6061a49e..17844dba1f609 100644 --- a/releases/release-6.5.0.md +++ b/releases/release-6.5.0.md @@ -209,7 +209,7 @@ Compared with the previous LTS 6.1.0, 6.5.0 not only includes new features, impr ### MySQL compatibility -* Support a high-performance and globally monotonic `AUTO_INCREMENT` (GA) [#38442](https://github.com/pingcap/tidb/issues/38442) @[tiancaiamao](https://github.com/tiancaiamao) **tw@Oreoxmt** +* Support a high-performance and globally monotonic `AUTO_INCREMENT` column attribute (GA) [#38442](https://github.com/pingcap/tidb/issues/38442) @[tiancaiamao](https://github.com/tiancaiamao) **tw@Oreoxmt** TiDB v6.4.0 introduces the `AUTO_INCREMENT` MySQL compatibility mode as an experimental feature. This mode introduces a centralized auto-increment ID allocating service that ensures IDs monotonically increase on all TiDB instances. This feature makes it easier to sort query results by auto-increment IDs. In v6.5.0, this feature becomes GA. The insert TPS of a table using this feature is expected to exceed 20,000, and this feature supports elastic scaling to improve the write throughput of a single table and entire clusters. To use the MySQL compatibility mode, you need to set `AUTO_ID_CACHE` to `1` when creating a table. The following is an example: @@ -303,9 +303,9 @@ Compared with the previous LTS 6.1.0, 6.5.0 not only includes new features, impr | [`tidb_enable_metadata_lock`](/system-variables.md#tidb_enable_metadata_lock-new-in-v630) | Modified | Changes the default value from `OFF` to `ON` after further tests, meaning that the metadata lock feature is enabled by default. | | [`tidb_enable_tiflash_read_for_write_stmt`](/system-variables.md#tidb_enable_tiflash_read_for_write_stmt-new-in-v630) | Modified | Takes effect starting from v6.5.0. It controls whether read operations in SQL statements containing `INSERT`, `DELETE`, and `UPDATE` can be pushed down to TiFlash. The default value is `OFF`. | | [`tidb_ddl_enable_fast_reorg`](/system-variables.md#tidb_ddl_enable_fast_reorg-new-in-v630) | Modified | Changes the default value from `OFF` to `ON` after further tests, meaning that the acceleration of `ADD INDEX` and `CREATE INDEX` is enabled by default. | -| [`tidb_mem_quota_query`](/system-variables.md#tidb_mem_quota_query) | Modified | For versions earlier than TiDB v6.5.0, this variable is used to set the threshold value of memory quota for a query. For TiDB v6.5.0 and later versions, this variable is used to set the threshold value of memory quota for a session. 
| +| [`tidb_mem_quota_query`](/system-variables.md#tidb_mem_quota_query) | Modified | For versions earlier than TiDB v6.5.0, this variable is used to set the threshold value of memory quota for a query. For TiDB v6.5.0 and later versions, to control the memory of DML statements more accurately, this variable is used to set the threshold value of memory quota for a session. | | [`tidb_replica_read`](/system-variables.md#tidb_replica_read-new-in-v40) | Modified | Starting from v6.5.0, to optimize load balancing across TiDB nodes, when this variable is set to`closest-adaptive` and the estimated result of a read request is greater than or equal to [`tidb_adaptive_closest_read_threshold`](/system-variables.md#tidb_adaptive_closest_read_threshold-new-in-v630), the number of TiDB nodes whose `closest-adaptive` configuration takes effect is limited in each availability zone, which is always the same as the number of TiDB nodes in the availability zone with the fewest TiDB nodes, and the other TiDB nodes automatically read from the leader replica. | -| [`tidb_server_memory_limit`](/system-variables.md#tidb_server_memory_limit-new-in-v640) | Modified | Changes the default value from `0` to `80%`, meaning that the memory limit for a TiDB instance is 80% of the total memory by default. | +| [`tidb_server_memory_limit`](/system-variables.md#tidb_server_memory_limit-new-in-v640) | Modified | Changes the default value from `0` to `80%`. As the TiDB global memory control becomes GA, this default value change enables the memory control by default and sets the memory limit for a TiDB instance to 80% of the total memory by default. | | [`default_password_lifetime`](/system-variables.md#default_password_lifetime-new-in-v650) | Newly added | Sets the global policy for automatic password expiration to require users to change passwords periodically. The default value `0` indicates that passwords never expire. | | [`disconnect_on_expired_password`](/system-variables.md#disconnect_on_expired_password-new-in-v650) | Newly added | Indicates whether TiDB disconnects the client connection when the password is expired. This variable is read-only. | | [`password_history`](/system-variables.md#password_history-new-in-v650) | Newly added | This variable is used to establish a password reuse policy that allows TiDB to limit password reuse based on the number of password changes. The default value `0` means disabling the password reuse policy based on the number of password changes. | From 7af7509059a7dd289da537cc1c4ab0b6bb2d67ed Mon Sep 17 00:00:00 2001 From: Grace Cai Date: Thu, 22 Dec 2022 17:09:34 +0800 Subject: [PATCH 62/83] Apply suggestions from code review Co-authored-by: Aolin --- releases/release-6.5.0.md | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/releases/release-6.5.0.md b/releases/release-6.5.0.md index 17844dba1f609..f9a0abd3b5b5c 100644 --- a/releases/release-6.5.0.md +++ b/releases/release-6.5.0.md @@ -392,7 +392,7 @@ Starting from v6.5.0, the [`AMEND TRANSACTION`](/system-variables.md#tidb_enable + TiDB Dashboard - - Add three new fields to the slow query page: "Is Prepared?","Is Plan from Cache?","Is Plan from Binding?" [#1451](https://github.com/pingcap/tidb-dashboard/issues/1451) @[shhdgit](https://github.com/shhdgit) + - Add three new fields to the slow query page: "Is Prepared?", "Is Plan from Cache?", "Is Plan from Binding?" 
[#1451](https://github.com/pingcap/tidb-dashboard/issues/1451) @[shhdgit](https://github.com/shhdgit) + Backup & Restore (BR) @@ -423,7 +423,7 @@ Starting from v6.5.0, the [`AMEND TRANSACTION`](/system-variables.md#tidb_enable - Fix the issue that modifying the partition column of a partitioned table causes DDL to hang [#38530](https://github.com/pingcap/tidb/issues/38530) @[mjonss](https://github.com/mjonss) - Fix the issue that the `ADMIN SHOW JOB` operation panics after upgrading from v4.0.16 to v6.4.0 [#38980](https://github.com/pingcap/tidb/issues/38980) @[tangenta](https://github.com/tangenta) - Fix the issue that the `tidb_decode_key` function fails to correctly parse the encoding of partitioned tables [#39304](https://github.com/pingcap/tidb/issues/39304) @[Defined2014](https://github.com/Defined2014) - - Fixe the issue that gRPC error logs are not redirected to the correct log file during log rotation [#38941](https://github.com/pingcap/tidb/issues/38941) @[xhebox](https://github.com/xhebox) + - Fix the issue that gRPC error logs are not redirected to the correct log file during log rotation [#38941](https://github.com/pingcap/tidb/issues/38941) @[xhebox](https://github.com/xhebox) - Fix the issue that TiDB generates an unexpected execution plan for the `BEGIN; SELECT... FOR UPDATE;` point query when TiKV is not configured as a read engine [#39344](https://github.com/pingcap/tidb/issues/39344) @[Yisaer](https://github.com/Yisaer) - Fix the issue that mistakenly pushing down `StreamAgg` to TiFlash causes wrong results [#39266](https://github.com/pingcap/tidb/issues/39266) @[fixdb](https://github.com/fixdb) From e598457a0ed3ae19949fece0486b4c64de593b5f Mon Sep 17 00:00:00 2001 From: qiancai Date: Thu, 22 Dec 2022 17:16:30 +0800 Subject: [PATCH 63/83] unify the usage of "see documentation" --- releases/release-6.5.0.md | 54 +++++++++++++++++++-------------------- 1 file changed, 27 insertions(+), 27 deletions(-) diff --git a/releases/release-6.5.0.md b/releases/release-6.5.0.md index f9a0abd3b5b5c..5bdcc26ee9704 100644 --- a/releases/release-6.5.0.md +++ b/releases/release-6.5.0.md @@ -40,13 +40,13 @@ Compared with the previous LTS 6.1.0, 6.5.0 not only includes new features, impr TiDB v6.3.0 introduces [Metadata lock](/metadata-lock.md) as an experimental feature. To avoid the `Information schema is changed` error caused by DML statements, TiDB coordinates the priority of DMLs and DDLs during table metadata change, and makes the ongoing DDLs wait for the DMLs with old metadata to commit. In v6.5.0, this feature becomes GA and is enabled by default. It is suitable for various types of DDLs change scenarios. - For more information, see [user document](/metadata-lock.md). + For more information, see [documentation](/metadata-lock.md). * Support restoring a cluster to a specific point in time by using `FLASHBACK CLUSTER TO TIMESTAMP` (GA) [#37197](https://github.com/pingcap/tidb/issues/37197) [#13303](https://github.com/tikv/tikv/issues/13303) @[Defined2014](https://github.com/Defined2014) @[bb7133](https://github.com/bb7133) @[JmPotato](https://github.com/JmPotato) @[Connor1996](https://github.com/Connor1996) @[HuSharp](https://github.com/HuSharp) @[CalvinNeo](https://github.com/CalvinNeo) **tw@Oreoxmt** TiDB v6.4.0 introduces the [`FLASHBACK CLUSTER TO TIMESTAMP`](/sql-statements/sql-statement-flashback-to-timestamp.md) statement as an experimental feature. You can use this statement to restore a cluster to a specific point in time within the Garbage Collection (GC) life time. 
In v6.5.0, this statement becomes GA. This feature helps you to easily undo DML misoperations, restore the original cluster in minutes, roll back data at different time points to determine the exact time when data changes, and it is compatible with PITR and TiCDC. - For more information, see [user document](/sql-statements/sql-statement-flashback-to-timestamp.md). + For more information, see [documentation](/sql-statements/sql-statement-flashback-to-timestamp.md). * Fully support non-transactional DML statements including `INSERT`, `REPLACE`, `UPDATE`, and `DELETE` [#33485](https://github.com/pingcap/tidb/issues/33485) @[ekexium](https://github.com/ekexium) **tw@Oreoxmt** @@ -58,7 +58,7 @@ Compared with the previous LTS 6.1.0, 6.5.0 not only includes new features, impr TTL provides row-level data lifetime management. In TiDB, a table with the TTL attribute automatically checks data lifetime and deletes expired data at the row level. TTL is designed to help you clean up unnecessary data periodically and in a timely manner without affecting the online read and write workloads. - For more information, see [User document](/time-to-live.md). + For more information, see [documentation](/time-to-live.md). * Support saving TiFlash query results using the `INSERT INTO SELECT` statement (experimental) [#37515](https://github.com/pingcap/tidb/issues/37515) @[gengliqi](https://github.com/gengliqi) **tw@qiancai** @@ -74,7 +74,7 @@ Compared with the previous LTS 6.1.0, 6.5.0 not only includes new features, impr - Reuse TiFlash query results or deal with highly concurrent online requests - Need a relatively small result set compared with the input data size, preferably smaller than 100 MiB. - For more information, see the [user documentation](/tiflash/tiflash-results-materialization.md). + For more information, see [documentation](/tiflash/tiflash-results-materialization.md). * Support binding history execution plans (experimental) [#39199](https://github.com/pingcap/tidb/issues/39199) @[fzzf678](https://github.com/fzzf678) **tw@qiancai** @@ -82,7 +82,7 @@ Compared with the previous LTS 6.1.0, 6.5.0 not only includes new features, impr In v6.5.0, TiDB supports binding historical execution plans by extending the binding object in the [`CREATE [GLOBAL | SESSION] BINDING`](/sql-statements/sql-statement-create-binding.md) statement. When the execution plan of a SQL statement changes, you can bind the original execution plan by specifying `plan_digest` in the `CREATE [GLOBAL | SESSION] BINDING` statement to quickly recover SQL performance, as long as the original execution plan is still in the SQL execution history memory table (for example, `statements_summary`). This feature can simplify the process of handling execution plan change issues and improve your maintenance efficiency. - For more information, see [user document](/sql-plan-management.md#bind-historical-execution-plans). + For more information, see [documentation](/sql-plan-management.md#bind-historical-execution-plans). ### Security @@ -92,25 +92,25 @@ Compared with the previous LTS 6.1.0, 6.5.0 not only includes new features, impr TiDB provides the SQL function [`VALIDATE_PASSWORD_STRENGTH()`](https://dev.mysql.com/doc/refman/5.7/en/encryption-functions.html#function_validate-password-strength) to validate the password strength. - For more information, see [user document](/password-management.md#password-complexity-policy). + For more information, see [documentation](/password-management.md#password-complexity-policy). 
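The password complexity policy mentioned above is controlled through a set of `validate_password.*` system variables. The sketch below shows what enabling the check might look like; the variable names and values here are assumptions following the MySQL-style naming used by this feature, so verify them against the password management document before applying them.

```sql
-- Assumed variable names; confirm the exact set in the password management document.
SET GLOBAL validate_password.enable = ON;             -- turn on the complexity check
SET GLOBAL validate_password.length = 10;             -- require at least 10 characters
SET GLOBAL validate_password.special_char_count = 1;  -- require at least one special character

-- With the check enabled, a password that violates the policy is rejected
-- when the user is created or altered.
CREATE USER 'app_user'@'%' IDENTIFIED BY 'Str0ng!Passw0rd';
```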
* Support the password expiration policy [#38936](https://github.com/pingcap/tidb/issues/38936) @[CbcWestwolf](https://github.com/CbcWestwolf) **tw@ran-huang** TiDB supports configuring the password expiration policy, including manual expiration, global-level automatic expiration, and account-level automatic expiration. After this policy is enabled, you must change your passwords periodically. This reduces the risk of password leakage due to long-term use and improves password security. - For more information, see [user document](/password-management.md#password-expiration-policy). + For more information, see [documentation](/password-management.md#password-expiration-policy). * Support the password reuse policy [#38937](https://github.com/pingcap/tidb/issues/38937) @[keeplearning20221](https://github.com/keeplearning20221) **tw@ran-huang** TiDB supports configuring the password reuse policy, including global-level password reuse policy and account-level password reuse policy. After this policy is enabled, you cannot use the passwords that you have used within a specified period or the most recent several passwords that you have used. This reduces the risk of password leakage due to repeated use of passwords and improves password security. - For more information, see [user document](/password-management.md#password-reuse-policy). + For more information, see [documentation](/password-management.md#password-reuse-policy). * Support failed-login tracking and temporary account locking policy [#38938](https://github.com/pingcap/tidb/issues/38938) @[lastincisor](https://github.com/lastincisor) **tw@ran-huang** After this policy is enabled, if you log in to TiDB with incorrect passwords multiple times consecutively, the account is temporarily locked. After the lock time ends, the account is automatically unlocked. - For more information, see [user document](/password-management.md#failed-login-tracking-and-temporary-account-locking-policy). + For more information, see [documentation](/password-management.md#failed-login-tracking-and-temporary-account-locking-policy). ### Observability @@ -124,7 +124,7 @@ Compared with the previous LTS 6.1.0, 6.5.0 not only includes new features, impr - The user can still access TiDB Dashboard for diagnosis even if the PD node is unavailable. - Accessing TiDB Dashboard in Internet does not involve the privileged interfaces of PD. Therefore, the security risk of the cluster is reduced. - For more information, see [Deploy TiDB Dashboard independently in TiDB Operator](https://docs.pingcap.com/tidb-in-kubernetes/dev/get-started#deploy-tidb-dashboard-independently). + For more information, see [documentation](https://docs.pingcap.com/tidb-in-kubernetes/dev/get-started#deploy-tidb-dashboard-independently). * Performance Overview dashboard adds TiFlash and CDC (Change Data Capture) panels [#39230](https://github.com/pingcap/tidb/issues/39230) @[dbsid](https://github.com/dbsid) **tw@qiancai** @@ -135,7 +135,7 @@ Compared with the previous LTS 6.1.0, 6.5.0 not only includes new features, impr - On the [TiFlash panels](/grafana-performance-overview-dashboard.md#tiflash), you can easily view the request types, latency analysis, and resource usage overview of your TiFlash cluster. - On the [CDC panels](/grafana-performance-overview-dashboard.md#cdc), you can easily view the health, replication latency, data flow, and downstream write latency of your TiCDC cluster. - For more information, see [user document](/performance-tuning-method.md). 
+ For more information, see [documentation](/performance-tuning-method.md). ### Performance @@ -163,7 +163,7 @@ Compared with the previous LTS 6.1.0, 6.5.0 not only includes new features, impr In some view access scenarios, you need to use optimizer hints to interfere with the execution plan of the query in the view to achieve the best performance. Since v6.5.0, TiDB supports adding global hints for the query blocks in the view, thus the hints defined in the query can be effective in the view. This feature provides a way to inject hints into complex SQL statements that contain nested views, enhances the execution plan control, and stabilizes the performance of complex statements. To use global hints, you need to [name the query blocks](/optimizer-hints.md#step-1-define-the-query-block-name-of-the-view-using-the-qb_name-hint) and [specify hint references](/optimizer-hints.md#step-2-add-the-target-hints). - For more information, see [user document](/optimizer-hints.md#hints-that-take-effect-globally). + For more information, see [documentation](/optimizer-hints.md#hints-that-take-effect-globally). * Support pushing down sorting operations of [partitioned tables](/partitioned-table.md) to TiKV [#26166](https://github.com/pingcap/tidb/issues/26166) @[winoros](https://github.com/winoros) **tw@qiancai** @@ -175,7 +175,7 @@ Compared with the previous LTS 6.1.0, 6.5.0 not only includes new features, impr Cost Model Version 2 becomes a generally available feature that significantly improves the overall capability of the TiDB optimizer and helps TiDB evolve towards a more powerful HTAP database. - For more information, see [User document](/cost-model.md#cost-model-version-2). + For more information, see [documentation](/cost-model.md#cost-model-version-2). * TiFlash optimizes the operations of getting the number of table rows [#37165](https://github.com/pingcap/tidb/issues/37165) @[elsa0520](https://github.com/elsa0520) @@ -191,7 +191,7 @@ Compared with the previous LTS 6.1.0, 6.5.0 not only includes new features, impr If you are using TiDB v6.5.0 or later, it is recommended to remove [`txn-total-size-limit`](/tidb-configuration-file.md#txn-total-size-limit) and not to set a separate limit on the memory usage of transactions. Instead, use the system variables [`tidb_mem_quota_query`](/system-variables.md#tidb_mem_quota_query) and [`tidb_server_memory_limit`](/system-variables.md#tidb_server_memory_limit-new-in-v640) to manage global memory, which can improve the efficiency of memory usage. - For more information, see the [user document](/configure-memory-usage.md). + For more information, see [documentation](/configure-memory-usage.md). ### Ease of use @@ -199,13 +199,13 @@ Compared with the previous LTS 6.1.0, 6.5.0 not only includes new features, impr The `EXPLAIN ANALYZE` statement is used to print execution plans and runtime statistics. In v6.5.0, TiFlash has refined the execution information of the `TableFullScan` operator by adding the DMFile-related execution information. Now the TiFlash data scan status information is presented more intuitively, which helps you analyze TiFlash performance more easily. - For more information, see [user documentation](sql-statements/sql-statement-explain-analyze.md). + For more information, see [documentation](sql-statements/sql-statement-explain-analyze.md). 
* Support the output of execution plans in the JSON format [#39261](https://github.com/pingcap/tidb/issues/39261) @[fzzf678](https://github.com/fzzf678) **tw@ran-huang** In v6.5.0, TiDB extends the output format of execution plans. By using `EXPLAIN FORMAT=tidb_json `, you can output SQL execution plans in the JSON format. With this capability, SQL debugging tools and diagnostic tools can read execution plans more conveniently and accurately, thus improving the ease of use of SQL diagnosis and tuning. - For more information, see [user document](/sql-statements/sql-statement-explain.md). + For more information, see [documentation](/sql-statements/sql-statement-explain.md). ### MySQL compatibility @@ -217,7 +217,7 @@ Compared with the previous LTS 6.1.0, 6.5.0 not only includes new features, impr CREATE TABLE t(a int AUTO_INCREMENT key) AUTO_ID_CACHE 1; ``` - For more information, see [user document](/auto-increment.md#mysql-compatibility-mode). + For more information, see [documentation](/auto-increment.md#mysql-compatibility-mode). ### Data migration @@ -227,7 +227,7 @@ Compared with the previous LTS 6.1.0, 6.5.0 not only includes new features, impr Previously, you had to provide large storage space for exporting or importing data to store CSV and SQL files, resulting in high storage costs. With the release of this feature, you can greatly reduce your storage costs by compressing the data files. - For more information, see [User document](/dumpling-overview.md#improve-export-efficiency-through-concurrency). + For more information, see [documentation](/dumpling-overview.md#improve-export-efficiency-through-concurrency). * Optimize binlog parsing capability [#924](https://github.com/pingcap/dm/issues/924) @[gmhdbjd](https://github.com/GMHDBJD) **tw@hfxsd** @@ -241,7 +241,7 @@ Compared with the previous LTS 6.1.0, 6.5.0 not only includes new features, impr Previously, when TiDB Lightning imported data using physical mode, it would create a large number of temporary files on the local disk for encoding, sorting, and splitting the raw data. When your local disk ran out of space, TiDB Lightning would exit with an error due to failing to write to the file. With this feature, TiDB Lightning tasks can avoid overwriting the local disk. - For more information, see [User document](/tidb-lightning/tidb-lightning-physical-import-mode-usage.md#configure-disk-quota-new-in-v620). + For more information, see [documentation](/tidb-lightning/tidb-lightning-physical-import-mode-usage.md#configure-disk-quota-new-in-v620). * Continuous data validation in DM is GA [#4426](https://github.com/pingcap/tiflow/issues/4426) @[D3Hunter](https://github.com/D3Hunter) **tw@hfxsd** @@ -249,7 +249,7 @@ Compared with the previous LTS 6.1.0, 6.5.0 not only includes new features, impr Previously, you needed to interrupt the business to validate the full data, which would affect your business. Now, with this feature, you can perform incremental data validation without interrupting the business. - For more information, see [User document](/dm/dm-continuous-data-validation.md). + For more information, see [documentation](/dm/dm-continuous-data-validation.md). ### TiDB data share subscription @@ -257,13 +257,13 @@ Compared with the previous LTS 6.1.0, 6.5.0 not only includes new features, impr TiCDC supports replicating changed logs to Amazon S3, Azure Blob Storage, NFS, and other S3-compatible storage services. Cloud storage is reasonably priced and easy to use. If you do not want to use Kafka, you can use storage sinks. 
TiCDC saves the changed logs to a file and then sends it to the storage system. From the storage system, the consumer program reads the newly generated changed log files periodically. - The storage sink supports changed logs in the canal-json and CSV formats. Noticeably, the latency of replicating changed logs from TiCDC to storage can be as short as xx. For more information, see [User document](/ticdc/ticdc-sink-to-cloud-storage.md). + The storage sink supports changed logs in the canal-json and CSV formats. Noticeably, the latency of replicating changed logs from TiCDC to storage can be as short as xx. For more information, see [documentation](/ticdc/ticdc-sink-to-cloud-storage.md). * TiCDC supports bidirectional replication across multiple clusters @[asddongmen](https://github.com/asddongmen) **tw@shichun-0415** TiCDC supports bidirectional replication across multiple TiDB clusters. If you need a multi-master TiDB solution for your application, especially a multi-master solution across multiple regions, you can use this feature to build one. By configuring the `bdr-mode = true` parameter for the TiCDC changefeeds from each TiDB cluster to the other TiDB clusters, you can achieve bidirectional data replication across multiple TiDB clusters. - For more information, see [user document](/ticdc/ticdc-bidirectional-replication.md). + For more information, see [documentation](/ticdc/ticdc-bidirectional-replication.md). * TiCDC supports updating TLS online [#7908](https://github.com/pingcap/tiflow/issues/7908) @[CharlesCheung96](https://github.com/CharlesCheung96) **tw@shichun-0415** @@ -279,7 +279,7 @@ Compared with the previous LTS 6.1.0, 6.5.0 not only includes new features, impr TiDB snapshot backup supports resuming backup from a checkpoint. When Backup & Restore (BR) encounters a recoverable error, it retries backup. However, BR exits if the retry fails for several times. The checkpoint backup feature allows for longer recoverable failures to be retried, for example, a network failure of tens of minutes. - Note that if you do not recover the system from a failure within one hour after BR exits, the snapshot data to be backed up might be recycled by the GC mechanism, causing the backup to fail. For more information, see [User document](/br/br-checkpoint.md). + Note that if you do not recover the system from a failure within one hour after BR exits, the snapshot data to be backed up might be recycled by the GC mechanism, causing the backup to fail. For more information, see [documentation](/br/br-checkpoint.md). * PITR performance improved remarkably **tw@shichun-0415 @@ -289,7 +289,7 @@ Compared with the previous LTS 6.1.0, 6.5.0 not only includes new features, impr TiKV-BR is a backup and restore tool used in TiKV clusters. TiKV and PD can constitute a KV database when used without TiDB, which is called RawKV. TiKV-BR supports data backup and restore for products that use RawKV. TiKV-BR can also upgrade the [`api-version`](/tikv-configuration-file.md#api-version-new-in-v610) from `API V1` to `API V2` for TiKV cluster. - For more information, see [User document](https://tikv.org/docs/latest/concepts/explore-tikv-features/backup-restore/). + For more information, see [documentation](https://tikv.org/docs/latest/concepts/explore-tikv-features/backup-restore/). 
## Compatibility changes @@ -339,12 +339,12 @@ Compared with the previous LTS 6.1.0, 6.5.0 not only includes new features, impr | TiDB | [`disconnect-on-expired-password`](/tidb-configuration-file.md#disconnect-on-expired-password-new-in-v650) | Newly added | Determines whether TiDB disconnects the client connection when the password is expired. The default value is `true`, which means the client connection is disconnected when the password is expired. | | TiKV | `raw-min-ts-outlier-threshold` | Deleted | Since v6.4.0, this configuration item was deprecated. Since v6.5.0, this configuration item is deleted. | | TiKV | [`cdc.min-ts-interval`](/tikv-configuration-file.md#min-ts-interval) | Modified | To reduce CDC latency, the default value is changed from `1s` to `200ms`. | -| TiKV | [`memory-use-ratio`](/tikv-configuration-file.md#memory-use-ratio-introduced-new-in-v650) | Newly added | Indicates the ratio of available memory to total system memory in PITR log recovery. | +| TiKV | [`memory-use-ratio`](/tikv-configuration-file.md#memory-use-ratio-introduced-new-in-v650) | Newly added | Indicates the ratio of available memory to total system memory in PITR log recovery. | ### Others -- Starting from v6.5.0, the `mysql.user` table adds two new columns: `Password_reuse_history` and `Password_reuse_time`. -- The [index acceleration](/system-variables.md#tidb_ddl_enable_fast_reorg-new-in-v630) feature is enabled by default and is not compatible with the [PITR (Point-in-time recovery)](/br/br-pitr-guide.md) feature. When using the index acceleration feature, you need to make sure that no PITR backup task is running in the background; otherwise, unexpected results might occur. For more information, see [tidb_ddl_enable_fast_reorg](/system-variables.md#tidb_ddl_enable_fast_reorg-new-in-v630). +- Starting from v6.5.0, the `mysql.user` table adds two new columns: `Password_reuse_history` and `Password_reuse_time`. +- The [index acceleration](/system-variables.md#tidb_ddl_enable_fast_reorg-new-in-v630) feature is enabled by default and is not compatible with the [PITR (Point-in-time recovery)](/br/br-pitr-guide.md) feature. When using the index acceleration feature, you need to make sure that no PITR backup task is running in the background; otherwise, unexpected results might occur. For more information, see [documentation](/system-variables.md#tidb_ddl_enable_fast_reorg-new-in-v630). ## Deprecated feature From ef99b572dc8ad8fa0b9bdbb68cb733c0adcd148a Mon Sep 17 00:00:00 2001 From: Grace Cai Date: Fri, 23 Dec 2022 10:15:13 +0800 Subject: [PATCH 64/83] Apply suggestions from code review Co-authored-by: Aolin Co-authored-by: TomShawn <41534398+TomShawn@users.noreply.github.com> --- releases/release-6.5.0.md | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/releases/release-6.5.0.md b/releases/release-6.5.0.md index 5bdcc26ee9704..334980b348d62 100644 --- a/releases/release-6.5.0.md +++ b/releases/release-6.5.0.md @@ -171,7 +171,7 @@ Compared with the previous LTS 6.1.0, 6.5.0 not only includes new features, impr * Optimizer introduces a more accurate Cost Model Version 2 [#35240](https://github.com/pingcap/tidb/issues/35240) @[qw4990](https://github.com/qw4990) **tw@Oreoxmt** - TiDB v6.2.0 introduces the [Cost Model Version 2](/cost-model.md#cost-model-version-2) as an experimental feature. This model uses a more accurate cost estimation method to help the optimizer choose the optimal execution plan. 
Especially when TiFlash is deployed, Cost Model Version 2 automatically helps choose the appropriate storage engine and avoids much manual intervention. After real-scene testing for a period of time, this model becomes GA in v6.5.0. Since v6.5.0, newly-created clusters use Cost Model Version 2 by default. For clusters upgrade to v6.5.0, because Cost Model Version 2 might cause changes to query plans, you can set the [`tidb_cost_model_version = 2`](/system-variables.md#tidb_cost_model_version-new-in-v620) variable to use the new cost model after sufficient performance testing. + TiDB v6.2.0 introduces the [Cost Model Version 2](/cost-model.md#cost-model-version-2) as an experimental feature. This model uses a more accurate cost estimation method to help the optimizer choose the optimal execution plan. Especially when TiFlash is deployed, Cost Model Version 2 automatically helps choose the appropriate storage engine and avoids much manual intervention. After real-scene testing for a period of time, this model becomes GA in v6.5.0. Since v6.5.0, newly created clusters use Cost Model Version 2 by default. For clusters upgrade to v6.5.0, because Cost Model Version 2 might cause changes to query plans, you can set the [`tidb_cost_model_version = 2`](/system-variables.md#tidb_cost_model_version-new-in-v620) variable to use the new cost model after sufficient performance testing. Cost Model Version 2 becomes a generally available feature that significantly improves the overall capability of the TiDB optimizer and helps TiDB evolve towards a more powerful HTAP database. @@ -223,7 +223,7 @@ Compared with the previous LTS 6.1.0, 6.5.0 not only includes new features, impr * Support exporting and importing SQL and CSV files in gzip, snappy, and zstd compression formats [#38514](https://github.com/pingcap/tidb/issues/38514) @[lichunzhu](https://github.com/lichunzhu) **tw@hfxsd** - Dumpling supports exporting data to compressed SQL and CSV files in the following compression formats: gzip, snappy, and zstd. TiDB Lightning also supports importing compressed files in these formats. + Dumpling supports exporting data to compressed SQL and CSV files in these compression formats: gzip, snappy, and zstd. TiDB Lightning also supports importing compressed files in these formats. Previously, you had to provide large storage space for exporting or importing data to store CSV and SQL files, resulting in high storage costs. With the release of this feature, you can greatly reduce your storage costs by compressing the data files. From be39bc583b1f16f6d1ff1cd4756bac12c0a6cf96 Mon Sep 17 00:00:00 2001 From: shichun-0415 <89768198+shichun-0415@users.noreply.github.com> Date: Fri, 23 Dec 2022 14:22:27 +0800 Subject: [PATCH 65/83] add BR test data --- releases/release-6.5.0.md | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/releases/release-6.5.0.md b/releases/release-6.5.0.md index 334980b348d62..0cc17cd2950eb 100644 --- a/releases/release-6.5.0.md +++ b/releases/release-6.5.0.md @@ -23,7 +23,7 @@ Compared with the previous LTS 6.1.0, 6.5.0 not only includes new features, impr - Support [password management](/password-management.md) policies that meet password compliance auditing requirements. - TiDB Lightning and Dumpling support [importing](tidb-lightning/tidb-lightning-data-source.md) and [exporting](/dumpling-overview.md#improve-export-efficiency-through-concurrency) compressed SQL and CSV files. 
- TiDB Data Migration (DM) [continuous data validation](/dm/dm-continuous-data-validation.md) becomes GA. -- TiDB Backup & Restore supports snapshot checkpoint backup, improves the recovery performance of [PITR](/br-pitr-guide.md#carry-pitr) by x times, and reduces RPO to x minutes. +- TiDB Backup & Restore supports snapshot checkpoint backup, improves the recovery performance of [PITR](/br-pitr-guide.md#carry-pitr) by 50%, and reduces the RPO to as short as 5 minutes. - Improve the TiCDC throughput of [replicating data to Kafka](/replicate-data-to-kafka.md) by x times and reduces replication latency to x seconds. - Provide row-level [Time to live (TTL)](/time-to-live.md) to manage data lifecycle (experimental). - TiCDC supports [replicating changed logs to object storage ](ticdc/ticdc-sink-to-cloud-storage.md) such as Amazon S3, Azure Blob Storage, and NFS (experimental). @@ -283,7 +283,7 @@ Compared with the previous LTS 6.1.0, 6.5.0 not only includes new features, impr * PITR performance improved remarkably **tw@shichun-0415 - In the log restore stage, the restore speed of one TiKV can reach xx MB/s, which is x times faster than before. The restore speed is scalable and the RTO in DR scenarios is reduced greatly. The RPO in DR scenarios can be as short as 5 minutes. In normal cluster operation and maintenance (OM), for example, a rolling upgrade is performed or only one TiKV is down, the RPO can be 5 minutes. + In the log restore stage, the restore speed of one TiKV can reach 9 MiB/s, which is 50% faster than before. The restore speed is scalable and the RTO in DR scenarios is reduced greatly. The RPO in DR scenarios can be as short as 5 minutes. In normal cluster operation and maintenance (OM), for example, a rolling upgrade is performed or only one TiKV is down, the RPO can be 5 minutes. * TiKV-BR GA: Supports backing up and restoring RawKV [#67](https://github.com/tikv/migration/issues/67) @[pingyu](https://github.com/pingyu) @[haojinming](https://github.com/haojinming) **tw@shichun-0415** From a955703e4ef923228cbf7aab67d019bad5b61279 Mon Sep 17 00:00:00 2001 From: Grace Cai Date: Mon, 26 Dec 2022 19:28:17 +0800 Subject: [PATCH 66/83] Apply suggestions from code review Co-authored-by: TomShawn <41534398+TomShawn@users.noreply.github.com> --- releases/release-6.5.0.md | 4 +++- 1 file changed, 3 insertions(+), 1 deletion(-) diff --git a/releases/release-6.5.0.md b/releases/release-6.5.0.md index 0cc17cd2950eb..23854d41101d0 100644 --- a/releases/release-6.5.0.md +++ b/releases/release-6.5.0.md @@ -26,7 +26,7 @@ Compared with the previous LTS 6.1.0, 6.5.0 not only includes new features, impr - TiDB Backup & Restore supports snapshot checkpoint backup, improves the recovery performance of [PITR](/br-pitr-guide.md#carry-pitr) by 50%, and reduces the RPO to as short as 5 minutes. - Improve the TiCDC throughput of [replicating data to Kafka](/replicate-data-to-kafka.md) by x times and reduces replication latency to x seconds. - Provide row-level [Time to live (TTL)](/time-to-live.md) to manage data lifecycle (experimental). -- TiCDC supports [replicating changed logs to object storage ](ticdc/ticdc-sink-to-cloud-storage.md) such as Amazon S3, Azure Blob Storage, and NFS (experimental). +- TiCDC supports [replicating changed logs to object storage](/ticdc/ticdc-sink-to-cloud-storage.md) such as Amazon S3, Azure Blob Storage, and NFS (experimental). 
## New features @@ -310,9 +310,11 @@ Compared with the previous LTS 6.1.0, 6.5.0 not only includes new features, impr | [`disconnect_on_expired_password`](/system-variables.md#disconnect_on_expired_password-new-in-v650) | Newly added | Indicates whether TiDB disconnects the client connection when the password is expired. This variable is read-only. | | [`password_history`](/system-variables.md#password_history-new-in-v650) | Newly added | This variable is used to establish a password reuse policy that allows TiDB to limit password reuse based on the number of password changes. The default value `0` means disabling the password reuse policy based on the number of password changes. | | [`password_reuse_interval`](/system-variables.md#password_reuse_interval-new-in-v650) | Newly added | This variable is used to establish a password reuse policy that allows TiDB to limit password reuse based on time elapsed. The default value `0` means disabling the password reuse policy based on time elapsed. | +| [`tidb_auto_build_stats_concurrency`](/system-variables.md#tidb_auto_build_stats_concurrency-new-in-v650) | Newly added | This variable is used to set the concurrency of executing the automatic update of statistics. The default value is `1`. | | [`tidb_cdc_write_source`](/system-variables.md#tidb_cdc_write_source-new-in-v650) | Newly added | When this variable is set to a value other than 0, data written in this session is considered to be written by TiCDC. This variable can only be modified by TiCDC. Do not manually modify this variable in any case. | | [`tidb_index_merge_intersection_concurrency`](/system-variables.md#tidb_index_merge_intersection_concurrency-new-in-v650) | Newly added | Sets the maximum concurrency for the intersection operations that index merge performs. It is effective only when TiDB accesses partitioned tables in the dynamic pruning mode. | | [`tidb_source_id`](/system-variables.md#tidb_source_id-new-in-v650) | Newly added | This variable is used to configure the different cluster IDs in a [bi-directional replication](/ticdc/manage-ticdc.md#bi-directional-replication) cluster.| +| [`tidb_sysproc_scan_concurrency`](/system-variables.md#tidb_sysproc_scan_concurrency-new-in-v650) | Newly added | This variable is used to set the concurrency of scan operations performed when TiDB executes internal SQL statements (such as an automatic update of statistics). The default value is `1`. | | [`tidb_ttl_delete_batch_size`](/system-variables.md#tidb_ttl_delete_batch_size-new-in-v650) | Newly added | This variable is used to set the maximum number of rows that can be deleted in a single `DELETE` transaction in a TTL job. | | [`tidb_ttl_delete_rate_limit`](/system-variables.md#tidb_ttl_delete_rate_limit-new-in-v650) | Newly added | This variable is used to limit the maximum number of `DELETE` statements allowed per second in a single node in a TTL job. When this variable is set to `0`, no limit is applied. | | [`tidb_ttl_delete_worker_count`](/system-variables.md#tidb_ttl_delete_worker_count-new-in-v650) | Newly added | This variable is used to set the maximum concurrency of TTL jobs on each TiDB node. 
| From 848853b61c1d453112d501342cd43ae8e4811d68 Mon Sep 17 00:00:00 2001 From: Grace Cai Date: Tue, 27 Dec 2022 09:53:11 +0800 Subject: [PATCH 67/83] Apply suggestions from code review --- releases/release-6.5.0.md | 6 +++--- 1 file changed, 3 insertions(+), 3 deletions(-) diff --git a/releases/release-6.5.0.md b/releases/release-6.5.0.md index 23854d41101d0..0c43d31a5e9ad 100644 --- a/releases/release-6.5.0.md +++ b/releases/release-6.5.0.md @@ -17,7 +17,7 @@ Compared with the previous LTS 6.1.0, 6.5.0 not only includes new features, impr - The [index acceleration](/system-variables.md#tidb_ddl_enable_fast_reorg-new-in-v630) feature becomes generally available (GA), which improves the performance of adding indexes by about 10 times compared with v6.1.0. - The TiDB global memory control becomes GA, and you can control the memory consumption threshold via [`tidb_server_memory_limit`](/system-variables.md#tidb_server_memory_limit-new-in-v640). - The high-performance and globally monotonic [`AUTO_INCREMENT`](/auto-increment.md#mysql-compatible-mode) column attribute becomes GA, which is compatible with MySQL. -- Support restoring a cluster to a specific point in time by using [`FLASHBACK CLUSTER TO TIMESTAMP`](/sql-statements/sql-statement-flashback-to-timestamp.md) (GA), which is compatible with TiCDC and PITR. +- [`FLASHBACK CLUSTER TO TIMESTAMP`](/sql-statements/sql-statement-flashback-to-timestamp.md) is now compatible with TiCDC and PITR and becomes GA. - Enhance TiDB optimizer by making the more accurate [Cost Model version 2](/cost-model.md#cost-model-version-2) generally available and supporting expressions connected by `AND` for [INDEX MERGE](/explain-index-merge.md). - Support pushing down the `JSON_EXTRACT()` function to TiFlash. - Support [password management](/password-management.md) policies that meet password compliance auditing requirements. @@ -44,7 +44,7 @@ Compared with the previous LTS 6.1.0, 6.5.0 not only includes new features, impr * Support restoring a cluster to a specific point in time by using `FLASHBACK CLUSTER TO TIMESTAMP` (GA) [#37197](https://github.com/pingcap/tidb/issues/37197) [#13303](https://github.com/tikv/tikv/issues/13303) @[Defined2014](https://github.com/Defined2014) @[bb7133](https://github.com/bb7133) @[JmPotato](https://github.com/JmPotato) @[Connor1996](https://github.com/Connor1996) @[HuSharp](https://github.com/HuSharp) @[CalvinNeo](https://github.com/CalvinNeo) **tw@Oreoxmt** - TiDB v6.4.0 introduces the [`FLASHBACK CLUSTER TO TIMESTAMP`](/sql-statements/sql-statement-flashback-to-timestamp.md) statement as an experimental feature. You can use this statement to restore a cluster to a specific point in time within the Garbage Collection (GC) life time. In v6.5.0, this statement becomes GA. This feature helps you to easily undo DML misoperations, restore the original cluster in minutes, roll back data at different time points to determine the exact time when data changes, and it is compatible with PITR and TiCDC. + TiDB v6.4.0 introduces the [`FLASHBACK CLUSTER TO TIMESTAMP`](/sql-statements/sql-statement-flashback-to-timestamp.md) statement as an experimental feature. You can use this statement to restore a cluster to a specific point in time within the Garbage Collection (GC) life time. In v6.5.0, this feature is now compatible with TiCDC and PITR and becomes GA. 
This feature helps you to easily undo DML misoperations, restore the original cluster in minutes, and roll back data at different time points to determine the exact time when data changes. For more information, see [documentation](/sql-statements/sql-statement-flashback-to-timestamp.md). @@ -259,7 +259,7 @@ Compared with the previous LTS 6.1.0, 6.5.0 not only includes new features, impr The storage sink supports changed logs in the canal-json and CSV formats. Noticeably, the latency of replicating changed logs from TiCDC to storage can be as short as xx. For more information, see [documentation](/ticdc/ticdc-sink-to-cloud-storage.md). -* TiCDC supports bidirectional replication across multiple clusters @[asddongmen](https://github.com/asddongmen) **tw@shichun-0415** +* TiCDC supports bidirectional replication across multiple clusters [#38587](https://github.com/pingcap/tidb/issues/38587) @[xiongjiwei](https://github.com/xiongjiwei) @[asddongmen](https://github.com/asddongmen) **tw@ran-huang** TiCDC supports bidirectional replication across multiple TiDB clusters. If you need a multi-master TiDB solution for your application, especially a multi-master solution across multiple regions, you can use this feature to build one. By configuring the `bdr-mode = true` parameter for the TiCDC changefeeds from each TiDB cluster to the other TiDB clusters, you can achieve bidirectional data replication across multiple TiDB clusters. From 7c3e829ffeb207180d606281c1fd829dcf89805e Mon Sep 17 00:00:00 2001 From: shichun-0415 <89768198+shichun-0415@users.noreply.github.com> Date: Tue, 27 Dec 2022 15:55:35 +0800 Subject: [PATCH 68/83] add seven ticdc configs, add issue and contributor for ticdc and pitr features --- releases/release-6.5.0.md | 13 ++++++++++--- 1 file changed, 10 insertions(+), 3 deletions(-) diff --git a/releases/release-6.5.0.md b/releases/release-6.5.0.md index 0c43d31a5e9ad..00c3ffce2c2cc 100644 --- a/releases/release-6.5.0.md +++ b/releases/release-6.5.0.md @@ -257,7 +257,7 @@ Compared with the previous LTS 6.1.0, 6.5.0 not only includes new features, impr TiCDC supports replicating changed logs to Amazon S3, Azure Blob Storage, NFS, and other S3-compatible storage services. Cloud storage is reasonably priced and easy to use. If you do not want to use Kafka, you can use storage sinks. TiCDC saves the changed logs to a file and then sends it to the storage system. From the storage system, the consumer program reads the newly generated changed log files periodically. - The storage sink supports changed logs in the canal-json and CSV formats. Noticeably, the latency of replicating changed logs from TiCDC to storage can be as short as xx. For more information, see [documentation](/ticdc/ticdc-sink-to-cloud-storage.md). + The storage sink supports changed logs in the canal-json and CSV formats. For more information, see [documentation](/ticdc/ticdc-sink-to-cloud-storage.md). * TiCDC supports bidirectional replication across multiple clusters [#38587](https://github.com/pingcap/tidb/issues/38587) @[xiongjiwei](https://github.com/xiongjiwei) @[asddongmen](https://github.com/asddongmen) **tw@ran-huang** @@ -269,7 +269,7 @@ Compared with the previous LTS 6.1.0, 6.5.0 not only includes new features, impr To keep data secure, you need to set an expiration policy for the certificate used by the system. After the expiration period, the system needs a new certificate. TiCDC v6.5.0 supports online updates of TLS certificates. 
Without interrupting the replication tasks, TiCDC can automatically detect and update the certificate, without the need for manual intervention. -* TiCDC performance improves significantly **tw@shichun-0415 +* TiCDC performance improves significantly [#7540](https://github.com/pingcap/tiflow/issues/7540) [#7478](https://github.com/pingcap/tiflow/issues/7478) [#7532](https://github.com/pingcap/tiflow/issues/7532) @[sdojjy](https://github.com/sdojjy) [@3AceShowHand](https://github.com/3AceShowHand) **tw@shichun-0415 In a test scenario of the TiDB cluster, the performance of TiCDC has improved significantly. Specifically, the maximum row changes that a single TiCDC can process reaches 30K rows/s, and the replication latency is reduced to 10s. Even during TiKV and TiCDC rolling upgrade, the replication latency is less than 30s. In a disaster recovery (DR) scenario, when the throughput is xx rows/s, the replication latency in DR can be maintained at x s. @@ -281,7 +281,7 @@ Compared with the previous LTS 6.1.0, 6.5.0 not only includes new features, impr Note that if you do not recover the system from a failure within one hour after BR exits, the snapshot data to be backed up might be recycled by the GC mechanism, causing the backup to fail. For more information, see [documentation](/br/br-checkpoint.md). -* PITR performance improved remarkably **tw@shichun-0415 +* PITR performance improved remarkably [@joccau](https://github.com/joccau) **tw@shichun-0415 In the log restore stage, the restore speed of one TiKV can reach 9 MiB/s, which is 50% faster than before. The restore speed is scalable and the RTO in DR scenarios is reduced greatly. The RPO in DR scenarios can be as short as 5 minutes. In normal cluster operation and maintenance (OM), for example, a rolling upgrade is performed or only one TiKV is down, the RPO can be 5 minutes. @@ -342,6 +342,13 @@ Compared with the previous LTS 6.1.0, 6.5.0 not only includes new features, impr | TiKV | `raw-min-ts-outlier-threshold` | Deleted | Since v6.4.0, this configuration item was deprecated. Since v6.5.0, this configuration item is deleted. | | TiKV | [`cdc.min-ts-interval`](/tikv-configuration-file.md#min-ts-interval) | Modified | To reduce CDC latency, the default value is changed from `1s` to `200ms`. | | TiKV | [`memory-use-ratio`](/tikv-configuration-file.md#memory-use-ratio-introduced-new-in-v650) | Newly added | Indicates the ratio of available memory to total system memory in PITR log recovery. | +| TiCDC | [`sink.terminator`](/ticdc/ticdc-changefeed-config.md#changefeed-configuration-parameters) | Newly added | Indicates the row terminator, which is used for separating two data change events. The value is empty by default, which means "\r\n" is used. | +| TiCDC | [`sink.date-separator`](/ticdc/ticdc-changefeed-config.md#changefeed-configuration-parameters) | Newly added | Indicates the date separator type of the file directory. Value options are `none`, `year`, `month`, and `day`. `none` is the default value and means that the date is not separated. | +| TiCDC | [`sink.enable-partition-separator`](/ticdc/ticdc-changefeed-config.md#changefeed-configuration-parameters) | Newly added | Specifies whether to use partitions as the separation string. The default value is `false`, which means that partitions in a table are not stored in separate directories. | +| TiCDC | [`sink.csv.delimiter`](/ticdc/ticdc-changefeed-config.md#changefeed-configuration-parameters) | Newly added | Indicates the delimiter between fields. 
The value must be an ASCII character and defaults to `,`. | +| TiCDC | [`sink.csv.quote`](/ticdc/ticdc-changefeed-config.md#changefeed-configuration-parameters) | Newly added | The quotation that surrounds the fields. The default value is `"`. When the value is empty, no quotation is used. | +| TiCDC | [`sink.csv.null`](/ticdc/ticdc-changefeed-config.md#changefeed-configuration-parameters) | Newly added | Specifies the character displayed when the CSV column is null. The default value is `\N`.| +| TiCDC | [`sink.csv.include-commit-ts`](/ticdc/ticdc-changefeed-config.md#changefeed-configuration-parameters) | Newly added| Specifies whether to include commit-ts in CSV rows. The default value is `false`. | ### Others From 7b74b5ee1d0dadd17aa970be421fc68c906fdf3c Mon Sep 17 00:00:00 2001 From: shichun-0415 <89768198+shichun-0415@users.noreply.github.com> Date: Tue, 27 Dec 2022 16:14:32 +0800 Subject: [PATCH 69/83] Apply suggestions from code review --- releases/release-6.5.0.md | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/releases/release-6.5.0.md b/releases/release-6.5.0.md index 00c3ffce2c2cc..9094bbd39c790 100644 --- a/releases/release-6.5.0.md +++ b/releases/release-6.5.0.md @@ -253,7 +253,7 @@ Compared with the previous LTS 6.1.0, 6.5.0 not only includes new features, impr ### TiDB data share subscription -* TiCDC supports replicating changed logs to storage sinks (experimental) [tiflow#6797](https://github.com/pingcap/tiflow/issues/6797) @[zhaoxinyu](https://github.com/zhaoxinyu) **tw@shichun-0415** +* TiCDC supports replicating changed logs to storage sinks (experimental) [#6797](https://github.com/pingcap/tiflow/issues/6797) @[zhaoxinyu](https://github.com/zhaoxinyu) **tw@shichun-0415** TiCDC supports replicating changed logs to Amazon S3, Azure Blob Storage, NFS, and other S3-compatible storage services. Cloud storage is reasonably priced and easy to use. If you do not want to use Kafka, you can use storage sinks. TiCDC saves the changed logs to a file and then sends it to the storage system. From the storage system, the consumer program reads the newly generated changed log files periodically. 
@@ -389,7 +389,7 @@ Starting from v6.5.0, the [`AMEND TRANSACTION`](/system-variables.md#tidb_enable - Optimize the granularity of locks to reduce lock contention and improve the handling capability of heartbeats under high concurrency [#5586](https://github.com/tikv/pd/issues/5586) @[rleungx](https://github.com/rleungx) - Optimize scheduler performance for large-scale clusters and accelerate the production of scheduling policies [#5473](https://github.com/tikv/pd/issues/5473) @[bufferflies](https://github.com/bufferflies) - Improve the speed of loading Regions [#5606](https://github.com/tikv/pd/issues/5606) @[rleungx](https://github.com/rleungx) - - Reduce unnecessary overhead by optimized handling of Region heartbeats [#5648](https://github.com/tikv/pd/issues/5648)@[rleungx](https://github.com/rleungx) + - Reduce unnecessary overhead by optimized handling of Region heartbeats [#5648](https://github.com/tikv/pd/issues/5648) @[rleungx](https://github.com/rleungx) - Add the feature of automatically garbage collecting tombstone stores [#5348](https://github.com/tikv/pd/issues/5348) @[nolouch](https://github.com/nolouch) + TiFlash From 517bd6d8398577675ad05607228b45ddd10b53a0 Mon Sep 17 00:00:00 2001 From: Grace Cai Date: Tue, 27 Dec 2022 19:13:11 +0800 Subject: [PATCH 70/83] fix CI link errors --- releases/release-6.5.0.md | 10 +++++----- 1 file changed, 5 insertions(+), 5 deletions(-) diff --git a/releases/release-6.5.0.md b/releases/release-6.5.0.md index 9094bbd39c790..f12d4198b70e6 100644 --- a/releases/release-6.5.0.md +++ b/releases/release-6.5.0.md @@ -21,9 +21,9 @@ Compared with the previous LTS 6.1.0, 6.5.0 not only includes new features, impr - Enhance TiDB optimizer by making the more accurate [Cost Model version 2](/cost-model.md#cost-model-version-2) generally available and supporting expressions connected by `AND` for [INDEX MERGE](/explain-index-merge.md). - Support pushing down the `JSON_EXTRACT()` function to TiFlash. - Support [password management](/password-management.md) policies that meet password compliance auditing requirements. -- TiDB Lightning and Dumpling support [importing](tidb-lightning/tidb-lightning-data-source.md) and [exporting](/dumpling-overview.md#improve-export-efficiency-through-concurrency) compressed SQL and CSV files. +- TiDB Lightning and Dumpling support [importing](/tidb-lightning/tidb-lightning-data-source.md) and [exporting](/dumpling-overview.md#improve-export-efficiency-through-concurrency) compressed SQL and CSV files. - TiDB Data Migration (DM) [continuous data validation](/dm/dm-continuous-data-validation.md) becomes GA. -- TiDB Backup & Restore supports snapshot checkpoint backup, improves the recovery performance of [PITR](/br-pitr-guide.md#carry-pitr) by 50%, and reduces the RPO to as short as 5 minutes. +- TiDB Backup & Restore supports snapshot checkpoint backup, improves the recovery performance of [PITR](/br/br-pitr-guide.md#carry-pitr) by 50%, and reduces the RPO to as short as 5 minutes. - Improve the TiCDC throughput of [replicating data to Kafka](/replicate-data-to-kafka.md) by x times and reduces replication latency to x seconds. - Provide row-level [Time to live (TTL)](/time-to-live.md) to manage data lifecycle (experimental). - TiCDC supports [replicating changed logs to object storage](/ticdc/ticdc-sink-to-cloud-storage.md) such as Amazon S3, Azure Blob Storage, and NFS (experimental). 
@@ -143,7 +143,7 @@ Compared with the previous LTS 6.1.0, 6.5.0 not only includes new features, impr Before v6.5.0, TiDB only supported using index merge for the filter conditions connected by `OR`. Starting from v6.5.0, TiDB has supported using index merge for filter conditions connected by `AND` in the `WHERE` clause. In this way, the index merge of TiDB can now cover more general combinations of query filter conditions and is no longer limited to union (`OR`) relationship. The current v6.5.0 version only supports index merge under `OR` conditions as automatically selected by the optimizer. To enable index merge for `AND` conditions, you need to use the [`USE_INDEX_MERGE`](/optimizer-hints.md#use_index_merget1_name-idx1_name--idx2_name-) hint. - For more details about index merge, see [v5.4.0 Release Notes](/release-5.4.0#performance) and [Explain Index Merge](/explain-index-merge.md). + For more details about index merge, see [v5.4.0 Release Notes](/release-5.4.0.md#performance) and [Explain Index Merge](/explain-index-merge.md). * Support pushing down the following JSON functions to TiFlash [#39458](https://github.com/pingcap/tidb/issues/39458) @[yibin87](https://github.com/yibin87) **tw@qiancai** @@ -199,7 +199,7 @@ Compared with the previous LTS 6.1.0, 6.5.0 not only includes new features, impr The `EXPLAIN ANALYZE` statement is used to print execution plans and runtime statistics. In v6.5.0, TiFlash has refined the execution information of the `TableFullScan` operator by adding the DMFile-related execution information. Now the TiFlash data scan status information is presented more intuitively, which helps you analyze TiFlash performance more easily. - For more information, see [documentation](sql-statements/sql-statement-explain-analyze.md). + For more information, see [documentation](/sql-statements/sql-statement-explain-analyze.md). * Support the output of execution plans in the JSON format [#39261](https://github.com/pingcap/tidb/issues/39261) @[fzzf678](https://github.com/fzzf678) **tw@ran-huang** @@ -313,7 +313,7 @@ Compared with the previous LTS 6.1.0, 6.5.0 not only includes new features, impr | [`tidb_auto_build_stats_concurrency`](/system-variables.md#tidb_auto_build_stats_concurrency-new-in-v650) | Newly added | This variable is used to set the concurrency of executing the automatic update of statistics. The default value is `1`. | | [`tidb_cdc_write_source`](/system-variables.md#tidb_cdc_write_source-new-in-v650) | Newly added | When this variable is set to a value other than 0, data written in this session is considered to be written by TiCDC. This variable can only be modified by TiCDC. Do not manually modify this variable in any case. | | [`tidb_index_merge_intersection_concurrency`](/system-variables.md#tidb_index_merge_intersection_concurrency-new-in-v650) | Newly added | Sets the maximum concurrency for the intersection operations that index merge performs. It is effective only when TiDB accesses partitioned tables in the dynamic pruning mode. 
| -| [`tidb_source_id`](/system-variables.md#tidb_source_id-new-in-v650) | Newly added | This variable is used to configure the different cluster IDs in a [bi-directional replication](/ticdc/manage-ticdc.md#bi-directional-replication) cluster.| +| [`tidb_source_id`](/system-variables.md#tidb_source_id-new-in-v650) | Newly added | This variable is used to configure the different cluster IDs in a [bi-directional replication](/ticdc/ticdc-bidirectional-replication.md) cluster.| | [`tidb_sysproc_scan_concurrency`](/system-variables.md#tidb_sysproc_scan_concurrency-new-in-v650) | Newly added | This variable is used to set the concurrency of scan operations performed when TiDB executes internal SQL statements (such as an automatic update of statistics). The default value is `1`. | | [`tidb_ttl_delete_batch_size`](/system-variables.md#tidb_ttl_delete_batch_size-new-in-v650) | Newly added | This variable is used to set the maximum number of rows that can be deleted in a single `DELETE` transaction in a TTL job. | | [`tidb_ttl_delete_rate_limit`](/system-variables.md#tidb_ttl_delete_rate_limit-new-in-v650) | Newly added | This variable is used to limit the maximum number of `DELETE` statements allowed per second in a single node in a TTL job. When this variable is set to `0`, no limit is applied. | From a101dd5816acbce6d55a7cd5951c35325266f8c9 Mon Sep 17 00:00:00 2001 From: Grace Cai Date: Tue, 27 Dec 2022 19:19:22 +0800 Subject: [PATCH 71/83] Apply suggestions from code review --- releases/release-6.5.0.md | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/releases/release-6.5.0.md b/releases/release-6.5.0.md index f12d4198b70e6..7b37b74018850 100644 --- a/releases/release-6.5.0.md +++ b/releases/release-6.5.0.md @@ -135,7 +135,7 @@ Compared with the previous LTS 6.1.0, 6.5.0 not only includes new features, impr - On the [TiFlash panels](/grafana-performance-overview-dashboard.md#tiflash), you can easily view the request types, latency analysis, and resource usage overview of your TiFlash cluster. - On the [CDC panels](/grafana-performance-overview-dashboard.md#cdc), you can easily view the health, replication latency, data flow, and downstream write latency of your TiCDC cluster. - For more information, see [documentation](/performance-tuning-method.md). + For more information, see [documentation](/performance-tuning-methods.md). ### Performance @@ -143,7 +143,7 @@ Compared with the previous LTS 6.1.0, 6.5.0 not only includes new features, impr Before v6.5.0, TiDB only supported using index merge for the filter conditions connected by `OR`. Starting from v6.5.0, TiDB has supported using index merge for filter conditions connected by `AND` in the `WHERE` clause. In this way, the index merge of TiDB can now cover more general combinations of query filter conditions and is no longer limited to union (`OR`) relationship. The current v6.5.0 version only supports index merge under `OR` conditions as automatically selected by the optimizer. To enable index merge for `AND` conditions, you need to use the [`USE_INDEX_MERGE`](/optimizer-hints.md#use_index_merget1_name-idx1_name--idx2_name-) hint. - For more details about index merge, see [v5.4.0 Release Notes](/release-5.4.0.md#performance) and [Explain Index Merge](/explain-index-merge.md). + For more details about index merge, see [v5.4.0 Release Notes](/releases/release-5.4.0.md#performance) and [Explain Index Merge](/explain-index-merge.md). 
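    As a rough illustration of the hint usage described above (the table `t` and the indexes `idx_a` and `idx_b` below are hypothetical), an intersection-type index merge over `AND`-connected filters can be requested as follows:

    ```sql
    CREATE TABLE t (a INT, b INT, c INT, KEY idx_a(a), KEY idx_b(b));
    -- Ask the optimizer to merge idx_a and idx_b for the AND-connected conditions.
    EXPLAIN SELECT /*+ USE_INDEX_MERGE(t, idx_a, idx_b) */ * FROM t WHERE a = 1 AND b = 2;
    ```

    If the hint is omitted, the optimizer in v6.5.0 falls back to its automatic selection, which covers only `OR` conditions.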
* Support pushing down the following JSON functions to TiFlash [#39458](https://github.com/pingcap/tidb/issues/39458) @[yibin87](https://github.com/yibin87) **tw@qiancai** From 2f5d4687602792025129eaa2bfc9ce2f65921aaf Mon Sep 17 00:00:00 2001 From: qiancai Date: Wed, 28 Dec 2022 09:45:47 +0800 Subject: [PATCH 72/83] add the release info and remove writer names --- TOC.md | 4 ++- releases/release-6.5.0.md | 70 ++++++++++++++++++------------------ releases/release-notes.md | 4 +++ releases/release-timeline.md | 1 + 4 files changed, 43 insertions(+), 36 deletions(-) diff --git a/TOC.md b/TOC.md index 1299221a09bcc..728d118a0a13d 100644 --- a/TOC.md +++ b/TOC.md @@ -4,7 +4,7 @@ - [Docs Home](https://docs.pingcap.com/) - About TiDB - [TiDB Introduction](/overview.md) - - [TiDB 6.4 Release Notes](/releases/release-6.4.0.md) + - [TiDB 6.5 Release Notes](/releases/release-6.5.0.md) - [Basic Features](/basic-features.md) - [Experimental Features](/experimental-features.md) - [MySQL Compatibility](/mysql-compatibility.md) @@ -918,6 +918,8 @@ - [Release Timeline](/releases/release-timeline.md) - [TiDB Versioning](/releases/versioning.md) - [TiDB Installation Packages](/binary-package.md) + - v6.5 + - [6.5.0](/releases/release-6.5.0.md) - v6.4 - [6.4.0-DMR](/releases/release-6.4.0.md) - v6.3 diff --git a/releases/release-6.5.0.md b/releases/release-6.5.0.md index 7b37b74018850..a5784b8117965 100644 --- a/releases/release-6.5.0.md +++ b/releases/release-6.5.0.md @@ -4,7 +4,7 @@ title: TiDB 6.5.0 Release Notes # TiDB 6.5.0 Release Notes -Release date: xx xx, 2022 +Release date: December 29, 2022 TiDB version: 6.5.0 @@ -32,35 +32,35 @@ Compared with the previous LTS 6.1.0, 6.5.0 not only includes new features, impr ### SQL -* The performance of TiDB adding indexes is improved by 10 times (GA) [#35983](https://github.com/pingcap/tidb/issues/35983) @[benjamin2037](https://github.com/benjamin2037) @[tangenta](https://github.com/tangenta) **tw@Oreoxmt** +* The performance of TiDB adding indexes is improved by 10 times (GA) [#35983](https://github.com/pingcap/tidb/issues/35983) @[benjamin2037](https://github.com/benjamin2037) @[tangenta](https://github.com/tangenta) TiDB v6.3.0 introduces the [Add index acceleration](/system-variables.md#tidb_ddl_enable_fast_reorg-new-in-v630) as an experimental feature to improve the speed of backfilling when creating an index. In v6.5.0, this feature becomes GA and is enabled by default, and the performance on large tables is expected to be 10 times faster. The acceleration feature is suitable for scenarios where a single SQL statement adds an index serially. When multiple SQL statements add indexes in parallel, only one of the SQL statements will be accelerated. -* Provide lightweight metadata lock to improve the DML success rate during DDL change (GA) [#37275](https://github.com/pingcap/tidb/issues/37275) @[wjhuang2016](https://github.com/wjhuang2016) **tw@Oreoxmt** +* Provide lightweight metadata lock to improve the DML success rate during DDL change (GA) [#37275](https://github.com/pingcap/tidb/issues/37275) @[wjhuang2016](https://github.com/wjhuang2016) TiDB v6.3.0 introduces [Metadata lock](/metadata-lock.md) as an experimental feature. To avoid the `Information schema is changed` error caused by DML statements, TiDB coordinates the priority of DMLs and DDLs during table metadata change, and makes the ongoing DDLs wait for the DMLs with old metadata to commit. In v6.5.0, this feature becomes GA and is enabled by default. 
It is suitable for various types of DDLs change scenarios. For more information, see [documentation](/metadata-lock.md). -* Support restoring a cluster to a specific point in time by using `FLASHBACK CLUSTER TO TIMESTAMP` (GA) [#37197](https://github.com/pingcap/tidb/issues/37197) [#13303](https://github.com/tikv/tikv/issues/13303) @[Defined2014](https://github.com/Defined2014) @[bb7133](https://github.com/bb7133) @[JmPotato](https://github.com/JmPotato) @[Connor1996](https://github.com/Connor1996) @[HuSharp](https://github.com/HuSharp) @[CalvinNeo](https://github.com/CalvinNeo) **tw@Oreoxmt** +* Support restoring a cluster to a specific point in time by using `FLASHBACK CLUSTER TO TIMESTAMP` (GA) [#37197](https://github.com/pingcap/tidb/issues/37197) [#13303](https://github.com/tikv/tikv/issues/13303) @[Defined2014](https://github.com/Defined2014) @[bb7133](https://github.com/bb7133) @[JmPotato](https://github.com/JmPotato) @[Connor1996](https://github.com/Connor1996) @[HuSharp](https://github.com/HuSharp) @[CalvinNeo](https://github.com/CalvinNeo) TiDB v6.4.0 introduces the [`FLASHBACK CLUSTER TO TIMESTAMP`](/sql-statements/sql-statement-flashback-to-timestamp.md) statement as an experimental feature. You can use this statement to restore a cluster to a specific point in time within the Garbage Collection (GC) life time. In v6.5.0, this feature is now compatible with TiCDC and PITR and becomes GA. This feature helps you to easily undo DML misoperations, restore the original cluster in minutes, and roll back data at different time points to determine the exact time when data changes. For more information, see [documentation](/sql-statements/sql-statement-flashback-to-timestamp.md). -* Fully support non-transactional DML statements including `INSERT`, `REPLACE`, `UPDATE`, and `DELETE` [#33485](https://github.com/pingcap/tidb/issues/33485) @[ekexium](https://github.com/ekexium) **tw@Oreoxmt** +* Fully support non-transactional DML statements including `INSERT`, `REPLACE`, `UPDATE`, and `DELETE` [#33485](https://github.com/pingcap/tidb/issues/33485) @[ekexium](https://github.com/ekexium) In the scenarios of large data processing, a single SQL statement with a large transaction might have a negative impact on the cluster stability and performance. A non-transactional DML statement is a DML statement split into multiple SQL statements for internal execution. The split statements compromise transaction atomicity and isolation but greatly improve the cluster stability. TiDB supports non-transactional `DELETE` statements since v6.1.0, and supports non-transactional `INSERT`, `REPLACE`, and `UPDATE` statements since v6.5.0. For more information, see [Non-Transactional DML statements](/non-transactional-dml.md) and [`BATCH` syntax](/sql-statements/sql-statement-batch.md). -* Support time to live (TTL) (experimental) [#39262](https://github.com/pingcap/tidb/issues/39262) @[lcwangchao](https://github.com/lcwangchao) **tw@ran-huang** +* Support time to live (TTL) (experimental) [#39262](https://github.com/pingcap/tidb/issues/39262) @[lcwangchao](https://github.com/lcwangchao) TTL provides row-level data lifetime management. In TiDB, a table with the TTL attribute automatically checks data lifetime and deletes expired data at the row level. TTL is designed to help you clean up unnecessary data periodically and in a timely manner without affecting the online read and write workloads. For more information, see [documentation](/time-to-live.md). 
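    The following is a minimal sketch of the TTL attribute described above; the table and column names are hypothetical, and the exact syntax and limitations are covered in the TTL documentation:

    ```sql
    -- Rows whose `created_at` is older than 90 days become candidates for
    -- automatic deletion by the background TTL jobs.
    CREATE TABLE access_log (
        id         BIGINT PRIMARY KEY,
        detail     VARCHAR(255),
        created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP
    ) TTL = `created_at` + INTERVAL 90 DAY;
    ```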
-* Support saving TiFlash query results using the `INSERT INTO SELECT` statement (experimental) [#37515](https://github.com/pingcap/tidb/issues/37515) @[gengliqi](https://github.com/gengliqi) **tw@qiancai** +* Support saving TiFlash query results using the `INSERT INTO SELECT` statement (experimental) [#37515](https://github.com/pingcap/tidb/issues/37515) @[gengliqi](https://github.com/gengliqi) Starting from v6.5.0, TiDB supports pushing down the `SELECT` clause (analytical query) of the `INSERT INTO SELECT` statement to TiFlash. In this way, you can easily save the TiFlash query result to a TiDB table specified by `INSERT INTO` for further analysis, which takes effect as result caching (that is, result materialization). For example: @@ -76,7 +76,7 @@ Compared with the previous LTS 6.1.0, 6.5.0 not only includes new features, impr For more information, see [documentation](/tiflash/tiflash-results-materialization.md). -* Support binding history execution plans (experimental) [#39199](https://github.com/pingcap/tidb/issues/39199) @[fzzf678](https://github.com/fzzf678) **tw@qiancai** +* Support binding history execution plans (experimental) [#39199](https://github.com/pingcap/tidb/issues/39199) @[fzzf678](https://github.com/fzzf678) For a SQL statement, due to various factors during execution, the optimizer might occasionally choose a new execution plan instead of its previous optimal execution plan, and the SQL performance is impacted. In this case, if the optimal execution plan has not been cleared yet, it still exists in the SQL execution history. @@ -86,7 +86,7 @@ Compared with the previous LTS 6.1.0, 6.5.0 not only includes new features, impr ### Security -* Support the password complexity policy [#38928](https://github.com/pingcap/tidb/issues/38928) @[CbcWestwolf](https://github.com/CbcWestwolf) **tw@ran-huang** +* Support the password complexity policy [#38928](https://github.com/pingcap/tidb/issues/38928) @[CbcWestwolf](https://github.com/CbcWestwolf) After this policy is enabled, when you set a password, TiDB checks the password length, whether uppercase and lowercase letters, numbers, and special characters in the password are sufficient, whether the password matches the dictionary, and whether the password matches the username. This ensures that you set a secure password. @@ -94,19 +94,19 @@ Compared with the previous LTS 6.1.0, 6.5.0 not only includes new features, impr For more information, see [documentation](/password-management.md#password-complexity-policy). -* Support the password expiration policy [#38936](https://github.com/pingcap/tidb/issues/38936) @[CbcWestwolf](https://github.com/CbcWestwolf) **tw@ran-huang** +* Support the password expiration policy [#38936](https://github.com/pingcap/tidb/issues/38936) @[CbcWestwolf](https://github.com/CbcWestwolf) TiDB supports configuring the password expiration policy, including manual expiration, global-level automatic expiration, and account-level automatic expiration. After this policy is enabled, you must change your passwords periodically. This reduces the risk of password leakage due to long-term use and improves password security. For more information, see [documentation](/password-management.md#password-expiration-policy). 
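    The following sketch shows how these expiration rules might be applied; the account name and the 90-day interval are arbitrary examples rather than recommendations:

    ```sql
    -- Account-level automatic expiration: require a password change every 90 days.
    CREATE USER 'app_user'@'%' IDENTIFIED BY 'S3cure_Pa55word!' PASSWORD EXPIRE INTERVAL 90 DAY;
    -- Manual expiration: invalidate the current password immediately.
    ALTER USER 'app_user'@'%' PASSWORD EXPIRE;
    -- Global-level automatic expiration for accounts without an account-level rule.
    SET GLOBAL default_password_lifetime = 90;
    ```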
-* Support the password reuse policy [#38937](https://github.com/pingcap/tidb/issues/38937) @[keeplearning20221](https://github.com/keeplearning20221) **tw@ran-huang** +* Support the password reuse policy [#38937](https://github.com/pingcap/tidb/issues/38937) @[keeplearning20221](https://github.com/keeplearning20221) TiDB supports configuring the password reuse policy, including global-level password reuse policy and account-level password reuse policy. After this policy is enabled, you cannot use the passwords that you have used within a specified period or the most recent several passwords that you have used. This reduces the risk of password leakage due to repeated use of passwords and improves password security. For more information, see [documentation](/password-management.md#password-reuse-policy). -* Support failed-login tracking and temporary account locking policy [#38938](https://github.com/pingcap/tidb/issues/38938) @[lastincisor](https://github.com/lastincisor) **tw@ran-huang** +* Support failed-login tracking and temporary account locking policy [#38938](https://github.com/pingcap/tidb/issues/38938) @[lastincisor](https://github.com/lastincisor) After this policy is enabled, if you log in to TiDB with incorrect passwords multiple times consecutively, the account is temporarily locked. After the lock time ends, the account is automatically unlocked. @@ -114,7 +114,7 @@ Compared with the previous LTS 6.1.0, 6.5.0 not only includes new features, impr ### Observability -* TiDB Dashboard can be deployed on Kubernetes as an independent Pod [#1447](https://github.com/pingcap/tidb-dashboard/issues/1447) @[SabaPing](https://github.com/SabaPing) **tw@shichun-0415 +* TiDB Dashboard can be deployed on Kubernetes as an independent Pod [#1447](https://github.com/pingcap/tidb-dashboard/issues/1447) @[SabaPing](https://github.com/SabaPing) TiDB v6.5.0 (and later) and TiDB Operator v1.4.0 (and later) support deploying TiDB Dashboard as an independent Pod on Kubernetes. Using TiDB Operator, you can access the IP address of this Pod to start TiDB Dashboard. @@ -126,7 +126,7 @@ Compared with the previous LTS 6.1.0, 6.5.0 not only includes new features, impr For more information, see [documentation](https://docs.pingcap.com/tidb-in-kubernetes/dev/get-started#deploy-tidb-dashboard-independently). -* Performance Overview dashboard adds TiFlash and CDC (Change Data Capture) panels [#39230](https://github.com/pingcap/tidb/issues/39230) @[dbsid](https://github.com/dbsid) **tw@qiancai** +* Performance Overview dashboard adds TiFlash and CDC (Change Data Capture) panels [#39230](https://github.com/pingcap/tidb/issues/39230) @[dbsid](https://github.com/dbsid) Since v6.1.0, TiDB has introduced the Performance Overview dashboard in Grafana, which provides a system-level entry for overall performance diagnosis of TiDB, TiKV, and PD. In v6.5.0, the Performance Overview dashboard adds TiFlash and CDC panels. With these panels, starting from v6.5.0, you can use the Performance Overview dashboard to analyze the performance of all components in a TiDB cluster. 
@@ -139,13 +139,13 @@ Compared with the previous LTS 6.1.0, 6.5.0 not only includes new features, impr ### Performance -* [INDEX MERGE](/glossary.md#index-merge) supports expressions connected by `AND` [#39333](https://github.com/pingcap/tidb/issues/39333) @[guo-shaoge](https://github.com/guo-shaoge) @[time-and-fate](https://github.com/time-and-fate) @[hailanwhu](https://github.com/hailanwhu) **tw@TomShawn** +* [INDEX MERGE](/glossary.md#index-merge) supports expressions connected by `AND` [#39333](https://github.com/pingcap/tidb/issues/39333) @[guo-shaoge](https://github.com/guo-shaoge) @[time-and-fate](https://github.com/time-and-fate) @[hailanwhu](https://github.com/hailanwhu) Before v6.5.0, TiDB only supported using index merge for the filter conditions connected by `OR`. Starting from v6.5.0, TiDB has supported using index merge for filter conditions connected by `AND` in the `WHERE` clause. In this way, the index merge of TiDB can now cover more general combinations of query filter conditions and is no longer limited to union (`OR`) relationship. The current v6.5.0 version only supports index merge under `OR` conditions as automatically selected by the optimizer. To enable index merge for `AND` conditions, you need to use the [`USE_INDEX_MERGE`](/optimizer-hints.md#use_index_merget1_name-idx1_name--idx2_name-) hint. For more details about index merge, see [v5.4.0 Release Notes](/releases/release-5.4.0.md#performance) and [Explain Index Merge](/explain-index-merge.md). -* Support pushing down the following JSON functions to TiFlash [#39458](https://github.com/pingcap/tidb/issues/39458) @[yibin87](https://github.com/yibin87) **tw@qiancai** +* Support pushing down the following JSON functions to TiFlash [#39458](https://github.com/pingcap/tidb/issues/39458) @[yibin87](https://github.com/yibin87) * `->` * `->>` @@ -153,23 +153,23 @@ Compared with the previous LTS 6.1.0, 6.5.0 not only includes new features, impr The JSON format provides a flexible way for application data modeling. Therefore, more and more applications are using the JSON format for data exchange and data storage. By pushing down JSON functions to TiFlash, you can improve the efficiency of analyzing data in the JSON type and use TiDB for more real-time analytics scenarios. -* Support pushing down the following string functions to TiFlash [#6115](https://github.com/pingcap/tiflash/issues/6115) @[xzhangxian1008](https://github.com/xzhangxian1008) **tw@qiancai** +* Support pushing down the following string functions to TiFlash [#6115](https://github.com/pingcap/tiflash/issues/6115) @[xzhangxian1008](https://github.com/xzhangxian1008) * `regexp_like` * `regexp_instr` * `regexp_substr` -* Support the global optimizer hint to interfere with the execution plan generation in [Views](/views.md) [#37887](https://github.com/pingcap/tidb/issues/37887) @[Reminiscent](https://github.com/Reminiscent) **tw@Oreoxmt** +* Support the global optimizer hint to interfere with the execution plan generation in [Views](/views.md) [#37887](https://github.com/pingcap/tidb/issues/37887) @[Reminiscent](https://github.com/Reminiscent) In some view access scenarios, you need to use optimizer hints to interfere with the execution plan of the query in the view to achieve the best performance. Since v6.5.0, TiDB supports adding global hints for the query blocks in the view, thus the hints defined in the query can be effective in the view. 
This feature provides a way to inject hints into complex SQL statements that contain nested views, enhances the execution plan control, and stabilizes the performance of complex statements. To use global hints, you need to [name the query blocks](/optimizer-hints.md#step-1-define-the-query-block-name-of-the-view-using-the-qb_name-hint) and [specify hint references](/optimizer-hints.md#step-2-add-the-target-hints). For more information, see [documentation](/optimizer-hints.md#hints-that-take-effect-globally). -* Support pushing down sorting operations of [partitioned tables](/partitioned-table.md) to TiKV [#26166](https://github.com/pingcap/tidb/issues/26166) @[winoros](https://github.com/winoros) **tw@qiancai** +* Support pushing down sorting operations of [partitioned tables](/partitioned-table.md) to TiKV [#26166](https://github.com/pingcap/tidb/issues/26166) @[winoros](https://github.com/winoros) Although the [partitioned table](/partitioned-table.md) feature has been GA since v6.1.0, TiDB is continually improving its performance. In v6.5.0, TiDB supports pushing down sorting operations such as `ORDER BY` and `LIMIT` to TiKV for computation and filtering, which reduces network I/O overhead and improves SQL performance when you use partitioned tables. -* Optimizer introduces a more accurate Cost Model Version 2 [#35240](https://github.com/pingcap/tidb/issues/35240) @[qw4990](https://github.com/qw4990) **tw@Oreoxmt** +* Optimizer introduces a more accurate Cost Model Version 2 [#35240](https://github.com/pingcap/tidb/issues/35240) @[qw4990](https://github.com/qw4990) TiDB v6.2.0 introduces the [Cost Model Version 2](/cost-model.md#cost-model-version-2) as an experimental feature. This model uses a more accurate cost estimation method to help the optimizer choose the optimal execution plan. Especially when TiFlash is deployed, Cost Model Version 2 automatically helps choose the appropriate storage engine and avoids much manual intervention. After real-scene testing for a period of time, this model becomes GA in v6.5.0. Since v6.5.0, newly created clusters use Cost Model Version 2 by default. For clusters upgrade to v6.5.0, because Cost Model Version 2 might cause changes to query plans, you can set the [`tidb_cost_model_version = 2`](/system-variables.md#tidb_cost_model_version-new-in-v620) variable to use the new cost model after sufficient performance testing. @@ -183,7 +183,7 @@ Compared with the previous LTS 6.1.0, 6.5.0 not only includes new features, impr ### Stability -* The global memory control feature is now GA [#37816](https://github.com/pingcap/tidb/issues/37816) @[wshwsh12](https://github.com/wshwsh12) **tw@TomShawn** +* The global memory control feature is now GA [#37816](https://github.com/pingcap/tidb/issues/37816) @[wshwsh12](https://github.com/wshwsh12) TiDB v6.4.0 introduces global memory control as an experimental feature. Since v6.5.0, the global memory control feature becomes GA and can track the main memory consumption in TiDB. When the global memory consumption reaches the threshold defined by [`tidb_server_memory_limit`](/system-variables.md#tidb_server_memory_limit-new-in-v640), TiDB tries to limit the memory usage by GC or canceling SQL operations, to ensure stability. 
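    For illustration only (the 32 GiB value is an arbitrary example), the threshold can be adjusted through the system variable mentioned above:

    ```sql
    -- Cap the memory usage of a TiDB instance; a percentage such as '80%' is also accepted.
    SET GLOBAL tidb_server_memory_limit = '32GB';
    SHOW VARIABLES LIKE 'tidb_server_memory_limit';
    ```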
@@ -195,13 +195,13 @@ Compared with the previous LTS 6.1.0, 6.5.0 not only includes new features, impr ### Ease of use -* Refine the execution information of the TiFlash `TableFullScan` operator in the `EXPLAIN ANALYZE` output [#5926](https://github.com/pingcap/tiflash/issues/5926) @[hongyunyan](https://github.com/hongyunyan) **tw@qiancai** +* Refine the execution information of the TiFlash `TableFullScan` operator in the `EXPLAIN ANALYZE` output [#5926](https://github.com/pingcap/tiflash/issues/5926) @[hongyunyan](https://github.com/hongyunyan) The `EXPLAIN ANALYZE` statement is used to print execution plans and runtime statistics. In v6.5.0, TiFlash has refined the execution information of the `TableFullScan` operator by adding the DMFile-related execution information. Now the TiFlash data scan status information is presented more intuitively, which helps you analyze TiFlash performance more easily. For more information, see [documentation](/sql-statements/sql-statement-explain-analyze.md). -* Support the output of execution plans in the JSON format [#39261](https://github.com/pingcap/tidb/issues/39261) @[fzzf678](https://github.com/fzzf678) **tw@ran-huang** +* Support the output of execution plans in the JSON format [#39261](https://github.com/pingcap/tidb/issues/39261) @[fzzf678](https://github.com/fzzf678) In v6.5.0, TiDB extends the output format of execution plans. By using `EXPLAIN FORMAT=tidb_json `, you can output SQL execution plans in the JSON format. With this capability, SQL debugging tools and diagnostic tools can read execution plans more conveniently and accurately, thus improving the ease of use of SQL diagnosis and tuning. @@ -209,7 +209,7 @@ Compared with the previous LTS 6.1.0, 6.5.0 not only includes new features, impr ### MySQL compatibility -* Support a high-performance and globally monotonic `AUTO_INCREMENT` column attribute (GA) [#38442](https://github.com/pingcap/tidb/issues/38442) @[tiancaiamao](https://github.com/tiancaiamao) **tw@Oreoxmt** +* Support a high-performance and globally monotonic `AUTO_INCREMENT` column attribute (GA) [#38442](https://github.com/pingcap/tidb/issues/38442) @[tiancaiamao](https://github.com/tiancaiamao) TiDB v6.4.0 introduces the `AUTO_INCREMENT` MySQL compatibility mode as an experimental feature. This mode introduces a centralized auto-increment ID allocating service that ensures IDs monotonically increase on all TiDB instances. This feature makes it easier to sort query results by auto-increment IDs. In v6.5.0, this feature becomes GA. The insert TPS of a table using this feature is expected to exceed 20,000, and this feature supports elastic scaling to improve the write throughput of a single table and entire clusters. To use the MySQL compatibility mode, you need to set `AUTO_ID_CACHE` to `1` when creating a table. The following is an example: @@ -221,7 +221,7 @@ Compared with the previous LTS 6.1.0, 6.5.0 not only includes new features, impr ### Data migration -* Support exporting and importing SQL and CSV files in gzip, snappy, and zstd compression formats [#38514](https://github.com/pingcap/tidb/issues/38514) @[lichunzhu](https://github.com/lichunzhu) **tw@hfxsd** +* Support exporting and importing SQL and CSV files in gzip, snappy, and zstd compression formats [#38514](https://github.com/pingcap/tidb/issues/38514) @[lichunzhu](https://github.com/lichunzhu) Dumpling supports exporting data to compressed SQL and CSV files in these compression formats: gzip, snappy, and zstd. 
TiDB Lightning also supports importing compressed files in these formats.

@@ -229,13 +229,13 @@ Compared with the previous LTS 6.1.0, 6.5.0 not only includes new features, impr

For more information, see [documentation](/dumpling-overview.md#improve-export-efficiency-through-concurrency).

-* Optimize binlog parsing capability [#924](https://github.com/pingcap/dm/issues/924) @[gmhdbjd](https://github.com/GMHDBJD) **tw@hfxsd**
+* Optimize binlog parsing capability [#924](https://github.com/pingcap/dm/issues/924) @[gmhdbjd](https://github.com/GMHDBJD)

TiDB can filter out binlog events of the schemas and tables that are not in the migration task, thus improving the parsing efficiency and stability. This policy takes effect by default in v6.5.0. No additional configuration is required. Previously, even if only a few tables were migrated, the entire binlog file upstream had to be parsed. The binlog events of the tables in the binlog file that did not need to be migrated still had to be parsed, which was not efficient. Meanwhile, if these binlog events did not support parsing, the task would fail. By only parsing the binlog events of the tables in the migration task, the binlog parsing efficiency can be greatly improved and the task stability can be enhanced.

-* Disk quota in TiDB Lightning is GA [#446](https://github.com/pingcap/tidb-lightning/issues/446) @[buchuitoudegou](https://github.com/buchuitoudegou) **tw@hfxsd**
+* Disk quota in TiDB Lightning is GA [#446](https://github.com/pingcap/tidb-lightning/issues/446) @[buchuitoudegou](https://github.com/buchuitoudegou)

You can configure disk quota for TiDB Lightning. When there is not enough disk quota, TiDB Lightning stops reading source data and writing temporary files. Instead, it writes the sorted key-values to TiKV first, and then continues the import process after TiDB Lightning deletes the local temporary files.

@@ -243,7 +243,7 @@ Compared with the previous LTS 6.1.0, 6.5.0 not only includes new features, impr

For more information, see [documentation](/tidb-lightning/tidb-lightning-physical-import-mode-usage.md#configure-disk-quota-new-in-v620).

-* Continuous data validation in DM is GA [#4426](https://github.com/pingcap/tiflow/issues/4426) @[D3Hunter](https://github.com/D3Hunter) **tw@hfxsd**
+* Continuous data validation in DM is GA [#4426](https://github.com/pingcap/tiflow/issues/4426) @[D3Hunter](https://github.com/D3Hunter)

In the process of migrating incremental data from upstream to downstream databases, there is a small probability that data flow might cause errors or data loss. In scenarios where strong data consistency is required, such as credit and securities businesses, you can perform a full volume checksum on the data after migration to ensure data consistency. However, in some incremental replication scenarios, upstream and downstream writes are continuous and uninterrupted because the upstream and downstream data is constantly changing, making it difficult to perform consistency checks on all data.
@@ -253,39 +253,39 @@ Compared with the previous LTS 6.1.0, 6.5.0 not only includes new features, impr

### TiDB data share subscription

-* TiCDC supports replicating changed logs to storage sinks (experimental) [#6797](https://github.com/pingcap/tiflow/issues/6797) @[zhaoxinyu](https://github.com/zhaoxinyu) **tw@shichun-0415**
+* TiCDC supports replicating changed logs to storage sinks (experimental) [#6797](https://github.com/pingcap/tiflow/issues/6797) @[zhaoxinyu](https://github.com/zhaoxinyu)

TiCDC supports replicating changed logs to Amazon S3, Azure Blob Storage, NFS, and other S3-compatible storage services. Cloud storage is reasonably priced and easy to use. If you do not want to use Kafka, you can use storage sinks. TiCDC saves the changed logs to a file and then sends it to the storage system. From the storage system, the consumer program reads the newly generated changed log files periodically.

The storage sink supports changed logs in the canal-json and CSV formats. For more information, see [documentation](/ticdc/ticdc-sink-to-cloud-storage.md).

-* TiCDC supports bidirectional replication across multiple clusters [#38587](https://github.com/pingcap/tidb/issues/38587) @[xiongjiwei](https://github.com/xiongjiwei) @[asddongmen](https://github.com/asddongmen) **tw@ran-huang**
+* TiCDC supports bidirectional replication across multiple clusters [#38587](https://github.com/pingcap/tidb/issues/38587) @[xiongjiwei](https://github.com/xiongjiwei) @[asddongmen](https://github.com/asddongmen)

TiCDC supports bidirectional replication across multiple TiDB clusters. If you need a multi-master TiDB solution for your application, especially a multi-master solution across multiple regions, you can use this feature to build one. By configuring the `bdr-mode = true` parameter for the TiCDC changefeeds from each TiDB cluster to the other TiDB clusters, you can achieve bidirectional data replication across multiple TiDB clusters.

For more information, see [documentation](/ticdc/ticdc-bidirectional-replication.md).

-* TiCDC supports updating TLS online [#7908](https://github.com/pingcap/tiflow/issues/7908) @[CharlesCheung96](https://github.com/CharlesCheung96) **tw@shichun-0415**
+* TiCDC supports updating TLS online [#7908](https://github.com/pingcap/tiflow/issues/7908) @[CharlesCheung96](https://github.com/CharlesCheung96)

To keep data secure, you need to set an expiration policy for the certificate used by the system. After the expiration period, the system needs a new certificate. TiCDC v6.5.0 supports online updates of TLS certificates. Without interrupting the replication tasks, TiCDC can automatically detect and update the certificate, without the need for manual intervention.

-* TiCDC performance improves significantly [#7540](https://github.com/pingcap/tiflow/issues/7540) [#7478](https://github.com/pingcap/tiflow/issues/7478) [#7532](https://github.com/pingcap/tiflow/issues/7532) @[sdojjy](https://github.com/sdojjy) [@3AceShowHand](https://github.com/3AceShowHand) **tw@shichun-0415
+* TiCDC performance improves significantly [#7540](https://github.com/pingcap/tiflow/issues/7540) [#7478](https://github.com/pingcap/tiflow/issues/7478) [#7532](https://github.com/pingcap/tiflow/issues/7532) @[sdojjy](https://github.com/sdojjy) [@3AceShowHand](https://github.com/3AceShowHand)

In a test scenario of the TiDB cluster, the performance of TiCDC has improved significantly.
Specifically, the maximum row changes that a single TiCDC can process reaches 30K rows/s, and the replication latency is reduced to 10s. Even during TiKV and TiCDC rolling upgrade, the replication latency is less than 30s. In a disaster recovery (DR) scenario, when the throughput is xx rows/s, the replication latency in DR can be maintained at x s.

### Backup and restore

-* TiDB Backup & Restore supports snapshot checkpoint backup [#38647](https://github.com/pingcap/tidb/issues/38647) @[Leavrth](https://github.com/Leavrth) **tw@shichun-0415
+* TiDB Backup & Restore supports snapshot checkpoint backup [#38647](https://github.com/pingcap/tidb/issues/38647) @[Leavrth](https://github.com/Leavrth)

TiDB snapshot backup supports resuming backup from a checkpoint. When Backup & Restore (BR) encounters a recoverable error, it retries the backup. However, BR exits if the retry fails several times. The checkpoint backup feature allows longer recoverable failures to be retried, for example, a network failure of tens of minutes. Note that if you do not recover the system from a failure within one hour after BR exits, the snapshot data to be backed up might be recycled by the GC mechanism, causing the backup to fail.

For more information, see [documentation](/br/br-checkpoint.md).

-* PITR performance improved remarkably [@joccau](https://github.com/joccau) **tw@shichun-0415
+* PITR performance improved remarkably [@joccau](https://github.com/joccau)

In the log restore stage, the restore speed of one TiKV can reach 9 MiB/s, which is 50% faster than before. The restore speed is scalable and the RTO in DR scenarios is reduced greatly. The RPO in DR scenarios can be as short as 5 minutes. In normal cluster operation and maintenance (OM), for example, when a rolling upgrade is performed or only one TiKV is down, the RPO can be 5 minutes.

-* TiKV-BR GA: Supports backing up and restoring RawKV [#67](https://github.com/tikv/migration/issues/67) @[pingyu](https://github.com/pingyu) @[haojinming](https://github.com/haojinming) **tw@shichun-0415**
+* TiKV-BR GA: Supports backing up and restoring RawKV [#67](https://github.com/tikv/migration/issues/67) @[pingyu](https://github.com/pingyu) @[haojinming](https://github.com/haojinming)

TiKV-BR is a backup and restore tool used in TiKV clusters. TiKV and PD can constitute a KV database when used without TiDB, which is called RawKV. TiKV-BR supports data backup and restore for products that use RawKV. TiKV-BR can also upgrade the [`api-version`](/tikv-configuration-file.md#api-version-new-in-v610) from `API V1` to `API V2` for TiKV clusters.
diff --git a/releases/release-notes.md b/releases/release-notes.md
index 60a87b9e295bd..39c7f4b737ff8 100644
--- a/releases/release-notes.md
+++ b/releases/release-notes.md
@@ -4,6 +4,10 @@ title: Release Notes

# TiDB Release Notes

+## 6.5
+
+- [6.5.0](/releases/release-6.5.0.md): 2022-12-29
+
## 6.4

- [6.4.0-DMR](/releases/release-6.4.0.md): 2022-11-17

diff --git a/releases/release-timeline.md b/releases/release-timeline.md
index 424918da156b9..994dafb3bf8db 100644
--- a/releases/release-timeline.md
+++ b/releases/release-timeline.md
@@ -9,6 +9,7 @@ This document shows all the released TiDB versions in reverse chronological orde

| Version | Release Date |
| :--- | :--- |
+| [6.5.0](/releases/release-6.5.0.md) | 2022-12-29 |
| [6.1.3](/releases/release-6.1.3.md) | 2022-12-05 |
| [5.3.4](/releases/release-5.3.4.md) | 2022-11-24 |
| [6.4.0-DMR](/releases/release-6.4.0.md) | 2022-11-17 |

From 275c93ea4bdd544cb299270428d0f90556b7b7d3 Mon Sep 17 00:00:00 2001
From: shichun-0415 <89768198+shichun-0415@users.noreply.github.com>
Date: Wed, 28 Dec 2022 11:34:31 +0800
Subject: [PATCH 73/83] add ticdc performance data

---
 releases/release-6.5.0.md | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/releases/release-6.5.0.md b/releases/release-6.5.0.md
index a5784b8117965..8eb89b3db2e91 100644
--- a/releases/release-6.5.0.md
+++ b/releases/release-6.5.0.md
@@ -24,7 +24,7 @@ Compared with the previous LTS 6.1.0, 6.5.0 not only includes new features, impr

- TiDB Lightning and Dumpling support [importing](/tidb-lightning/tidb-lightning-data-source.md) and [exporting](/dumpling-overview.md#improve-export-efficiency-through-concurrency) compressed SQL and CSV files.
- TiDB Data Migration (DM) [continuous data validation](/dm/dm-continuous-data-validation.md) becomes GA.
- TiDB Backup & Restore supports snapshot checkpoint backup, improves the recovery performance of [PITR](/br/br-pitr-guide.md#carry-pitr) by 50%, and reduces the RPO to as short as 5 minutes.
-- Improve the TiCDC throughput of [replicating data to Kafka](/replicate-data-to-kafka.md) by x times and reduces replication latency to x seconds.
+- Improve the TiCDC throughput of [replicating data to Kafka](/replicate-data-to-kafka.md) from 4000 rows/s to 35000 rows/s, and reduces replication latency to 2 seconds.
- Provide row-level [Time to live (TTL)](/time-to-live.md) to manage data lifecycle (experimental).
- TiCDC supports [replicating changed logs to object storage](/ticdc/ticdc-sink-to-cloud-storage.md) such as Amazon S3, Azure Blob Storage, and NFS (experimental).

@@ -74,7 +74,7 @@ Compared with the previous LTS 6.1.0, 6.5.0 not only includes new features, impr

    - Reuse TiFlash query results or deal with highly concurrent online requests
    - Need a relatively small result set compared with the input data size, preferably smaller than 100 MiB.

- For more information, see [documentation](/tiflash/tiflash-results-materialization.md).
+ For more information, see [documentation](/tiflash/tiflash-results-materialization.md).
* Support binding history execution plans (experimental) [#39199](https://github.com/pingcap/tidb/issues/39199) @[fzzf678](https://github.com/fzzf678)

@@ -271,7 +271,7 @@ Compared with the previous LTS 6.1.0, 6.5.0 not only includes new features, impr

* TiCDC performance improves significantly [#7540](https://github.com/pingcap/tiflow/issues/7540) [#7478](https://github.com/pingcap/tiflow/issues/7478) [#7532](https://github.com/pingcap/tiflow/issues/7532) @[sdojjy](https://github.com/sdojjy) [@3AceShowHand](https://github.com/3AceShowHand)

- In a test scenario of the TiDB cluster, the performance of TiCDC has improved significantly. Specifically, the maximum row changes that a single TiCDC can process reaches 30K rows/s, and the replication latency is reduced to 10s. Even during TiKV and TiCDC rolling upgrade, the replication latency is less than 30s. In a disaster recovery (DR) scenario, when the throughput is xx rows/s, the replication latency in DR can be maintained at x s.
+ In a test scenario of the TiDB cluster, the performance of TiCDC has improved significantly. Specifically, the maximum row changes that a single TiCDC can process reaches 30K rows/s, and the replication latency is reduced to 10s. Even during TiKV and TiCDC rolling upgrade, the replication latency is less than 30s. In a disaster recovery (DR) scenario, when the throughput is 35000 rows/s, the replication latency in DR can be maintained at 2 s.

### Backup and restore

From 731363fc3c8d63791b6c1cf29aa87de89d421489 Mon Sep 17 00:00:00 2001
From: shichun-0415 <89768198+shichun-0415@users.noreply.github.com>
Date: Wed, 28 Dec 2022 11:38:40 +0800
Subject: [PATCH 74/83] refine wording

---
 releases/release-6.5.0.md | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/releases/release-6.5.0.md b/releases/release-6.5.0.md
index 8eb89b3db2e91..b04c10bcf7278 100644
--- a/releases/release-6.5.0.md
+++ b/releases/release-6.5.0.md
@@ -24,7 +24,7 @@ Compared with the previous LTS 6.1.0, 6.5.0 not only includes new features, impr

- TiDB Lightning and Dumpling support [importing](/tidb-lightning/tidb-lightning-data-source.md) and [exporting](/dumpling-overview.md#improve-export-efficiency-through-concurrency) compressed SQL and CSV files.
- TiDB Data Migration (DM) [continuous data validation](/dm/dm-continuous-data-validation.md) becomes GA.
- TiDB Backup & Restore supports snapshot checkpoint backup, improves the recovery performance of [PITR](/br/br-pitr-guide.md#carry-pitr) by 50%, and reduces the RPO to as short as 5 minutes.
-- Improve the TiCDC throughput of [replicating data to Kafka](/replicate-data-to-kafka.md) from 4000 rows/s to 35000 rows/s, and reduces replication latency to 2 seconds.
+- Improve the TiCDC throughput of [replicating data to Kafka](/replicate-data-to-kafka.md) from 4000 rows/s to 35000 rows/s, and reduce the replication latency to 2s.
- Provide row-level [Time to live (TTL)](/time-to-live.md) to manage data lifecycle (experimental).
- TiCDC supports [replicating changed logs to object storage](/ticdc/ticdc-sink-to-cloud-storage.md) such as Amazon S3, Azure Blob Storage, and NFS (experimental).
@@ -271,7 +271,7 @@ Compared with the previous LTS 6.1.0, 6.5.0 not only includes new features, impr

* TiCDC performance improves significantly [#7540](https://github.com/pingcap/tiflow/issues/7540) [#7478](https://github.com/pingcap/tiflow/issues/7478) [#7532](https://github.com/pingcap/tiflow/issues/7532) @[sdojjy](https://github.com/sdojjy) [@3AceShowHand](https://github.com/3AceShowHand)

- In a test scenario of the TiDB cluster, the performance of TiCDC has improved significantly. Specifically, the maximum row changes that a single TiCDC can process reaches 30K rows/s, and the replication latency is reduced to 10s. Even during TiKV and TiCDC rolling upgrade, the replication latency is less than 30s. In a disaster recovery (DR) scenario, when the throughput is 35000 rows/s, the replication latency in DR can be maintained at 2 s.
+ In a test scenario of the TiDB cluster, the performance of TiCDC has improved significantly. Specifically, the maximum row changes that a single TiCDC can process reaches 30K rows/s, and the replication latency is reduced to 10s. Even during TiKV and TiCDC rolling upgrade, the replication latency is less than 30s. In a disaster recovery (DR) scenario, when the throughput is 35000 rows/s, the replication latency can be maintained at 2s.

### Backup and restore

From 3488461406039a168200e34c4ce4267ff89ca4d3 Mon Sep 17 00:00:00 2001
From: TomShawn <41534398+TomShawn@users.noreply.github.com>
Date: Wed, 28 Dec 2022 14:14:54 +0800
Subject: [PATCH 75/83] Update releases/release-6.5.0.md

---
 releases/release-6.5.0.md | 1 +
 1 file changed, 1 insertion(+)

diff --git a/releases/release-6.5.0.md b/releases/release-6.5.0.md
index b04c10bcf7278..ee4d87c6c44db 100644
--- a/releases/release-6.5.0.md
+++ b/releases/release-6.5.0.md
@@ -300,6 +300,7 @@ Compared with the previous LTS 6.1.0, 6.5.0 not only includes new features, impr

|[`tidb_enable_amend_pessimistic_txn`](/system-variables.md#tidb_enable_amend_pessimistic_txn-new-in-v407)| Deprecated | Starting from v6.5.0, this variable is deprecated, and TiDB uses the [Metadata Lock](/metadata-lock.md) feature by default to avoid the `Information schema is changed` error. |
| [`tidb_enable_outer_join_reorder`](/system-variables.md#tidb_enable_outer_join_reorder-new-in-v610) | Modified | Changes the default value from `OFF` to `ON` after further tests, meaning that the support of Outer Join for the [Join Reorder](/join-reorder.md) algorithm is enabled by default. |
| [`tidb_cost_model_version`](/system-variables.md#tidb_cost_model_version-introduced-new-in-v620) | Modified | Changes the default value from `1` to `2` after further tests, meaning that Cost Model Version 2 is used for index selection and operator selection by default. |
+| [`tidb_enable_gc_aware_memory_track`](/system-variables#tidb_enable_gc_aware_memory_track) | Modified | Changes the default value from `ON` to `OFF`. Because the GC-aware memory track is found inaccurate in tests and causes too large analyzed memory size tracked, the memory track is disabled. In addition, in Golang 1.19, the memory tracked by the GC-aware memory track does not have much impact on the overall memory. |
| [`tidb_enable_metadata_lock`](/system-variables.md#tidb_enable_metadata_lock-new-in-v630) | Modified | Changes the default value from `OFF` to `ON` after further tests, meaning that the metadata lock feature is enabled by default. |
| [`tidb_enable_tiflash_read_for_write_stmt`](/system-variables.md#tidb_enable_tiflash_read_for_write_stmt-new-in-v630) | Modified | Takes effect starting from v6.5.0. It controls whether read operations in SQL statements containing `INSERT`, `DELETE`, and `UPDATE` can be pushed down to TiFlash. The default value is `OFF`. |
| [`tidb_ddl_enable_fast_reorg`](/system-variables.md#tidb_ddl_enable_fast_reorg-new-in-v630) | Modified | Changes the default value from `OFF` to `ON` after further tests, meaning that the acceleration of `ADD INDEX` and `CREATE INDEX` is enabled by default. |

From 2c6a4d357f37e5bfcecf7cef5a2f32be51c2cd0f Mon Sep 17 00:00:00 2001
From: Grace Cai
Date: Wed, 28 Dec 2022 16:07:47 +0800
Subject: [PATCH 76/83] synch Chinese changes

---
 releases/release-6.5.0.md | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/releases/release-6.5.0.md b/releases/release-6.5.0.md
index ee4d87c6c44db..77893f9e24b00 100644
--- a/releases/release-6.5.0.md
+++ b/releases/release-6.5.0.md
@@ -259,9 +259,9 @@ Compared with the previous LTS 6.1.0, 6.5.0 not only includes new features, impr

The storage sink supports changed logs in the canal-json and CSV formats. For more information, see [documentation](/ticdc/ticdc-sink-to-cloud-storage.md).

-* TiCDC supports bidirectional replication across multiple clusters [#38587](https://github.com/pingcap/tidb/issues/38587) @[xiongjiwei](https://github.com/xiongjiwei) @[asddongmen](https://github.com/asddongmen)
+* TiCDC supports bidirectional replication between two clusters [#38587](https://github.com/pingcap/tidb/issues/38587) @[xiongjiwei](https://github.com/xiongjiwei) @[asddongmen](https://github.com/asddongmen)

- TiCDC supports bidirectional replication across multiple TiDB clusters. If you need a multi-master TiDB solution for your application, especially a multi-master solution across multiple regions, you can use this feature to build one. By configuring the `bdr-mode = true` parameter for the TiCDC changefeeds from each TiDB cluster to the other TiDB clusters, you can achieve bidirectional data replication across multiple TiDB clusters.
+ TiCDC supports bidirectional replication between two TiDB clusters. If you need a multi-master TiDB solution for your application, especially a multi-master solution across multiple regions, you can use this feature to build one. By configuring the `bdr-mode = true` parameter for the TiCDC changefeeds from one TiDB cluster to another TiDB cluster, you can achieve bidirectional data replication between the two TiDB clusters.

For more information, see [documentation](/ticdc/ticdc-bidirectional-replication.md).
From 9fdc789be475f05ace6ca47b9f4ac06df186c92d Mon Sep 17 00:00:00 2001
From: Aolin
Date: Wed, 28 Dec 2022 16:26:52 +0800
Subject: [PATCH 77/83] Apply suggestions from code review

---
 releases/release-6.5.0.md | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/releases/release-6.5.0.md b/releases/release-6.5.0.md
index 77893f9e24b00..5e084399ff6c7 100644
--- a/releases/release-6.5.0.md
+++ b/releases/release-6.5.0.md
@@ -32,9 +32,9 @@ Compared with the previous LTS 6.1.0, 6.5.0 not only includes new features, impr

### SQL

-* The performance of TiDB adding indexes is improved by 10 times (GA) [#35983](https://github.com/pingcap/tidb/issues/35983) @[benjamin2037](https://github.com/benjamin2037) @[tangenta](https://github.com/tangenta)
+* The performance of TiDB adding indexes is improved by about 10 times (GA) [#35983](https://github.com/pingcap/tidb/issues/35983) @[benjamin2037](https://github.com/benjamin2037) @[tangenta](https://github.com/tangenta)

- TiDB v6.3.0 introduces the [Add index acceleration](/system-variables.md#tidb_ddl_enable_fast_reorg-new-in-v630) as an experimental feature to improve the speed of backfilling when creating an index. In v6.5.0, this feature becomes GA and is enabled by default, and the performance on large tables is expected to be 10 times faster. The acceleration feature is suitable for scenarios where a single SQL statement adds an index serially. When multiple SQL statements add indexes in parallel, only one of the SQL statements will be accelerated.
+ TiDB v6.3.0 introduces the [Add index acceleration](/system-variables.md#tidb_ddl_enable_fast_reorg-new-in-v630) as an experimental feature to improve the speed of backfilling when creating an index. In v6.5.0, this feature becomes GA and is enabled by default, and the performance on large tables is expected to be about 10 times faster. The acceleration feature is suitable for scenarios where a single SQL statement adds an index serially. When multiple SQL statements add indexes in parallel, only one of the SQL statements will be accelerated.

* Provide lightweight metadata lock to improve the DML success rate during DDL change (GA) [#37275](https://github.com/pingcap/tidb/issues/37275) @[wjhuang2016](https://github.com/wjhuang2016)
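The hunk above covers the accelerated `ADD INDEX` backfill path. A minimal sketch of how the feature is exercised, assuming a hypothetical `orders` table (the table and index names are illustrative only):

```sql
-- The accelerated backfill is controlled by this variable, which defaults to ON in v6.5.0.
SHOW VARIABLES LIKE 'tidb_ddl_enable_fast_reorg';

-- A single ADD INDEX statement, executed serially, takes the accelerated backfill path.
ALTER TABLE orders ADD INDEX idx_orders_created_at (created_at);
```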
Since v6.5.0, newly created clusters use Cost Model Version 2 by default. For clusters upgrade to v6.5.0, because Cost Model Version 2 might cause changes to query plans, you can set the [`tidb_cost_model_version = 2`](/system-variables.md#tidb_cost_model_version-new-in-v620) variable to use the new cost model after sufficient performance testing. From aeaad8f9bfc9ae7c4330963b976269917ab0a0bb Mon Sep 17 00:00:00 2001 From: shichun-0415 <89768198+shichun-0415@users.noreply.github.com> Date: Wed, 28 Dec 2022 17:01:15 +0800 Subject: [PATCH 78/83] Apply suggestions from code review Co-authored-by: xixirangrang --- releases/release-6.5.0.md | 6 +++--- 1 file changed, 3 insertions(+), 3 deletions(-) diff --git a/releases/release-6.5.0.md b/releases/release-6.5.0.md index 5e084399ff6c7..18ed083547c59 100644 --- a/releases/release-6.5.0.md +++ b/releases/release-6.5.0.md @@ -122,7 +122,7 @@ Compared with the previous LTS 6.1.0, 6.5.0 not only includes new features, impr - The compute work of TiDB Dashboard does not pose pressure on PD nodes. This ensures more stable cluster operation. - The user can still access TiDB Dashboard for diagnosis even if the PD node is unavailable. - - Accessing TiDB Dashboard in Internet does not involve the privileged interfaces of PD. Therefore, the security risk of the cluster is reduced. + - Accessing TiDB Dashboard on the internet does not involve the privileged interfaces of PD. Therefore, the security risk of the cluster is reduced. For more information, see [documentation](https://docs.pingcap.com/tidb-in-kubernetes/dev/get-started#deploy-tidb-dashboard-independently). @@ -255,7 +255,7 @@ Compared with the previous LTS 6.1.0, 6.5.0 not only includes new features, impr * TiCDC supports replicating changed logs to storage sinks (experimental) [#6797](https://github.com/pingcap/tiflow/issues/6797) @[zhaoxinyu](https://github.com/zhaoxinyu) - TiCDC supports replicating changed logs to Amazon S3, Azure Blob Storage, NFS, and other S3-compatible storage services. Cloud storage is reasonably priced and easy to use. If you do not want to use Kafka, you can use storage sinks. TiCDC saves the changed logs to a file and then sends it to the storage system. From the storage system, the consumer program reads the newly generated changed log files periodically. + TiCDC supports replicating changed logs to Amazon S3, Azure Blob Storage, NFS, and other S3-compatible storage services. Cloud storage is reasonably priced and easy to use. If you are not using Kafka, you can use storage sinks. TiCDC saves the changed logs to a file and then sends it to the storage system. From the storage system, the consumer program reads the newly generated changed log files periodically. The storage sink supports changed logs in the canal-json and CSV formats. For more information, see [documentation](/ticdc/ticdc-sink-to-cloud-storage.md). @@ -267,7 +267,7 @@ Compared with the previous LTS 6.1.0, 6.5.0 not only includes new features, impr * TiCDC supports updating TLS online [#7908](https://github.com/pingcap/tiflow/issues/7908) @[CharlesCheung96](https://github.com/CharlesCheung96) - To keep data secure, you need to set an expiration policy for the certificate used by the system. After the expiration period, the system needs a new certificate. TiCDC v6.5.0 supports online updates of TLS certificates. Without interrupting the replication tasks, TiCDC can automatically detect and update the certificate, without the need for manual intervention. 
+ To keep security of the database system, you need to set an expiration policy for the certificate used by the system. After the expiration period, the system needs a new certificate. TiCDC v6.5.0 supports online updates of TLS certificates. Without interrupting the replication tasks, TiCDC can automatically detect and update the certificate, without the need for manual intervention. * TiCDC performance improves significantly [#7540](https://github.com/pingcap/tiflow/issues/7540) [#7478](https://github.com/pingcap/tiflow/issues/7478) [#7532](https://github.com/pingcap/tiflow/issues/7532) @[sdojjy](https://github.com/sdojjy) [@3AceShowHand](https://github.com/3AceShowHand) From 91fe3bb302dd493d39ff41e6339502fb16e1b277 Mon Sep 17 00:00:00 2001 From: Grace Cai Date: Wed, 28 Dec 2022 22:34:24 +0800 Subject: [PATCH 79/83] Apply suggestions from code review Co-authored-by: Aolin Co-authored-by: xixirangrang --- releases/release-6.5.0.md | 6 +++--- 1 file changed, 3 insertions(+), 3 deletions(-) diff --git a/releases/release-6.5.0.md b/releases/release-6.5.0.md index 18ed083547c59..7d125dba6f4c7 100644 --- a/releases/release-6.5.0.md +++ b/releases/release-6.5.0.md @@ -38,7 +38,7 @@ Compared with the previous LTS 6.1.0, 6.5.0 not only includes new features, impr * Provide lightweight metadata lock to improve the DML success rate during DDL change (GA) [#37275](https://github.com/pingcap/tidb/issues/37275) @[wjhuang2016](https://github.com/wjhuang2016) - TiDB v6.3.0 introduces [Metadata lock](/metadata-lock.md) as an experimental feature. To avoid the `Information schema is changed` error caused by DML statements, TiDB coordinates the priority of DMLs and DDLs during table metadata change, and makes the ongoing DDLs wait for the DMLs with old metadata to commit. In v6.5.0, this feature becomes GA and is enabled by default. It is suitable for various types of DDLs change scenarios. + TiDB v6.3.0 introduces [Metadata lock](/metadata-lock.md) as an experimental feature. To avoid the `Information schema is changed` error caused by DML statements, TiDB coordinates the priority of DMLs and DDLs during table metadata change, and makes the ongoing DDLs wait for the DMLs with old metadata to commit. In v6.5.0, this feature becomes GA and is enabled by default. It is suitable for various types of DDLs change scenarios. When you upgrade your existing cluster from versions earlier than v6.5.0 to v6.5.0 or later, TiDB automatically enables metadata lock. To disable this feature, you can set the system variable [`tidb_enable_metadata_lock`](/system-variables.md#tidb_enable_metadata_lock-new-in-v630) to `OFF`. For more information, see [documentation](/metadata-lock.md). @@ -179,7 +179,7 @@ Compared with the previous LTS 6.1.0, 6.5.0 not only includes new features, impr * TiFlash optimizes the operations of getting the number of table rows [#37165](https://github.com/pingcap/tidb/issues/37165) @[elsa0520](https://github.com/elsa0520) - In the scenarios of data analysis, It is a common operation to get the actual number of rows of a table through `COUNT(*)` without filter conditions. In v6.5.0, TiFlash optimizes the rewriting of `COUNT(*)` and automatically selects the not-null columns with the shortest column definition to count the number of rows, which can effectively reduce the number of I/O operations in TiFlash and improve the execution efficiency of getting row count. 
+ In the scenarios of data analysis, It is a common operation to get the actual number of rows of a table through `COUNT(*)` without filter conditions. In v6.5.0, TiFlash optimizes the rewriting of `COUNT(*)` and automatically selects the not-null columns with the shortest column definition to count the number of rows, which can effectively reduce the number of I/O operations in TiFlash and improve the execution efficiency of getting row counts. ### Stability @@ -203,7 +203,7 @@ Compared with the previous LTS 6.1.0, 6.5.0 not only includes new features, impr * Support the output of execution plans in the JSON format [#39261](https://github.com/pingcap/tidb/issues/39261) @[fzzf678](https://github.com/fzzf678) - In v6.5.0, TiDB extends the output format of execution plans. By using `EXPLAIN FORMAT=tidb_json `, you can output SQL execution plans in the JSON format. With this capability, SQL debugging tools and diagnostic tools can read execution plans more conveniently and accurately, thus improving the ease of use of SQL diagnosis and tuning. + In v6.5.0, TiDB extends the output format of execution plans. By specifying `FORMAT = "tidb_json"` in the `EXPLAIN` statement, you can output SQL execution plans in the JSON format. With this capability, SQL debugging tools and diagnostic tools can read execution plans more conveniently and accurately, thus improving the ease of use of SQL diagnosis and tuning. For more information, see [documentation](/sql-statements/sql-statement-explain.md). From 22ea3d572d13a7c0ebf19949cf43bcd925dda5dd Mon Sep 17 00:00:00 2001 From: Grace Cai Date: Wed, 28 Dec 2022 22:51:49 +0800 Subject: [PATCH 80/83] Apply suggestions from code review --- releases/release-6.5.0.md | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/releases/release-6.5.0.md b/releases/release-6.5.0.md index 7d125dba6f4c7..e6876d871c2ce 100644 --- a/releases/release-6.5.0.md +++ b/releases/release-6.5.0.md @@ -261,7 +261,7 @@ Compared with the previous LTS 6.1.0, 6.5.0 not only includes new features, impr * TiCDC supports bidirectional replication between two clusters [#38587](https://github.com/pingcap/tidb/issues/38587) @[xiongjiwei](https://github.com/xiongjiwei) @[asddongmen](https://github.com/asddongmen) - TiCDC supports bidirectional replication between two TiDB clusters. If you need a multi-master TiDB solution for your application, especially a multi-master solution across multiple regions, you can use this feature to build one. By configuring the `bdr-mode = true` parameter for the TiCDC changefeeds from one TiDB cluster to another TiDB cluster, you can achieve bidirectional data replication between the two TiDB clusters. + TiCDC supports bidirectional replication between two TiDB clusters. If you need to build geo-distributed and multiple active data centers for your application, you can use this feature as a solution. By configuring the `bdr-mode = true` parameter for the TiCDC changefeeds from one TiDB cluster to another TiDB cluster, you can achieve bidirectional data replication between the two TiDB clusters. For more information, see [documentation](/ticdc/ticdc-bidirectional-replication.md). 
@@ -300,7 +300,7 @@ Compared with the previous LTS 6.1.0, 6.5.0 not only includes new features, impr |[`tidb_enable_amend_pessimistic_txn`](/system-variables.md#tidb_enable_amend_pessimistic_txn-new-in-v407)| Deprecated | Starting from v6.5.0, this variable is deprecated, and TiDB uses the [Metadata Lock](/metadata-lock.md) feature by default to avoid the `Information schema is changed` error. | | [`tidb_enable_outer_join_reorder`](/system-variables.md#tidb_enable_outer_join_reorder-new-in-v610) | Modified | Changes the default value from `OFF` to `ON` after further tests, meaning that the support of Outer Join for the [Join Reorder](/join-reorder.md) algorithm is enabled by default. | | [`tidb_cost_model_version`](/system-variables.md#tidb_cost_model_version-introduced-new-in-v620) | Modified | Changes the default value from `1` to `2` after further tests, meaning that Cost Model Version 2 is used for index selection and operator selection by default. | -| [`tidb_enable_gc_aware_memory_track`](/system-variables#tidb_enable_gc_aware_memory_track) | Modified | Changes the default value from `ON` to `OFF`. Because the GC-aware memory track is found inaccurate in tests and causes too large analyzed memory size tracked, the memory track is disabled. In addition, in Golang 1.19, the memory tracked by the GC-aware memory track does not have much impact on the overall memory. | +| [`tidb_enable_gc_aware_memory_track`](/system-variables.md#tidb_enable_gc_aware_memory_track) | Modified | Changes the default value from `ON` to `OFF`. Because the GC-aware memory track is found inaccurate in tests and causes too large analyzed memory size tracked, the memory track is disabled. In addition, in Golang 1.19, the memory tracked by the GC-aware memory track does not have much impact on the overall memory. | | [`tidb_enable_metadata_lock`](/system-variables.md#tidb_enable_metadata_lock-new-in-v630) | Modified | Changes the default value from `OFF` to `ON` after further tests, meaning that the metadata lock feature is enabled by default. | | [`tidb_enable_tiflash_read_for_write_stmt`](/system-variables.md#tidb_enable_tiflash_read_for_write_stmt-new-in-v630) | Modified | Takes effect starting from v6.5.0. It controls whether read operations in SQL statements containing `INSERT`, `DELETE`, and `UPDATE` can be pushed down to TiFlash. The default value is `OFF`. | | [`tidb_ddl_enable_fast_reorg`](/system-variables.md#tidb_ddl_enable_fast_reorg-new-in-v630) | Modified | Changes the default value from `OFF` to `ON` after further tests, meaning that the acceleration of `ADD INDEX` and `CREATE INDEX` is enabled by default. | From 8aa2c34029911ad82a4c30709e465a61e3ebdba4 Mon Sep 17 00:00:00 2001 From: Grace Cai Date: Wed, 28 Dec 2022 23:24:28 +0800 Subject: [PATCH 81/83] fix broken links --- releases/release-6.5.0.md | 10 +++++----- 1 file changed, 5 insertions(+), 5 deletions(-) diff --git a/releases/release-6.5.0.md b/releases/release-6.5.0.md index e6876d871c2ce..ea34aeb06f8c2 100644 --- a/releases/release-6.5.0.md +++ b/releases/release-6.5.0.md @@ -16,14 +16,14 @@ Compared with the previous LTS 6.1.0, 6.5.0 not only includes new features, impr - The [index acceleration](/system-variables.md#tidb_ddl_enable_fast_reorg-new-in-v630) feature becomes generally available (GA), which improves the performance of adding indexes by about 10 times compared with v6.1.0. 
- The TiDB global memory control becomes GA, and you can control the memory consumption threshold via [`tidb_server_memory_limit`](/system-variables.md#tidb_server_memory_limit-new-in-v640). -- The high-performance and globally monotonic [`AUTO_INCREMENT`](/auto-increment.md#mysql-compatible-mode) column attribute becomes GA, which is compatible with MySQL. +- The high-performance and globally monotonic [`AUTO_INCREMENT`](/auto-increment.md#mysql-compatibility-mode) column attribute becomes GA, which is compatible with MySQL. - [`FLASHBACK CLUSTER TO TIMESTAMP`](/sql-statements/sql-statement-flashback-to-timestamp.md) is now compatible with TiCDC and PITR and becomes GA. - Enhance TiDB optimizer by making the more accurate [Cost Model version 2](/cost-model.md#cost-model-version-2) generally available and supporting expressions connected by `AND` for [INDEX MERGE](/explain-index-merge.md). - Support pushing down the `JSON_EXTRACT()` function to TiFlash. - Support [password management](/password-management.md) policies that meet password compliance auditing requirements. - TiDB Lightning and Dumpling support [importing](/tidb-lightning/tidb-lightning-data-source.md) and [exporting](/dumpling-overview.md#improve-export-efficiency-through-concurrency) compressed SQL and CSV files. - TiDB Data Migration (DM) [continuous data validation](/dm/dm-continuous-data-validation.md) becomes GA. -- TiDB Backup & Restore supports snapshot checkpoint backup, improves the recovery performance of [PITR](/br/br-pitr-guide.md#carry-pitr) by 50%, and reduces the RPO to as short as 5 minutes. +- TiDB Backup & Restore supports snapshot checkpoint backup, improves the recovery performance of [PITR](/br/br-pitr-guide.md#run-pitr) by 50%, and reduces the RPO to as short as 5 minutes. - Improve the TiCDC throughput of [replicating data to Kafka](/replicate-data-to-kafka.md) from 4000 rows/s to 35000 rows/s, and reduce the replication latency to 2s. - Provide row-level [Time to live (TTL)](/time-to-live.md) to manage data lifecycle (experimental). - TiCDC supports [replicating changed logs to object storage](/ticdc/ticdc-sink-to-cloud-storage.md) such as Amazon S3, Azure Blob Storage, and NFS (experimental). @@ -82,7 +82,7 @@ Compared with the previous LTS 6.1.0, 6.5.0 not only includes new features, impr In v6.5.0, TiDB supports binding historical execution plans by extending the binding object in the [`CREATE [GLOBAL | SESSION] BINDING`](/sql-statements/sql-statement-create-binding.md) statement. When the execution plan of a SQL statement changes, you can bind the original execution plan by specifying `plan_digest` in the `CREATE [GLOBAL | SESSION] BINDING` statement to quickly recover SQL performance, as long as the original execution plan is still in the SQL execution history memory table (for example, `statements_summary`). This feature can simplify the process of handling execution plan change issues and improve your maintenance efficiency. - For more information, see [documentation](/sql-plan-management.md#bind-historical-execution-plans). + For more information, see [documentation](/sql-plan-management.md#create-a-binding-according-to-a-historical-execution-plan). 
### Security @@ -299,7 +299,7 @@ Compared with the previous LTS 6.1.0, 6.5.0 not only includes new features, impr |--------|------------------------------|------| |[`tidb_enable_amend_pessimistic_txn`](/system-variables.md#tidb_enable_amend_pessimistic_txn-new-in-v407)| Deprecated | Starting from v6.5.0, this variable is deprecated, and TiDB uses the [Metadata Lock](/metadata-lock.md) feature by default to avoid the `Information schema is changed` error. | | [`tidb_enable_outer_join_reorder`](/system-variables.md#tidb_enable_outer_join_reorder-new-in-v610) | Modified | Changes the default value from `OFF` to `ON` after further tests, meaning that the support of Outer Join for the [Join Reorder](/join-reorder.md) algorithm is enabled by default. | -| [`tidb_cost_model_version`](/system-variables.md#tidb_cost_model_version-introduced-new-in-v620) | Modified | Changes the default value from `1` to `2` after further tests, meaning that Cost Model Version 2 is used for index selection and operator selection by default. | +| [`tidb_cost_model_version`](/system-variables.md#tidb_cost_model_version-new-in-v620) | Modified | Changes the default value from `1` to `2` after further tests, meaning that Cost Model Version 2 is used for index selection and operator selection by default. | | [`tidb_enable_gc_aware_memory_track`](/system-variables.md#tidb_enable_gc_aware_memory_track) | Modified | Changes the default value from `ON` to `OFF`. Because the GC-aware memory track is found inaccurate in tests and causes too large analyzed memory size tracked, the memory track is disabled. In addition, in Golang 1.19, the memory tracked by the GC-aware memory track does not have much impact on the overall memory. | | [`tidb_enable_metadata_lock`](/system-variables.md#tidb_enable_metadata_lock-new-in-v630) | Modified | Changes the default value from `OFF` to `ON` after further tests, meaning that the metadata lock feature is enabled by default. | | [`tidb_enable_tiflash_read_for_write_stmt`](/system-variables.md#tidb_enable_tiflash_read_for_write_stmt-new-in-v630) | Modified | Takes effect starting from v6.5.0. It controls whether read operations in SQL statements containing `INSERT`, `DELETE`, and `UPDATE` can be pushed down to TiFlash. The default value is `OFF`. | @@ -342,7 +342,7 @@ Compared with the previous LTS 6.1.0, 6.5.0 not only includes new features, impr | TiDB | [`disconnect-on-expired-password`](/tidb-configuration-file.md#disconnect-on-expired-password-new-in-v650) | Newly added | Determines whether TiDB disconnects the client connection when the password is expired. The default value is `true`, which means the client connection is disconnected when the password is expired. | | TiKV | `raw-min-ts-outlier-threshold` | Deleted | Since v6.4.0, this configuration item was deprecated. Since v6.5.0, this configuration item is deleted. | | TiKV | [`cdc.min-ts-interval`](/tikv-configuration-file.md#min-ts-interval) | Modified | To reduce CDC latency, the default value is changed from `1s` to `200ms`. | -| TiKV | [`memory-use-ratio`](/tikv-configuration-file.md#memory-use-ratio-introduced-new-in-v650) | Newly added | Indicates the ratio of available memory to total system memory in PITR log recovery. | +| TiKV | [`memory-use-ratio`](/tikv-configuration-file.md#memory-use-ratio-new-in-v650) | Newly added | Indicates the ratio of available memory to total system memory in PITR log recovery. 
| | TiCDC | [`sink.terminator`](/ticdc/ticdc-changefeed-config.md#changefeed-configuration-parameters) | Newly added | Indicates the row terminator, which is used for separating two data change events. The value is empty by default, which means "\r\n" is used. | | TiCDC | [`sink.date-separator`](/ticdc/ticdc-changefeed-config.md#changefeed-configuration-parameters) | Newly added | Indicates the date separator type of the file directory. Value options are `none`, `year`, `month`, and `day`. `none` is the default value and means that the date is not separated. | | TiCDC | [`sink.enable-partition-separator`](/ticdc/ticdc-changefeed-config.md#changefeed-configuration-parameters) | Newly added | Specifies whether to use partitions as the separation string. The default value is `false`, which means that partitions in a table are not stored in separate directories. | From 6bfdd6aaae144422c8093644bdc158e3c2172921 Mon Sep 17 00:00:00 2001 From: Grace Cai Date: Thu, 29 Dec 2022 09:14:46 +0800 Subject: [PATCH 82/83] fix the indent issue --- releases/release-6.5.0.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/releases/release-6.5.0.md b/releases/release-6.5.0.md index ea34aeb06f8c2..a713738ae6c3f 100644 --- a/releases/release-6.5.0.md +++ b/releases/release-6.5.0.md @@ -74,7 +74,7 @@ Compared with the previous LTS 6.1.0, 6.5.0 not only includes new features, impr - Reuse TiFlash query results or deal with highly concurrent online requests - Need a relatively small result set compared with the input data size, preferably smaller than 100 MiB. - For more information, see [documentation](/tiflash/tiflash-results-materialization.md). + For more information, see [documentation](/tiflash/tiflash-results-materialization.md). * Support binding history execution plans (experimental) [#39199](https://github.com/pingcap/tidb/issues/39199) @[fzzf678](https://github.com/fzzf678) From b13e40254b281c573c0729f4555b99f6121629a1 Mon Sep 17 00:00:00 2001 From: Grace Cai Date: Thu, 29 Dec 2022 10:16:08 +0800 Subject: [PATCH 83/83] Update releases/release-6.5.0.md --- releases/release-6.5.0.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/releases/release-6.5.0.md b/releases/release-6.5.0.md index a713738ae6c3f..0d5a41f50ed5e 100644 --- a/releases/release-6.5.0.md +++ b/releases/release-6.5.0.md @@ -271,7 +271,7 @@ Compared with the previous LTS 6.1.0, 6.5.0 not only includes new features, impr * TiCDC performance improves significantly [#7540](https://github.com/pingcap/tiflow/issues/7540) [#7478](https://github.com/pingcap/tiflow/issues/7478) [#7532](https://github.com/pingcap/tiflow/issues/7532) @[sdojjy](https://github.com/sdojjy) [@3AceShowHand](https://github.com/3AceShowHand) - In a test scenario of the TiDB cluster, the performance of TiCDC has improved significantly. Specifically, the maximum row changes that a single TiCDC can process reaches 30K rows/s, and the replication latency is reduced to 10s. Even during TiKV and TiCDC rolling upgrade, the replication latency is less than 30s. In a disaster recovery (DR) scenario, when the throughput is 35000 rows/s, the replication latency can be maintained at 2s. + In a test scenario of the TiDB cluster, the performance of TiCDC has improved significantly. Specifically, the maximum row changes that a single TiCDC can process reaches 30K rows/s, and the replication latency is reduced to 10s. Even during TiKV and TiCDC rolling upgrade, the replication latency is less than 30s. 
In a disaster recovery (DR) scenario, by enabling TiCDC redo logs and Syncpoint, the TiCDC throughput of [replicating data to Kafka](/replicate-data-to-kafka.md) can be improved from 4000 rows/s to 35000 rows/s, and the replication latency can be maintained at 2s. ### Backup and restore