
may fail to online DDL in some alter case #1175

Closed
lance6716 opened this issue Oct 16, 2020 · 1 comment · Fixed by #1184
Labels
severity/major, type/bug (This issue is a bug report)

Comments

@lance6716
Collaborator

Bug Report

Please answer these questions before submitting your issue. Thanks!

  1. What did you do? If possible, provide a recipe for reproducing the error.

In run_sql_file_online_ddl, run:

$ptosc_bin --user=root --host=$host --port=$port --password=$password \
    --charset=utf8 --progress percentage,1 \
    --execute --nocheck-replication-filter --max-lag 20 \
    --critical-load Threads_connected:5000,Threads_running:100 \
    --max-load Threads_connected:5000,Threads_running:100 \
    --alter "row_format=compressed key_block_size=8" D=$schema,t=$table \
    --recursion-method=none --print \
    >> $TEST_DIR/pt-osc.log
  2. What did you expect to see?

no error

  3. What did you see instead?

[code=11005:class=functional:scope=internal:level=high] not allowed operation: alter multiple tables in one statement

When the parser restores the statement, a "," is inserted between the table options, so the restored SQL becomes a multi-schema change.
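For example, the single-spec statement

ALTER TABLE `db`.`tbl` row_format=compressed key_block_size=8

is restored as

ALTER TABLE `db`.`tbl` ROW_FORMAT = COMPRESSED, KEY_BLOCK_SIZE = 8

which re-parses as two ALTER specs (see the reproduction in the comment below).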

  4. Versions of the cluster

    • DM version (run dmctl -V or dm-worker -V or dm-master -V):

(paste DM version here; make sure the versions of dmctl, DM-worker and DM-master are the same)
      
    • Upstream MySQL/MariaDB server version:

      (paste upstream MySQL/MariaDB server version here)
      
    • Downstream TiDB cluster version (execute SELECT tidb_version(); in a MySQL client):

      (paste TiDB cluster version here)
      
    • How did you deploy DM: DM-Ansible or manually?

      (leave DM-Ansible or manually here)
      
    • Other interesting information (system version, hardware config, etc):

  5. Current status of the DM cluster (execute query-status in dmctl)

  6. Operation logs

    • Please upload dm-worker.log for every DM-worker instance if possible
    • Please upload dm-master.log if possible
    • Other interesting logs
    • Output of dmctl's commands with problems
  7. Configuration of the cluster and the task

    • dm-worker.toml for every DM-worker instance if possible
    • dm-master.toml for DM-master if possible
    • task config, like task.yaml if possible
    • inventory.ini if deployed by DM-Ansible
  8. Screenshot or exported PDF of the Grafana dashboard or metrics graphs in Prometheus for DM, if possible

lance6716 added the severity/major and type/bug labels on Oct 16, 2020
@csuzhangxc
Member

package main

import (
	"bytes"
	"fmt"

	"github.com/pingcap/parser"
	"github.com/pingcap/parser/ast"
	"github.com/pingcap/parser/format"
	_ "github.com/pingcap/tidb/types/parser_driver" // register the parser driver
)

func main() {
	sql := "ALTER TABLE `db`.`tbl` row_format=compressed key_block_size=8"

	// Parse the original statement: both table options belong to a single
	// AlterTableSpec, so len(Specs) is 1.
	parser2 := parser.New()
	node, err := parser2.ParseOneStmt(sql, "", "")
	if err != nil {
		panic(err)
	}

	alt := node.(*ast.AlterTableStmt)
	fmt.Println("original len", len(alt.Specs))

	// Restore the statement back to SQL text. The restore inserts a ","
	// between the two table options.
	var bf bytes.Buffer
	err = alt.Restore(&format.RestoreCtx{
		Flags: format.DefaultRestoreFlags,
		In:    &bf,
	})
	if err != nil {
		panic(err)
	}
	fmt.Println("restored", bf.String())

	// Re-parse the restored SQL: because of the comma it now contains two
	// AlterTableSpecs, i.e. a multi-schema change.
	newNode, err := parser2.ParseOneStmt(bf.String(), "", "")
	if err != nil {
		panic(err)
	}

	altNew := newNode.(*ast.AlterTableStmt)
	fmt.Println("restored len", len(altNew.Specs))
}
Output:

original len 1
restored ALTER TABLE `db`.`tbl` ROW_FORMAT = COMPRESSED, KEY_BLOCK_SIZE = 8
restored len 2
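A minimal sketch of one possible mitigation, assuming DM could special-case table options (an illustration only, not necessarily the approach of the fix in #1184): before rejecting a multi-spec ALTER as a multi-schema change, check whether every spec is a plain table-option spec, since such a statement only looks like multiple changes because of the comma inserted on restore. The helper onlyTableOptions below is hypothetical.

package main

import (
	"fmt"

	"github.com/pingcap/parser"
	"github.com/pingcap/parser/ast"
	_ "github.com/pingcap/tidb/types/parser_driver" // register the parser driver
)

// onlyTableOptions is a hypothetical helper (not part of DM): it reports
// whether every spec of the ALTER statement is a table-option spec such as
// ROW_FORMAT or KEY_BLOCK_SIZE. Such a statement is not a real multi-schema
// change even though len(Specs) > 1 after re-parsing the restored SQL.
func onlyTableOptions(stmt *ast.AlterTableStmt) bool {
	for _, spec := range stmt.Specs {
		if spec.Tp != ast.AlterTableOption {
			return false
		}
	}
	return true
}

func main() {
	p := parser.New()
	restored := "ALTER TABLE `db`.`tbl` ROW_FORMAT = COMPRESSED, KEY_BLOCK_SIZE = 8"
	node, err := p.ParseOneStmt(restored, "", "")
	if err != nil {
		panic(err)
	}
	// Prints true: both specs are table options, so this ALTER need not be
	// rejected as "alter multiple tables in one statement".
	fmt.Println(onlyTableOptions(node.(*ast.AlterTableStmt)))
}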
