[2020/06/01 22:34:20.699 +08:00] [ERROR] [subtask.go:255] ["unit process error"]
[subtask=task_sync_dbname2db_pro] [unit=Sync] ["error information"="{\"msg\":\"[code=36051:class=sync-unit:scope=internal:level=high] current pos (mysql-bin|000001.000204, 24450739): online ddls on ghost table `dbname`.`_xxxxxx_new`\\
github.com/pingcap/dm/pkg/terror.(*Error).Generate\\
\\t/home/jenkins/agent/workspace/build_dm_master/go/src/github.com/pingcap/dm/pkg/terror/terror.go:232\\
github.com/pingcap/dm/syncer.(*PT).Apply\\
\\t/home/jenkins/agent/workspace/build_dm_master/go/src/github.com/pingcap/dm/syncer/pt_osc.go:109\\
github.com/pingcap/dm/syncer.(*Syncer).handleOnlineDDL\\
\\t/home/jenkins/agent/workspace/build_dm_master/go/src/github.com/pingcap/dm/syncer/ddl.go:209\\
github.com/pingcap/dm/syncer.(*Syncer).resolveDDLSQL\\
\\t/home/jenkins/agent/workspace/build_dm_master/go/src/github.com/pingcap/dm/syncer/ddl.go:145\\
github.com/pingcap/dm/syncer.(*Syncer).handleQueryEvent\\
\\t/home/jenkins/agent/workspace/build_dm_master/go/src/github.com/pingcap/dm/syncer/syncer.go:1612\\
github.com/pingcap/dm/syncer.(*Syncer).Run\\
\\t/home/jenkins/agent/workspace/build_dm_master/go/src/github.com/pingcap/dm/syncer/syncer.go:1312\\
github.com/pingcap/dm/syncer.(*Syncer).Process\\
\\t/home/jenkins/agent/workspace/build_dm_master/go/src/github.com/pingcap/dm/syncer/syncer.go:599\\
runtime.goexit\\
\\t/usr/local/go/src/runtime/asm_amd64.s:1357\",\"error\":{\"ErrCode\":36051,\"ErrClass\":11,\"ErrScope\":3,\"ErrLevel\":3,\"Message\":\"current pos (mysql-bin|000001.000204, 24450739): online ddls on ghost table `dbname`.`_xxxxxx_new`\"}}"]
However, the tool actually in use is pt-osc,
and the config file also sets online-ddl-scheme: pt.
name: task_sync_db12db2_pro
task-mode: all
is-sharding: false
online-ddl-scheme: gh-ost # was "pt" before, since pt-osc has always been the tool used for DDL; changed to gh-ost based on the error message after this error appeared
target-database:
  host: x.x.x.x
  port: 4000
  user: synctidb
  password: zUvhvEaZUyK6PJ1AQnWZQUAL=
mysql-instances:
- source-id: db13306_slave
  meta:
    binlog-name: mysql-bin.0000001
    binlog-pos: 4
  filter-rules: []
  route-rules:
  - db13306_slave.route_rules.10
  - db13306_slave.route_rules.1
  - db13306_slave.route_rules.3
  - db13306_slave.route_rules.4
  - db13306_slave.route_rules.7
  - db13306_slave.route_rules.8
  - db13306_slave.route_rules.9
  - db13306_slave.route_rules.11
  - db13306_slave.route_rules.12
  - db13306_slave.route_rules.2
  - db13306_slave.route_rules.5
  - db13306_slave.route_rules.6
  black-white-list: db13306_slave.bw_list.1
  mydumper-config-name: db13306_slave.dump
routes:
  db13306_slave.route_rules.1:
    schema-pattern: db1
    table-pattern: ""
    target-schema: db2
    target-table: ""
  db13306_slave.route_rules.2:
    schema-pattern: db1
    table-pattern: table1
    target-schema: db2
    target-table: db1_table1
  db13306_slave.route_rules.3:
    schema-pattern: db1
    table-pattern: table2
    target-schema: db2
    target-table: db1_table2
  db13306_slave.route_rules.4:
    schema-pattern: db1
    table-pattern: table3
    target-schema: db2
    target-table: db1_table3
  db13306_slave.route_rules.5:
    schema-pattern: db1
    table-pattern: table4
    target-schema: db2
    target-table: db1_table4
  db13306_slave.route_rules.6:
    schema-pattern: db1
    table-pattern: table5
    target-schema: db2
    target-table: db1_table5
  db13306_slave.route_rules.7:
    schema-pattern: db1
    table-pattern: table6
    target-schema: db2
    target-table: db1_table6
  db13306_slave.route_rules.8:
    schema-pattern: db1
    table-pattern: table7
    target-schema: db2
    target-table: db1_table7
  db13306_slave.route_rules.9:
    schema-pattern: db1
    table-pattern: table8
    target-schema: db2
    target-table: db1_table8
  db13306_slave.route_rules.10:
    schema-pattern: db1
    table-pattern: table8
    target-schema: db2
    target-table: db1_table8
  db13306_slave.route_rules.11:
    schema-pattern: db1
    table-pattern: table9
    target-schema: db2
    target-table: db1_table9
  db13306_slave.route_rules.12:
    schema-pattern: db1
    table-pattern: table10
    target-schema: db2
    target-table: db1_table10
filters: {}
black-white-list:
  db13306_slave.bw_list.1:
    do-tables:
    - db-name: db1
      tbl-name: table8
    - db-name: db1
      tbl-name: table9
    - db-name: db1
      tbl-name: table10
    - db-name: db1
      tbl-name: table1
    - db-name: db1
      tbl-name: table2
    - db-name: db1
      tbl-name: table8
    - db-name: db1
      tbl-name: table3
    - db-name: db1
      tbl-name: table4
    - db-name: db1
      tbl-name: table5
    - db-name: db1
      tbl-name: table6
    - db-name: db1
      tbl-name: table7
mydumpers:
  db13306_slave.dump:
    mydumper-path: bin/mydumper
    threads: 4
    chunk-filesize: 64
    skip-tz-utc: true
    extra-args: -T db1.table8,db1.table9,db1.table10,db1.table1,db1.table2,db1.table8,db1.table3,db1.table4,db1.table5,db1.table6,db1.table7
- The 10 tables above are being replicated
- The table the DDL was run on is not one of the replicated tables
- The config file was generated with dm-portal, with entries for unused tables removed
- DM version:
./bin/dm-worker -V
Release Version: v1.0.4-1-gd681c67
Git Commit Hash: d681c6731d3432f4d8f38ea651f44d49d6860269
Git Branch: release-1.0
UTC Build Time: 2020-03-16 09:45:33
Go Version: go version go1.13 linux/amd64
Hello,
Could you confirm whether the DM sync task is currently in a normal state, and whether replication has been interrupted?
Is the current problem:
- pt is configured, but the log mentions ghost
- why a table that is not being replicated still causes an error
The story:
Environment:
0. DM configured with online-ddl-scheme: pt
- Replicated tables: table1 ~ table10
- Config file: see above
Steps taken:
- Used pt-osc to alter the non-replicated table xxxxxx
- Noticed the DM task was Paused; checked the log (see above). To my surprise, the log says:
online ddls on ghost table
`dbname`.`_xxxxxx_new`
- query-error task_sync_db12db2_pro showed no error information
- Changed to online-ddl-scheme: gh-ost, ran stop-task task_sync_db12db2_pro, then start-task ./conf/xxxx; the error was skipped and replication returned to normal
Questions:
- The replicated tables are matched exactly, so why do DDLs on other tables (whether via pt-osc or gh-ost) still affect replication? [This has happened twice; last time an intermediate table generated by pt-checksum broke replication]
- pt-osc was the tool used, so why does the error message mention gh-ost?
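To show why I expected the DDL to be ignored, here is a minimal sketch (my own illustration, not DM's actual filter code) of the exact-match whitelist in the task config: the do-tables list only covers db1.table1 ~ db1.table10, so the ghost table does not match any entry.

```python
# Hypothetical sketch of the black-white-list do-tables whitelist
# from the task config above (NOT DM's real filtering logic).
DO_TABLES = {("db1", "table%d" % i) for i in range(1, 11)}

def in_do_tables(schema, table):
    """Exact-match check against the do-tables whitelist."""
    return (schema, table) in DO_TABLES

print(in_do_tables("db1", "table8"))          # True
print(in_do_tables("dbname", "_xxxxxx_new"))  # False, yet the task still errored
```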
Suppose the online DDL that pt-osc performs looks like this:
1. Create the temporary (ghost) table
2. Apply various operations to the temporary table and the original table
3. Drop the original table and rename the temporary table to the original name

As long as the DM task starts running before step 1 or after step 3 (that is, never while in state 2), DM can maintain its online-DDL bookkeeping correctly, and subsequent online DDL operations will be handled properly. If an error has already occurred, handle it as described in the FAQ, and replication can continue normally afterwards.
Since you are using pt-osc, online-ddl-scheme should indeed be set to "pt" (the error you saw above is just an inaccurately worded error message).
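For context, the two tools' intermediate tables can be told apart purely by naming convention: pt-osc uses `_table_new` / `_table_old`, while gh-ost uses `_table_gho` / `_table_ghc` / `_table_del`. A rough sketch (illustrative only; DM's real matching may differ) of why a `_xxxxxx_new` name points at pt-osc rather than gh-ost:

```python
def classify(table):
    """Rough classification of online-DDL intermediate tables by name
    (illustrative only; not DM's implementation)."""
    if table.startswith("_"):
        if table.endswith("_new"):
            return "pt-osc ghost table"   # shadow copy pt-osc writes to
        if table.endswith("_old"):
            return "pt-osc trash table"   # original table, renamed away
        if table.endswith("_gho"):
            return "gh-ost ghost table"
        if table.endswith(("_ghc", "_del")):
            return "gh-ost trash table"   # changelog / renamed original
    return "regular table"

print(classify("_xxxxxx_new"))  # pt-osc ghost table
```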