DM test: full MySQL import into TiDB fails

To get help more efficiently, please provide the following information; a clear problem description leads to a faster resolution:

【Overview】Scenario + problem summary

MariaDB has two databases, test and sharding. I only want to migrate the tables starting with t (and their data) from sharding into TiDB, placing them in a db_target database there. db_target does not exist yet, and I don't know whether DM can do a full export and then a full import that creates it in TiDB. (I want the schema name to be rewritten to db_target automatically, and all t* tables from the source sharding database fully imported into it.)

There is currently no db_target database in TiDB.

Last night I tested migrating a database from MariaDB 10.4.20 to TiDB v3.0.20 with DM v1.0.1. The goal is a full initial load followed by incremental replication. However, when the task starts it only does the full export; the data is not automatically imported into TiDB. I'd like to ask why the import doesn't happen automatically; I don't know which operation or setting is wrong.

【Background】Operations performed

0. A 2-2-2 TiDB cluster was deployed beforehand with tidb-ansible; those steps are omitted.
1. MariaDB environment preparation

Installed MariaDB v10.4.20, enabled binlog and GTID, set binlog_format to row, and granted the root user the required privileges: GRANT RELOAD,SELECT ON *.* TO 'root'@'%'; and GRANT REPLICATION SLAVE,REPLICATION CLIENT ON *.* TO 'root'@'%';
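To double-check the source side before starting DM, a minimal verification sketch in SQL (run on MariaDB; the expected values are the ones set above):

SHOW VARIABLES LIKE 'log_bin';        -- expect ON
SHOW VARIABLES LIKE 'binlog_format';  -- expect ROW, required by DM
SELECT @@gtid_current_pos;            -- non-empty once GTID binlogging is active
-- the grants from the step above, as runnable statements:
GRANT RELOAD, SELECT ON *.* TO 'root'@'%';
GRANT REPLICATION SLAVE, REPLICATION CLIENT ON *.* TO 'root'@'%';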

2. Deploy DM from the binary package
2.1. Download DM

[tidb@tidbser1 u01]$ cd /u01  
[tidb@tidbser1 u01]$ wget https://download.pingcap.org/dm-v1.0.1-linux-amd64.tar.gz
[tidb@tidbser1 u01]$ tar -zxvf dm-v1.0.1-linux-amd64.tar.gz
[tidb@tidbser1 u01]$ cd /u01/dm-v1.0.1-linux-amd64

2.2. Generate the encrypted password

[tidb@tidbser1 dm-v1.0.1-linux-amd64]$ ./bin/dmctl --encrypt "123456"
d0nbklDpbS6iFZsfh7wnVEBDUE2qNYY=

2.3. Start dm-worker

[tidb@tidbser1 dm-v1.0.1-linux-amd64]$ cat conf/dm-worker.toml
# Worker Configuration.

#log configuration
log-level = "info"
log-file = "dm-worker.log"

#dm-worker listen address
worker-addr = ":8262"

#server id of slave for binlog replication
#each instance (master and slave) in replication group should have different server id
server-id = 101

#represents a MySQL/MariaDB instance or a replication group
source-id = "mariadb-replica-01"

#flavor: mysql/mariadb
flavor = "mariadb"

#directory that used to store relay log
relay-dir = "./relay_log"

#enable gtid in relay log unit
enable-gtid = false

#charset of DSN of source mysql/mariadb instance
# charset= ""

[from]
host = "192.168.40.62"
user = "root"
password = "d0nbklDpbS6iFZsfh7wnVEBDUE2qNYY="
port = 3306

#relay log purge strategy
#[purge]
#interval = 3600
#expires = 24
#remain-space = 15

#task status checker
#[checker]
[tidb@tidbser1 dm-v1.0.1-linux-amd64]$ nohup ./bin/dm-worker -config conf/dm-worker.toml &

2.4. Start dm-master

[tidb@tidbser1 dm-v1.0.1-linux-amd64]$ cat conf/dm-master.toml
# Master Configuration.

# rpc configuration
#
# rpc timeout is a positive number plus time unit. we use golang standard time
# units including: "ns", "us", "ms", "s", "m", "h". You should provide a proper
# rpc timeout according to your use scenario.
rpc-timeout = "30s"
# rpc limiter controls how frequently events are allowed to happen.
# It implements a "token bucket" of size `rpc-rate-limit`, initially full and
# refilled at rate `rpc-rate-limit` tokens per second. Note `rpc-rate-limit`
# is float64 type, so remember to add a decimal point and one trailing 0 if its
# literal value happens to be an integer.
rpc-rate-limit = 10.0
rpc-rate-burst = 40

#log configuration
log-level = "info"
log-file = "dm-master.log"

#dm-master listen address
master-addr = ":8261"

# replication group <-> dm-Worker deployment, we'll refine it when new deployment function is available
[[deploy]]
source-id = "mariadb-replica-01"
dm-worker = "192.168.40.62:8262"
[tidb@tidbser1 dm-v1.0.1-linux-amd64]$ nohup ./bin/dm-master -config conf/dm-master.toml &

2.5. Configure task1.yaml

[tidb@tidbser1 dm-v1.0.1-linux-amd64]$ cat task1.yaml
name: test
task-mode: all
is-sharding: true

target-database:
  host: "192.168.40.62"
  port: 4000
  user: "root"
  password: "" # 如果密码不为空,也需要配置 dmctl 加密后的密码

mysql-instances:
  - source-id: "mariadb-replica-01"
    black-white-list:  "instance"  # use black-white-list for DM versions <= v1.0.6
    route-rules: ["sharding-route-rules-table", "sharding-route-rules-schema"]
    mydumper-config-name: "global"
    loader-config-name: "global"
    syncer-config-name: "global"

black-white-list:                  # use black-white-list for DM versions <= v1.0.6
  instance:
    do-dbs: ["~^sharding[\\d]+"]
    do-tables:
    -  db-name: "~^sharding[\\d]+"
       tbl-name: "~^t"

routes:
  sharding-route-rules-table:
    schema-pattern: sharding*
    table-pattern: t*
    target-schema: db_target
    target-table: t_target

  sharding-route-rules-schema:
    schema-pattern: sharding*
    target-schema: db_target

mydumpers:
  global:
    chunk-filesize: 16
    skip-tz-utc: true

loaders:
  global:
    pool-size: 8

syncers:
  global:
    worker-count: 4
    batch: 50 

2.6. Create the data migration task; the task reports success

[tidb@tidbser1 dm-v1.0.1-linux-amd64]$ ./bin/dmctl -master-addr 192.168.40.62:8261
Welcome to dmctl
Release Version: v1.0.1
Git Commit Hash: e63c6cdebea0edcf2ef8c91d84cff4aaa5fc2df7
Git Branch: release-1.0
UTC Build Time: 2019-09-10 06:15:05
Go Version: go version go1.12 linux/amd64

»  start-task task1.yaml
»  {
    "result": true,
    "msg": "",
    "workers": [
        {
            "result": true,
            "worker": "192.168.40.62:8262",
            "msg": ""
        }
    ]
}
» query-status
{
    "result": true,
    "msg": "",
    "workers": [
        {
            "result": true,
            "worker": "192.168.40.62:8262",
            "msg": "",
            "subTaskStatus": [
                {
                    "name": "test",
                    "stage": "Running",
                    "unit": "Sync",
                    "result": null,
                    "unresolvedDDLLockID": "",
                    "sync": {
                        "totalEvents": "0",
                        "totalTps": "0",
                        "recentTps": "0",
                        "masterBinlog": "(on.000003, 335)",
                        "masterBinlogGtid": "0-1-2",
                        "syncerBinlog": "(on.000003, 335)",
                        "syncerBinlogGtid": "",
                        "blockingDDLs": [
                        ],
                        "unresolvedGroups": [
                        ],
                        "synced": true
                    }
                }
            ],
            "relayStatus": {
                "masterBinlog": "(on.000003, 335)",
                "masterBinlogGtid": "0-1-2",
                "relaySubDir": "0-1.000001",
                "relayBinlog": "(on.000003, 335)",
                "relayBinlogGtid": "0-1-2",
                "relayCatchUpMaster": true,
                "stage": "Running",
                "result": null
            },
            "sourceID": "mariadb-replica-01"
        }
    ]
}

2.7. Verify that dm_worker_meta, dumped_data.test, and relay_log were generated


2.8. Check the TiDB database
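One quick way to confirm from the TiDB side (a sketch; connect with the MySQL client to TiDB on port 4000, e.g. mysql -h 192.168.40.62 -P 4000 -u root):

SHOW DATABASES;              -- db_target does not appear in the list
SHOW TABLES FROM db_target;  -- fails with "Unknown database", nothing was imported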

!!! The task ran successfully and all the files were generated, but nothing was imported into the TiDB cluster. Can anyone help me figure out why?

【Symptoms】Business and database symptoms

【Problem】The problem currently encountered

【Business impact】
Personal testing and learning; no business impact.
【TiDB version】
v3.0.20
【Attachments】

  • Relevant logs, configuration files, and Grafana monitoring (https://metricstool.pingcap.com/)
  • TiUP Cluster Display output
  • TiUP Cluster Edit config output
  • TiDB-Overview monitoring
  • Grafana dashboards of the relevant components (BR, TiDB Binlog, TiCDC, etc., if applicable)
  • Logs of the relevant components (covering 1 hour before and after the issue)

For performance-tuning or troubleshooting questions, please download and run the diagnostic script, then select all of the terminal output, copy it, and upload it here. dm-worker.log (589.3 KB) dm-master.log (275.5 KB)


You need to change the regex matching part.


I'll give it a try, thanks for the reply.

The black-white-list here is wrong. You can confirm from the dumped files that they are not the schemas and tables you want.

Since the upstream only has the sharding database, you can change the black-white-list like this:

black-white-list:       
  instance:
    do-dbs: ["sharding"]
    do-tables:
    -  db-name: "sharding"
       tbl-name: "~^t.*"
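For context, the reason the original rule matched nothing: the ~ prefix in do-dbs marks a regex, and ^sharding[\d]+ requires at least one digit after "sharding", so a database literally named sharding never matches (the corrected rule above uses the plain name, which is matched exactly). A quick illustration with the SQL REGEXP operator (a sketch; DM uses Go regexps, but this pattern behaves the same):

SELECT 'sharding' REGEXP '^sharding[0-9]+' AS old_rule;  -- returns 0: the schema was filtered out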

Note: to re-import the data, first drop the schemas and tables that were already imported, and also clear the checkpoint information (you can drop the dm_meta database).
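On the TiDB side the cleanup amounts to something like this sketch (db_target is this thread's target schema; adjust if anything else was imported):

DROP DATABASE IF EXISTS db_target;  -- remove the partially imported data
DROP DATABASE IF EXISTS dm_meta;    -- clear DM's checkpoint information so the full import restarts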


Thanks!

:wink::wink::wink:

So I need to drop the dm_meta database and then also delete everything under the dm_worker_meta, dumped_data.test, and relay_log directories, right?

You only need to drop dm_meta; leave the others alone. By the way, DM v1.0.1 is rather old.

I set it up following the official docs, which had a download link for exactly this v1.0.1. My TiDB is v3.0.20, and I don't know which DM version would be the most suitable to download.

Normally, the officially recommended approach is TiUP deployment, i.e. DM 2.0.x.

I don't know what the version compatibility between TiDB and DM looks like.

TiDB and DM are released independently.

Solved. The main issue was that I'm not very familiar with the route rule settings yet; more to learn.

[tidb@tidbser1 dm-v1.0.1-linux-amd64]$ cat task1.yaml
name: mariadb_to_tidb
task-mode: all

target-database:
  host: "192.168.40.62"
  port: 4000
  user: "root"
  password: "" # if the password is not empty, it must be the dmctl-encrypted value

mysql-instances:
  - source-id: "mariadb-replica-01"
    black-white-list: "instance"  # use black-white-list for DM versions <= v1.0.6
    route-rules: ["sharding-route-rules-schema"]
    mydumper-config-name: "global"
    loader-config-name: "global"
    syncer-config-name: "global"

black-white-list:                 # use black-white-list for DM versions <= v1.0.6
  instance:
    do-dbs: ["sharding"]
    do-tables:
    -  db-name: "sharding"
       tbl-name: "~^t"

routes:
  sharding-route-rules-schema:
    schema-pattern: sharding
    target-schema: db_target

mydumpers:
  global:
    chunk-filesize: 16
    skip-tz-utc: true

loaders:
  global:
    pool-size: 8

syncers:
  global:
    worker-count: 4
    batch: 50
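After dropping db_target and dm_meta and restarting the task, a quick sanity check on TiDB (a sketch; t1 is a hypothetical table name standing in for whichever t* tables exist upstream):

SHOW TABLES FROM db_target;         -- the t* tables routed from the upstream sharding database
SELECT COUNT(*) FROM db_target.t1;  -- hypothetical table, spot-check row counts against MariaDB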


This topic was automatically closed 1 minute after the last reply. New replies are no longer allowed.