pd unavailable error when migrating data with tidb-lightning

To improve efficiency, please provide the following information; a clear problem description gets resolved faster:
【TiDB environment】

【Overview】 Migrating MySQL to TiDB. The MySQL database has already been exported with Dumpling, but when importing with the Lightning tool, PD becomes unavailable.

【Backup and data migration strategy】

【Background】 Migrating MySQL to TiDB. The MySQL database has already been exported with Dumpling, but the import with the Lightning tool fails with a PD unavailable error. Do any configurations need to be adjusted?
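
A minimal sketch of what that workflow typically looks like with TiUP (the MySQL host and credentials below are placeholders; the directories match the Lightning config posted later in this thread):

# Export the MySQL database with Dumpling (host/credentials are hypothetical)
tiup dumpling -h <mysql-host> -P 3306 -u root -p '<password>' --filetype sql -o /opt/export/data/

# Import into TiDB with Lightning, using the config file shown below
tiup tidb-lightning -config tidb-lightning.toml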

【Symptom】 Data migration with Lightning fails with the error pd unavailable

【Problem】 Data migration with Lightning fails with the error pd unavailable

【Business impact】 Data migration with Lightning fails with the error pd unavailable

【TiDB version】 5.3

Lightning log

[2021/12/18 20:14:54.893 +08:00] [WARN] [localhelper.go:153] ["fetch table region size statistics failed"] [table=punish_history] [error="fetch region approximate sizes failed: Error 1105: pd unavailable"] [errorVerbose="Error 1105: pd unavailable\ngithub.com/pingcap/errors.AddStack\n\t/nfs/cache/mod/github.com/pingcap/errors@v0.11.5-0.20210425183316-da1aaba5fb63/errors.go:174\ngithub.com/pingcap/errors.Trace\n\t/nfs/cache/mod/github.com/pingcap/errors@v0.11.5-0.20210425183316-da1aaba5fb63/juju_adaptor.go:15\ngithub.com/pingcap/tidb/br/pkg/lightning/backend/local.fetchTableRegionSizeStats.func1\n\t/home/jenkins/agent/workspace/optimization-build-tidb-linux-amd/go/src/github.com/pingcap/br/br/pkg/lightning/backend/local/localhelper.go:342\ngithub.com/pingcap/tidb/br/pkg/lightning/common.SQLWithRetry.Transact.func1\n\t/home/jenkins/agent/workspace/optimization-build-tidb-linux-amd/go/src/github.com/pingcap/br/br/pkg/lightning/common/util.go:156\ngithub.com/pingcap/tidb/br/pkg/lightning/common.Retry\n\t/home/jenkins/agent/workspace/optimization-build-tidb-linux-amd/go/src/github.com/pingcap/br/br/pkg/lightning/common/util.go:118\ngithub.com/pingcap/tidb/br/pkg/lightning/common.SQLWithRetry.perform\n\t/home/jenkins/agent/workspace/optimization-build-tidb-linux-amd/go/src/github.com/pingcap/br/br/pkg/lightning/common/util.go:103\ngithub.com/pingcap/tidb/br/pkg/lightning/common.SQLWithRetry.Transact\n\t/home/jenkins/agent/workspace/optimization-build-tidb-linux-amd/go/src/github.com/pingcap/br/br/pkg/lightning/common/util.go:150\ngithub.com/pingcap/tidb/br/pkg/lightning/backend/local.fetchTableRegionSizeStats\n\t/home/jenkins/agent/workspace/optimization-build-tidb-linux-amd/go/src/github.com/pingcap/br/br/pkg/lightning/backend/local/localhelper.go:339\ngithub.com/pingcap/tidb/br/pkg/lightning/backend/local.(*local).SplitAndScatterRegionByRanges\n\t/home/jenkins/agent/workspace/optimization-build-tidb-linux-amd/go/src/github.com/pingcap/br/br/pkg/lightning/backend/local/localhelper.go:151\ngithub.com/pingcap/tidb/br/pkg/lightning/backend/local.(*local).ImportEngine\n\t/home/jenkins/agent/workspace/optimization-build-tidb-linux-amd/go/src/github.com/pingcap/br/br/pkg/lightning/backend/local/local.go:2063\ngithub.com/pingcap/tidb/br/pkg/lightning/backend.(*ClosedEngine).Import\n\t/home/jenkins/agent/workspace/optimization-build-tidb-linux-amd/go/src/github.com/pingcap/br/br/pkg/lightning/backend/backend.go:453\ngithub.com/pingcap/tidb/br/pkg/lightning/restore.(*TableRestore).
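
Note that the stack trace goes through common.SQLWithRetry, i.e. the "Error 1105: pd unavailable" came back to Lightning over a SQL connection to TiDB: it is TiDB that could not reach PD at that moment. One way to cross-check PD health directly, as a sketch (the PD address is a placeholder, substitute your own; match the ctl version to your cluster):

# Query PD health through pd-ctl bundled with TiUP
tiup ctl:v5.3.0 pd -u http://<pd-ip>:2379 health

# Or via the PD HTTP API
curl http://<pd-ip>:2379/pd/api/v1/health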

PD log

[2021/12/18 20:14:36.611 +08:00] [INFO] [server.go:832] ["schedule config is updated"] [new="{"max-snapshot-count":40,"max-pending-peer-count":2147483647,"max-merge-region-size":0,"max-merge-region-keys":0,"split-merge-interval":"1h0m0s","enable-one-way-merge":"false","enable-cross-table-merge":"true","patrol-region-interval":"10ms","max-store-down-time":"30m0s","leader-schedule-limit":4,"leader-schedule-policy":"count","region-schedule-limit":40,"replica-schedule-limit":64,"merge-schedule-limit":8,"hot-region-schedule-limit":4,"hot-region-cache-hits-threshold":3,"store-limit":{"1":{"add-peer":15,"remove-peer":15}},"tolerant-size-ratio":0,"low-space-ratio":0.8,"high-space-ratio":0.7,"region-score-formula-version":"v2","scheduler-max-waiting-operator":5,"enable-remove-down-replica":"true","enable-replace-offline-replica":"true","enable-make-up-replica":"true","enable-remove-extra-replica":"true","enable-location-replacement":"false","enable-debug-metrics":"false","enable-joint-consensus":"true","schedulers-v2":[{"type":"balance-region","args":null,"disable":false,"args-payload":""},{"type":"balance-leader","args":null,"disable":false,"args-payload":""},{"type":"hot-region","args":null,"disable":false,"args-payload":""}],"schedulers-payload":null,"store-limit-mode":"manual","hot-regions-write-interval":"10m0s","hot-regions-reserved-days":0}"] [old="{"max-snapshot-count":64,"max-pending-peer-count":64,"max-merge-region-size":20,"max-merge-region-keys":200000,"split-merge-interval":"1h0m0s","enable-one-way-merge":"false","enable-cross-table-merge":"true","patrol-region-interval":"10ms","max-store-down-time":"30m0s","leader-schedule-limit":4,"leader-schedule-policy":"count","region-schedule-limit":2048,"replica-schedule-limit":64,"merge-schedule-limit":8,"hot-region-schedule-limit":4,"hot-region-cache-hits-threshold":3,"store-limit":{"1":{"add-peer":15,"remove-peer":15}},"tolerant-size-ratio":0,"low-space-ratio":0.8,"high-space-ratio":0.7,"region-score-formula-version":"v2","scheduler-max-waiting-operator":5,"enable-remove-down-replica":"true","enable-replace-offline-replica":"true","enable-make-up-replica":"true","enable-remove-extra-replica":"true","enable-location-replacement":"true","enable-debug-metrics":"false","enable-joint-consensus":"true","schedulers-v2":[{"type":"balance-region","args":null,"disable":false,"args-payload":""},{"type":"balance-leader","args":null,"disable":false,"args-payload":""},{"type":"hot-region","args":null,"disable":false,"args-payload":""}],"schedulers-payload":null,"store-limit-mode":"manual","hot-regions-write-interval":"10m0s","hot-regions-reserved-days":0}"]
[2021/12/18 20:14:53.958 +08:00] [INFO] [cluster_worker.go:128] ["alloc ids for region split"] [region-id=4] [peer-ids="[5]"]
[2021/12/18 20:14:53.958 +08:00] [INFO] [cluster_worker.go:128] ["alloc ids for region split"] [region-id=6] [peer-ids="[7]"]
[2021/12/18 20:14:53.958 +08:00] [INFO] [cluster_worker.go:128] ["alloc ids for region split"] [region-id=8] [peer-ids="[9]"]
[2021/12/18 20:14:53.979 +08:00] [INFO] [cluster_worker.go:220] ["region batch split, generate new regions"] [region-id=2] [origin="id:4 end_key:"7480000000000000FF3F5F728000000000FF0000010000000000FA" region_epoch:<conf_ver:1 version:4 > peers:<id:5 store_id:1 > id:6 start_key:"7480000000000000FF3F5F728000000000FF0000010000000000FA" end_key:"7480000000000000FF3F5F728000000000FF01564C0000000000FA" region_epoch:<conf_ver:1 version:4 > peers:<id:7 store_id:1 > id:8 start_key:"7480000000000000FF3F5F728000000000FF01564C0000000000FA" end_key:"7480000000000000FF3F5F728000000000FF02A8910000000000FA" region_epoch: peers:<id:9 store_id:1 >"] [total=3]
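
The config change in the first log line (max-merge-region-size and max-merge-region-keys dropped to 0, region-schedule-limit cut from 2048 to 40, enable-location-replacement turned off) is expected: Lightning's local backend temporarily throttles PD scheduling during import and restores the old values afterwards, so that entry by itself does not mean PD is down. To inspect the live values, a sketch (PD address is a placeholder):

# Show the current schedule config that the log above shows being rewritten
tiup ctl:v5.3.0 pd -u http://<pd-ip>:2379 config show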


Please post your tidb-lightning config file.


[root@oracle export]# more tidb-lightning.toml
[lightning]
# Log
level = "info"
file = "tidb-lightning.log"

[tikv-importer]
# Use the local backend
backend = "local"
# Temporary path for sorted key-value pairs; the target path must be an empty directory
sorted-kv-dir = "/datafile/"

[checkpoint]
enable = false

[mydumper]
# Source data directory.
data-source-dir = "/opt/export/data/"
# Wildcard filter rules. The default rules filter out all tables under the mysql, sys, INFORMATION_SCHEMA, PERFORMANCE_SCHEMA, METRICS_SCHEMA, and INSPECTION_SCHEMA system databases.
# Without this setting, importing the system tables raises a "schema not found" error.
filter = ['*.*', '!mysql.*', '!sys.*', '!INFORMATION_SCHEMA.*', '!PERFORMANCE_SCHEMA.*', '!METRICS_SCHEMA.*', '!INSPECTION_SCHEMA.*']

[tidb]
# Information of the target cluster
host = "192.168.1.210"
port = 4000
user = "root"
password = ""
# Table schema information is fetched from TiDB's "status port".
status-port = 10080
# Address of the cluster's PD
pd-addr = "127.0.0.1:2379"
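
Since pd-addr points at 127.0.0.1:2379, a quick sanity check from the machine running Lightning is to ask that address for the PD member list (a sketch):

curl http://127.0.0.1:2379/pd/api/v1/members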


Don't use 127.0.0.1; try the real PD IP instead.


But PD and Lightning are on the same machine.

Also, I've found that importing a small database works fine; it only fails like this when the database files are large.

Try switching the IP first: use the PD leader IP from the tiup cluster display output.
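
A sketch of that, assuming the cluster is named tidb-test (substitute your own cluster name):

# The PD instance whose Status column shows "Up|L" is the current leader
tiup cluster display tidb-test

# Then point Lightning at that address in tidb-lightning.toml, e.g.
#   pd-addr = "<pd-leader-ip>:2379"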

This topic was automatically closed 1 minute after the last reply. New replies are no longer allowed.