tidb-lightning


To help us work more efficiently, please provide the following information; a clear problem description gets resolved faster:

【Overview】 Scenario + problem summary
Incorrect exit status code on failure
【Backup and data migration strategy】

【Background】 Operations performed

【Symptoms】 Business and database symptoms

# tidb-lightning --config tidb_restore_Xxx.toml 
Verbose debug logs will be written to tidb-lightning.log

[2021/03/24 11:08:53.719 +08:00] [INFO] [client.go:193] ["[pd] create pd client with endpoints"] [pd-address="[172.16.12.150:2379]"]
[2021/03/24 11:08:53.722 +08:00] [INFO] [base_client.go:296] ["[pd] update member urls"] [old-urls="[http://172.16.12.150:2379]"] [new-urls="[http://172.16.12.128:2379,http://172.16.12.150:2379,http://172.16.12.217:2379]"]
[2021/03/24 11:08:53.722 +08:00] [INFO] [base_client.go:308] ["[pd] switch leader"] [new-leader=http://172.16.12.128:2379] [old-leader=]
[2021/03/24 11:08:53.722 +08:00] [INFO] [base_client.go:112] ["[pd] init cluster id"] [cluster-id=6939715598709240559]
[2021/03/24 11:08:54.503 +08:00] [INFO] [client.go:193] ["[pd] create pd client with endpoints"] [pd-address="[172.16.12.150:2379]"]
[2021/03/24 11:08:54.505 +08:00] [INFO] [base_client.go:296] ["[pd] update member urls"] [old-urls="[http://172.16.12.150:2379]"] [new-urls="[http://172.16.12.128:2379,http://172.16.12.150:2379,http://172.16.12.217:2379]"]
[2021/03/24 11:08:54.505 +08:00] [INFO] [base_client.go:308] ["[pd] switch leader"] [new-leader=http://172.16.12.128:2379] [old-leader=]
[2021/03/24 11:08:54.505 +08:00] [INFO] [base_client.go:112] ["[pd] init cluster id"] [cluster-id=6939715598709240559]
[2021/03/24 11:08:54.531 +08:00] [INFO] [pd.go:362] ["pause scheduler successful at beginning"] [name="[balance-leader-scheduler,balance-region-scheduler,balance-hot-region-scheduler]"]
[2021/03/24 11:08:54.542 +08:00] [INFO] [pd.go:370] ["pause configs successful at beginning"] [cfg="{\"enable-location-replacement\":\"false\",\"leader-schedule-limit\":12,\"max-merge-region-keys\":0,\"max-merge-region-size\":0,\"max-pending-peer-count\":2147483647,\"max-snapshot-count\":9,\"region-schedule-limit\":40}"]
[2021/03/24 11:08:54.543 +08:00] [INFO] [client.go:193] ["[pd] create pd client with endpoints"] [pd-address="[172.16.12.150:2379]"]
[2021/03/24 11:08:54.546 +08:00] [INFO] [base_client.go:296] ["[pd] update member urls"] [old-urls="[http://172.16.12.150:2379]"] [new-urls="[http://172.16.12.128:2379,http://172.16.12.150:2379,http://172.16.12.217:2379]"]
[2021/03/24 11:08:54.546 +08:00] [INFO] [base_client.go:308] ["[pd] switch leader"] [new-leader=http://172.16.12.128:2379] [old-leader=]
[2021/03/24 11:08:54.546 +08:00] [INFO] [base_client.go:112] ["[pd] init cluster id"] [cluster-id=6939715598709240559]
[2021/03/24 11:08:54.556 +08:00] [INFO] [client.go:193] ["[pd] create pd client with endpoints"] [pd-address="[172.16.12.150:2379]"]
[2021/03/24 11:08:54.558 +08:00] [INFO] [base_client.go:296] ["[pd] update member urls"] [old-urls="[http://172.16.12.150:2379]"] [new-urls="[http://172.16.12.128:2379,http://172.16.12.150:2379,http://172.16.12.217:2379]"]
[2021/03/24 11:08:54.558 +08:00] [INFO] [base_client.go:308] ["[pd] switch leader"] [new-leader=http://172.16.12.128:2379] [old-leader=]
[2021/03/24 11:08:54.558 +08:00] [INFO] [base_client.go:112] ["[pd] init cluster id"] [cluster-id=6939715598709240559]
[2021/03/24 11:08:54.600 +08:00] [INFO] [pd.go:409] ["resume scheduler"] [schedulers="[balance-leader-scheduler,balance-region-scheduler,balance-hot-region-scheduler]"]
[2021/03/24 11:08:54.600 +08:00] [INFO] [pd.go:395] ["exit pause scheduler and configs successful"]
[2021/03/24 11:08:54.601 +08:00] [INFO] [pd.go:429] ["resume scheduler successful"] [scheduler=balance-leader-scheduler]
[2021/03/24 11:08:54.602 +08:00] [INFO] [pd.go:429] ["resume scheduler successful"] [scheduler=balance-region-scheduler]
[2021/03/24 11:08:54.603 +08:00] [INFO] [pd.go:429] ["resume scheduler successful"] [scheduler=balance-hot-region-scheduler]
[2021/03/24 11:08:54.603 +08:00] [INFO] [pd.go:520] ["restoring config"] [config="{\"enable-cross-table-merge\":\"false\",\"enable-debug-metrics\":\"false\",\"enable-location-replacement\":\"true\",\"enable-make-up-replica\":\"true\",\"enable-one-way-merge\":\"false\",\"enable-remove-down-replica\":\"true\",\"enable-remove-extra-replica\":\"true\",\"enable-replace-offline-replica\":\"true\",\"high-space-ratio\":0.7,\"hot-region-cache-hits-threshold\":3,\"hot-region-schedule-limit\":4,\"leader-schedule-limit\":4,\"leader-schedule-policy\":\"count\",\"low-space-ratio\":0.8,\"max-merge-region-keys\":200000,\"max-merge-region-size\":20,\"max-pending-peer-count\":16,\"max-snapshot-count\":3,\"max-store-down-time\":\"30m0s\",\"merge-schedule-limit\":8,\"patrol-region-interval\":\"100ms\",\"region-schedule-limit\":2048,\"replica-schedule-limit\":64,\"scheduler-max-waiting-operator\":5,\"schedulers-payload\":null,\"schedulers-v2\":[{\"args\":null,\"args-payload\":\"\",\"disable\":false,\"type\":\"balance-region\"},{\"args\":null,\"args-payload\":\"\",\"disable\":false,\"type\":\"balance-leader\"},{\"args\":null,\"args-payload\":\"\",\"disable\":false,\"type\":\"hot-region\"},{\"args\":null,\"args-payload\":\"\",\"disable\":false,\"type\":\"label\"}],\"split-merge-interval\":\"1h0m0s\",\"store-limit\":{\"1\":{\"add-peer\":15,\"remove-peer\":15},\"4\":{\"add-peer\":15,\"remove-peer\":15},\"5\":{\"add-peer\":15,\"remove-peer\":15}},\"store-limit-mode\":\"manual\",\"tolerant-size-ratio\":0}"]
Error: TiDB Lightning has failed last time; please resolve these errors first
tidb lightning encountered error:  TiDB Lightning has failed last time; please resolve these errors first

【Problem】 The issue currently encountered
As shown above, because the previous import had failed, the run reports "Error: TiDB Lightning has failed last time; please resolve these errors first", yet the process still exits with status 0. In a shell, a 0 exit status means success, so scripts that check the status code get the wrong result. Could an os.Exit() with a non-zero code be added for this case?
Judging from the error, the function involved is func (rc *RestoreController) restoreTables(ctx context.Context) error.
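The request boils down to: when the restore returns an error, the process should terminate with a non-zero status instead of 0. A minimal sketch of the desired behavior (not the actual tidb-lightning code; `runLightning` below is a hypothetical stand-in for the real entry point that eventually calls `restoreTables`):

```go
package main

import (
	"errors"
	"fmt"
	"os"
)

// runLightning is a hypothetical stand-in for the import entry point;
// in the real code base this is the path that surfaces the error
// returned by (*RestoreController).restoreTables.
func runLightning() error {
	return errors.New("TiDB Lightning has failed last time; please resolve these errors first")
}

func main() {
	if err := runLightning(); err != nil {
		// Print the error as the log already does...
		fmt.Fprintln(os.Stderr, "tidb lightning encountered error:", err)
		// ...but also exit with a non-zero status so that callers
		// can detect the failure from the process exit code.
		os.Exit(1)
	}
	// Reaching here means the import succeeded; the process exits with 0.
}
```

With that behavior, a wrapper script could simply check `$?` (or chain `tidb-lightning --config ... && next_step`) to tell whether the restore succeeded.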

【Business impact】
A shell script cannot reliably tell from the exit status whether the data restore succeeded; the process exit code alone is not enough to determine success or failure.
【TiDB version】

【Attachments】

  • Relevant logs, configuration files, and Grafana monitoring (https://metricstool.pingcap.com/)
  • TiUP Cluster Display output
  • TiUP Cluster Edit Config output
  • TiDB-Overview monitoring
  • Grafana dashboards for the relevant components (e.g. BR, TiDB-binlog, TiCDC)
  • Logs of the relevant components (covering 1 hour before and after the issue)

For performance tuning or troubleshooting questions, please download and run the diagnostic script, then select all of the terminal output, copy it, and paste it here.

Is this a feature request?

More or less, yes.

OK, I'll update the tags; the relevant colleagues will follow up.

Thanks :grinning:

:handshake::handshake: