tidb br backup succeeds, restore fails

  • [TiDB version]: tidb v4.0.0, br v4.0.0
  • [Problem description]: BR v4.0.0 backup succeeds, restore fails

We deployed a single-node TiDB environment with tiup to verify the BR backup feature. We first took a full backup of the database (a single db with roughly 20 million rows), then wrote some incremental data, and then ran br restore to roll back to the SST files taken before the incremental writes, expecting the data to be rolled back successfully.
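
For reference, the initial full backup would have been taken with a br backup full invocation roughly like the one below. This is only a sketch: the storage path matches the restore command further down, while the --ratelimit and --log-file values here are illustrative assumptions, not the exact ones used.

$ ./tools/tidb-toolkit/bin/br backup full --pd "10.217.58.231:2379" --storage "local:///home/tidb/backup-shbt/br-backupfull-v1" --ratelimit 120 --log-file ./br-logs/backupfull-v1.log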

The failing command and its output:

$ ./tools/tidb-toolkit/bin/br restore full --pd "10.217.58.231:2379" --storage "local:///home/tidb/backup-shbt/br-backupfull-v1" --ratelimit 10 --log-file ./br-logs/backupfull-v1-restore-2.log
Detial BR log in ./br-logs/backupfull-v1-restore-2.log
Full restore <-----------------------------------------------------------------------------------------------------------> 100.00%
Checksum <-------------------------------------------------------/........................................................> 50.00%
Error: failed to validate checksum

Key error log entries:

[2020/06/05 10:51:54.632 +08:00] [ERROR] [client.go:180] ["tso request is canceled due to timeout"] [stack="github.com/pingcap/log.Error\
\t/go/pkg/mod/github.com/pingcap/log@v0.0.0-20200117041106-d28c14d3b1cd/global.go:42\
github.com/pingcap/pd/v4/client.(*client).tsCancelLoop\
\t/go/pkg/mod/github.com/pingcap/pd/v4@v4.0.0-rc.1.0.20200511074607-3bb650739add/client/client.go:180"]
[2020/06/05 10:51:54.632 +08:00] [ERROR] [client.go:265] ["[pd] getTS error"] [error="rpc error: code = Canceled desc = context canceled"] [errorVerbose="rpc error: code = Canceled desc = context canceled\
github.com/pingcap/pd/v4/client.(*client).processTSORequests\
\t/go/pkg/mod/github.com/pingcap/pd/v4@v4.0.0-rc.1.0.20200511074607-3bb650739add/client/client.go:301\
github.com/pingcap/pd/v4/client.(*client).tsLoop\
\t/go/pkg/mod/github.com/pingcap/pd/v4@v4.0.0-rc.1.0.20200511074607-3bb650739add/client/client.go:250\
runtime.goexit\
\t/usr/local/go/src/runtime/asm_amd64.s:1357"] [stack="github.com/pingcap/log.Error\
\t/go/pkg/mod/github.com/pingcap/log@v0.0.0-20200117041106-d28c14d3b1cd/global.go:42\
github.com/pingcap/pd/v4/client.(*client).tsLoop\
\t/go/pkg/mod/github.com/pingcap/pd/v4@v4.0.0-rc.1.0.20200511074607-3bb650739add/client/client.go:265"]
[2020/06/05 10:51:54.634 +08:00] [ERROR] [base_client.go:130] ["[pd] failed updateLeader"] [error="failed to get leader from [http://10.217.58.231:2379]"] [errorVerbose="failed to get leader from [http://10.217.58.231:2379]\
github.com/pingcap/pd/v4/client.(*baseClient).updateLeader\
\t/go/pkg/mod/github.com/pingcap/pd/v4@v4.0.0-rc.1.0.20200511074607-3bb650739add/client/base_client.go:198\
github.com/pingcap/pd/v4/client.(*baseClient).leaderLoop\
\t/go/pkg/mod/github.com/pingcap/pd/v4@v4.0.0-rc.1.0.20200511074607-3bb650739add/client/base_client.go:129\
runtime.goexit\
\t/usr/local/go/src/runtime/asm_amd64.s:1357"] [stack="github.com/pingcap/log.Error\
\t/go/pkg/mod/github.com/pingcap/log@v0.0.0-20200117041106-d28c14d3b1cd/global.go:42\
github.com/pingcap/pd/v4/client.(*baseClient).leaderLoop\
\t/go/pkg/mod/github.com/pingcap/pd/v4@v4.0.0-rc.1.0.20200511074607-3bb650739add/client/base_client.go:130"]
...
[2020/06/05 10:52:07.199 +08:00] [ERROR] [client.go:265] ["[pd] getTS error"] [error="rpc error: code = Unknown desc = alloc timestamp failed, lease expired"] [errorVerbose="rpc error: code = Unknown desc = alloc timestamp failed, lease expired\
github.com/pingcap/pd/v4/client.(*client).processTSORequests\
\t/go/pkg/mod/github.com/pingcap/pd/v4@v4.0.0-rc.1.0.20200511074607-3bb650739add/client/client.go:301\
github.com/pingcap/pd/v4/client.(*client).tsLoop\
\t/go/pkg/mod/github.com/pingcap/pd/v4@v4.0.0-rc.1.0.20200511074607-3bb650739add/client/client.go:250\
runtime.goexit\
\t/usr/local/go/src/runtime/asm_amd64.s:1357"] [stack="github.com/pingcap/log.Error\
\t/go/pkg/mod/github.com/pingcap/log@v0.0.0-20200117041106-d28c14d3b1cd/global.go:42\
github.com/pingcap/pd/v4/client.(*client).tsLoop\
\t/go/pkg/mod/github.com/pingcap/pd/v4@v4.0.0-rc.1.0.20200511074607-3bb650739add/client/client.go:265"]
[2020/06/05 10:52:07.631 +08:00] [ERROR] [client.go:265] ["[pd] getTS error"] [error="rpc error: code = Unknown desc = alloc timestamp failed, lease expired"] [errorVerbose="rpc error: code = Unknown desc = alloc timestamp failed, lease expired\
github.com/pingcap/pd/v4/client.(*client).processTSORequests\
\t/go/pkg/mod/github.com/pingcap/pd/v4@v4.0.0-rc.1.0.20200511074607-3bb650739add/client/client.go:301\
github.com/pingcap/pd/v4/client.(*client).tsLoop\
\t/go/pkg/mod/github.com/pingcap/pd/v4@v4.0.0-rc.1.0.20200511074607-3bb650739add/client/client.go:250\
runtime.goexit\
\t/usr/local/go/src/runtime/asm_amd64.s:1357"] [stack="github.com/pingcap/log.Error\
\t/go/pkg/mod/github.com/pingcap/log@v0.0.0-20200117041106-d28c14d3b1cd/global.go:42\
github.com/pingcap/pd/v4/client.(*client).tsLoop\
\t/go/pkg/mod/github.com/pingcap/pd/v4@v4.0.0-rc.1.0.20200511074607-3bb650739add/client/client.go:265"]
[2020/06/05 10:52:07.632 +08:00] [ERROR] [client.go:265] ["[pd] getTS error"] [error="rpc error: code = Unknown desc = alloc timestamp failed, lease expired"] [errorVerbose="rpc error: code = Unknown desc = alloc timestamp failed, lease expired\
github.com/pingcap/pd/v4/client.(*client).processTSORequests\
\t/go/pkg/mod/github.com/pingcap/pd/v4@v4.0.0-rc.1.0.20200511074607-3bb650739add/client/client.go:301\
github.com/pingcap/pd/v4/client.(*client).tsLoop\
\t/go/pkg/mod/github.com/pingcap/pd/v4@v4.0.0-rc.1.0.20200511074607-3bb650739add/client/client.go:250\
runtime.goexit\
\t/usr/local/go/src/runtime/asm_amd64.s:1357"] [stack="github.com/pingcap/log.Error\
\t/go/pkg/mod/github.com/pingcap/log@v0.0.0-20200117041106-d28c14d3b1cd/global.go:42\
github.com/pingcap/pd/v4/client.(*client).tsLoop\
\t/go/pkg/mod/github.com/pingcap/pd/v4@v4.0.0-rc.1.0.20200511074607-3bb650739add/client/client.go:265"]
[2020/06/05 10:52:07.676 +08:00] [ERROR] [client.go:265] ["[pd] getTS error"] [error="rpc error: code = Unknown desc = alloc timestamp failed, lease expired"] [errorVerbose="rpc error: code = Unknown desc = alloc timestamp failed, lease expired\
github.com/pingcap/pd/v4/client.(*client).processTSORequests\
\t/go/pkg/mod/github.com/pingcap/pd/v4@v4.0.0-rc.1.0.20200511074607-3bb650739add/client/client.go:301\
github.com/pingcap/pd/v4/client.(*client).tsLoop\
\t/go/pkg/mod/github.com/pingcap/pd/v4@v4.0.0-rc.1.0.20200511074607-3bb650739add/client/client.go:250\
runtime.goexit\
\t/usr/local/go/src/runtime/asm_amd64.s:1357"] [stack="github.com/pingcap/log.Error\
\t/go/pkg/mod/github.com/pingcap/log@v0.0.0-20200117041106-d28c14d3b1cd/global.go:42\
github.com/pingcap/pd/v4/client.(*client).tsLoop\
\t/go/pkg/mod/github.com/pingcap/pd/v4@v4.0.0-rc.1.0.20200511074607-3bb650739add/client/client.go:265"]
[2020/06/05 10:54:18.893 +08:00] [ERROR] [import.go:255] ["download file skipped"] [file="name:\"2_4_43_ac54fb69a845e5f8a19f1b37546e0d399924841e59119d78a7799b41b03914dd_write.sst\" sha256:\"\\220\\272=X\\372\\273\\022\\307y\\311X\\333v2\\332i\\013\\341 \\027\\240c\\276\\315B\\017A\\217\\370\\344\\266\\331\" start_key:\"t\\200\\000\\000\\000\\000\\000\\0006_r\\200\\000\\000\\000\\001\\036@U\" end_key:\"t\\200\\000\\000\\000\\000\\000\\0006_r\\377\\377\\377\\377\\377\\377\\377\\377\\000\" end_version:417155418796326913 crc64xor:5343193373669625115 total_kvs:1036787 total_bytes:77759025 cf:\"write\" size:47050462 "] [region="id:289 start_key:\"t\\200\\000\\000\\000\\000\\000\\000\\3776_r\\200\\000\\000\\000\\001\\377;\\312-\\000\\000\\000\\000\\000\\372\" end_key:\"t\\200\\000\\000\\000\\000\\000\\000\\3776_r\\200\\000\\000\\000\\001\\377I\\310e\\000\\000\\000\\000\\000\\372\" region_epoch:<conf_ver:5 version:47 > peers:<id:290 store_id:1 > peers:<id:291 store_id:2 > peers:<id:292 store_id:3 > "] [startKey=dIAAAAAAAAD/Nl9ygAAAAAH/HkBVAAAAAAD6] [endKey=dIAAAAAAAAD/Nl9y////////////AAAAAAD7] [error="range is empty"] [stack="github.com/pingcap/log.Error\
\t/go/pkg/mod/github.com/pingcap/log@v0.0.0-20200117041106-d28c14d3b1cd/global.go:42\
github.com/pingcap/br/pkg/restore.(*FileImporter).Import.func1\
\t/home/jenkins/agent/workspace/build_br_multi_branch_v4.0.0/go/src/github.com/pingcap/br/pkg/restore/import.go:255\
github.com/pingcap/br/pkg/utils.WithRetry\
\t/home/jenkins/agent/workspace/build_br_multi_branch_v4.0.0/go/src/github.com/pingcap/br/pkg/utils/retry.go:34\
github.com/pingcap/br/pkg/restore.(*FileImporter).Import\
\t/home/jenkins/agent/workspace/build_br_multi_branch_v4.0.0/go/src/github.com/pingcap/br/pkg/restore/import.go:208\
github.com/pingcap/br/pkg/restore.(*Client).RestoreFiles.func2\
\t/home/jenkins/agent/workspace/build_br_multi_branch_v4.0.0/go/src/github.com/pingcap/br/pkg/restore/client.go:493\
github.com/pingcap/br/pkg/utils.(*WorkerPool).Apply.func1\
\t/home/jenkins/agent/workspace/build_br_multi_branch_v4.0.0/go/src/github.com/pingcap/br/pkg/utils/worker.go:47"]
[2020/06/05 10:54:22.631 +08:00] [ERROR] [client.go:180] ["tso request is canceled due to timeout"] [stack="github.com/pingcap/log.Error\
\t/go/pkg/mod/github.com/pingcap/log@v0.0.0-20200117041106-d28c14d3b1cd/global.go:42\
github.com/pingcap/pd/v4/client.(*client).tsCancelLoop\
\t/go/pkg/mod/github.com/pingcap/pd/v4@v4.0.0-rc.1.0.20200511074607-3bb650739add/client/client.go:180"]
[2020/06/05 10:54:22.631 +08:00] [ERROR] [client.go:265] ["[pd] getTS error"] [error="rpc error: code = Canceled desc = context canceled"] [errorVerbose="rpc error: code = Canceled desc = context canceled\
github.com/pingcap/pd/v4/client.(*client).processTSORequests\
\t/go/pkg/mod/github.com/pingcap/pd/v4@v4.0.0-rc.1.0.20200511074607-3bb650739add/client/client.go:301\
github.com/pingcap/pd/v4/client.(*client).tsLoop\
\t/go/pkg/mod/github.com/pingcap/pd/v4@v4.0.0-rc.1.0.20200511074607-3bb650739add/client/client.go:250\
runtime.goexit\
\t/usr/local/go/src/runtime/asm_amd64.s:1357"] [stack="github.com/pingcap/log.Error\
\t/go/pkg/mod/github.com/pingcap/log@v0.0.0-20200117041106-d28c14d3b1cd/global.go:42\
github.com/pingcap/pd/v4/client.(*client).tsLoop\
\t/go/pkg/mod/github.com/pingcap/pd/v4@v4.0.0-rc.1.0.20200511074607-3bb650739add/client/client.go:265"]
[2020/06/05 10:54:22.633 +08:00] [ERROR] [base_client.go:130] ["[pd] failed updateLeader"] [error="failed to get leader from [http://10.217.58.231:2379]"] [errorVerbose="failed to get leader from [http://10.217.58.231:2379]\
github.com/pingcap/pd/v4/client.(*baseClient).updateLeader\
\t/go/pkg/mod/github.com/pingcap/pd/v4@v4.0.0-rc.1.0.20200511074607-3bb650739add/client/base_client.go:198\
github.com/pingcap/pd/v4/client.(*baseClient).leaderLoop\
\t/go/pkg/mod/github.com/pingcap/pd/v4@v4.0.0-rc.1.0.20200511074607-3bb650739add/client/base_client.go:129\
runtime.goexit\
\t/usr/local/go/src/runtime/asm_amd64.s:1357"] [stack="github.com/pingcap/log.Error\
\t/go/pkg/mod/github.com/pingcap/log@v0.0.0-20200117041106-d28c14d3b1cd/global.go:42\
github.com/pingcap/pd/v4/client.(*baseClient).leaderLoop\
\t/go/pkg/mod/github.com/pingcap/pd/v4@v4.0.0-rc.1.0.20200511074607-3bb650739add/client/base_client.go:130"]
[2020/06/05 10:54:22.802 +08:00] [ERROR] [base_client.go:130] ["[pd] failed updateLeader"] [error="failed to get leader from [http://10.217.58.231:2379]"] [errorVerbose="failed to get leader from [http://10.217.58.231:2379]\
github.com/pingcap/pd/v4/client.(*baseClient).updateLeader\
\t/go/pkg/mod/github.com/pingcap/pd/v4@v4.0.0-rc.1.0.20200511074607-3bb650739add/client/base_client.go:198\
github.com/pingcap/pd/v4/client.(*baseClient).leaderLoop\
\t/go/pkg/mod/github.com/pingcap/pd/v4@v4.0.0-rc.1.0.20200511074607-3bb650739add/client/base_client.go:129\
runtime.goexit\
\t/usr/local/go/src/runtime/asm_amd64.s:1357"] [stack="github.com/pingcap/log.Error\
\t/go/pkg/mod/github.com/pingcap/log@v0.0.0-20200117041106-d28c14d3b1cd/global.go:42\
github.com/pingcap/pd/v4/client.(*baseClient).leaderLoop\
\t/go/pkg/mod/github.com/pingcap/pd/v4@v4.0.0-rc.1.0.20200511074607-3bb650739add/client/base_client.go:130"]
[2020/06/05 10:54:25.632 +08:00] [ERROR] [client.go:180] ["tso request is canceled due to timeout"] [stack="github.com/pingcap/log.Error\
\t/go/pkg/mod/github.com/pingcap/log@v0.0.0-20200117041106-d28c14d3b1cd/global.go:42\
github.com/pingcap/pd/v4/client.(*client).tsCancelLoop\
\t/go/pkg/mod/github.com/pingcap/pd/v4@v4.0.0-rc.1.0.20200511074607-3bb650739add/client/client.go:180"]

Could you check whether the PD nodes are all healthy? The log shows failed to get leader from [http://10.217.58.231:2379].

Run tiup cluster display to check the cluster status.

Run tiup ctl pd -u pd_ip:pd_port member to check the PD member status.

# tiup ctl pd -u 10.217.58.231:2379 member
The component `ctl` is not installed; downloading from repository.
download https://tiup-mirrors.pingcap.com/ctl-v4.0.0-linux-amd64.tar.gz 127.08 MiB / 127.08 MiB 100.00% 10.80 MiB p/s
Starting component `ctl`: /root/.tiup/components/ctl/v4.0.0/ctl pd -u 10.217.58.231:2379 member
{
  "header": {
    "cluster_id": 6834357454104988796
  },
  "members": [
    {
      "name": "pd-10.217.58.231-2379",
      "member_id": 5867764794415365566,
      "peer_urls": [
        "http://10.217.58.231:2380"
      ],
      "client_urls": [
        "http://10.217.58.231:2379"
      ],
      "deploy_path": "/tidb-deploy/pd-2379/bin",
      "binary_version": "v4.0.0",
      "git_hash": "56d4c3d2237f5bf6fb11a794731ed1d95c8020c2"
    }
  ],
  "leader": {
    "name": "pd-10.217.58.231-2379",
    "member_id": 5867764794415365566,
    "peer_urls": [
      "http://10.217.58.231:2380"
    ],
    "client_urls": [
      "http://10.217.58.231:2379"
    ]
  },
  "etcd_leader": {
    "name": "pd-10.217.58.231-2379",
    "member_id": 5867764794415365566,
    "peer_urls": [
      "http://10.217.58.231:2380"
    ],
    "client_urls": [
      "http://10.217.58.231:2379"
    ],
    "deploy_path": "/tidb-deploy/pd-2379/bin",
    "binary_version": "v4.0.0",
    "git_hash": "56d4c3d2237f5bf6fb11a794731ed1d95c8020c2"
  }
}
# tiup cluster display polefs-tidb
Starting component `cluster`: /root/.tiup/components/cluster/v1.0.1/tiup-cluster display polefs-tidb
TiDB Cluster: polefs-tidb
TiDB Version: v4.0.0
ID                   Role        Host           Ports                            OS/Arch       Status        Data Dir                    Deploy Dir
--                   ----        ----           -----                            -------       ------        --------                    ----------
10.217.58.231:3000   grafana     10.217.58.231  3000                             linux/x86_64  Up            -                           /tidb-deploy/grafana-3000
10.217.58.231:2379   pd          10.217.58.231  2379/2380                        linux/x86_64  Healthy|L|UI  /tidb-data/pd-2379          /tidb-deploy/pd-2379
10.217.58.231:9090   prometheus  10.217.58.231  9090                             linux/x86_64  Up            /tidb-data/prometheus-9090  /tidb-deploy/prometheus-9090
10.217.58.231:4000   tidb        10.217.58.231  4000/10080                       linux/x86_64  Up            -                           /tidb-deploy/tidb-4000
10.217.58.231:9000   tiflash     10.217.58.231  9000/8123/3930/20170/20292/8234  linux/x86_64  Up            /tidb-data/tiflash-9000     /tidb-deploy/tiflash-9000
10.217.58.231:20160  tikv        10.217.58.231  20160/20180                      linux/x86_64  Up            /tidb-data/tikv-20160       /tidb-deploy/tikv-20160
10.217.58.231:20161  tikv        10.217.58.231  20161/20181                      linux/x86_64  Up            /tidb-data/tikv-20161       /tidb-deploy/tikv-20161
10.217.58.231:20162  tikv        10.217.58.231  20162/20182                      linux/x86_64  Up            /tidb-data/tikv-20162       /tidb-deploy/tikv-20162

@gangshen-PingCAP Ping

When you ran the restore, was the TiDB cluster a brand-new cluster?

No, I used the same cluster: backup first, then restore. However, after switching to a different disk the restore also succeeded (restoring back into the same cluster, not a brand-new one).

$ ./tools/tidb-toolkit/bin/br restore full --pd "10.217.58.231:2379" --storage "local:///home/tidb/ceph-disk" --ratelimit 4 --log-file ./br-logs/backupfull-local-v2-restore.log
Detial BR log in ./br-logs/backupfull-local-v2-restore.log
Full restore <------------------------------------------------------------------------------------------------------------------------------------------------------> 100.00%
Checksum <----------------------------------------------------------------------------------------------------------------------------------------------------------> 100.00%
{"level":"warn","ts":"2020-06-05T12:07:12.402+0800","caller":"clientv3/retry_interceptor.go:61","msg":"retrying of unary invoker failed","target":"endpoint://client-1975316c-2a69-48b9-bb3b-fc15fe80f4a3/10.217.58.231:2379","attempt":0,"error":"rpc error: code = Unavailable desc = transport is closing"}
[2020/06/05 12:07:12.402 +08:00] [INFO] [collector.go:58] ["Full restore Success summary: total restore files: 33, total success: 33, total failed: 0, total take(s): 477.56, total kv: 26777015, total size(MB): 1915.24, avg speed(MB/s): 4.01"] ["split region"=1.997889ms] ["restore checksum"=4.113676028s] ["restore ranges"=33]

@gangshen-PingCAP Thanks for your help. I ran another test on the same cluster, and the restore succeeded after lowering the restore --ratelimit value, so I suspect the internal communication timeouts were caused by disk I/O pressure. Since the goal of our test is to verify data integrity across the BR backup & restore workflow, if the commands complete successfully I think that already validates the TiKV data backup and restore process, right?

Command execution record:

[tidb@p43027v ~]$ ./tools/tidb-toolkit/bin/br backup full --pd "10.217.58.231:2379" --storage "local:///home/tidb/backup-shbt/br-backupfull-v2" --ratelimit 120 --log-file ./br-logs/backupfull-v2.log
Detial BR log in ./br-logs/backupfull-v2.log
Full backup <-------------------------------------------------------------------------------------------------------------------------------------------------------> 100.00%
Checksum <----------------------------------------------------------------------------------------------------------------------------------------------------------> 100.00%
[2020/06/05 13:24:22.397 +08:00] [INFO] [collector.go:58] ["Full backup Success summary: total backup ranges: 2, total success: 2, total failed: 0, total take(s): 36.25, total kv: 26777015, total size(MB): 1915.24, avg speed(MB/s): 52.83"] ["backup checksum"=3.137410846s] ["backup fast checksum"=594.72µs] ["backup total regions"=35]
[tidb@p43027v ~]$
[tidb@p43027v ~]$ ./tools/tidb-toolkit/bin/br restore full --pd "10.217.58.231:2379" --storage "local:///home/tidb/backup-shbt/br-backupfull-v2" --ratelimit 4 --log-file ./br-logs/backupfull-v2-restore.log
Detial BR log in ./br-logs/backupfull-v2-restore.log
Full restore <------------------------------------------------------------------------------------------------------------------------------------------------------> 100.00%
Checksum <----------------------------------------------------------------------------------------------------------------------------------------------------------> 100.00%
[2020/06/05 13:33:00.093 +08:00] [INFO] [collector.go:58] ["Full restore Success summary: total restore files: 33, total success: 33, total failed: 0, total take(s): 395.83, total kv: 26777015, total size(MB): 1915.24, avg speed(MB/s): 4.84"] ["split region"=1.485246ms] ["restore checksum"=3.836594726s] ["restore ranges"=33]
[tidb@p43027v ~]$

You can do a simple sanity check with select count(*) to see whether the row counts differ.
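
For example, a quick row-count comparison can be run against the TiDB server (port 4000 per the cluster display above) before the drop and after the restore; the database and table names below are placeholders for your own schema.

$ mysql -h 10.217.58.231 -P 4000 -u root -e "SELECT COUNT(*) FROM mydb.mytable;"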

OK, I redid the test and the verification passed :grinning:

  • 1. Full backup
  • 2. drop database (sketched below)
  • 3. restore tikv sst

(# the start/restart should come after dropping the table)
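
Step 2 above was done from the SQL client, along the lines of the following; the database name is a placeholder for the one that was backed up.

$ mysql -h 10.217.58.231 -P 4000 -u root -e "DROP DATABASE mydb;"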

Restore command:

$ ./tools/tidb-toolkit/bin/br restore full --pd "10.217.58.231:2379" --storage "local:///home/tidb/backup-shbt/br-backupfull-v1" --ratelimit 4 --log-file ./br-logs/backupfull-v1-restore-abc.log
Detial BR log in ./br-logs/backupfull-v1-restore-abc.log
Full restore <------------------------------------------------------------------------------------------------------------------------------------------------------> 100.00%
Checksum <----------------------------------------------------------------------------------------------------------------------------------------------------------> 100.00%
[2020/06/05 14:24:56.955 +08:00] [INFO] [collector.go:58] ["Full restore Success summary: total restore files: 23, total success: 23, total failed: 0, total take(s): 287.98, total kv: 17750015, total size(MB): 1269.58, avg speed(MB/s): 4.41"] ["split region"=555.795447ms] ["restore checksum"=10.652134082s] ["restore ranges"=23]

:+1::+1::+1:

This topic was automatically closed 1 minute after the last reply. New replies are no longer allowed.