BR data restore

【TiDB Environment】Test environment
【TiDB Version】7.5.3
【Problem Encountered: Symptoms and Impact】
[tidb@wisdom-haproxy01 v7.1.2]$ tiup cluster display tidb-yanlian
Checking updates for component cluster… Timedout (after 2s)
Cluster type: tidb
Cluster name: tidb-yanlian
Cluster version: v7.5.3
Deploy user: tidb
SSH type: builtin
Dashboard URL: http://175.41.128.81:2379/dashboard
Grafana URL: http://175.41.128.80:3000
ID Role Host Ports OS/Arch Status Data Dir Deploy Dir


175.41.128.80:9093 alertmanager 175.41.128.80 9093/9094 linux/x86_64 Up /data/tidb-data/alertmanager-9093 /data/tidb-deploy/alertmanager-9093
175.41.128.81:8300 cdc 175.41.128.81 8300 linux/x86_64 Down /data/tidb-data/cdc-8300 /data/tidb-deploy/cdc-8300
175.41.128.82:8300 cdc 175.41.128.82 8300 linux/x86_64 Down /data/tidb-data/cdc-8300 /data/tidb-deploy/cdc-8300
175.41.128.80:3000 grafana 175.41.128.80 3000 linux/x86_64 Up - /data/tidb-deploy/grafana-3000
175.41.128.81:2379 pd 175.41.128.81 2379/2380 linux/x86_64 Up|UI /data/tidb-data/pd-2379 /data/tidb-deploy/pd-2379
175.41.128.82:2379 pd 175.41.128.82 2379/2380 linux/x86_64 Up /data/tidb-data/pd-2379 /data/tidb-deploy/pd-2379
175.41.128.83:2379 pd 175.41.128.83 2379/2380 linux/x86_64 Up|L /data/tidb-data/pd-2379 /data/tidb-deploy/pd-2379
175.41.128.80:9090 prometheus 175.41.128.80 9090/12020 linux/x86_64 Up /data/tidb-data/prometheus-9090 /data/tidb-deploy/prometheus-9090
175.41.128.81:4000 tidb 175.41.128.81 4000/10080 linux/x86_64 Up - /data/tidb-deploy/tidb-4000
175.41.128.82:4000 tidb 175.41.128.82 4000/10080 linux/x86_64 Up - /data/tidb-deploy/tidb-4000
175.41.128.83:4000 tidb 175.41.128.83 4000/10080 linux/x86_64 Up - /data/tidb-deploy/tidb-4000
175.41.128.84:20160 tikv 175.41.128.84 20160/20180 linux/x86_64 Up /data/tidb-data/tikv-20160 /data/tidb-deploy/tikv-20160
175.41.128.85:20160 tikv 175.41.128.85 20160/20180 linux/x86_64 Up /data/tidb-data/tikv-20160 /data/tidb-deploy/tikv-20160
175.41.128.86:20160 tikv 175.41.128.86 20160/20180 linux/x86_64 Up /data/tidb-data/tikv-20160 /data/tidb-deploy/tikv-20160
Total nodes: 14
[tidb@wisdom-haproxy01 v7.1.2]$ ./br restore db --pd 175.41.128.81:2379 --db typayv2 --storage s3://wisdom/ --s3.endpoint=http://175.41.185.90:9000
Detail BR log in /tmp/br.log.2024-09-12T14.36.00+0800
Error: failed to check task exists: found CDC changefeed(s): cluster/namespace: default/default changefeed(s): [banktype depositorder fkbankcardvalid fkmember fkmemberorder fkmerchantgroup fkpaychannelgroup fkriskcollectdata fkriskcollectdetail fkriskorder merchant merchantdepositorder merchantpaypool merchantwithdraworder mintables paychannel paychannel-ext paychannelsetting paypool paypoolchannel paytype t-depositorder-task thirddepositorder thirdmerchant thirdwithdraworder withdraworder], please stop changefeed(s) before restore

I have already stopped all the CDC services, but the restore still reports "found CDC changefeed".

Usually that means some process or service is still running in the background, or the changefeed metadata has not been fully synced or cleaned up yet.
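To confirm whether changefeed metadata is still registered, you can list the changefeeds first. A minimal check, reusing the PD endpoint from this thread (on newer TiCDC versions the cli may expect --server=http://<cdc-host>:8300 instead of --pd):

tiup cdc cli changefeed list --pd=http://175.41.128.81:2379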

Try removing the CDC service directly and see if that helps.
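If you do want to take the CDC nodes out of the topology, a scale-in along these lines is one option; the node IDs below come from the display output above, and --force is assumed to be needed because both cdc instances show Down (this is a sketch, verify before running):

tiup cluster scale-in tidb-yanlian --node 175.41.128.81:8300,175.41.128.82:8300 --force

Note that removing the cdc instances may not clear the changefeed metadata that BR checks, so removing the changefeeds themselves (see the later replies) may still be necessary.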

Check the TiCDC changefeed status. Is it "state": "stopped"?
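To inspect the state of an individual changefeed, something like the following should print it; banktype is just the first ID from the error message, and the --pd form follows the rest of this thread (newer versions may want --server pointing at a running cdc instance):

tiup cdc cli changefeed query --pd=http://175.41.128.81:2379 --changefeed-id=banktype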

tiup cdc cli changefeed remove --pd=http://175.41.128.81:2379 --changefeed-id=<changefeed_id>
How about removing it directly with the command above?
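Since the error message lists more than twenty changefeeds, a small shell loop can remove them one by one. This is only a sketch; it assumes every ID from the error message really should be dropped, so review the list before running:

for cf in banktype depositorder fkbankcardvalid fkmember fkmemberorder; do   # add the remaining IDs from the error message
  tiup cdc cli changefeed remove --pd=http://175.41.128.81:2379 --changefeed-id=$cf
done

Once the changefeeds are gone, re-running the br restore command above should no longer hit the "found CDC changefeed(s)" check.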
