Environment:
5-node TiKV cluster. I deleted the data of two of the TiKV nodes holding replicas of the target region:
/tidb/tidb-data/tikv-20160$ rm -rf ./*
The cluster status then became:
ID               Role        Host         Ports        OS/Arch       Status        Data Dir                        Deploy Dir
10.137.32.3:3000 grafana 10.137.32.3 3000 linux/x86_64 Up - /opt/tidb/tidb-deploy/grafana-3000
10.137.32.3:2379 pd 10.137.32.3 2379/2380 linux/x86_64 Up|L|UI /opt/tidb/tidb-data/pd-2379 /opt/tidb/tidb-deploy/pd-2379
10.137.32.3:9090 prometheus 10.137.32.3 9090/12020 linux/x86_64 Up /opt/tidb/tidb-data/prometheus-9090 /opt/tidb/tidb-deploy/prometheus-9090
10.137.32.3:4000 tidb 10.137.32.3 4000/10080 linux/x86_64 Up - /opt/tidb/tidb-deploy/tidb-4000
10.137.32.3:20160 tikv 10.137.32.3 20160/20180 linux/x86_64 Up /opt/tidb/tidb-data/tikv-20160 /opt/tidb/tidb-deploy/tikv-20160
10.137.32.4:20160 tikv 10.137.32.4 20160/20180 linux/x86_64 Disconnected /opt/tidb/tidb-data/tikv-20160 /opt/tidb/tidb-deploy/tikv-20160
10.137.32.4:20161 tikv 10.137.32.4 20161/20181 linux/x86_64 Up /tidb-data/tikv-20161 /tidb-deploy/tikv-20161
10.137.32.5:20160 tikv 10.137.32.5 20160/20180 linux/x86_64 Disconnected /opt/tidb/tidb-data/tikv-20160 /opt/tidb/tidb-deploy/tikv-20160
10.137.32.5:20161 tikv 10.137.32.5 20161/20181 linux/x86_64 Up /tidb-data/tikv-20161 /tidb-deploy/tikv-20161
At this point, running `unsafe-recover remove-fail-stores` on the node that still held a replica returned an error, but after stopping that node the command ran normally. Did I do something wrong in the operation?
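For context, the behavior described above matches how `tikv-ctl unsafe-recover` is documented to work: it opens the node's RocksDB data directly, so the `tikv-server` process owning that data directory must be stopped first, otherwise the DB lock is held and the command fails. A rough sketch of the usual sequence (the cluster name, node address, and store IDs 4/5 here are placeholders, not taken from this post; older tikv-ctl versions use `--db` instead of `--data-dir`):

```shell
# Stop the surviving TiKV instance first; unsafe-recover cannot run
# against a data dir that a live tikv-server still has open.
tiup cluster stop <cluster-name> -N 10.137.32.3:20160

# On that node, remove the failed stores from the local region metadata.
# Store IDs (e.g. 4 and 5) come from `pd-ctl store`, not from this post.
tikv-ctl --data-dir /opt/tidb/tidb-data/tikv-20160 \
    unsafe-recover remove-fail-stores -s 4,5 --all-regions

# Restart the node once the command completes.
tiup cluster start <cluster-name> -N 10.137.32.3:20160
```

So running the command while the node was still up is the likely cause of the error, and stopping it first is the expected procedure rather than a workaround.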