TiDB v3.0.1 deployed on Kubernetes: data lost and password reset to empty after a server restart.

After deploying a single-node TiDB cluster on Kubernetes, I imported some data and then restarted the server. After the restart, the password had been reset to empty and the data was gone.
A multi-node deployment has the same problem. In addition, if one node of a multi-node cluster is restarted, the pod on that node fails to come back up.

Could you post the logs of the pod that fails to start?
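For reference, a minimal sketch of how to collect those logs, assuming the pod and namespace names that appear later in this thread (super-rdb-cluster-tikv-0 in namespace denali); substitute your own values:

# Scheduling and volume-mount events for the failing pod
kubectl describe pod super-rdb-cluster-tikv-0 -n denali
# Logs of the current container, and of the previous crashed one
kubectl logs super-rdb-cluster-tikv-0 -n denali
kubectl logs super-rdb-cluster-tikv-0 -n denali --previous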

The data loss may be a storage configuration issue; see the data-safety section of the storage class documentation:

https://docs.pingcap.com/zh/tidb-in-kubernetes/stable/configure-storage-class#数据安全
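To make the point in that document concrete: data survives a node restart only if the TidbCluster volumes are backed by a persistent StorageClass, not an ephemeral (e.g. hostPath/emptyDir-style) one. Below is a minimal sketch of the relevant TidbCluster fragment; the class name local-storage is an assumption, and the field layout follows the TidbCluster CRD of recent tidb-operator releases (a Helm values.yaml deployment uses different keys):

# Fragment of a TidbCluster spec; local-storage is an assumed
# StorageClass backed by real disks that outlive the node.
spec:
  pd:
    replicas: 3
    requests:
      storage: 10Gi
    storageClassName: local-storage
  tikv:
    replicas: 3
    requests:
      storage: 100Gi
    storageClassName: local-storage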

[2020/11/13 04:49:18.859 +00:00] [INFO] [mod.rs:334] ["starting working thread"] [worker=consistency-check]
[2020/11/13 04:49:18.861 +00:00] [WARN] [store.rs:1116] ["set thread priority for raftstore failed"] [error="Os { code: 13, kind: PermissionDenied, message: \"Permission denied\" }"]
[2020/11/13 04:49:18.861 +00:00] [INFO] [node.rs:159] ["put store to PD"] [store="Store { id: 104, address: \"super-rdb-cluster-tikv-0.super-rdb-cluster-tikv-peer.denali.svc:20160\", state: Up, labels: [], version: \"4.0.0-alpha\", unknown_fields: UnknownFields { fields: None }, cached_size: CachedSize { size: 0 } }"]
[2020/11/13 04:49:18.862 +00:00] [ERROR] [util.rs:326] ["request failed"] [err="Grpc(RpcFailure(RpcStatus { status: RpcStatusCode(2), details: Some("duplicated store address: id:104 address:\"super-rdb-cluster-tikv-0.super-rdb-cluster-tikv-peer.denali.svc:20160\" version:\"4.0.0-alpha\" , already registered by id:1 address:\"super-rdb-cluster-tikv-0.super-rdb-cluster-tikv-peer.denali.svc:20160\" labels:<key:\"host\" value:\"node1\" > version:\"4.0.0-alpha\" ") }))"]
(the same ERROR line repeats nine more times between 04:49:18.863 and 04:49:18.867)
[2020/11/13 04:49:18.868 +00:00] [FATAL] [server.rs:273] ["failed to start node: Other(\"[components/pd_client/src/util.rs:334]: fail to request\")"]

Also, here is the deployment layout: two worker nodes, with one pod each for TiKV, PD, and TiDB. TiKV is scheduled on node1 and PD on node2. After restarting the node that PD runs on,
neither the TiKV pod nor the TiDB pod can run normally anymore.
The log above is the TiKV error output.

The error log shows a conflicting store registration: duplicated store address: id:104, already registered by id:1. TiKV is starting up with a fresh store ID while PD still has the old store registered at the same address, which is what happens when TiKV's local data directory is empty on restart.
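To confirm what PD has registered, you can list its stores; a sketch assuming the PD pod name below, and that pd-ctl ships at /pd-ctl inside the PD image (the path may differ by image version):

# Both the stale store id:1 and the new id:104 should show up here
kubectl exec -n denali super-rdb-cluster-pd-0 -- /pd-ctl -u http://127.0.0.1:2379 -d store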

For how to handle this, refer to the following:

This is not a case of incomplete cleanup after a scale-in. It happens whenever I restart the machine. Please don't look at the error message in isolation; consider my context.

The fix you're pointing to amounts to deleting the cluster and redeploying it. What happens to my existing data then?

From the symptoms, the data does not appear to be persisted at all. Please paste the TidbCluster YAML and storage information:

kubectl get tc $name -n $ns -o yaml
kubectl get sc
kubectl get pvc -n $ns
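When reading the output, the key question is whether the PVCs are bound to volumes that actually persist. Two additional checks (standard kubectl; the namespace is taken from the log above):

# Are there PVs at all, and what is their reclaim policy?
kubectl get pv
# Which StorageClass backs each PVC? An ephemeral class means
# data cannot survive a node restart.
kubectl get pvc -n denali -o custom-columns=NAME:.metadata.name,STATUS:.status.phase,SC:.spec.storageClassName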