Please help — if I can't fix this I'll be laid off. There are no backups and no binlog; all I have left is the data on the mounted data volumes. The old k8s cluster went down completely. On a new k8s cluster, how do I use a StatefulSet and PVs to remount these data directories?

How should I write the PV YAML so that data1, data2, and data3 are wired back to their corresponding TiKV and PD instances?

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: local-storage-prod
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: WaitForFirstConsumer

apiVersion: v1
kind: ConfigMap
metadata:
  name: local-provisioner-config
  namespace: prod
data:
  setPVOwnerRef: "true"
  nodeLabelsForPV: |
    - kubernetes.io/hostname
  storageClassMap: |
    local-storage-prod:
      hostDir: /data/tidb/prod/data
      mountDir: /data

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: local-volume-provisioner
  namespace: prod
  labels:
    app: local-volume-provisioner
spec:
  selector:
    matchLabels:
      app: local-volume-provisioner
  template:
    metadata:
      labels:
        app: local-volume-provisioner
    spec:
      serviceAccountName: local-storage-admin
      containers:
        - image: quay.io/external_storage/local-volume-provisioner:v2.3.4
          name: provisioner
          securityContext:
            privileged: true
          env:
            - name: MY_NODE_NAME
              valueFrom:
                fieldRef:
                  fieldPath: spec.nodeName
            - name: MY_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
            - name: JOB_CONTAINER_IMAGE
              value: quay.io/external_storage/local-volume-provisioner:v2.3.4
          resources:
            requests:
              cpu: 100m
              memory: 100Mi
            limits:
              cpu: 100m
              memory: 100Mi
          volumeMounts:
            - mountPath: /etc/provisioner/config
              name: provisioner-config
              readOnly: true
            - mountPath: /data
              name: local-disks
              mountPropagation: HostToContainer
      volumes:
        - name: provisioner-config
          configMap:
            name: local-provisioner-config
        - name: local-disks
          hostPath:
            path: /data/tidb/prod/data

apiVersion: v1
kind: ServiceAccount
metadata:
  name: local-storage-admin
  namespace: prod

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: local-storage-provisioner-pv-binding
subjects:
  - kind: ServiceAccount
    name: local-storage-admin
    namespace: prod
roleRef:
  kind: ClusterRole
  name: system:persistent-volume-provisioner
  apiGroup: rbac.authorization.k8s.io
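One caveat with the manifests above (an assumption based on how local-volume-provisioner usually discovers volumes, not something stated in this thread): it creates one PV per mount point under `hostDir`, so if data1/2/3 are plain subdirectories they need to be bind-mounted into the discovery directory on their node first. A sketch that only prints the commands for review — the source path `/data/tidb/prod` is my guess at where data1..3 live, adjust it to your layout:

```shell
SRC_BASE=/data/tidb/prod            # assumption: where data1..3 actually live
DISCOVERY_DIR=/data/tidb/prod/data  # hostDir from the ConfigMap above
CMDS=""
for d in data1 data2 data3; do
  CMDS="${CMDS}mkdir -p ${DISCOVERY_DIR}/${d}
mount --bind ${SRC_BASE}/${d} ${DISCOVERY_DIR}/${d}
"
done
printf '%s' "$CMDS"   # review, then run them as root on the right node
```

After the bind mounts exist, the provisioner DaemonSet should create one `local-storage-prod` PV per directory, which you can confirm with `kubectl get pv`.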

I see 14 hours have passed — has it been solved yet?
In our environment all PVs are provisioned automatically by CSI, so we don't have much experience with mounting volumes directly and can't give you a ready-to-run script.
A few suggestions:

  1. If the data matters, back up the data volumes first, so that if the recovery fails you can start over from scratch.
  2. Create a PV in your environment and rehearse pointing it at a test copy of a data volume. Only after that rehearsal succeeds should you recreate the PVs over the real data.
  3. Once the PVs exist, recovery is straightforward: bind each PVC to its PV, then start the corresponding workloads.
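To make steps 2–3 concrete, here is a minimal sketch of a manually created local PV that is pinned both to a node and to the exact PVC the StatefulSet will recreate. All names below (`tikv-0-data`, `kube-node-1`, `tikv-basic-tikv-0`) are hypothetical — check `kubectl get pvc` for the real PVC names your StatefulSet generates, and adjust capacity and paths to your data:

```yaml
# Sketch only -- every name here is a placeholder.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: tikv-0-data
spec:
  capacity:
    storage: 100Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: local-storage-prod
  local:
    path: /data/tidb/prod/data1        # the old data directory on that node
  nodeAffinity:                        # required for local volumes
    required:
      nodeSelectorTerms:
        - matchExpressions:
            - key: kubernetes.io/hostname
              operator: In
              values:
                - kube-node-1          # the node that holds this data
  claimRef:                            # pre-bind to the exact PVC the
    namespace: prod                    # StatefulSet will (re)create
    name: tikv-basic-tikv-0
```

The `claimRef` pre-binding is what guarantees pod 0 lands on data1 rather than on whichever PV binds first.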

Start a new cluster, keep it suspended, scp the data directories in to replace the new ones, then give PD some special handling — that's all it takes.

https://docs.pingcap.com/zh/tidb/stable/pd-recover

The TiKV IPs don't matter, but PD has to match.
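For reference, a hedged sketch of that pd-recover step. The endpoint and IDs below are placeholders — the real cluster ID has to be recovered from the old PD logs (look for "init cluster id"), and the alloc ID must be set larger than any ID the old cluster ever allocated:

```shell
PD_ENDPOINT="http://127.0.0.1:2379"   # assumption: a PD member of the NEW cluster
CLUSTER_ID="6747551640615446306"      # hypothetical value, use your real one
ALLOC_ID="100000000"                  # hypothetical, deliberately large
CMD="pd-recover -endpoints $PD_ENDPOINT -cluster-id $CLUSTER_ID -alloc-id $ALLOC_ID"
echo "$CMD"   # review, then run it, then restart the PD members
```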


apiVersion: v1
kind: PersistentVolume
metadata:
  name: wordpress-pv
spec:
  capacity:
    storage: 5Gi
  volumeMode: Filesystem
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: /mnt/data

This is a hostPath-based PV example I found online; you can use it as a reference for mounting your data directories as PVs.

Thanks, it's solved now.


How did you solve it? :thinking:

@TiDBer_jbQFcY1n please write up a post-mortem of this recovery — it would be a valuable reference for everyone later on. :rose:


Why run this on K8s in the first place......