K8s: TiKV pod hits "CrashLoopBackOff" during bulk data import

Installed by following the documentation: [Deploy a TiDB Cluster on GCP via Kubernetes]

  • 系统版本 & kernel 版本】Linux gke-tidb-default-pool-c644815e-mfdm 4.14.137+ #1 SMP Thu Aug 8 02:47:02 PDT 2019 x86_64 Intel® Xeon® CPU @ 2.20GHz GenuineIntel GNU/Linux
  • TiDB 版本】Container image “pingcap/tikv:v3.0.4”
  • 磁盘型号】启动磁盘1个100G,SSD磁盘两个,分别为1G和10G
  • 集群节点分布】3个节点
  • 数据量 & region 数量 & 副本数】1个region
  • 问题描述(我做了什么)】通过loader导入数据,有一个tikv的pod出现fatal:[2019/10/28 06:22:25.999 +00:00] [FATAL] [server.rs:145] [“failed to create raft engine: RocksDb IO error: No space left on deviceWhile appending to file: /var/lib/tikv/raft/000725.sst: No space left on device”]

$ df -h
Filesystem   Size  Used  Avail Use%  Mounted on
/dev/root    1.2G  691M  530M  57%   /
devtmpfs     3.7G  0     3.7G  0%    /dev
tmpfs        3.7G  0     3.7G  0%    /dev/shm
tmpfs        3.7G  1.3M  3.7G  1%    /run
tmpfs        3.7G  0     3.7G  0%    /sys/fs/cgroup
tmpfs        1.0M  188K  836K  19%   /etc/machine-id
tmpfs        256K  0     256K  0%    /mnt/disks
tmpfs        3.7G  0     3.7G  0%    /tmp
overlayfs    1.0M  188K  836K  19%   /etc
/dev/sda8    12M   28K   12M   1%    /usr/share/oem
/dev/sda1    95G   4.7G  90G   5%    /mnt/stateful_partition
tmpfs        1.0M  132K  892K  13%   /var/lib/cloud

Keywords

No space left on device

The test database has only one table with 20 million rows, so a 100G disk should be more than enough.

I installed the cluster with tidb-operator and helm. What do I need to configure so that this space actually gets used?
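A quick way to see which volume the failing pod actually got (a minimal sketch; the `tidb` namespace and the placeholder pod name are assumptions, adjust them to your deployment):

# Assumption: the cluster runs in the "tidb" namespace; replace <tikv-pod>
# with the name of the crashing TiKV pod.
kubectl -n tidb get pvc                                   # capacity each PVC requested and bound
kubectl get pv                                            # size and StorageClass of every PV
kubectl -n tidb exec <tikv-pod> -- df -h /var/lib/tikv    # free space as seen inside the pod

The limit that matters is the capacity of the PV mounted at /var/lib/tikv inside the pod, not the node's root filesystem.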

df on the host shows that there is plenty of space:

Filesystem   Size  Used  Avail Use%  Mounted on
/dev/root    1.2G  691M  530M  57%   /
devtmpfs     3.7G  0     3.7G  0%    /dev
tmpfs        3.7G  0     3.7G  0%    /dev/shm
tmpfs        3.7G  1.3M  3.7G  1%    /run
tmpfs        3.7G  0     3.7G  0%    /sys/fs/cgroup
tmpfs        1.0M  188K  836K  19%   /etc/machine-id
tmpfs        256K  0     256K  0%    /mnt/disks
tmpfs        3.7G  0     3.7G  0%    /tmp
overlayfs    1.0M  188K  836K  19%   /etc
/dev/sda8    12M   28K   12M   1%    /usr/share/oem
/dev/sda1    95G   4.7G  90G   5%    /mnt/stateful_partition
tmpfs        1.0M  132K  892K  13%   /var/lib/cloud

I'll try enlarging the PV first. Thanks.

When PVs are created, disks of different types and sizes should be assigned to different StorageClasses so they can be told apart. When creating the TiDB cluster, specifying the corresponding StorageClass in values.yaml lets each pod use the intended type of PV. Did you distinguish these three disk sizes? If not, the TiKV pod has very likely been given one of the very small disks.
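As an illustration only (a sketch; the class name "pd-ssd" is an assumption, not something defined in this thread), on GKE a dedicated StorageClass backed by GCE persistent disks could be created and then referenced from tikv.storageClassName in values.yaml:

# Sketch: a GCE-PD-backed StorageClass; "pd-ssd" is a made-up name for this example.
cat <<EOF | kubectl apply -f -
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: pd-ssd
provisioner: kubernetes.io/gce-pd
parameters:
  type: pd-ssd
EOF

# List the classes that exist and the class/size each PV belongs to:
kubectl get storageclass
kubectl get pv

With one class per disk type, the storageClassName set under tikv in values.yaml decides which kind of disk the TiKV PVCs are provisioned from.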

The changes I made to values.yaml are as follows:

diff --git a/charts/tidb-cluster/values.yaml b/charts/tidb-cluster/values.yaml
index 22f2978..4818fdb 100644
--- a/charts/tidb-cluster/values.yaml
+++ b/charts/tidb-cluster/values.yaml
@@ -190,7 +190,8 @@ tikv:
# Please refer to https://pingcap.com/docs-cn/v3.0/reference/configuration/tikv-server/configuration-file/
# (choose the version matching your tikv) for detailed explanation of each parameter.
config: |
-    log-level = "info"
+    #log-level = "info"
+    log-level = "error"
# # Here are some parameters you MUST customize (Please configure in the above `tikv.config` section):
#
# [readpool.coprocessor]
@@ -226,7 +227,8 @@ tikv:
# different classes might map to quality-of-service levels, or to backup policies,
# or to arbitrary policies determined by the cluster administrators.
# refer to https://kubernetes.io/docs/concepts/storage/storage-classes
-  storageClassName: local-storage
+  #storageClassName: local-storage
+  storageClassName: standard

# Image pull policy.
imagePullPolicy: IfNotPresent
@@ -239,7 +241,7 @@ tikv:
 	requests:
   	# cpu: 12000m
   	# memory: 24Gi
-      storage: 10Gi
+      storage: 60Gi

## affinity defines tikv scheduling rules,affinity default settings is empty.
## please read the affinity document before set your scheduling rule:
@@ -293,7 +295,8 @@ tidb:
# (choose the version matching your tidb) for detailed explanation of each parameter.
config: |
 	[log]
-    level = "info"
+    #level = "info"
+    level = "error"

# # Here are some parameters you MUST customize (Please configure in the above 'tidb.config' section):
# [performance]
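To roll such a change out (a sketch; the release name "tidb-cluster" and the chart path are assumptions, adjust them to how the chart was originally installed):

# Assumption: the cluster was installed as a Helm release named "tidb-cluster"
# from a local checkout of the tidb-cluster chart.
helm upgrade tidb-cluster ./charts/tidb-cluster -f ./charts/tidb-cluster/values.yaml

Note that the storage size in values.yaml feeds the StatefulSet's volumeClaimTemplates, so it generally only affects PVCs created after the change; PVCs that already exist keep their original size and have to be resized or recreated separately.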
