After scaling out TiKV nodes with TiUP, no data is written to the new nodes

[Overview] After scaling out TiKV nodes with TiUP, the scale-out reports success but no data is written to the new nodes.

[Background] The TiKV scale-out was performed with:
tiup cluster scale-out tidb-cluster-name scale-out.yaml --user tidb -i /home/tidb/.ssh/id_rsa

Contents of scale-out.yaml:

tikv_servers:
  - host: 192.168.178.8
    ssh_port: 12330
    port: 20160
    status_port: 20180
    deploy_dir: /usr/local/tidb/tikv-20160
    data_dir: /data_db1/tikv/tikv-20160
    log_dir: /data_db1/tikv/tikv-20160
  - host: 192.168.178.8
    ssh_port: 12330
    port: 20161
    status_port: 20181
    deploy_dir: /usr/local/tidb/tikv-20161
    data_dir: /data_db2/tikv/tikv-20161
    log_dir: /data_db2/tikv/tikv-20161
  - host: 192.168.178.8
    ssh_port: 12330
    port: 20162
    status_port: 20182
    deploy_dir: /usr/local/tidb/tikv-20162
    data_dir: /data_db3/tikv/tikv-20162
    log_dir: /data_db3/tikv/tikv-20162
  - host: 192.168.178.8
    ssh_port: 12330
    port: 20163
    status_port: 20183
    deploy_dir: /usr/local/tidb/tikv-20163
    data_dir: /data_db4/tikv/tikv-20163
    log_dir: /data_db4/tikv/tikv-20163

[Symptom] The scale-out reports success, and pd-ctl store shows that the 4 new stores have been added, but their leader_score and region_count are all 0. One of the new stores looks like this:
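For reference, the per-store status below can be pulled with pd-ctl, roughly like this (the PD endpoint is a placeholder; 5278951 is the store id of the store shown below):

# list all stores, or query a single store by id
pd-ctl -u http://<pd-ip>:2379 store
pd-ctl -u http://<pd-ip>:2379 store 5278951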

{
  "store": {
    "id": 5278951,
    "address": "192.168.178.8:20162",
    "version": "4.0.0-rc",
    "status_address": "192.168.178.8:20182",
    "git_hash": "f45d0c963df3ee4b1011caf5eb146cacd1fbbad8",
    "start_timestamp": 1623912961,
    "binary_path": "/usr/local/tidb/tikv-20162/bin/tikv-server",
    "last_heartbeat": 1623917401462523740,
    "state_name": "Up"
  },
  "status": {
    "capacity": "1.745TiB",
    "available": "1.743TiB",
    "used_size": "31.5MiB",
    "leader_count": 0,
    "leader_weight": 1,
    "leader_score": 0,
    "leader_size": 0,
    "region_count": 0,
    "region_weight": 1,
    "region_score": 0,
    "region_size": 0,
    "start_ts": "2021-06-17T14:56:01+08:00",
    "last_heartbeat_ts": "2021-06-17T16:10:01.46252374+08:00",
    "uptime": "1h14m0.46252374s"
  }
}

TiKV node log output:

[2021/06/17 14:56:00.794 +08:00] [INFO] [mod.rs:335] ["starting working thread"] [worker=addr-resolver]
[2021/06/17 14:56:00.796 +08:00] [INFO] [mod.rs:335] ["starting working thread"] [worker=region-collector-worker]
[2021/06/17 14:56:00.895 +08:00] [INFO] [future.rs:136] ["starting working thread"] [worker=gc-worker]
[2021/06/17 14:56:00.896 +08:00] [INFO] [mod.rs:335] ["starting working thread"] [worker=lock-collector]
[2021/06/17 14:56:00.959 +08:00] [INFO] [mod.rs:181] ["Storage started."]
[2021/06/17 14:56:00.971 +08:00] [INFO] [mod.rs:335] ["starting working thread"] [worker=split-check]
[2021/06/17 14:56:00.973 +08:00] [INFO] [node.rs:348] ["start raft store thread"] [store_id=5278949]
[2021/06/17 14:56:00.973 +08:00] [INFO] [store.rs:862] ["start store"] [takes=15.599µs] [merge_count=0] [applying_count=0] [tombstone_count=0] [region_count=0] [store_id=5278949]
[2021/06/17 14:56:00.973 +08:00] [INFO] [store.rs:913] ["cleans up garbage data"] [takes=24.983µs] [garbage_range_count=1] [store_id=5278949]
[2021/06/17 14:56:00.992 +08:00] [INFO] [mod.rs:335] ["starting working thread"] [worker=snapshot-worker]
[2021/06/17 14:56:00.994 +08:00] [INFO] [mod.rs:335] ["starting working thread"] [worker=raft-gc-worker]
[2021/06/17 14:56:00.995 +08:00] [INFO] [mod.rs:335] ["starting working thread"] [worker=cleanup-worker]
[2021/06/17 14:56:00.999 +08:00] [INFO] [future.rs:136] ["starting working thread"] [worker=pd-worker]
[2021/06/17 14:56:01.000 +08:00] [INFO] [mod.rs:335] ["starting working thread"] [worker=consistency-check]
[2021/06/17 14:56:01.003 +08:00] [WARN] [store.rs:1180] ["set thread priority for raftstore failed"] [error="Os { code: 13, kind: PermissionDenied, message: \"Permission denied\" }"]
[2021/06/17 14:56:01.003 +08:00] [INFO] [node.rs:167] ["put store to PD"] [store="id: 5278949 address: \"192.168.178.8:20160\" version: \"4.0.0-rc\" status_address: \"192.168.178.8:20180\" git_hash: \"f45d0c963df3ee4b1011caf5eb146cacd1fbbad8\" start_timestamp: 1623912960 binary_path: \"/usr/local/tidb/tikv-20160/bin/tikv-server\""]
[2021/06/17 14:56:01.011 +08:00] [INFO] [mod.rs:335] ["starting working thread"] [worker=cdc]
[2021/06/17 14:56:01.033 +08:00] [INFO] [future.rs:136] ["starting working thread"] [worker=waiter-manager]
[2021/06/17 14:56:01.040 +08:00] [INFO] [future.rs:136] ["starting working thread"] [worker=deadlock-detector]
[2021/06/17 14:56:01.042 +08:00] [INFO] [mod.rs:335] ["starting working thread"] [worker=backup-endpoint]
[2021/06/17 14:56:01.052 +08:00] [INFO] [mod.rs:335] ["starting working thread"] [worker=snap-handler]
[2021/06/17 14:56:01.054 +08:00] [INFO] [server.rs:225] ["listening on addr"] [addr=0.0.0.0:20160]
[2021/06/17 14:56:01.065 +08:00] [INFO] [server.rs:253] ["TiKV is ready to serve"]

[TiDB version] v4.0.0-rc

Why isn't the cluster rebalancing data automatically? Do I also need to adjust the leader_weight and region_weight parameters?


Normally, data is balanced automatically after a scale-out.
Run tiup cluster display ${cluster_name} to check the overall cluster status, and also check the PD leader log for anything abnormal.
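A rough sketch of those two checks (the cluster name comes from the scale-out command above; the PD log path assumes the default <deploy_dir>/log layout and matches the deploy_dir shown in the display output further down, so adjust it to your own path):

# overall topology and status
tiup cluster display tidb-cluster-name
# scan the PD leader log for warnings/errors on the PD leader host
grep -E "WARN|ERROR" /usr/local/tidb/pd-2379/log/pd.log | tail -n 50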

By the way, 5.0 is already out. 4.0 is a fairly old release and this is an RC build, so it may contain problems that were fixed in later versions; we'd recommend using a newer version.


TiDB Version: v4.0.0-rc
ID Role Host Ports Status Data Dir Deploy Dir


192.168.100.151:9093 alertmanager 192.168.100.151 9093/9094 Up data/alertmanager-9093 /usr/local/tidb/alertmanager-9093
192.168.100.151:3000 grafana 192.168.100.151 3000 Up - /usr/local/tidb/grafana-3000
192.168.100.151:2379 pd 192.168.100.151 2379/2380 Healthy data/pd-2379 /usr/local/tidb/pd-2379
192.168.100.176:2379 pd 192.168.100.176 2379/2380 Healthy data/pd-2379 /usr/local/tidb/pd-2379
192.168.100.181:2379 pd 192.168.100.181 2379/2380 Healthy|L data/pd-2379 /usr/local/tidb/pd-2379
192.168.100.151:9090 prometheus 192.168.100.151 9090 Up data/prometheus-9090 /usr/local/tidb/prometheus-9090
192.168.100.151:4000 tidb 192.168.100.151 4000/10080 Up - /data_db2/tidb/tidb-4000
192.168.100.176:4000 tidb 192.168.100.176 4000/10080 Up - /data_db2/tidb/tidb-4000
192.168.100.181:4000 tidb 192.168.100.181 4000/10080 Up - /data_db1/tidb/tidb-4000
192.168.100.151:20160 tikv 192.168.100.151 20160/20180 Up /data_db3/tikv/tikv-20160 /usr/local/tidb/tikv-20160
192.168.100.151:20161 tikv 192.168.100.151 20161/20181 Up /data_db4/tikv/tikv-20161 /usr/local/tidb/tikv-20161
192.168.100.176:20160 tikv 192.168.100.176 20160/20180 Up /data_db3/tikv/tikv-20160 /usr/local/tidb/tikv-20160
192.168.100.176:20161 tikv 192.168.100.176 20161/20181 Up /data_db4/tikv/tikv-20161 /usr/local/tidb/tikv-20161
192.168.100.181:20160 tikv 192.168.100.181 20160/20180 Up /data_db2/tikv/tikv-20160 /usr/local/tidb/tikv-20160
192.168.100.181:20161 tikv 192.168.100.181 20161/20181 Up /data_db3/tikv/tikv-20161 /usr/local/tidb/tikv-20161
192.168.178.8:20160 tikv 192.168.178.8 20160/20180 Up /data_db1/tikv/tikv-20160 /usr/local/tidb/tikv-20160
192.168.178.8:20161 tikv 192.168.178.8 20161/20181 Up /data_db2/tikv/tikv-20161 /usr/local/tidb/tikv-20161
192.168.178.8:20162 tikv 192.168.178.8 20162/20182 Up /data_db3/tikv/tikv-20162 /usr/local/tidb/tikv-20162
192.168.178.8:20163 tikv 192.168.178.8 20163/20183 Up /data_db4/tikv/tikv-20163 /usr/local/tidb/tikv-20163
Success to stop component cluster

The display output looks normal; the 4 newly added nodes on 192.168.178.8 show up as Up.

Apart from a few WARN-level messages, there is nothing abnormal in the PD leader log either:

[2021/06/17 15:34:14.927 +08:00] [WARN] [raft.go:363] ["leader failed to send out heartbeat on time; took too long, leader is overloaded likely from slow disk"] [to=46c3e91ef0f5bc13] [heartbeat-interval=500ms] [expected-duration=1s] [exceeded-duration=454.538951ms]
[2021/06/17 15:34:14.927 +08:00] [WARN] [raft.go:363] ["leader failed to send out heartbeat on time; took too long, leader is overloaded likely from slow disk"] [to=356d6bb660983ad0] [heartbeat-interval=500ms] [expected-duration=1s] [exceeded-duration=455.095537ms]

  1. Please share the PD and TiKV panels from the Grafana monitoring.
  2. Check the network between this node and the other TiKV and PD nodes, e.g. with ping (a rough sketch follows this list).
  3. What disks do the newly added TiKV instances use? Based on the hint in the log, are they actually that slow?
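A rough sketch of the check in item 2 (IP addresses are taken from the display output above; nc is just one option for probing the ports):

# from the new TiKV host, check latency to the PD hosts
ping -c 5 192.168.100.151
ping -c 5 192.168.100.181
# from a PD host, check that the new TiKV ports answer
nc -vz 192.168.178.8 20160
nc -vz 192.168.178.8 20180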

1. Monitoring shows 10 stores, but only the original 6 nodes have data.



2. The network is reachable.
3. The disks are all 2 TB SSDs, but for some reason the TiKV node logs also contain related WARN messages:

[2021/06/17 14:56:00.041 +08:00] [WARN] [lib.rs:527] ["environment variable TZ is missing, using /etc/localtime"]
[2021/06/17 14:56:00.812 +08:00] [WARN] [config.rs:706] ["not on SSD device"] [data_path=/data_db3/tikv/tikv-20162]
[2021/06/17 14:56:00.813 +08:00] [WARN] [config.rs:706] ["not on SSD device"] [data_path=/data_db3/tikv/tikv-20162/raft]
[2021/06/17 14:56:01.042 +08:00] [WARN] [store.rs:1180] ["set thread priority for raftstore failed"] [error="Os { code: 13, kind: PermissionDenied, message: \"Permission denied\" }"]
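As far as I know, the "not on SSD device" warning is based on the block device's rotational flag, which some RAID controllers and virtualization layers report as 1 even for real SSDs, so the warning by itself is not conclusive. A quick way to see what the kernel reports (device names are placeholders):

# ROTA=0 means the kernel treats the device as non-rotational (SSD-like)
lsblk -d -o NAME,ROTA
# per-device check, e.g. for the disk backing /data_db3 (sdX is a placeholder)
cat /sys/block/sdX/queue/rotational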

One more thing to confirm: were labels set on the TiKV nodes that existed before the scale-out?

Yes, all 6 nodes that existed before the scale-out have labels.

Main cause: the newly scaled-out TiKV nodes have no labels. The other nodes already carry a host label, so PD cannot schedule data onto TiKV instances that have no label.

Please add the label information to the newly scaled-out TiKV nodes, or scale them in, add the labels in the configuration file, and scale out again.
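For example, a minimal sketch of one scale-out topology entry with a label added (the label key host and its value are placeholders; they must match the label scheme the existing 6 nodes already use, which pd-ctl store will show):

tikv_servers:
  - host: 192.168.178.8
    ssh_port: 12330
    port: 20160
    status_port: 20180
    deploy_dir: /usr/local/tidb/tikv-20160
    data_dir: /data_db1/tikv/tikv-20160
    log_dir: /data_db1/tikv/tikv-20160
    config:
      server.labels:
        host: tikv-178-8    # placeholder value; reuse your existing label scheme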


I set the labels with tiup cluster edit-config, but it still doesn't seem to work. Do I need to scale in and then scale out again?

Did you reload? Changes made to the configuration file only take effect after a reload.
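For what it's worth, labels added to TiKV instances via edit-config should only take effect after reloading those instances, roughly along these lines:

# rolling reload of the TiKV role, or target the new instances with -N
tiup cluster reload tidb-cluster-name -R tikv
# tiup cluster reload tidb-cluster-name -N 192.168.178.8:20160,192.168.178.8:20161,192.168.178.8:20162,192.168.178.8:20163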

You can also set the labels online; see [SOP Series 04] How to adjust an existing cluster's TiKV from single-instance to multi-instance deployment.
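A minimal sketch of setting the labels online with pd-ctl instead (the PD endpoint is one of the PD nodes from the display output; the label key host is an assumption and must match the cluster's location-labels; store ids come from pd-ctl store):

# confirm which label keys PD uses for scheduling
pd-ctl -u http://192.168.100.151:2379 config show replication
# attach the label to each new store by its store id (5278951 is the example store above)
pd-ctl -u http://192.168.100.151:2379 store label 5278951 host tikv-178-8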


For the reload, is it enough to reload only the PD nodes?

No, that's not enough. Just set the labels online instead.
