Problems scaling out TiKV on the original hosts

[TiDB version]
5.0

[Problem description]
We originally had 3 TiKV nodes. Each host has now had an extra disk installed, and I want to scale out on the original hosts by adding a second TiKV instance per machine. The scale-out topology file is as follows:

tidb-scale-out.yml

tikv_servers:
  - host: 192.168.1.229
    port: 20161
    status_port: 20181
    deploy_dir: "/data/tikv-20161"
    data_dir: "/data/tikv-20161"
    log_dir: "/home/tidb/tidb-deploy/tikv-20161/log"
    numa_node: "1"
  - host: 192.168.1.230
    port: 20161
    status_port: 20181
    deploy_dir: "/data/tikv-20161"
    data_dir: "/data/tikv-20161"
    log_dir: "/home/tidb/tidb-deploy/tikv-20161/log"
    numa_node: "1"
  - host: 192.168.1.231
    port: 20161
    status_port: 20181
    deploy_dir: "/data/tikv-20161"
    data_dir: "/data/tikv-20161"
    log_dir: "/home/tidb/tidb-deploy/tikv-20161/log"
    numa_node: "1"

I ran:
tiup cluster scale-out tidb-test tidb-scale-out.yml

It returned the following error:
Found cluster newer version:

The latest version:         v1.4.2
Local installed version:    v1.4.1
Update current component:   tiup update cluster
Update all components:      tiup update --all

Starting component cluster: /root/.tiup/components/cluster/v1.4.1/tiup-cluster scale-out tidb-test tidb-scale-out.yml

Error: Failed to parse topology file tidb-scale-out.yml (topology.parse_failed)
caused by: directory conflict for '/data/tikv-20161' between 'tikv_servers:192.168.1.229.data_dir' and 'tikv_servers:192.168.1.229.deploy_dir'

Please check the syntax of your topology file tidb-scale-out.yml and try again.
Error: run /root/.tiup/components/cluster/v1.4.1/tiup-cluster (wd:/root/.tiup/data/SVsdMbH) failed: exit status 1

My question: is this way of doing it supported? If it is, is there something wrong with my scale-out topology file?

A follow-up question: if I want to migrate an existing instance's data to a different directory on another disk, how should I do that? (One possible approach is sketched below.)
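
A minimal sketch of one commonly used approach: scale out a new TiKV instance whose data_dir sits on the target disk, let PD move the regions over, then scale in the old instance. The paths and ports below are illustrative assumptions, not taken from this thread:

# tikv-migrate.yml: new instance on the target disk (illustrative paths/ports)
tikv_servers:
  - host: 192.168.1.229
    port: 20162
    status_port: 20182
    deploy_dir: "/data2/tidb-deploy/tikv-20162"
    data_dir: "/data2/tidb-data/tikv-20162"

tiup cluster scale-out tidb-test tikv-migrate.yml
# once region balance finishes, remove the old instance
# (assuming it listens on the default port 20160):
tiup cluster scale-in tidb-test --node 192.168.1.229:20160

tiup marks the old store offline and PD drains its regions before the node is finally removed.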

A newbie question; I hope the experts can point me in the right direction.



deploy_dir: "/data/tikv-20161"
data_dir: "/data/tikv-20161"
These two directories conflict. Point deploy_dir at a different directory, e.g. /opt/tidb/tikv-20161.
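
For example, a corrected entry for the first host could look like this, keeping the data on the new disk and moving only the deployment directory (/opt/tidb is just the example prefix suggested above):

tikv_servers:
  - host: 192.168.1.229
    port: 20161
    status_port: 20181
    deploy_dir: "/opt/tidb/tikv-20161"    # binaries and config
    data_dir: "/data/tikv-20161"          # data stays on the new disk
    log_dir: "/home/tidb/tidb-deploy/tikv-20161/log"
    numa_node: "1"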


OK, that solved it. Now I have a new problem: with multiple TiKV instances on a single machine, labels need to be set.

tiup ctl:v5.0.1 pd -i -u hadoop-node01:2379

Starting component ctl: /root/.tiup/components/ctl/v5.0.1/ctl pd -i -u hadoop-node01:2379
» label
[
  {
    "key": "host",
    "value": "192.168.1.231"
  },
  {
    "key": "host",
    "value": "192.168.1.230"
  },
  {
    "key": "host",
    "value": "192.168.1.229"
  }
]
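
For reference, these per-store labels are typically declared in the tiup topology under each instance's config block. A sketch for a two-instance host follows (the ports are assumptions, not taken from this thread); both instances on one machine share the same host label value so that PD can avoid putting two replicas of a region on the same physical host:

tikv_servers:
  - host: 192.168.1.229
    port: 20160
    config:
      server.labels:
        host: "192.168.1.229"
  - host: 192.168.1.229
    port: 20161
    config:
      server.labels:
        host: "192.168.1.229"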

» config show
{
  "replication": {
    "enable-placement-rules": "true",
    "isolation-level": "",
    "location-labels": "",
    "max-replicas": 3,
    "strictly-match-label": "true"
  },
  "schedule": {
    "enable-cross-table-merge": "true",
    "enable-debug-metrics": "false",
    "enable-joint-consensus": "true",
    "enable-location-replacement": "true",
    "enable-make-up-replica": "true",
    "enable-one-way-merge": "false",
    "enable-remove-down-replica": "true",
    "enable-remove-extra-replica": "true",
    "enable-replace-offline-replica": "true",
    "high-space-ratio": 0.7,
    "hot-region-cache-hits-threshold": 3,
    "hot-region-schedule-limit": 4,
    "leader-schedule-limit": 4,
    "leader-schedule-policy": "count",
    "low-space-ratio": 0.8,
    "max-merge-region-keys": 200000,
    "max-merge-region-size": 20,
    "max-pending-peer-count": 16,
    "max-snapshot-count": 3,
    "max-store-down-time": "30m0s",
    "merge-schedule-limit": 8,
    "patrol-region-interval": "100ms",
    "region-schedule-limit": 2048,
    "region-score-formula-version": "v2",
    "replica-schedule-limit": 64,
    "scheduler-max-waiting-operator": 5,
    "split-merge-interval": "1h0m0s",
    "store-limit-mode": "manual",
    "tolerant-size-ratio": 0
  }
}
» config set location-labels "host"
Failed to set config: [400] "cannot to update replication config, the default rules do not consistent with replication config, please update rule instead"

But that last command failed; I can't change location-labels.

You need to set this first:
config set enable-placement-rules "false"

» config set enable-placement-rules "false"
Success!
» config set location-labels "host"
Success!
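
To double-check that the change took effect, the replication config can be shown again. A sketch of the expected output on v5.0 (assumed, not captured from this thread):

» config show replication
{
  "enable-placement-rules": "false",
  "isolation-level": "",
  "location-labels": "host",
  "max-replicas": 3,
  "strictly-match-label": "true"
}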
