Error when scaling out TiKV nodes

To help us work efficiently, please provide the information below; a clearly described problem gets resolved faster:
[TiDB environment] Test environment

[Overview] Scenario + problem summary
Scaling out TiKV nodes reports an error; the node status is Offline.
[Background] Operations performed: scale-out and scale-in
[Symptoms] Business and database symptoms
[Business impact]
[TiDB version]
[Attachments]

  1. TiUP Cluster Display output

  2. TiUP Cluster Edit Config output

  3. TiDB-Overview monitoring

  • Logs of the relevant modules (covering one hour before and after the problem)

Disk space on the corresponding hosts: (screenshot)

It should not be caused by insufficient disk space.


First, check whether disk or memory has hit a bottleneck.


No, these are brand-new disks with over 800 GB each.


What about memory?


There is still at least 40% of memory free.


Are the disks SSDs per the standard configuration?


Yes, the SSDs are configured the same way as in our production environment.


On the Dashboard it shows these three instances are in the process of going offline (Offline); I don't know why.


You are deploying multiple TiKV instances on a single machine; check how much memory the existing TiKV instances are using, as shown in the sketch below.
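A rough way to check this on one of the TiKV hosts (a sketch; it assumes the process is named tikv-server and a standard Linux procps environment):

# Resident memory (RSS, in KB) of each tikv-server process, largest first.
ps -eo pid,rss,cmd --sort=-rss | grep '[t]ikv-server'
# Overall memory usage on the host.
free -h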


The key point is this error it reports — does it mean my TiKV's name (store address) is duplicated?


Please post the configuration file, and also the scale-out configuration file.


Check the logs. Please refer to this: https://docs.pingcap.com/zh/tidb/stable/deploy-and-maintain-faq/#tikv-启动报错duplicated-store-address
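One way to see whether PD still has an old store registered at the same address is to list the stores with pd-ctl. A sketch, assuming the PD endpoint from the topology below and a placeholder version tag (substitute your cluster's actual version):

# List every store registered in PD; look for an Offline/Tombstone store whose
# "address" matches the instance you are trying to scale out again.
tiup ctl:v5.4.0 pd -u http://10.145.156.221:2379 store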


# Global variables are applied to all deployments and used as the default value of
# the deployments if a specific deployment value is missing.

global:
  user: "yidb"
  ssh_port: 22
  deploy_dir: "/app/yidb/deploy"
  data_dir: "/app/yidb/deploy/data"

server_configs:
  pd:
    replication.enable-placement-rules: true
    replication.location-labels: ["host"]
  tidb:
    log.level: "error"
    prepared-plan-cache.enabled: true
    alter-primary-key: true
    lower-case-table-names: 1
  tikv:
    log-level: "error"
    storage.block-cache.shared: true
    storage.block-cache.capacity: 250G

pd_servers:
  - host: 10.145.156.221
  - host: 10.145.156.222
  - host: 10.145.156.223

tidb_servers:
  - host: 10.145.156.221
  - host: 10.145.156.222
  - host: 10.145.156.223

tikv_servers:
  - host: 10.145.156.229
    port: 20171
    status_port: 20181
    deploy_dir: "/data1/deploy"
    data_dir: "/data1/deploy/data"
    config:
      server.labels: { host: "tikv01" }
  - host: 10.145.156.229
    port: 20172
    status_port: 20182
    deploy_dir: "/data2/deploy"
    data_dir: "/data2/deploy/data"
    config:
      server.labels: { host: "tikv01" }
  - host: 10.145.156.229
    port: 20173
    status_port: 20183
    deploy_dir: "/data3/deploy"
    data_dir: "/data3/deploy/data"
    config:
      server.labels: { host: "tikv01" }
  - host: 10.145.156.225
    port: 20171
    status_port: 20181
    deploy_dir: "/data1/deploy"
    data_dir: "/data1/deploy/data"
    config:
      server.labels: { host: "tikv02" }
  - host: 10.145.156.225
    port: 20172
    status_port: 20182
    deploy_dir: "/data2/deploy"
    data_dir: "/data2/deploy/data"
    config:
      server.labels: { host: "tikv02" }
  - host: 10.145.156.225
    port: 20173
    status_port: 20183
    deploy_dir: "/data3/deploy"
    data_dir: "/data3/deploy/data"
    config:
      server.labels: { host: "tikv02" }
  - host: 10.145.156.226
    port: 20171
    status_port: 20181
    deploy_dir: "/data1/deploy"
    data_dir: "/data1/deploy/data"
    config:
      server.labels: { host: "tikv03" }
  - host: 10.145.156.226
    port: 20172
    status_port: 20182
    deploy_dir: "/data2/deploy"
    data_dir: "/data2/deploy/data"
    config:
      server.labels: { host: "tikv03" }
  - host: 10.145.156.226
    port: 20173
    status_port: 20183
    deploy_dir: "/data3/deploy"
    data_dir: "/data3/deploy/data"
    config:
      server.labels: { host: "tikv03" }
  - host: 10.145.156.227
    port: 20171
    status_port: 20181
    deploy_dir: "/data1/deploy"
    data_dir: "/data1/deploy/data"
    config:
      server.labels: { host: "tikv04" }
  - host: 10.145.156.227
    port: 20172
    status_port: 20182
    deploy_dir: "/data2/deploy"
    data_dir: "/data2/deploy/data"
    config:
      server.labels: { host: "tikv04" }
  - host: 10.145.156.227
    port: 20173
    status_port: 20183
    deploy_dir: "/data3/deploy"
    data_dir: "/data3/deploy/data"
    config:
      server.labels: { host: "tikv04" }
  - host: 10.145.156.228
    port: 20171
    status_port: 20181
    deploy_dir: "/data1/deploy"
    data_dir: "/data1/deploy/data"
    config:
      server.labels: { host: "tikv05" }
  - host: 10.145.156.228
    port: 20172
    status_port: 20182
    deploy_dir: "/data2/deploy"
    data_dir: "/data2/deploy/data"
    config:
      server.labels: { host: "tikv05" }
  - host: 10.145.156.228
    port: 20173
    status_port: 20183
    deploy_dir: "/data3/deploy"
    data_dir: "/data3/deploy/data"
    config:
      server.labels: { host: "tikv05" }

monitoring_servers:
  - host: 10.145.156.221

grafana_servers:
  - host: 10.145.156.221

yidb@paas-test-021:/home/yidb>cat scale-out.yaml
tikv_servers:
  - host: 10.145.156.228
    port: 20171
    status_port: 20181
    deploy_dir: "/data1/deploy"
    data_dir: "/data1/deploy/data"
    config:
      server.labels: { host: "tikv05" }

What is the total memory on each of your TiKV machines?

storage.block-cache.capacity = (MEM_TOTAL * 0.5 / number of TiKV instances)

First confirm the memory figure.
Also, refer to the link posted earlier; the duplicated store may have been caused by the node being taken offline and then brought back online.
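For example, under the hypothetical assumption that each TiKV host has 128 GiB of RAM and runs 3 TiKV instances (as in the topology above), the formula works out as follows:

# storage.block-cache.capacity = MEM_TOTAL * 0.5 / TiKV instances per host
# 128 GiB * 0.5 / 3 ≈ 21.3 GiB per instance -- far below the 250G currently configured.
echo "scale=1; 128 * 0.5 / 3" | bc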

It does look like the duplicated-store problem caused by taking the nodes offline and then scaling them back in. How should a problem like this be resolved?
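A rough sketch of the cleanup flow described in the FAQ linked earlier, not a definitive procedure: the store ID 123, the cluster name, the version tag, and the PD address are placeholders, and a store should only be forced out after confirming its data has been migrated or is no longer needed.

# Take the stale store offline (use the ID shown by the `store` command above).
tiup ctl:v5.4.0 pd -u http://10.145.156.221:2379 store delete 123
# Once its state becomes Tombstone, clear it from PD.
tiup ctl:v5.4.0 pd -u http://10.145.156.221:2379 store remove-tombstone
# Clean up Tombstone nodes in the TiUP topology, then retry the scale-out.
tiup cluster prune <cluster-name>
tiup cluster scale-out <cluster-name> scale-out.yaml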


Thanks, I'll give it a try.