To improve efficiency, please provide the following information; a clearly described problem gets resolved faster:
[TiDB Environment] Test environment
[Overview] Scenario + problem summary
Scaling out TiKV nodes reports an error; the node status is Offline.
[Background] Operations performed: scale-out and scale-in
[Symptoms] Business and database symptoms
[Business Impact]
[TiDB Version]
[Attachments]
- TiUP Cluster Display information
- TiUP Cluster Edit Config information
- TiDB-Overview monitoring
On the Dashboard, these three instances show as going offline; I don't know why.
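To cross-check what the Dashboard shows, you can ask TiUP and PD directly (a minimal sketch; `<cluster-name>` and `<version>` are placeholders, substitute your own cluster name and version):

```
# Component status as TiUP sees it; stores being decommissioned
# typically show as "Pending Offline" or "Offline"
tiup cluster display <cluster-name>

# Ask PD for every store's state and remaining region count
tiup ctl:<version> pd -u http://10.145.156.221:2379 store
```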
You are deploying multiple TiKV instances on a single machine; check how much memory the existing TiKV instances are using.
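For example, on one of the TiKV hosts (a minimal sketch, assuming shell access to the host):

```
# Total / available memory on the host
free -h

# Resident memory (RSS, in KB) of each tikv-server process; with 3 instances
# per host, their combined block-cache usage is what matters
ps -eo pid,rss,cmd --sort=-rss | grep '[t]ikv-server'
```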
# Global variables are applied to all deployments and used as the default value of
# the deployments if a specific deployment value is missing.
global:
  user: "yidb"
  ssh_port: 22
  deploy_dir: "/app/yidb/deploy"
  data_dir: "/app/yidb/deploy/data"

server_configs:
  pd:
    replication.enable-placement-rules: true
    replication.location-labels: ["host"]
  tidb:
    log.level: "error"
    prepared-plan-cache.enabled: true
    alter-primary-key: true
    lower-case-table-names: 1
  tikv:
    log-level: "error"
    storage.block-cache.shared: true
    storage.block-cache.capacity: 250G

pd_servers:
  - host: 10.145.156.221
  - host: 10.145.156.222
  - host: 10.145.156.223

tidb_servers:
  - host: 10.145.156.221
  - host: 10.145.156.222
  - host: 10.145.156.223

tikv_servers:
  - host: 10.145.156.229
    port: 20171
    status_port: 20181
    deploy_dir: "/data1/deploy"
    data_dir: "/data1/deploy/data"
    config:
      server.labels: { host: "tikv01" }
  - host: 10.145.156.229
    port: 20172
    status_port: 20182
    deploy_dir: "/data2/deploy"
    data_dir: "/data2/deploy/data"
    config:
      server.labels: { host: "tikv01" }
  - host: 10.145.156.229
    port: 20173
    status_port: 20183
    deploy_dir: "/data3/deploy"
    data_dir: "/data3/deploy/data"
    config:
      server.labels: { host: "tikv01" }
  - host: 10.145.156.225
    port: 20171
    status_port: 20181
    deploy_dir: "/data1/deploy"
    data_dir: "/data1/deploy/data"
    config:
      server.labels: { host: "tikv02" }
  - host: 10.145.156.225
    port: 20172
    status_port: 20182
    deploy_dir: "/data2/deploy"
    data_dir: "/data2/deploy/data"
    config:
      server.labels: { host: "tikv02" }
  - host: 10.145.156.225
    port: 20173
    status_port: 20183
    deploy_dir: "/data3/deploy"
    data_dir: "/data3/deploy/data"
    config:
      server.labels: { host: "tikv02" }
  - host: 10.145.156.226
    port: 20171
    status_port: 20181
    deploy_dir: "/data1/deploy"
    data_dir: "/data1/deploy/data"
    config:
      server.labels: { host: "tikv03" }
  - host: 10.145.156.226
    port: 20172
    status_port: 20182
    deploy_dir: "/data2/deploy"
    data_dir: "/data2/deploy/data"
    config:
      server.labels: { host: "tikv03" }
  - host: 10.145.156.226
    port: 20173
    status_port: 20183
    deploy_dir: "/data3/deploy"
    data_dir: "/data3/deploy/data"
    config:
      server.labels: { host: "tikv03" }
  - host: 10.145.156.227
    port: 20171
    status_port: 20181
    deploy_dir: "/data1/deploy"
    data_dir: "/data1/deploy/data"
    config:
      server.labels: { host: "tikv04" }
  - host: 10.145.156.227
    port: 20172
    status_port: 20182
    deploy_dir: "/data2/deploy"
    data_dir: "/data2/deploy/data"
    config:
      server.labels: { host: "tikv04" }
  - host: 10.145.156.227
    port: 20173
    status_port: 20183
    deploy_dir: "/data3/deploy"
    data_dir: "/data3/deploy/data"
    config:
      server.labels: { host: "tikv04" }
  - host: 10.145.156.228
    port: 20171
    status_port: 20181
    deploy_dir: "/data1/deploy"
    data_dir: "/data1/deploy/data"
    config:
      server.labels: { host: "tikv05" }
  - host: 10.145.156.228
    port: 20172
    status_port: 20182
    deploy_dir: "/data2/deploy"
    data_dir: "/data2/deploy/data"
    config:
      server.labels: { host: "tikv05" }
  - host: 10.145.156.228
    port: 20173
    status_port: 20183
    deploy_dir: "/data3/deploy"
    data_dir: "/data3/deploy/data"
    config:
      server.labels: { host: "tikv05" }

monitoring_servers:
  - host: 10.145.156.221

grafana_servers:
  - host: 10.145.156.221
yidb@paas-test-021:/home/yidb>cat scale-out.yaml
tikv_servers:
  - host: 10.145.156.228
    port: 20171
    status_port: 20181
    deploy_dir: "/data1/deploy"
    data_dir: "/data1/deploy/data"
    config:
      server.labels: { host: "tikv05" }
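Note that this scale-out.yaml re-adds 10.145.156.228:20171 with exactly the same deploy_dir, data_dir, and label as the instance already present in the original topology. To check whether PD still holds a stale store record for that address, something like the following should work (a sketch; replace `<version>` with your cluster version):

```
# List every store with its ID, address, and state (Up / Offline / Tombstone);
# a duplicate would show two store IDs for 10.145.156.228:20171
tiup ctl:<version> pd -u http://10.145.156.221:2379 store

# Drill into one store to see its state and how many regions remain on it
tiup ctl:<version> pd -u http://10.145.156.221:2379 store <store_id>
```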
storage.block-cache.capacity = (MEM_TOTAL * 0.5 / number of TiKV instances)
First, confirm the hosts' memory.
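Plugging in the numbers from the topology above (3 TiKV instances per host, each configured with storage.block-cache.capacity: 250G), each host would need roughly:

```
MEM_TOTAL = capacity * instance count / 0.5
          = 250G * 3 / 0.5
          = 1500G of RAM per host
```

If the hosts have much less RAM than that, the 250G capacity itself is a likely problem and should be reduced to fit the formula.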
Also, you can refer to the earlier link. It may be a duplicate store caused by taking the node offline and then bringing it back online.
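If a stale store is confirmed, the usual sequence (a sketch, not a definitive procedure; `<version>`, `<cluster-name>`, and `<store_id>` are placeholders) is to let the old store finish draining, then clean it up before scaling the same address out again:

```
# 1. Wait for the Offline store's region_count to reach 0, at which point
#    its state changes to Tombstone (watch it via `store <store_id>` above)

# 2. Clear Tombstone records from PD
tiup ctl:<version> pd -u http://10.145.156.221:2379 store remove-tombstone

# 3. Remove the Tombstone instances from the TiUP topology and clean up their files
tiup cluster prune <cluster-name>

# Alternatively, if the scale-in was a mistake and the store is still Offline,
# it can be brought back to Up through PD's API:
curl -X POST http://10.145.156.221:2379/pd/api/v1/store/<store_id>/state?state=Up
```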
It does feel like the duplicate-store problem caused by offlining and then re-onlining. How should a problem like this be resolved?