Parameter lookup question

[TiDB Environment] Testing
[TiDB Version]
[Reproduction Path] Operations performed before the issue appeared
[Problem Encountered: symptoms and impact]

From the official documentation, the parameter:

max-replicas

My question is where I can look this parameter up. I set up a test machine with a single TiKV node and want to see whether this parameter has been set to 1.

  • The total number of replicas, i.e., the number of leaders plus followers. Defaults to 3, i.e., 1 leader and 2 followers. After this configuration is modified online, PD schedules in the background to bring the number of replicas of each Region in line with the configuration.
  • Default value: 3

https://docs.pingcap.com/zh/tidb/stable/dynamic-config#在线修改-pd-配置

Refer to the official docs: SHOW CONFIG.
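
For reference, a minimal sketch of the lookup from the SQL client, following the standard SHOW CONFIG syntax (replication.max-replicas is the full PD item name for this setting):

-- Look up the PD replica-count setting through any TiDB node
SHOW CONFIG WHERE type = 'pd' AND name = 'replication.max-replicas';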

Thanks, found it. It is indeed still the default 3, even though there is only one TiKV node.

What did you search with before that didn't turn it up? The first two community search results should have been it. :thinking:

SHOW CONFIG does find it. My understanding was that after changing it with the SET command, it should in theory be saved into PD's configuration file, but I don't see it there.

Yeah, it is stored inside PD; that is a separate thing from the configuration file.

That is the part I haven't figured out: the parameter file has a value, and SET has changed it too. If the two disagree, how do I detect the inconsistency, and which one takes effect in the end?

Parameters modified with SET CONFIG also need to be changed via tiup cluster edit-config for the cluster configuration to stay in sync and the change to be permanent; otherwise the values you SET are reset when the cluster is restarted or reloaded.
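
A sketch of the two-step procedure this reply describes (statement syntax follows TiDB's SET CONFIG; tidb-test is the cluster name used later in this thread):

-- Change the setting online; it takes effect immediately
SET CONFIG pd `replication.max-replicas` = 2;
-- The reply above then suggests persisting it in the topology:
--   tiup cluster edit-config tidb-test   (add the item under server_configs)
--   tiup cluster reload tidb-test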

My test shows exactly the opposite. For example, I wrote max-replicas as 1 in tiup cluster edit-config, then used SET to change it to 3; after restarting the cluster it still reads 3.

Does tiup cluster reload report any errors? That - host looks odd to me.

No errors.

Is it the same situation if the max-replicas item is placed in the first entry in there?

Try SET-ting it to 2 with the cluster config at 1, then restart and see... My feeling is that your cluster config isn't taking effect, so you are getting the default 3...

:wink: I think it is that - host; I am not sure whether it makes the max-replicas setting get parsed as a config under host.
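
For comparison, a cluster-level PD item would normally sit under server_configs rather than under a - host entry. A sketch based on the topo.yaml shown later in this thread (the max-replicas line is the addition under discussion):

server_configs:
  pd:
    replication.enable-placement-rules: true
    replication.location-labels: ["host"]
    replication.max-replicas: 1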

Tested again on a freshly installed cluster with no parameters changed, then modified max-replicas to 2.


After the restart, max-replicas is still 2; the change is not lost even though the cluster configuration was never updated.

This is a single-machine deployment test; here is the initial topology:

[root@tidb /]# cat topo.yaml
# # Global variables are applied to all deployments and used as the default value of
# # the deployments if a specific deployment value is missing.
global:
 user: "tidb"
 ssh_port: 22
 deploy_dir: "/tidb-deploy"
 data_dir: "/tidb-data"

# # Monitored variables are applied to all the machines.
monitored:
 node_exporter_port: 9100
 blackbox_exporter_port: 9115

server_configs:
 tidb:
   log.slow-threshold: 300
 tikv:
   readpool.storage.use-unified-pool: false
   readpool.coprocessor.use-unified-pool: true
 pd:
   replication.enable-placement-rules: true
   replication.location-labels: ["host"]

pd_servers:
 - host: 10.0.0.26

tidb_servers:
 - host: 10.0.0.26

tikv_servers:
 - host: 10.0.0.26
   port: 20160
   status_port: 20180
   config:
     server.labels: { host: "logic-host-1" }

 - host: 10.0.0.26
   port: 20161
   status_port: 20181
   config:
     server.labels: { host: "logic-host-2" }

 - host: 10.0.0.26
   port: 20162
   status_port: 20182
   config:
     server.labels: { host: "logic-host-3" }

monitoring_servers:
 - host: 10.0.0.26

grafana_servers:
 - host: 10.0.0.26

And this is the cluster configuration after changing max-replicas=2 with SET:

[root@tidb /]# tiup cluster  show-config tidb-test
tiup is checking updates for component cluster ...
Starting component `cluster`: /root/.tiup/components/cluster/v1.12.5/tiup-cluster show-config tidb-test
global:
  user: tidb
  ssh_port: 22
  ssh_type: builtin
  deploy_dir: /tidb-deploy
  data_dir: /tidb-data
  os: linux
monitored:
  node_exporter_port: 9100
  blackbox_exporter_port: 9115
  deploy_dir: /tidb-deploy/monitor-9100
  data_dir: /tidb-data/monitor-9100
  log_dir: /tidb-deploy/monitor-9100/log
server_configs:
  tidb:
    log.slow-threshold: 300
  tikv:
    readpool.coprocessor.use-unified-pool: true
    readpool.storage.use-unified-pool: false
  pd:
    replication.enable-placement-rules: true
    replication.location-labels:
    - host
  tidb_dashboard: {}
  tiflash: {}
  tiflash-learner: {}
  pump: {}
  drainer: {}
  cdc: {}
  kvcdc: {}
  grafana: {}
tidb_servers:
- host: 10.0.0.26
  ssh_port: 22
  port: 4000
  status_port: 10080
  deploy_dir: /tidb-deploy/tidb-4000
  log_dir: /tidb-deploy/tidb-4000/log
  arch: amd64
  os: linux
tikv_servers:
- host: 10.0.0.26
  ssh_port: 22
  port: 20160
  status_port: 20180
  deploy_dir: /tidb-deploy/tikv-20160
  data_dir: /tidb-data/tikv-20160
  log_dir: /tidb-deploy/tikv-20160/log
  config:
    server.labels:
      host: logic-host-1
  arch: amd64
  os: linux
- host: 10.0.0.26
  ssh_port: 22
  port: 20161
  status_port: 20181
  deploy_dir: /tidb-deploy/tikv-20161
  data_dir: /tidb-data/tikv-20161
  log_dir: /tidb-deploy/tikv-20161/log
  config:
    server.labels:
      host: logic-host-2
  arch: amd64
  os: linux
- host: 10.0.0.26
  ssh_port: 22
  port: 20162
  status_port: 20182
  deploy_dir: /tidb-deploy/tikv-20162
  data_dir: /tidb-data/tikv-20162
  log_dir: /tidb-deploy/tikv-20162/log
  config:
    server.labels:
      host: logic-host-3
  arch: amd64
  os: linux
tiflash_servers: []
pd_servers:
- host: 10.0.0.26
  ssh_port: 22
  name: pd-10.0.0.26-2379
  client_port: 2379
  peer_port: 2380
  deploy_dir: /tidb-deploy/pd-2379
  data_dir: /tidb-data/pd-2379
  log_dir: /tidb-deploy/pd-2379/log
  arch: amd64
  os: linux
monitoring_servers:
- host: 10.0.0.26
  ssh_port: 22
  port: 9090
  ng_port: 12020
  deploy_dir: /tidb-deploy/prometheus-9090
  data_dir: /tidb-data/prometheus-9090
  log_dir: /tidb-deploy/prometheus-9090/log
  external_alertmanagers: []
  arch: amd64
  os: linux
grafana_servers:
- host: 10.0.0.26
  ssh_port: 22
  port: 3000
  deploy_dir: /tidb-deploy/grafana-3000
  arch: amd64
  os: linux
  username: admin
  password: admin
  anonymous_enable: false
  root_url: ""
  domain: ""

The TiKV configuration file:

[root@tidb conf]# cat tikv.toml
# WARNING: This file is auto-generated. Do not edit! All your modification will be overwritten!
# You can use 'tiup cluster edit-config' and 'tiup cluster reload' to update the configuration
# All configuration items you want to change can be added to:
# server_configs:
#   tikv:
#     aa.b1.c3: value
#     aa.b2.c4: value
[readpool]
[readpool.coprocessor]
use-unified-pool = true
[readpool.storage]
use-unified-pool = false

[server]
[server.labels]
host = "logic-host-1"

The PD configuration file:

[root@tidb conf]# cat pd.toml
# WARNING: This file is auto-generated. Do not edit! All your modification will be overwritten!
# You can use 'tiup cluster edit-config' and 'tiup cluster reload' to update the configuration
# All configuration items you want to change can be added to:
# server_configs:
#   pd:
#     aa.b1.c3: value
#     aa.b2.c4: value
[replication]
enable-placement-rules = true
location-labels = ["host"]

pd-ctl shows that it has indeed been changed to 2:

» config show replication
{
  "max-replicas": 2,
  "location-labels": "host",
  "strictly-match-label": "false",
  "enable-placement-rules": "true",
  "enable-placement-rules-cache": "false",
  "isolation-level": ""
}

I had it mixed up: I assumed max-replicas was a TiKV parameter. Only TiKV parameters need the configuration to be re-applied through tiup after a SET change. max-replicas is actually a PD parameter, and for PD configuration items that can be modified online, a successful change is persisted to etcd rather than to the configuration file; from then on, the configuration in etcd is authoritative.
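
To make the distinction concrete, a short sketch (syntax follows TiDB's SET CONFIG; split.qps-threshold is just a sample dynamically modifiable TiKV item from the TiDB docs):

-- PD item: the online change is persisted to etcd and survives
-- restart/reload; the config file on disk is not updated.
SET CONFIG pd `replication.max-replicas` = 2;

-- TiKV item: the online change is NOT persisted; to keep it across a
-- restart/reload, also add it to the topology with
-- tiup cluster edit-config and apply it with tiup cluster reload.
SET CONFIG tikv `split.qps-threshold` = 1000;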

