Leaders not rebalanced after scaling out TiKV with tiup

Background:

1. TiDB version 4.0.0; CentOS Linux release 7.6.1810 (Core)
2. The cluster was deployed with tiup
3. This is a production environment, so this is fairly urgent. Any help would be much appreciated.

I scaled out a new TiKV node, 192.168.192.38, with the following commands.
Create the scale-out topology file:

[tidb@master ~]$ cat scale-out.yaml
tikv_servers:
  - host: 192.168.192.38

Scale out:

tiup cluster scale-out test-cluster scale-out.yaml

Checking with display shows the node was added successfully.
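For reference, a sketch of that check, using the cluster name from the scale-out command above:

tiup cluster display test-cluster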

Questions:

1. Looking at the TiKV store info, the node has been up for more than 6 hours, but leader_size is still 0.
2. Leaders are not being balanced onto this TiKV node either.
3. Prometheus has no monitoring data for this TiKV node.
4. How do I get PD to balance regions and leaders onto the newly added TiKV node? (First checks are sketched below, after the store output.)

Starting component `ctl`:  pd -u http://192.168.192.32:2379 store 763660
{
  "store": {
    "id": 763660,
    "address": "192.168.192.38:20160",
    "version": "4.0.0",
    "status_address": "192.168.192.38:20180",
    "git_hash": "198a2cea01734ce8f46d55a29708f123f9133944",
    "start_timestamp": 1596438961,
    "deploy_path": "/home/tidb/deploy/tikv-20160/bin",
    "last_heartbeat": 1596461503097110507,
    "state_name": "Up"
  },
  "status": {
    "capacity": "388GiB",
    "available": "385.6GiB",
    "used_size": "31.5MiB",
    "leader_count": 0,
    "leader_weight": 1,
    "leader_score": 0,
    "leader_size": 0,
    "region_count": 18,
    "region_weight": 1,
    "region_score": 882,
    "region_size": 882,
    "start_ts": "2020-08-03T15:16:01+08:00",
    "last_heartbeat_ts": "2020-08-03T21:31:43.097110507+08:00",
    "uptime": "6h15m42.097110507s"
  }
}
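As a first pass on question 4, the usual checks are whether the balance schedulers are running and whether PD is generating any operators at all. A sketch with pd-ctl, using the same PD endpoint as above:

# List running schedulers; balance-leader-scheduler and balance-region-scheduler should appear
tiup ctl pd -u http://192.168.192.32:2379 scheduler show

# Show the scheduling config (limits and policies)
tiup ctl pd -u http://192.168.192.32:2379 config show

# Show any in-flight operators
tiup ctl pd -u http://192.168.192.32:2379 operator show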

  1. Is this the first time a TiKV node has been added on this server?
  2. Please provide the pd-ctl store and config show all output, thanks.

Background: this TiDB cluster was upgraded from 3.0.11 to 4.0.0 with tiup, and is currently on 4.0.0.
1. This is the first TiKV scale-out since the upgrade, and the first time this TiKV server has been added.
2. pd-ctl store and config show all information:
Result of tiup ctl pd -u http://192.168.192.32:2379 store 763660:

Result of config show all:

[tidb@back-paas ~]$ tiup ctl pd -u http://192.168.192.32:2379 config show all
Starting component `ctl`:  pd -u http://192.168.192.32:2379 config show all
{
  "client-urls": "http://0.0.0.0:2379",
  "peer-urls": "http://192.168.192.32:2380",
  "advertise-client-urls": "http://192.168.192.32:2379",
  "advertise-peer-urls": "http://192.168.192.32:2380",
  "name": "pd_huirui-32",
  "data-dir": "/home/tidb/deploy/data.pd",
  "force-new-cluster": false,
  "enable-grpc-gateway": true,
  "initial-cluster": "pd_huirui-31=http://192.168.192.31:2380,pd_huirui-32=http://192.168.192.32:2380,pd_huirui-33=http://192.168.192.33:2380",
  "initial-cluster-state": "new",
  "join": "",
  "lease": 3,
  "log": {
    "level": "info",
    "format": "text",
    "disable-timestamp": false,
    "file": {
      "filename": "/home/tidb/deploy/log/pd.log",
      "max-size": 300,
      "max-days": 0,
      "max-backups": 0
    },
    "development": false,
    "disable-caller": false,
    "disable-stacktrace": false,
    "disable-error-verbose": true,
    "sampling": null
  },
  "tso-save-interval": "3s",
  "metric": {
    "job": "pd_huirui-32",
    "address": "",
    "interval": "15s"
  },
  "schedule": {
    "max-snapshot-count": 3,
    "max-pending-peer-count": 16,
    "max-merge-region-size": 20,
    "max-merge-region-keys": 200000,
    "split-merge-interval": "1h0m0s",
    "enable-one-way-merge": "false",
    "enable-cross-table-merge": "false",
    "patrol-region-interval": "100ms",
    "max-store-down-time": "30m0s",
    "leader-schedule-limit": 4,
    "leader-schedule-policy": "count",
    "region-schedule-limit": 4,
    "replica-schedule-limit": 8,
    "merge-schedule-limit": 8,
    "hot-region-schedule-limit": 4,
    "hot-region-cache-hits-threshold": 3,
    "store-balance-rate": 15,
    "tolerant-size-ratio": 5,
    "low-space-ratio": 0.8,
    "high-space-ratio": 0.6,
    "scheduler-max-waiting-operator": 3,
    "enable-remove-down-replica": "true",
    "enable-replace-offline-replica": "true",
    "enable-make-up-replica": "true",
    "enable-remove-extra-replica": "true",
    "enable-location-replacement": "true",
    "enable-debug-metrics": "false",
    "schedulers-v2": [
      {
        "type": "balance-region",
        "args": null,
        "disable": false,
        "args-payload": ""
      },
      {
        "type": "balance-leader",
        "args": null,
        "disable": false,
        "args-payload": ""
      },
      {
        "type": "hot-region",
        "args": null,
        "disable": false,
        "args-payload": ""
      },
      {
        "type": "label",
        "args": null,
        "disable": false,
        "args-payload": ""
      }
    ],
    "schedulers-payload": {
      "balance-hot-region-scheduler": "null",
      "balance-leader-scheduler": "{\"name\":\"balance-leader-scheduler\",\"ranges\":[{\"start-key\":\"\",\"end-key\":\"\"}]}",
      "balance-region-scheduler": "{\"name\":\"balance-region-scheduler\",\"ranges\":[{\"start-key\":\"\",\"end-key\":\"\"}]}",
      "label-scheduler": "{\"name\":\"label-scheduler\",\"ranges\":[{\"start-key\":\"\",\"end-key\":\"\"}]}"
    },
    "store-limit-mode": "manual"
  },
  "replication": {
    "max-replicas": 3,
    "location-labels": "",
    "strictly-match-label": "false",
    "enable-placement-rules": "false"
  },
  "pd-server": {
    "use-region-storage": "true",
    "max-gap-reset-ts": "24h0m0s",
    "key-type": "table",
    "runtime-services": "",
    "metric-storage": "http://192.168.192.33:9090",
    "dashboard-address": "http://192.168.192.33:2379"
  },
  "cluster-version": "4.0.0",
  "quota-backend-bytes": "8GiB",
  "auto-compaction-mode": "periodic",
  "auto-compaction-retention-v2": "1h",
  "TickInterval": "500ms",
  "ElectionInterval": "3s",
  "PreVote": true,
  "security": {
    "cacert-path": "",
    "cert-path": "",
    "key-path": "",
    "cert-allowed-cn": null
  },
  "label-property": {},
  "WarningMsgs": [
    "Config contains undefined item: namespace-classifier"
  ],
  "DisableStrictReconfigCheck": false,
  "HeartbeatStreamBindInterval": "1m0s",
  "LeaderPriorityCheckInterval": "1m0s",
  "dashboard": {
    "tidb_cacert_path": "",
    "tidb_cert_path": "",
    "tidb_key_path": "",
    "public_path_prefix": "/dashboard"
  },
  "replication-mode": {
    "replication-mode": "majority",
    "dr-auto-sync": {
      "label-key": "",
      "primary": "",
      "dr": "",
      "primary-replicas": 0,
      "dr-replicas": 0,
      "wait-store-timeout": "1m0s",
      "wait-sync-timeout": "1m0s"
    }
  }
}

[tidb@back-paas ~]$
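For reference, schedule parameters such as leader-schedule-limit above can also be changed online through pd-ctl. A sketch only, with an illustrative value, not a recommendation for this cluster:

tiup ctl pd -u http://192.168.192.32:2379 config set leader-schedule-limit 8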

Please post screenshots of the statistics-balance and scheduler related metrics from the Grafana PD dashboard, thanks.

OK, one moment.

statistics-balance is shown in the screenshot below:

scheduler is shown in the screenshot below:


The screenshots show a large number of skips in the balance leader scheduler, which means leader scheduling has been failing all along. Please also provide the PD leader's logs and the logs of the newly added TiKV server, thanks.
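A sketch of cross-checking this from the command line, using the same PD endpoint; the PD log path comes from the config show all output above:

# Any pending leader-transfer operators?
tiup ctl pd -u http://192.168.192.32:2379 operator show leader

# Entries mentioning the new store in the PD leader's log
grep 763660 /home/tidb/deploy/log/pd.log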

Thanks a lot.
logs-pd_192_168_192_32_2379.tar (1.2 MB)
tikv.log (48.8 KB)

Only one entry related to store 763660 was found in the PD logs. It looks like the cluster did not correctly recognize the node after it was added, and the relevant scheduling never took effect for this store.

[2020/08/04 10:38:59.327 +08:00] [Info] [cluster.go:478] ["region Version changed"] [region-id=257201] [detail="StartKey Changed:{7480000000000000FF155F728000000000FF37CE4A0000000000FA} -> {7480000000000000FF155F728000000000FF37D23A0000000000FA}, EndKey:{7480000000000000FF1700000000000000F8}"] [old-version=633] [new-version=634]
[2020/08/04 10:38:59.327 +08:00] [Info] [cluster_worker.go:218] ["region batch split, generate new regions"] [region-id=257201] [origin="id:835965 start_key:\"7480000000000000FF155F728000000000FF37CE4A0000000000FA\" end_key:\"7480000000000000FF155F728000000000FF37D23A0000000000FA\" region_epoch:<conf_ver:5220 version:634 > peers:<id:835966 store_id:223229 > peers:<id:835967 store_id:188660 > peers:<id:835968 store_id:198520 > peers:<id:835969 store_id:763660 is_learner:true >"] [total=1]

If it's convenient, could you try taking this newly added TiKV server offline first, then scaling it back out into the cluster, and see whether things return to normal? We'll keep analyzing on our side.
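A minimal sketch of that offline-and-rescale flow with tiup, using the node address and cluster name from earlier in this thread:

# Take the new TiKV node offline; PD should first migrate its regions away
tiup cluster scale-in test-cluster --node 192.168.192.38:20160

# Watch the store state go Up -> Offline -> Tombstone
tiup ctl pd -u http://192.168.192.32:2379 store 763660

# Once the store is gone, scale it out again with the same topology file
tiup cluster scale-out test-cluster scale-out.yaml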

OK, thanks :blush:. I'll scale it in first, then add it back and see.

I've run the scale-in, but the status stays pending.


The regions on it are not being moved away, it just stays in this state, and PD does not schedule anything for this TiKV. How should I proceed?
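A sketch of what can be checked while the store hangs in this state, using the same endpoints: whether regions are still on the store, and whether PD is issuing operators to remove them:

# List the regions still resident on store 763660
tiup ctl pd -u http://192.168.192.32:2379 region store 763660

# Check for region operators working on the removal
tiup ctl pd -u http://192.168.192.32:2379 operator show region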

Is this offline store still pending now? Have its regions not been moved away?

Yes, even after all this time it's still in the same state. The latest status is below:
tiup cluster display test-cluster

tiup ctl pd -u http://192.168.192.32:2379 store 763660

Hello, please provide the PD leader's complete logs from 2020/08/03 15:16:00 up to now.

I provided them above; let me know if you need me to re-upload them.

The logs I see seem to start at 2020/08/04 10:29:32.074; I need them starting from 2020/08/03 15:16:00.

The file is too large to upload in one go; I can only upload it one time range at a time. Which range would be most useful?

Start from 2020/08/03 15:16:00.

2020/08/03 15:16:00 to 2020/08/03 22:00:00:
logs-pd_192_168_192_32_2379 (1).tar (4.8 MB)
2020/08/03 22:00:00 to 2020/08/04 02:00:00:
logs-pd_192_168_192_32_2379 (2).tar (2.7 MB)
If you need logs from later periods, I'll upload them as well.

The 2020/08/03 15:16:00 to 2020/08/03 22:00:00 log seems to start at 2020/08/03 15:16:35. Is there anything slightly earlier?