How do I fix a TiKV node whose state_name is Offline?

To help us respond faster, please provide the following information when asking a question; clearly described problems are handled with priority.

  • [TiDB version]: V2.1.15
  • [Problem description]: Yesterday, by mistake, I ran store delete <id> on one of our three TiKV stores to take it offline. After a restart, two TiKV nodes showed state_name Offline and only one was Up. To keep the service unaffected I added two more TiKV nodes, so there are now three stores in the Up state and two in Offline. In the Grafana PD dashboard, the leader balance ratio and region balance ratio are both 100%. The cluster is still usable at the moment; how should I fix this? (A sketch of the relevant pd-ctl commands follows this list.)
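For reference, this is roughly what the misoperation and a state re-check look like with pd-ctl; the PD address and store id are placeholders, and pd-ctl's single-command (-d) mode is assumed:

# The kind of command that marks a store Offline (the earlier misoperation):
#   pd-ctl -u http://<pd-ip>:2379 -d store delete <store-id>

# Re-check the state_name of every store after the restart and scale-out:
pd-ctl -u http://<pd-ip>:2379 -d store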

{'tidb_log_dir': '{{ deploy_dir }}/log', 'dummy': None, 'tidb_port': 4000, 'tidb_status_port': 10080, 'tidb_cert_dir': '{{ deploy_dir }}/conf/ssl'}

System information
+--------+----------------------------+
|  Host  |          Release           |
+--------+----------------------------+
| tidb21 | 5.3.11-1.el7.elrepo.x86_64 |
| tidb22 | 5.3.11-1.el7.elrepo.x86_64 |
| tidb23 | 5.3.11-1.el7.elrepo.x86_64 |
| tidb24 | 5.3.11-1.el7.elrepo.x86_64 |
| tidb25 | 5.3.11-1.el7.elrepo.x86_64 |
| tidb26 | 5.3.11-1.el7.elrepo.x86_64 |
| tidb27 | 5.3.11-1.el7.elrepo.x86_64 |
| tidb29 | 5.4.0-1.el7.elrepo.x86_64  |
| tidb30 | 5.4.0-1.el7.elrepo.x86_64  |
+--------+----------------------------+
TiDB cluster information
+---------------------+--------------+------+----+------+
|     TiDB_version    | Clu_replicas | TiDB | PD | TiKV |
+---------------------+--------------+------+----+------+
| 5.7.25-TiDB-v2.1.15 |      2       |  3   | 3  |  5   |
+---------------------+--------------+------+----+------+
Cluster node information
+------------+-------------+
|  Node_IP   | Server_info |
+------------+-------------+
| instance_0 |     tikv    |
| instance_7 |     tikv    |
| instance_1 |   pd+tidb   |
| instance_2 |   tidb+pd   |
| instance_3 |     tikv    |
| instance_4 |     tikv    |
| instance_5 |   pd+tidb   |
| instance_6 |     tikv    |
+------------+-------------+
Capacity & region count
+---------------------+-----------------+--------------+
| Storage_capacity_GB | Storage_used_GB | Region_count |
+---------------------+-----------------+--------------+
|        491.52       |      51.26      |    75343     |
+---------------------+-----------------+--------------+
QPS
+---------+----------------+-----------------+
| Clu_QPS | Duration_99_MS | Duration_999_MS |
+---------+----------------+-----------------+
|   5.51  |    1869.37     |     3569.66     |
+---------+----------------+-----------------+
Hot region information
+--------------------+----------+-----------+
|       Store        | Hot_read | Hot_write |
+--------------------+----------+-----------+
| store-store_246151 |    5     |     0     |
| store-store_246152 |    3     |     1     |
|   store-store_5    |    3     |     0     |
|   store-store_4    |    3     |     0     |
| store-store_239050 |    0     |     0     |
|   store-store_1    |    2     |     0     |
+--------------------+----------+-----------+
Disk latency information
+--------+----------+-------------+--------------+
| Device | Instance | Read_lat_MS | Write_lat_MS |
+--------+----------+-------------+--------------+
+--------+----------+-------------+--------------+

Store information:

{
  "count": 6,
  "stores": [
    {
      "store": {
        "id": 5,
        "address": "10.3.1.27:20160",
        "version": "2.1.15",
        "state_name": "Up"
      },
      "status": {
        "capacity": "98 GiB",
        "available": "67 GiB",
        "leader_count": 9182,
        "leader_weight": 1,
        "leader_score": 34194,
        "leader_size": 34194,
        "region_count": 22652,
        "region_weight": 1,
        "region_score": 90170,
        "region_size": 90170,
        "start_ts": "2019-11-28T04:31:01+08:00",
        "last_heartbeat_ts": "2019-11-28T13:37:20.150941551+08:00",
        "uptime": "9h6m19.150941551s"
      }
    },
    {
      "store": {
        "id": 239050,
        "address": "10.3.1.28:20160",
        "version": "2.1.15",
        "state_name": "Down"
      },
      "status": {
        "leader_weight": 1,
        "region_weight": 1,
        "start_ts": "1970-01-01T08:00:00+08:00"
      }
    },
    {
      "store": {
        "id": 246151,
        "address": "10.3.1.30:20160",
        "version": "2.1.15",
        "state_name": "Up"
      },
      "status": {
        "capacity": "98 GiB",
        "available": "94 GiB",
        "leader_count": 5799,
        "leader_weight": 1,
        "leader_score": 18488,
        "leader_size": 18488,
        "region_count": 7356,
        "region_weight": 1,
        "region_score": 21294,
        "region_size": 21294,
        "start_ts": "2019-11-28T04:41:52+08:00",
        "last_heartbeat_ts": "2019-11-28T13:37:15.531333289+08:00",
        "uptime": "8h55m23.531333289s"
      }
    },
    {
      "store": {
        "id": 246152,
        "address": "10.3.1.29:20160",
        "version": "2.1.15",
        "state_name": "Up"
      },
      "status": {
        "capacity": "98 GiB",
        "available": "88 GiB",
        "leader_count": 3698,
        "leader_weight": 1,
        "leader_score": 18460,
        "leader_size": 18460,
        "region_count": 3811,
        "region_weight": 1,
        "region_score": 21134,
        "region_size": 21134,
        "start_ts": "2019-11-28T04:41:52+08:00",
        "last_heartbeat_ts": "2019-11-28T13:37:23.564446403+08:00",
        "uptime": "8h55m31.564446403s"
      }
    },
    {
      "store": {
        "id": 1,
        "address": "10.3.1.25:20160",
        "state": 1,
        "version": "2.1.15",
        "state_name": "Offline"
      },
      "status": {
        "capacity": "98 GiB",
        "available": "82 GiB",
        "leader_count": 13831,
        "leader_weight": 1,
        "leader_score": 32451,
        "leader_size": 32451,
        "region_count": 24990,
        "region_weight": 1,
        "region_score": 55995,
        "region_size": 55995,
        "start_ts": "2019-11-28T04:30:58+08:00",
        "last_heartbeat_ts": "2019-11-28T13:37:23.563929082+08:00",
        "uptime": "9h6m25.563929082s"
      }
    },
    {
      "store": {
        "id": 4,
        "address": "10.3.1.26:20160",
        "state": 1,
        "version": "2.1.15",
        "state_name": "Offline"
      },
      "status": {
        "capacity": "98 GiB",
        "available": "65 GiB",
        "leader_count": 5151,
        "leader_weight": 1,
        "leader_score": 21092,
        "leader_size": 21092,
        "region_count": 16533,
        "region_weight": 1,
        "region_score": 60839,
        "region_size": 60839,
        "start_ts": "2019-11-28T04:31:00+08:00",
        "last_heartbeat_ts": "2019-11-28T13:37:23.180811015+08:00",
        "uptime": "9h6m23.180811015s"
      }
    }
  ]
}
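The listing above looks like the output of pd-ctl store; the same information can also be pulled from PD's HTTP API, which is convenient for scripting or periodic checks. A sketch, with <pd-ip> as a placeholder:

# Equivalent PD HTTP API call returning all stores and their state:
curl http://<pd-ip>:2379/pd/api/v1/stores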

Store 239050 is currently Down, and stores 1 and 4 are Offline. The balance ratio metrics you see at 100% indicate that PD is scheduling regions and leaders for these stores. Check the Statistics - balance and Scheduler panels on the PD monitoring dashboard to confirm whether that scheduling is actually in progress.
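To confirm from the command line (in addition to the Grafana panels) that PD is actually generating scheduling operators for these stores, something like the following can be used; <pd-ip> is a placeholder and pd-ctl's single-command (-d) mode is assumed:

# Operators currently running or queued in PD (balance-leader, balance-region, replica ops):
pd-ctl -u http://<pd-ip>:2379 -d operator show

# Schedulers that are currently enabled:
pd-ctl -u http://<pd-ip>:2379 -d scheduler show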

  1. For the stores in the Offline state, keep an eye on leader_count and region_count. Once the leaders and regions have been scheduled away, the state changes to Tombstone and the node finishes going offline normally.

  2. A store in the Down state means PD has not received a heartbeat from it for more than one hour (configurable via max-store-down-time); once that happens, PD starts adding replicas elsewhere for the data on that store. Is the TiKV process on 10.3.1.28 currently running or stopped? (See the command sketch after this list.)
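A rough sketch of the checks behind the two points above; the PD address is a placeholder, and the process check is run directly on 10.3.1.28:

# 1. Watch leader_count / region_count on the Offline stores (ids 1 and 4) drop toward 0:
pd-ctl -u http://<pd-ip>:2379 -d store 1
pd-ctl -u http://<pd-ip>:2379 -d store 4

# 2. On 10.3.1.28, check whether the tikv-server process behind the Down store is still running:
ps -ef | grep tikv-server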

Also, please run pd-ctl config show and share the PD scheduling-related parameter settings.
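For example (again with <pd-ip> as a placeholder), this dumps the scheduling configuration; max-store-down-time and replica-schedule-limit are the parameters most relevant to how quickly data is moved off the Offline/Down stores:

# Show PD scheduling parameters (leader-schedule-limit, region-schedule-limit,
# replica-schedule-limit, max-store-down-time, ...):
pd-ctl -u http://<pd-ip>:2379 -d config show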

Hello, I'd like to ask another question: I have one SQL statement that I run manually through Navicat over a remote connection to the database (on the company intranet). The execution time varies a lot from run to run: sometimes 3 s, sometimes 20 s, sometimes 6 s. The differences are huge and the latency is not stable at all.

Please check whether the execution plan of this SQL changes between runs, and whether the amount of data returned is the same each time. For more on execution plans, see the links below (a small example of capturing the plan follows them):

https://pingcap.com/docs-cn/stable/reference/performance/understanding-the-query-execution-plan/#理解-tidb-执行计划

https://pingcap.com/docs-cn/stable/reference/performance/sql-optimizer-overview/#物理优化简介
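One simple way to compare runs is to capture the execution plan each time the statement is executed, so a plan change can be confirmed or ruled out. A sketch using the MySQL command-line client; the host, user, and the SELECT statement itself are placeholders for your own values:

# Print the execution plan of the statement (replace the SELECT with the real SQL):
mysql -h <tidb-host> -P 4000 -u <user> -p -e "EXPLAIN SELECT ...;"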