Data loading and primary-key query speed drop noticeably after adding a TiKV node to a K8s cluster

  • [OS & kernel version] Google Cloud: Linux gke-tidb-default-pool-c644815e-mfdm 4.14.137+ #1 SMP Thu Aug 8 02:47:02 PDT 2019 x86_64 Intel® Xeon® CPU @ 2.20GHz GenuineIntel GNU/Linux

  • [TiDB version] Container image "pingcap/tikv:v3.0.4"

  • [Disk model] Each VM has one 100G boot disk and two SSD disks, 1G and 10G respectively

  • [Cluster node distribution] 4 VM nodes

NAME                              READY   STATUS    RESTARTS   AGE
demo-discovery-76d999b748-bgsk9   1/1     Running   0          25h
demo-monitor-648447d8f-sj7pp      3/3     Running   0          25h
demo-pd-0                         1/1     Running   0          25h
demo-pd-1                         1/1     Running   1          25h
demo-pd-2                         1/1     Running   0          25h
demo-tidb-0                       2/2     Running   0          25h
demo-tikv-0                       1/1     Running   0          17h
demo-tikv-1                       1/1     Running   0          6h15m
demo-tikv-2                       1/1     Running   0          25h
demo-tikv-3                       1/1     Running   0          12h
  • [Data volume & region count & replica count] Only a single database with a single table of 360 million rows; the region counts on the 4 TiKV nodes are 900, 1.1k, 1.1k, and 1.1k, and the store sizes are 54GB, 64GB, 70GB, and 70GB
  • [Problem description (what I did)] After adding a TiKV node to the K8s cluster, the data appears to have rebalanced across the TiKV nodes. Loading data with loader is noticeably slower than before. A sysbench oltp_point_select test gives 152.90 QPS, whereas with 300 million rows it used to be 988.23 QPS.

Hi, please provide the TiDB and TiKV monitoring information.

Here is the monitoring for the past 24 hours:

(monitoring screenshots were attached here)

Please let me know what other information is needed. Thanks!

Which nodes were added? I see the TiKV pods all have different uptimes. Also, when was the sysbench oltp_point_select test run?

  • Please also send the loader and sysbench parameters.

  • Please provide the TiDB and TiKV logs.

  • Install a full-page screenshot extension in Chrome, expand all the TiKV and TiDB monitoring panels, and send screenshots.

What are the machine and Pod configurations? Was the scale-out done by simply increasing the TiKV replica count, and were corresponding VMs added? Also, with only one TiDB pod running, TiDB can easily become the bottleneck; when adding TiKV nodes, consider increasing the number of TiDB nodes as well.

tikv-3 is the newly added node.
I just reran the test and got 205 QPS. The configuration is as follows:

imxuxiong@instance-3:~/sysbench$ sysbench --config-file=config oltp_point_select --tables=1 --table-size=300000000 run
sysbench 1.0.18 (using bundled LuaJIT 2.1.0-beta2)

imxuxiong@instance-3:~/sysbench$ cat config
mysql-host=127.0.0.1
mysql-port=4000
mysql-user=root
mysql-db=sbtest
time=300
threads=16
report-interval=10
db-driver=mysql

TiDB's slow logs during the test:
tidb0.slowlog (13.6 KB)
tidb1.slowlog (640.5 KB)

TiKV produced no logs during the test; these are its logs from the last 2 hours:
imxuxiong@instance-3:~$ cat /tmp/tikv1.log

[2019/11/01 12:01:13.042 +00:00] [ERROR] [endpoint.rs:454] [error-response] [err="region message: "stale command""]

[2019/11/01 12:01:13.071 +00:00] [ERROR] [endpoint.rs:454] [error-response] [err="region message: "peer is not leader for region 60, leader may Some(id: 14060 store_id: 5033)" not_leader { region_id: 60 leader { id: 14060 store_id: 5033 } }"]

imxuxiong@instance-3:~$ cat /tmp/tikv2.log

[2019/11/01 11:31:38.924 +00:00] [ERROR] [endpoint.rs:454] [error-response] [err="region message: "peer is not leader for region 68, leader may Some(id: 13827 store_id: 5)" not_leader { region_id: 68 leader { id: 13827 store_id: 5 } }"]

imxuxiong@instance-3:~$ cat /tmp/tikv3.log

[2019/11/01 11:31:38.809 +00:00] [ERROR] [endpoint.rs:454] [error-response] [err="region message: "peer is not leader for region 16, leader may Some(id: 10374 store_id: 4)" not_leader { region_id: 16 leader { id: 10374 store_id: 4 } }"]

[2019/11/01 11:31:38.912 +00:00] [ERROR] [endpoint.rs:454] [error-response] [err="region message: "peer is not leader for region 16, leader may Some(id: 10374 store_id: 4)" not_leader { region_id: 16 leader { id: 10374 store_id: 4 } }"]

[2019/11/01 12:01:13.075 +00:00] [ERROR] [endpoint.rs:454] [error-response] [err="region message: "[src/raftstore/store/util.rs:251]: mismatch peer id 12258 != 14060""]

Please export all the monitoring panels from the TiKV-Detail and TiDB dashboards. The way to do it is to use d E in Grafana to expand all the panels, then capture them with the screenshot extension mentioned above.

Also, please grab tidb.log from all nodes.

I shut down the GKE cluster over the weekend and just restarted it, so the previous Grafana monitoring data is gone.

After the restart, the point_select test recovered to 900 QPS.

But loader is still very slow.

The screenshots are too large and the upload failed.

I'm not sure how to get tidb.log from all nodes, so I exported partial logs with stern; the errors during the import are visible there:

tidb.log (10.9 KB) tikv.log (10.5 KB)

stern -n tidb demo-tidb --tail 10 > /tmp/tidb.log

stern -n tidb demo-tikv --tail 10 > /tmp/tikv.log

Because loader was so slow, I killed it and reran the point_select test; the QPS was only 112 again.

The logs during the sysbench test are as follows:

tidb.pointselect.log (34.1 KB) tikv.pointselect.log (10.5 KB)

I looked at the tidb.log and tikv.log you provided. tidb.log shows two TiDB restarts; was that an OOM during the loader import, or a manual restart? Also, tikv.log contains a large number of locked primary_lock errors, which also affect reads and writes and need to be handled in the test logic.

I don't quite remember whether I manually restarted TiDB in between. The loader import was very slow; can that still cause TiDB to OOM? If the logs are complete, would an OOM be recorded? Should I look in the TiDB or the TiKV logs?

The test is a sysbench point_select (primary-key lookup) and the logic has not changed, but between 300 million and 380 million rows the results vary wildly from run to run: QPS goes 900 -> 100 -> 900 -> 100.

The table schema is as follows:

imxuxiong@instance-3:/data/export-20191027-030217$ cat sbtest.sbtest1-schema.sql

/*!40103 SET TIME_ZONE='+00:00' */;

CREATE TABLE `sbtest1` ( `id` int(11) NOT NULL AUTO_INCREMENT, `k` int(11) NOT NULL DEFAULT '0', `c` char(120) NOT NULL DEFAULT '', `pad` char(60) NOT NULL DEFAULT '', PRIMARY KEY (`id`), KEY `k_1` (`k`) ) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_bin AUTO_INCREMENT=10004202;

However, since the data was generated by manually copying rows, the k, c, and pad columns repeat every 1 million ids.

In tidb.log there are restart operations at 2019/11/04 06:51:05.798 and 2019/11/04 06:51:06.247. To check whether it was an OOM, look at the system log with dmesg -T | grep -i oom. Also, tikv.log contains a large number of locked primary_lock: 7480000000000000315F698000000000000001038000000000287E8D03800000001667F891 lock_version: 412314294144729091 key: 7480000000000000315F728000000016680B9B lock_ttl: 6861 txn_size: 4925"] errors, which also slow down reads and writes. Run curl http://{TiDBIP}:10080/schema?table_id=49 to see which table this is; it looks like that table has a problem.

I'm running on a K8s cluster; after the containers were shut down, dmesg no longer shows yesterday's logs.

The schema output is as follows; id=49 appears to be sbtest1, the only table in my test database:

imxuxiong@instance-3:~$ curl localhost:10080/schema?table_id=49
Handling connection for 10080
{
"id": 49,
"name": {
 "O": "sbtest1",
 "L": "sbtest1"
},
"charset": "utf8mb4",
"collate": "utf8mb4_bin",
"cols": [
 {
  "id": 1,
  "name": {
   "O": "id",
   "L": "id"
  },
  "offset": 0,
  "origin_default": null,
  "default": null,
  "default_bit": null,
  "generated_expr_string": "",
  "generated_stored": false,
  "dependences": null,
  "type": {
   "Tp": 3,
   "Flag": 515,
   "Flen": 11,
   "Decimal": 0,
   "Charset": "binary",
   "Collate": "binary",
   "Elems": null
  },
  "state": 5,
  "comment": "",
  "version": 2
 },
 {
  "id": 2,
  "name": {
   "O": "k",
   "L": "k"
  },
  "offset": 1,
  "origin_default": null,
  "default": "0",
  "default_bit": null,
  "generated_expr_string": "",
  "generated_stored": false,
  "dependences": null,
  "type": {
   "Tp": 3,
   "Flag": 9,
   "Flen": 11,
   "Decimal": 0,
   "Charset": "binary",
   "Collate": "binary",
   "Elems": null
  },
  "state": 5,
  "comment": "",
  "version": 2
 },
 {
  "id": 3,
  "name": {
   "O": "c",
   "L": "c"
  },
  "offset": 2,
  "origin_default": null,
  "default": "",
  "default_bit": null,
  "generated_expr_string": "",
  "generated_stored": false,
  "dependences": null,
  "type": {
   "Tp": 254,
   "Flag": 1,
   "Flen": 120,
   "Decimal": 0,
   "Charset": "utf8mb4",
   "Collate": "utf8mb4_bin",
   "Elems": null
  },
  "state": 5,
  "comment": "",
  "version": 2
 },
 {
  "id": 4,
  "name": {
   "O": "pad",
   "L": "pad"
  },
  "offset": 3,
  "origin_default": null,
  "default": "",
  "default_bit": null,
  "generated_expr_string": "",
  "generated_stored": false,
  "dependences": null,
  "type": {
   "Tp": 254,
   "Flag": 1,
   "Flen": 60,
   "Decimal": 0,
   "Charset": "utf8mb4",
   "Collate": "utf8mb4_bin",
   "Elems": null
  },
  "state": 5,
  "comment": "",
  "version": 2
 }
],
"index_info": [
 {
  "id": 1,
  "idx_name": {
   "O": "k_1",
   "L": "k_1"
  },
  "tbl_name": {
   "O": "",
   "L": ""
  },
  "idx_cols": [
   {
    "name": {
     "O": "k",
     "L": "k"
    },
    "offset": 1,
    "length": -1
   }
  ],
  "is_unique": false,
  "is_primary": false,
  "state": 5,
  "comment": "",
  "index_type": 1
 }
],
"fk_info": null,
"state": 5,
"pk_is_handle": true,
"comment": "",
"auto_inc_id": 10004202,
"max_col_id": 4,
"max_idx_id": 1,
"update_timestamp": 412194168133124098,
"ShardRowIDBits": 0,
"max_shard_row_id_bits": 0,
"pre_split_regions": 0,
"partition": null,
"compression": "",
"view": null,
"version": 3
}

We suggest reproducing this scenario again and collecting the necessary information, then we can troubleshoot it together:

1. Capture 3 logs: the TiDB log, the OS-level dmesg output, and the TiKV log.

2. Export two dashboards: the tikv-detail panel and the tidb-server panel.

3. Use pd-ctl to check the status of all stores.

4. Use pd-ctl to check the information of region 60 mentioned in tikv.log.

With 400 million rows in the single table, I reran the point-select (200 QPS) and read-write (60 QPS) tests; both results are much worse than before.
The records are below.
If necessary, I can also rerun the tests.

tidb.log (2.1 MB) tikv.log (713.9 KB)
dmesg.txt (64.9 KB)

The tikv-detail screenshot is too large and failed to upload. Shared here: Grafana

The images are uploaded to Baidu Netdisk. Link: Baidu Netdisk (link no longer exists), extraction code: 5ewc

imxuxiong@instance-3:~/sysbench$ kubectl exec -it demo-pd-0 -n tidb ./pd-ctl store
{
  "count": 4,
  "stores": [
    {
      "store": {
        "id": 1,
        "address": "demo-tikv-1.demo-tikv-peer.tidb.svc:20160",
        "labels": [
          {
            "key": "host",
            "value": "gke-tidb-default-pool-6dbef19b-jnwc"
          }
        ],
        "version": "3.0.4",
        "state_name": "Up"
      },
      "status": {
        "capacity": "98 GiB",
        "available": "28 GiB",
        "leader_count": 379,
        "leader_weight": 1,
        "leader_score": 33010,
        "leader_size": 33010,
        "region_count": 1216,
        "region_weight": 1,
        "region_score": 589544279.3093362,
        "region_size": 105022,
        "start_ts": "2019-11-08T00:32:10Z",
        "last_heartbeat_ts": "2019-11-08T02:09:11.619847293Z",
        "uptime": "1h37m1.619847293s"
      }
    },
    {
      "store": {
        "id": 4,
        "address": "demo-tikv-2.demo-tikv-peer.tidb.svc:20160",
        "labels": [
          {
            "key": "host",
            "value": "gke-tidb-default-pool-6dbef19b-3ht3"
          }
        ],
        "version": "3.0.4",
        "state_name": "Up"
      },
      "status": {
        "capacity": "98 GiB",
        "available": "30 GiB",
        "leader_count": 379,
        "leader_weight": 1,
        "leader_score": 32280,
        "leader_size": 32280,
        "region_count": 1105,
        "region_weight": 1,
        "region_score": 520626272.81566286,
        "region_size": 95664,
        "start_ts": "2019-11-08T00:32:12Z",
        "last_heartbeat_ts": "2019-11-08T02:09:06.629200146Z",
        "uptime": "1h36m54.629200146s"
      }
    },
    {
      "store": {
        "id": 5,
        "address": "demo-tikv-0.demo-tikv-peer.tidb.svc:20160",
        "labels": [
          {
            "key": "host",
            "value": "gke-tidb-default-pool-6dbef19b-6nv1"
          }
        ],
        "version": "3.0.4",
        "state_name": "Up"
      },
      "status": {
        "capacity": "98 GiB",
        "available": "29 GiB",
        "leader_count": 381,
        "leader_weight": 1,
        "leader_score": 32929,
        "leader_size": 32929,
        "region_count": 1165,
        "region_weight": 1,
        "region_score": 571088412.7278748,
        "region_size": 100897,
        "start_ts": "2019-11-08T00:31:15Z",
        "last_heartbeat_ts": "2019-11-08T02:09:08.980457636Z",
        "uptime": "1h37m53.980457636s"
      }
    },
    {
      "store": {
        "id": 5033,
        "address": "demo-tikv-3.demo-tikv-peer.tidb.svc:20160",
        "labels": [
          {
            "key": "host",
            "value": "gke-tidb-default-pool-6dbef19b-5gw1"
          }
        ],
        "version": "3.0.4",
        "state_name": "Up"
      },
      "status": {
        "capacity": "98 GiB",
        "available": "29 GiB",
        "leader_count": 365,
        "leader_weight": 1,
        "leader_score": 32013,
        "leader_size": 32013,
        "region_count": 1060,
        "region_weight": 1,
        "region_score": 544490989.9747672,
        "region_size": 91872,
        "sending_snap_count": 1,
        "start_ts": "2019-11-08T00:31:07Z",
        "last_heartbeat_ts": "2019-11-08T02:09:11.864320799Z",
        "uptime": "1h38m4.864320799s"
      }
    }
  ]
}

imxuxiong@instance-3:~/sysbench$ kubectl exec -it demo-pd-0 -n tidb ./pd-ctl region 60
{
  "id": 60,
  "start_key": "7480000000000000FF1D00000000000000F8",
  "end_key": "7480000000000000FF1F00000000000000F8",
  "epoch": {
    "conf_ver": 50,
    "version": 15
  },
  "peers": [
    {
      "id": 20431,
      "store_id": 4
    },
    {
      "id": 23069,
      "store_id": 1
    },
    {
      "id": 24085,
      "store_id": 5
    }
  ],
  "leader": {
    "id": 23069,
    "store_id": 1
  },
  "approximate_size": 1
}

Thanks for the feedback; we are looking into it and will get back to you as soon as possible.

Hello:
1. From the monitoring you provided, the apply log duration is quite long. Please check the disk performance metrics for the TiKV disks (select the TiKV servers) to see whether they are normal.

2. Looking at the monitoring, your nodes 2 and 3 appear to be slower than 0 and 1?

3. During the import, check the monitoring to see whether there is a hotspot on tikv-3. Does the table have an auto-increment id column? If so, try to scatter it.

Thanks for the reply.

Since the environment is entirely on GKE, I only start the cluster when testing and shut it down afterwards. The results sometimes fluctuate a lot between runs, which is very likely related to the unstable network and IO environment on GKE. But could it also be related to TiDB's scheduling policy?

I just finished several more sysbench point-select and read-write tests; the results are better than yesterday's.

But read-write fluctuates a lot. See below:

imxuxiong@instance-3:~/sysbench$ sysbench --config-file=config oltp_read_write --tables=1 --table-size=400000000 run
sysbench 1.0.18 (using bundled LuaJIT 2.1.0-beta2)

Running the test with following options:
Number of threads: 16
Report intermediate results every 10 second(s)
Initializing random number generator from current time

 [ 10s ] thds: 16 tps: 19.69 qps: 414.98 (r/w/o: 292.91/81.08/40.99) lat (ms,95%): 1561.52 err/s: 0.00 reconn/s: 0.00
[ 20s ] thds: 16 tps: 31.10 qps: 621.37 (r/w/o: 435.25/124.01/62.11) lat (ms,95%): 977.74 err/s: 0.00 reconn/s: 0.00
[ 30s ] thds: 16 tps: 27.80 qps: 558.70 (r/w/o: 389.30/113.60/55.80) lat (ms,95%): 926.33 err/s: 0.00 reconn/s: 0.00
[ 40s ] thds: 16 tps: 21.70 qps: 436.90 (r/w/o: 306.30/87.20/43.40) lat (ms,95%): 1401.61 err/s: 0.00 reconn/s: 0.00
[ 50s ] thds: 16 tps: 32.20 qps: 632.10 (r/w/o: 442.90/124.80/64.40) lat (ms,95%): 977.74 err/s: 0.00 reconn/s: 0.00
[ 60s ] thds: 16 tps: 39.60 qps: 793.90 (r/w/o: 556.00/158.70/79.20) lat (ms,95%): 646.19 err/s: 0.00 reconn/s: 0.00
[ 70s ] thds: 16 tps: 40.60 qps: 813.00 (r/w/o: 569.40/162.40/81.20) lat (ms,95%): 601.29 err/s: 0.00 reconn/s: 0.00
[ 80s ] thds: 16 tps: 41.00 qps: 825.30 (r/w/o: 578.80/164.50/82.00) lat (ms,95%): 601.29 err/s: 0.00 reconn/s: 0.00
[ 90s ] thds: 16 tps: 36.60 qps: 724.20 (r/w/o: 505.40/145.60/73.20) lat (ms,95%): 787.74 err/s: 0.00 reconn/s: 0.00
[ 100s ] thds: 16 tps: 42.70 qps: 860.40 (r/w/o: 602.70/172.30/85.40) lat (ms,95%): 590.56 err/s: 0.00 reconn/s: 0.00
[ 110s ] thds: 16 tps: 46.60 qps: 932.80 (r/w/o: 653.70/186.00/93.10) lat (ms,95%): 530.08 err/s: 0.00 reconn/s: 0.00
 [ 120s ] thds: 16 tps: 44.90 qps: 897.39 (r/w/o: 628.09/179.40/89.90) lat (ms,95%): 549.52 err/s: 0.00 reconn/s: 0.00
[ 130s ] thds: 16 tps: 46.50 qps: 923.80 (r/w/o: 645.70/185.10/93.00) lat (ms,95%): 520.62 err/s: 0.00 reconn/s: 0.00
[ 140s ] thds: 16 tps: 44.00 qps: 886.31 (r/w/o: 620.30/177.80/88.20) lat (ms,95%): 539.71 err/s: 0.00 reconn/s: 0.00
[ 150s ] thds: 16 tps: 47.30 qps: 941.70 (r/w/o: 659.70/187.50/94.50) lat (ms,95%): 520.62 err/s: 0.00 reconn/s: 0.00
[ 160s ] thds: 16 tps: 46.60 qps: 934.00 (r/w/o: 653.50/187.20/93.30) lat (ms,95%): 511.33 err/s: 0.00 reconn/s: 0.00
[ 170s ] thds: 16 tps: 46.80 qps: 928.99 (r/w/o: 649.80/185.50/93.70) lat (ms,95%): 530.08 err/s: 0.00 reconn/s: 0.00
[ 180s ] thds: 16 tps: 29.20 qps: 598.40 (r/w/o: 418.30/121.60/58.50) lat (ms,95%): 1258.08 err/s: 0.00 reconn/s: 0.00
 [ 190s ] thds: 16 tps: 14.10 qps: 278.50 (r/w/o: 195.80/54.50/28.20) lat (ms,95%): 2585.31 err/s: 0.00 reconn/s: 0.00
[ 200s ] thds: 16 tps: 13.80 qps: 281.90 (r/w/o: 197.30/57.00/27.60) lat (ms,95%): 2238.47 err/s: 0.00 reconn/s: 0.00
[ 210s ] thds: 16 tps: 3.10 qps: 59.50 (r/w/o: 41.60/11.70/6.20) lat (ms,95%): 6476.48 err/s: 0.00 reconn/s: 0.00
[ 220s ] thds: 16 tps: 0.30 qps: 5.60 (r/w/o: 3.70/1.30/0.60) lat (ms,95%): 13797.01 err/s: 0.00 reconn/s: 0.00
[ 230s ] thds: 16 tps: 1.60 qps: 28.80 (r/w/o: 20.50/5.10/3.20) lat (ms,95%): 22034.77 err/s: 0.00 reconn/s: 0.00
[ 240s ] thds: 16 tps: 1.60 qps: 28.10 (r/w/o: 18.90/6.00/3.20) lat (ms,95%): 26861.48 err/s: 0.00 reconn/s: 0.00
[ 250s ] thds: 16 tps: 1.90 qps: 48.40 (r/w/o: 34.40/10.20/3.80) lat (ms,95%): 17435.99 err/s: 0.00 reconn/s: 0.00
[ 260s ] thds: 16 tps: 1.90 qps: 34.20 (r/w/o: 24.60/5.80/3.80) lat (ms,95%): 17124.84 err/s: 0.00 reconn/s: 0.00
[ 270s ] thds: 16 tps: 1.90 qps: 38.00 (r/w/o: 26.50/7.70/3.80) lat (ms,95%): 16519.10 err/s: 0.00 reconn/s: 0.00
[ 280s ] thds: 16 tps: 2.50 qps: 48.30 (r/w/o: 33.40/9.90/5.00) lat (ms,95%): 16224.31 err/s: 0.00 reconn/s: 0.00
[ 290s ] thds: 16 tps: 1.00 qps: 20.70 (r/w/o: 13.50/5.20/2.00) lat (ms,95%): 12384.09 err/s: 0.00 reconn/s: 0.00
[ 300s ] thds: 16 tps: 2.70 qps: 52.60 (r/w/o: 38.10/9.10/5.40) lat (ms,95%): 16819.24 err/s: 0.00 reconn/s: 0.00
[ 310s ] thds: 15 tps: 0.60 qps: 7.90 (r/w/o: 4.20/3.10/0.60) lat (ms,95%): 9118.47 err/s: 0.00 reconn/s: 0.00
[ 320s ] thds: 10 tps: 0.80 qps: 1.00 (r/w/o: 0.00/0.20/0.80) lat (ms,95%): 21255.35 err/s: 0.00 reconn/s: 0.00
 [ 330s ] thds: 10 tps: 0.00 qps: 0.00 (r/w/o: 0.00/0.00/0.00) lat (ms,95%): 0.00 err/s: 0.00 reconn/s: 0.00
SQL statistics:
    queries performed:
        read:                            102606
        write:                           29311
        other:                           14663
        total:                           146580
    transactions:                        7329   (21.96 per sec.)
    queries:                             146580 (439.12 per sec.)
    ignored errors:                      0      (0.00 per sec.)
    reconnects:                          0      (0.00 per sec.)

General statistics:
    total time:                          333.8007s
    total number of events:              7329

Latency (ms):
         min:                                  126.82
         avg:                                  682.61
         max:                                35944.98
         95th percentile:                     1213.57
         sum:                              5002841.81

The table has an auto-increment column, but I only noticed the obvious performance problem after 300 million rows. How do I "scatter" it?
The table schema is as follows:

imxuxiong@instance-3:/data/export-20191027-030217$ cat sbtest.sbtest1-schema.sql 
/*!40103 SET TIME_ZONE='+00:00' */;
CREATE TABLE `sbtest1` (
  `id` int(11) NOT NULL AUTO_INCREMENT,
  `k` int(11) NOT NULL DEFAULT '0',
  `c` char(120) NOT NULL DEFAULT '',
  `pad` char(60) NOT NULL DEFAULT '',
  PRIMARY KEY (`id`),
  KEY `k_1` (`k`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_bin AUTO_INCREMENT=10004202;

Server performance monitoring during the test (including IO) is below:

The TiDB and TiKV monitoring screenshots are too large, so I uploaded them to cloud storage. Link: Baidu Netdisk (link no longer exists), extraction code: jnid

Hello:
If the column is an int auto-increment column, writes go in sequential order, which easily creates a hotspot. Does this column have to be an auto-increment primary key for your business?
If the schema can be changed, consider using the shard_row_id_bits and pre_split_regions parameters; please refer to the following articles, thanks:
https://pingcap.com/docs-cn/stable/reference/configuration/tidb-server/tidb-specific-variables/#shard_row_id_bits
https://pingcap.com/docs-cn/stable/reference/sql/statements/split-region/#pre_split_regions
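A rough sketch of what that could look like for sbtest1 (the shard and pre-split values below are purely illustrative; note that SHARD_ROW_ID_BITS only takes effect when the table does not use its integer primary key as the row handle, so the primary key definition would have to change):

-- Illustrative only: with id no longer the row handle, TiDB can shard the
-- internal row IDs and pre-split the table into regions at creation time.
CREATE TABLE `sbtest1` (
  `id`  int(11) NOT NULL,
  `k`   int(11) NOT NULL DEFAULT '0',
  `c`   char(120) NOT NULL DEFAULT '',
  `pad` char(60) NOT NULL DEFAULT '',
  UNIQUE KEY `id` (`id`),
  KEY `k_1` (`k`)
) SHARD_ROW_ID_BITS = 4 PRE_SPLIT_REGIONS = 3;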

Now that 400 million rows have already been imported, do I still need to run "split table"? Roughly how does the time for that operation relate to the amount of existing data? For example, importing 400 million rows took several days; how does the split time compare to the import time?

sysbench's point-select should already be spread out, right? Does split only address the data import/write problem?

Also, I only observed the obvious performance drop after adding a TiKV node at around 300 million rows. If this were a TiKV hotspot problem, wouldn't it have been observable from the very start of the import?
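For reference, the SPLIT TABLE statement described in the docs linked above would look roughly like this for the existing primary-key range (the bounds and region count here are only illustrative, not a recommendation):

-- Illustrative only: evenly pre-split the id range of sbtest1 into regions.
SPLIT TABLE `sbtest1` BETWEEN (0) AND (400000000) REGIONS 128;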