In common scenarios, a TiKV copTask scans in-memory data too slowly, slower than MySQL, PG, and others

【 TiDB Environment 】PoC
【 TiDB Version 】6.6
【 Reproduction Steps 】Run a full table scan and observe TiKV copTask behavior
【 Problem: Symptom and Impact 】copTask scans data too slowly
【 Resource Configuration 】Everything runs single-threaded and in memory, so resource usage is not a factor
Testing common scenarios shows that scanning data in TiKV is slower than in MySQL or PG. If the LSM-tree were the reason, note that some other LSM-tree-based databases scan no slower than MySQL.
In actual tests, a single TiKV copTask's full-table-scan speed is 1-2x slower than single-threaded MySQL and about 5x slower than single-threaded PG.
Why is the TiKV scan this slow, and is there room for optimization later on?
Since all reads here are from memory, the configured block size does not matter much, but it is listed below for reference.
The test statement is: select sum(C_NATIONKEY) from customer; executed as a full table scan.
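For reference, a minimal sketch of how the plan can be pinned to a TiKV full table scan (the read_from_storage hint only matters if a TiFlash replica exists for the table; without one, the plan already falls to TableFullScan because C_NATIONKEY has no secondary index):

-- Sketch: confirm the statement executes as a TableFullScan on TiKV.
EXPLAIN SELECT /*+ READ_FROM_STORAGE(TIKV[customer]) */ SUM(C_NATIONKEY) FROM customer;
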
The table structure is identical across all databases:

mysql> show create table customer \G
*************************** 1. row ***************************
       Table: customer
Create Table: CREATE TABLE `customer` (
  `C_CUSTKEY` bigint NOT NULL,
  `C_NAME` varchar(25) NOT NULL,
  `C_ADDRESS` varchar(40) NOT NULL,
  `C_NATIONKEY` bigint NOT NULL,
  `C_PHONE` char(15) NOT NULL,
  `C_ACCTBAL` decimal(15,2) NOT NULL,
  `C_MKTSEGMENT` char(10) NOT NULL,
  `C_COMMENT` varchar(117) NOT NULL,
  PRIMARY KEY (`C_CUSTKEY`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_0900_ai_ci
1 row in set (0.00 sec)

The TiDB test is as follows:

mysql> select version();
+--------------------+
| version()          |
+--------------------+
| 5.7.25-TiDB-v6.6.0 |
+--------------------+
1 row in set (0.00 sec)

mysql> show config where name like 'rocksdb%block-size%';
+------+----------------------+------------------------------+-------+
| Type | Instance             | Name                         | Value |
+------+----------------------+------------------------------+-------+
| tikv | 192.168.31.201:20160 | rocksdb.defaultcf.block-size | 32KiB |
| tikv | 192.168.31.201:20160 | rocksdb.lockcf.block-size    | 16KiB |
| tikv | 192.168.31.201:20160 | rocksdb.raftcf.block-size    | 16KiB |
| tikv | 192.168.31.201:20160 | rocksdb.writecf.block-size   | 32KiB |
+------+----------------------+------------------------------+-------+
4 rows in set (0.01 sec)

-- Setting this to 1 makes it easy to measure the elapsed time of a single copTask; while a single copTask runs, CPU usage stays at 100% (one CPU core)
mysql> set tidb_distsql_scan_concurrency=1;
Query OK, 0 rows affected (0.00 sec)

mysql> explain analyze select sum(C_NATIONKEY) from customer;
+----------------------------+------------+---------+-----------+----------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+-------------------------------------------------+-----------+------+
| id                         | estRows    | actRows | task      | access object  | execution info                                                                                                                                                                                                                                                                                          | operator info                                   | memory    | disk |
+----------------------------+------------+---------+-----------+----------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+-------------------------------------------------+-----------+------+
| StreamAgg_16               | 1.00       | 1       | root      |                | time:754.7ms, loops:2                                                                                                                                                                                                                                                                                   | funcs:sum(Column#11)->Column#9                  | 1.45 KB   | N/A  |
| └─TableReader_17           | 1.00       | 3       | root      |                | time:754.7ms, loops:2, cop_task: {num: 3, max: 306.3ms, min: 169ms, avg: 251.4ms, p95: 306.3ms, max_proc_keys: 603265, p95_proc_keys: 603265, tot_proc: 751ms, rpc_num: 3, rpc_time: 754.2ms, copr_cache: disabled, build_task_duration: 12.4µs, max_distsql_concurrency: 1}                            | data:StreamAgg_8                                | 442 Bytes | N/A  |
|   └─StreamAgg_8            | 1.00       | 3       | cop[tikv] |                | tikv_task:{proc max:305ms, min:168ms, avg: 250.3ms, p80:305ms, p95:305ms, iters:1466, tasks:3}, scan_detail: {total_process_keys: 1500000, total_process_keys_size: 305225771, total_keys: 1500003, get_snapshot_time: 76.4µs, rocksdb: {key_skipped_count: 1500000, block: {cache_hit_count: 10004}}}  | funcs:sum(tpch.customer.c_nationkey)->Column#11 | N/A       | N/A  |
|     └─TableFullScan_15     | 1500000.00 | 1500000 | cop[tikv] | table:customer | tikv_task:{proc max:246ms, min:132ms, avg: 204ms, p80:246ms, p95:246ms, iters:1466, tasks:3}                                                                                                                                                                                                            | keep order:false                                | N/A       | N/A  |
+----------------------------+------------+---------+-----------+----------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+-------------------------------------------------+-----------+------+
4 rows in set (0.76 sec)

From total_process_keys: 1500000, total_process_keys_size: 305225771, total_keys: 1500003 in the execution info, we can see that the scan encountered almost no un-GC'd versions; every key read was valid data. From rocksdb: {key_skipped_count: 1500000, block: {cache_hit_count: 10004}} we can see that all reads were logical (in-memory) reads, 10004 in total.
The table scan operator here is TableFullScan_15. A total of 3 tasks were executed, each taking avg: 204ms on average, so the table scan cost about 3 × 204 ms = 612 ms in total.

The MySQL test is as follows:

mysql> select version();
+-----------+
| version() |
+-----------+
| 8.0.32    |
+-----------+
1 row in set (0.00 sec)

mysql> show variables like '%innodb_page_size%';
+------------------+-------+
| Variable_name    | Value |
+------------------+-------+
| innodb_page_size | 16384 |
+------------------+-------+
1 row in set (0.00 sec)

mysql> show status like 'Innodb_data_reads';
+-------------------+---------+
| Variable_name     | Value   |
+-------------------+---------+
| Innodb_data_reads | 3710851 |
+-------------------+---------+
1 row in set (0.01 sec)

mysql> show status like 'Innodb_buffer_pool_read_requests';
+----------------------------------+-----------+
| Variable_name                    | Value     |
+----------------------------------+-----------+
| Innodb_buffer_pool_read_requests | 414272227 |
+----------------------------------+-----------+
1 row in set (0.00 sec)

mysql> explain  analyze select sum(C_NATIONKEY) from customer;
+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| EXPLAIN                                                                                                                                                                                                                        |
+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| -> Aggregate: sum(customer.C_NATIONKEY)  (cost=234307.26 rows=1) (actual time=334.921..334.922 rows=1 loops=1)
    -> Table scan on customer  (cost=120530.06 rows=1137772) (actual time=0.097..237.381 rows=1500000 loops=1)
 |
+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
1 row in set (0.33 sec)

mysql> show status like 'Innodb_data_reads';
+-------------------+---------+
| Variable_name     | Value   |
+-------------------+---------+
| Innodb_data_reads | 3710851 |
+-------------------+---------+
1 row in set (0.00 sec)

mysql> show status like 'Innodb_buffer_pool_read_requests';
+----------------------------------+-----------+
| Variable_name                    | Value     |
+----------------------------------+-----------+
| Innodb_buffer_pool_read_requests | 414310612 |
+----------------------------------+-----------+
1 row in set (0.00 sec)

The Innodb_data_reads counter reads 3710851 both before and after the statement, so no physical reads occurred.
From Innodb_buffer_pool_read_requests before and after the statement, the number of logical reads is 414310612 - 414272227 = 38385.
From the execution info, Table scan on customer (cost=120530.06 rows=1137772) (actual time=0.097..237.381 rows=1500000 loops=1), the actual table scan time totals 237.381 ms.

The PostgreSQL test is as follows:

tpch=# select version();
                                                 version                                                 
---------------------------------------------------------------------------------------------------------
 PostgreSQL 15.2 on x86_64-pc-linux-gnu, compiled by gcc (GCC) 4.8.5 20150623 (Red Hat 4.8.5-44), 64-bit
(1 row)

tpch=# set max_parallel_workers_per_gather=0;
SET

tpch=# set track_io_timing=on;
SET

tpch=# SELECT current_setting('block_size');
 current_setting 
-----------------
 8192
(1 row)

tpch=# explain (analyze true,buffers true,verbose true,timing) select sum(C_NATIONKEY) from customer;
                                                           QUERY PLAN                                                            
---------------------------------------------------------------------------------------------------------------------------------
 Aggregate  (cost=56484.90..56484.91 rows=1 width=32) (actual time=234.525..234.526 rows=1 loops=1)
   Output: sum(c_nationkey)
   Buffers: shared hit=37734
   ->  Seq Scan on public.customer  (cost=0.00..52734.72 rows=1500072 width=8) (actual time=0.012..111.046 rows=1500000 loops=1)
         Output: c_custkey, c_name, c_address, c_nationkey, c_phone, c_acctbal, c_mktsegment, c_comment
         Buffers: shared hit=37734
 Planning Time: 0.060 ms
 Execution Time: 234.556 ms
(8 rows)

The execution info shows that all reads were logical reads (shared hit), 37734 in total.
From the table scan line, Seq Scan on public.customer (cost=0.00..52734.72 rows=1500072 width=8) (actual time=0.012..111.046 rows=1500000 loops=1), the single-threaded table scan took 111.046 ms in total.
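As a side note, a sketch of how the PostgreSQL side can be kept a pure logical-read test (assumes the pg_prewarm and pg_buffercache extensions are available; this was not part of the original run):

-- Sketch: warm the table into shared_buffers and check how many of its blocks are cached.
CREATE EXTENSION IF NOT EXISTS pg_prewarm;
CREATE EXTENSION IF NOT EXISTS pg_buffercache;
SELECT pg_prewarm('customer');                      -- returns the number of blocks loaded
SELECT count(*) AS cached_blocks
  FROM pg_buffercache
 WHERE relfilenode = pg_relation_filenode('customer');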

With parallelism enabled (for example TiDB's default tidb_distsql_scan_concurrency=15), the other databases also speed up well, roughly linearly as long as CPU is sufficient; some hybrid row/column-store databases do even better.
A long all-in-memory scan time means a lot of CPU time, so processing the same volume of data requires more resources.
So the question is: for the same data volume, why do the logical reads of a TiKV copTask scan take so long, and is there room for optimization in later releases?
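Putting the single-threaded numbers from the three runs above side by side: TiKV spends about 3 × 204 ms ≈ 612 ms of copTask scan time on the 1.5 million rows, versus 237.381 ms for MySQL and 111.046 ms for PostgreSQL, i.e. roughly 2.6x and 5.5x more scan time for the same data.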

First, what are the background and configuration of the test?
Second, what are the goals and conditions involved?
Third, what result do you expect?
Finally, what kind of answer are you looking for?

Highlighting the key points.

This reply hits the nail on the head.

@人如其名
“total_process_keys: 1500000, total_process_keys_size: 305225771, total_keys: 1500003 shows that the scanned data contains almost no un-GC'd versions and all reads are of valid data. From rocksdb: {key_skipped_count: 1500000, block: {cache_hit_count: 10004}}”
The block cache hit count is only 10004, while there are 1,500,000 keys in total, so physical reads probably happened here. If convenient, please upload the monitoring data to Clinic (https://docs.pingcap.com/tidb/dev/clinic-introduction) so we can analyze this further.

scan_detail: {get_snapshot_time: 14.8µs, rocksdb: {block: {}}}
I tested a similar statement; why are there no values in the rocksdb section here?

mysql> explain analyze select sum(L_EXTENDEDPRICE) from lineitem;
+----------------------------+------------+---------+-----------+----------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+------------------------------------------------------+-----------+------+
| id                         | estRows    | actRows | task      | access object  | execution info                                                                                                                                                                                                                                                                                                                                                    | operator info                                        | memory    | disk |
+----------------------------+------------+---------+-----------+----------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+------------------------------------------------------+-----------+------+
| HashAgg_11                 | 1.00       | 1       | root      |                | time:1.37s, loops:2, partial_worker:{wall_time:1.366558591s, concurrency:5, task_num:1, tot_wait:6.832190204s, tot_exec:31.755µs, tot_time:6.832224842s, max:1.366448056s, p95:1.366448056s}, final_worker:{wall_time:0s, concurrency:5, task_num:1, tot_wait:6.832286057s, tot_exec:16.792µs, tot_time:6.832305524s, max:1.366465061s, p95:1.366465061s}         | funcs:sum(Column#18)->Column#17                      | 20.2 KB   | N/A  |
| └─TableReader_12           | 1.00       | 13      | root      |                | time:1.37s, loops:2, cop_task: {num: 13, max: 1.37s, min: 708.7ms, avg: 998.8ms, p95: 1.37s, max_proc_keys: 467326, p95_proc_keys: 467326, tot_proc: 12.9s, tot_wait: 55ms, rpc_num: 13, rpc_time: 13s, copr_cache_hit_ratio: 0.00, distsql_concurrency: 15}                                                                                                      | data:HashAgg_5                                       | 436 Bytes | N/A  |
|   └─HashAgg_5              | 1.00       | 13      | cop[tikv] |                | tikv_task:{proc max:1.36s, min:687ms, avg: 985.2ms, p80:1.07s, p95:1.36s, iters:5869, tasks:13}, scan_detail: {total_process_keys: 6001215, total_process_keys_size: 1166860203, total_keys: 6001228, get_snapshot_time: 59.8ms, rocksdb: {key_skipped_count: 6001215, block: {cache_hit_count: 43, read_count: 18391, read_byte: 554.2 MB, read_time: 877.6ms}}} | funcs:sum(tpch1.lineitem.l_extendedprice)->Column#18 | N/A       | N/A  |
|     └─TableFullScan_10     | 6622334.00 | 6001215 | cop[tikv] | table:lineitem | tikv_task:{proc max:1.28s, min:630ms, avg: 924.2ms, p80:1.02s, p95:1.28s, iters:5869, tasks:13}                                                                                                                                                                                                                                                                   | keep order:false                                     | N/A       | N/A  |
+----------------------------+------------+---------+-----------+----------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+------------------------------------------------------+-----------+------+
4 rows in set (1.37 sec)

block: {cache_hit_count: 43, read_count: 18391, read_byte: 554.2 MB, read_time: 877.6ms} - only when read_count shows up like this have physical reads actually happened, right? The plan in my post shows only logical reads.
Also, this slowness is not a corner case but the general case, so you can reproduce it yourself in any environment you set up.

https://docs.pingcap.com/tidb/stable/identify-slow-queries#fields-description
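A sketch of cross-checking the same counters via the slow query log described in the link above (column names are taken from that fields description, so treat them as an assumption if your version differs; the statement must exceed tidb_slow_log_threshold, or the threshold must be lowered first):

-- Sketch: pull coprocessor timings and RocksDB counters for the test statement.
SELECT query_time, process_time, total_keys, process_keys,
       rocksdb_block_cache_hit_count, rocksdb_block_read_count
  FROM information_schema.slow_query
 WHERE query LIKE 'select sum(C_NATIONKEY) from customer%'
 ORDER BY time DESC
 LIMIT 1;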

Try tuning TiKV's read/write splitting and the region merge/split settings to optimize TiKV cluster performance.

OK. Qualitatively speaking, LSM-tree read performance is somewhat worse than a B-tree's. There are a few mitigations you can try:

  1. storage.block-cache.num-shard-bits, default 6. If the block cache is set larger than 8 GB, you can increase num-shard-bits to log2(block cache capacity / 128MB). A larger value increases block cache concurrency.
  2. rocksdb.writecf/defaultcf.num-levels: fewer levels helps reads but increases write amplification, so this needs to be weighed against your specific requirements and workload. The current values of both settings can be checked as sketched below.
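A sketch for checking the current values before touching anything (this assumes both items are exposed by SHOW CONFIG; as far as I know, changing them is not an online operation and needs a TiKV config edit plus restart, so verify against the docs first):

-- Sketch: inspect the current values of the two settings mentioned above.
SHOW CONFIG WHERE name LIKE '%block-cache.num-shard-bits%';
SHOW CONFIG WHERE name LIKE 'rocksdb%num-levels%';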

At the product level there are some LSM-tree read optimizations that could be done; we have been exploring some of them, but they will not land in the short term.
Could you share your hardware configuration and data size, as well as what you need from this?

Also, have you enabled TiFlash? TiDB automatically chooses between TiKV and TiFlash, and in some scenarios this brings a big improvement.
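If you want to try it, a sketch of enabling a TiFlash replica for the test table and watching the sync progress (table name taken from the test above; adjust the replica count to your topology):

-- Sketch: add a TiFlash replica and check its availability/progress.
ALTER TABLE customer SET TIFLASH REPLICA 1;
SELECT table_schema, table_name, replica_count, available, progress
  FROM information_schema.tiflash_replica
 WHERE table_name = 'customer';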

Yes, that part is acceptable. But another domestic distributed database, also LSM-tree based, scans purely in memory very fast, several times to more than ten times faster than ours, which is hard to accept... So I still want to see where exactly the time goes. TiDB and database B were installed in the same environment and tested as described above. Personally I don't think it is a configuration issue: I have already reduced the test scenario to single-threaded, purely in-memory operation, and both TiDB and database B ran single-node, pure full table scans.
Also, we will deploy TiFlash for key complex scenarios, but TiKV is the foundation and not every system will get TiFlash, so we hope TiKV's data scan can be further optimized. Right now I want to see where it is slow: for example, is RocksDB's next/scan slow, or its get slow? Or is the TiKV task slow when checking data visibility, or somewhere else?

Is database B OB?

OB and TiKV use different engines. The RocksDB that TiKV uses currently has no special optimizations for reads; we will work on that optimization later.

The block cache hit count refers to the number of blocks, not the number of keys; at 32 KB per block as shown above, that count is not small.
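A rough cross-check with the numbers from the original plan: total_process_keys_size is 305225771 bytes, and 305225771 / 32768 ≈ 9,315 blocks of 32 KiB, which lines up reasonably well with cache_hit_count: 10004. So the 1.5 million keys were served from roughly ten thousand cached blocks rather than one block per key, consistent with the scan being all logical reads.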