From a user's point of view, I agree with you; I would also like LIMIT to be stable.
But judging from the MySQL documentation, LIMIT is not actually meant for pagination and does not guarantee a stable row order ( https://dev.mysql.com/doc/refman/8.0/en/limit-optimization.html ); it just happens to be stable in most scenarios, which is probably why people have gotten used to treating LIMIT as pagination (unstable cases include a change of execution plan, an OPTIMIZE TABLE on the table, and so on).
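As a side note, if the order is pinned down by a unique column, the page content does become deterministic even across plan changes. A minimal sketch, assuming o_orderkey is the primary key of orders (the page values are made up for illustration):

select o_orderkey
from orders
order by o_orderkey  -- unique column, so the row order (and thus each page) is deterministic
limit 20, 10;        -- page 3 at 10 rows per page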
Using row_number()over() for pagination is more rigorous, though comparatively less efficient. Incidentally, when generating the sequence numbers, TiDB's pagination still differs a little from a traditional database's:
mysql> select o_orderkey,row_number()over() as nbr from orders order by o_orderkey limit 10;
+------------+----------+
| o_orderkey | nbr      |
+------------+----------+
|          1 |  4436580 |
|          2 | 13517437 |
|          3 |  5491147 |
|          4 |  9463180 |
|          5 |  2440301 |
|          6 |  9888487 |
|          7 |  4486562 |
|         32 |  8920212 |
|         33 |  5572217 |
|         34 |   564184 |
+------------+----------+
10 rows in set (4.24 sec)
mysql> explain analyze select o_orderkey,row_number()over() as nbr from orders order by o_orderkey limit 10;
+----------------------------+-------------+----------+-----------+----------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+------------------------------------------------------------------------+---------+------+
| id | estRows | actRows | task | access object | execution info | operator info | memory | disk |
+----------------------------+-------------+----------+-----------+----------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+------------------------------------------------------------------------+---------+------+
| TopN_12 | 10.00 | 10 | root | | time:6.2s, loops:2 | tpch.orders.o_orderkey, offset:0, count:10 | 26.7 KB | N/A |
| └─Window_17 | 15000000.00 | 15000000 | root | | time:5.6s, loops:14661 | row_number()->Column#11 over(rows between current row and current row) | N/A | N/A |
| └─IndexReader_21 | 15000000.00 | 15000000 | root | | time:2.19s, loops:14661, cop_task: {num: 15, max: 4.56s, min: 1.95s, avg: 3.71s, p95: 4.56s, max_proc_keys: 1435897, p95_proc_keys: 1435897, tot_proc: 38.9s, tot_wait: 336ms, rpc_num: 15, rpc_time: 55.6s, copr_cache: disabled, distsql_concurrency: 15} | index:IndexFullScan_20 | 95.8 MB | N/A |
| └─IndexFullScan_20 | 15000000.00 | 15000000 | cop[tikv] | table:orders, index:orders_idx1(O_ORDERDATE) | tikv_task:{proc max:3.22s, min:918ms, avg: 2.05s, p80:2.51s, p95:3.22s, iters:14720, tasks:15}, scan_detail: {total_process_keys: 15000000, total_process_keys_size: 690000000, total_keys: 15000020, rocksdb: {key_skipped_count: 15000005, block: {cache_hit_count: 6117, read_count: 3103, read_byte: 115.9 MB}}} | keep order:false | N/A | N/A |
+----------------------------+-------------+----------+-----------+----------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+------------------------------------------------------------------------+---------+------+
4 rows in set (6.21 sec)
A traditional database would not behave like this: the sequence numbers count straight down from 1. The plan above shows why TiDB does not: the Window operator is evaluated over all 15,000,000 rows before the TopN is applied, and with an empty over() clause the numbering order is unspecified. To get the same effect in TiDB, the query needs to be rewritten as follows:
mysql> select o_orderkey,row_number()over() as nbr from (select o_orderkey from orders order by o_orderkey limit 10) a;
+------------+------+
| o_orderkey | nbr  |
+------------+------+
|          1 |    1 |
|          2 |    2 |
|          3 |    3 |
|          4 |    4 |
|          5 |    5 |
|          6 |    6 |
|          7 |    7 |
|         32 |    8 |
|         33 |    9 |
|         34 |   10 |
+------------+------+
10 rows in set (0.02 sec)
mysql> explain analyze select o_orderkey,row_number()over() as nbr from (select o_orderkey from orders order by o_orderkey limit 10) a;
+------------------------------+---------+---------+-----------+---------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+------------------------------------------------------------------------+-----------+------+
| id | estRows | actRows | task | access object | execution info | operator info | memory | disk |
+------------------------------+---------+---------+-----------+---------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+------------------------------------------------------------------------+-----------+------+
| Window_11 | 10.00 | 10 | root | | time:5.73ms, loops:2 | row_number()->Column#11 over(rows between current row and current row) | N/A | N/A |
| └─Limit_15 | 10.00 | 10 | root | | time:5.64ms, loops:2 | offset:0, count:10 | N/A | N/A |
| └─TableReader_28 | 10.00 | 10 | root | | time:5.6ms, loops:1, cop_task: {num: 1, max: 4.93ms, proc_keys: 10, rpc_num: 1, rpc_time: 4.67ms, copr_cache: disabled, distsql_concurrency: 1} | data:Limit_27 | 297 Bytes | N/A |
| └─Limit_27 | 10.00 | 10 | cop[tikv] | | tikv_task:{time:0s, loops:1}, scan_detail: {total_process_keys: 10, total_process_keys_size: 270, total_keys: 11, rocksdb: {key_skipped_count: 10, block: {cache_hit_count: 6}}} | offset:0, count:10 | N/A | N/A |
| └─TableFullScan_26 | 10.00 | 10 | cop[tikv] | table:orders | tikv_task:{time:0s, loops:1} | keep order:true | N/A | N/A |
+------------------------------+---------+---------+-----------+---------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+------------------------------------------------------------------------+-----------+------+
5 rows in set (0.02 sec)
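The same derived-table rewrite extends to any page by moving the offset into the subquery as well. A sketch along the lines of the query above (page 3 at 10 rows per page; the offset value is illustrative):

select o_orderkey, row_number()over() as nbr
from (select o_orderkey from orders order by o_orderkey limit 20, 10) a;  -- limit <offset>, <page size>

Because the limit stays inside the derived table, the window function should only number the 10 rows of the requested page, keeping the cheap plan shape shown above instead of materializing the window over all 15,000,000 rows.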