In 4.0, the execution plan carries a `for update` operator, causing UPDATE statements to fail to acquire locks and producing large numbers of 3-second timeouts.

To get help faster, please provide the following information; a clear problem description will be resolved sooner:
【TiDB Environment】
Production test environment, V4.0.13

【Overview】 A small number of UPDATE statements show latency above 3 seconds. The slow query shows prewrite: 3.52s, which is far too high. Investigation traced this to the UPDATE taking a different execution plan, but the exact root cause has not been found.

【Background】
Statements recorded on the 3.0 TiDB cluster are replayed in real time against a 4.0 cluster for validation; the slow SQL was found on the 4.0 cluster.

【Symptoms】
While validating 4.0 with the replayed traffic, there are many slow queries hitting the 3-second timeout. The reported error is that the statement was blocked by another transaction's lock and backed off. The 3.0 cluster never ran into this.

SQL

UPDATE
  fcbox_pre_post_order
SET
  post_id = '123456',
  order_status = 300,
  update_emp = 'sys',
  update_sys_tm = '2021-07-21 15:50:39.449',
  charge_type = 1
WHERE
  post_id = '123456';

Execution plan of the slow query

id                     	task     	estRows	operator info                                                                                                                                      	actRows	execution info                                                                                                                                                                                                                                                                                                                                                                   	memory 	disk
	Update_5               	root     	0      	N/A                                                                                                                                                	0      	time:3.15ms, loops:1, , commit_txn: {prewrite:3.52s, backoff: {time: 3.51s, type: [txnNotFound]}, resolve_lock: 3.52s, region_num:4, write_keys:5, write_byte:429, txn_retry:1}, lock_keys: {time:8.38ms, region:1, keys:1, lock_rpc:8.359945ms, rpc_count:1}                                                                                                                    	14.4 KB	N/A
	└─SelectLock_7         	root     	1.12   	for update                                                                                                                                         	0      	                                                                                                                                                                                                                                                                                                                                                                                 	N/A    	N/A
	  └─IndexLookUp_13     	root     	1.12   	                                                                                                                                                   	1      	time:2.94ms, loops:2, index_task: {total_time: 724µs, fetch_handle: 721.9µs, build: 653ns, wait: 1.48µs}, table_task: {total_time: 5.16ms, num: 1, concurrency: 4}                                                                                                                                                                                                            	22.4 KB	N/A
	    ├─IndexRangeScan_11	cop[tikv]	1.12   	table:fcbox_pre_post_order, index:idx_post_id(post_id), range:["123456","123456"], keep order:false	1      	time:2.21ms, loops:5, index_task: {total_time: 461µs, fetch_handle: 457.8µs, build: 1.09µs, wait: 2.12µs}, table_task: {total_time: 2.98ms, num: 1, concurrency: 4}, cop_task: {num: 1, max: 674.3µs, proc_keys: 1, tot_proc: 1ms, rpc_num: 1, rpc_time: 655.6µs, copr_cache: disabled}, tikv_task:{time:1ms, loops:1}, scan_detail: {total_process_keys: 1, total_keys: 2}	22.4 KB	N/A
	    └─TableRowIDScan_12	cop[tikv]	1.12   	table:fcbox_pre_post_order, keep order:false                                                                                                       	1      	time:1.93ms, loops:2, cop_task: {num: 1, max: 1.84ms, proc_keys: 1, rpc_num: 1, rpc_time: 1.82ms, copr_cache: disabled}, tikv_task:{time:0s, loops:1}, scan_detail: {total_process_keys: 1, total_keys: 1}                                                                                                                                                                       	N/A    	N/A

This SQL has two execution plans: one normal, one abnormal.

The normal execution plan

	id                   	task     	estRows	operator info                                                                                                                                      	actRows	execution info                                                                                                                                                                                                               	memory 	disk
	Update_4             	root     	0      	N/A                                                                                                                                                	0      	time:1.55ms, loops:1, , commit_txn: {prewrite:1.28ms, get_commit_ts:137.2µs, commit:1.01ms, region_num:3, write_keys:3, write_byte:395}                                                                                     	14.4 KB	N/A
	└─IndexLookUp_11     	root     	1.12   	                                                                                                                                                   	1      	time:1.39ms, loops:2, index_task: {total_time: 663.1µs, fetch_handle: 660.5µs, build: 710ns, wait: 1.87µs}, table_task: {total_time: 3.44ms, num: 1, concurrency: 4}                                                      	22.4 KB	N/A
	  ├─IndexRangeScan_9 	cop[tikv]	1.12   	table:fcbox_pre_post_order, index:idx_post_id(post_id), range:["123456","123456"], keep order:false	1      	time:657.8µs, loops:3, cop_task: {num: 1, max: 626µs, proc_keys: 1, tot_proc: 1ms, rpc_num: 1, rpc_time: 613.9µs, copr_cache: disabled}, tikv_task:{time:0s, loops:1}, scan_detail: {total_process_keys: 1, total_keys: 2}	N/A    	N/A
	  └─TableRowIDScan_10	cop[tikv]	1.12   	table:fcbox_pre_post_order, keep order:false       

The abnormal plan contains an extra operator: `└─SelectLock_7 root 1.12 for update`.
We never encountered this in 3.0.

【Question】 How can this execution-plan difference be eliminated?

【TiDB Version】
V4.0.13
【Application Software and Version】

【Attachments】 Relevant logs and configuration

  • TiUP Cluster Display output
  • TiUP Cluster Edit Config output

Monitoring (https://metricstool.pingcap.com/)

  • TiDB-Overview Grafana dashboard
  • TiDB Grafana dashboard
  • TiKV Grafana dashboard
  • PD Grafana dashboard
  • Logs of the relevant components (covering 1 hour before and after the issue)

Which transaction model is your environment currently using, optimistic or pessimistic?
`select * from xxx for update` is a locking read. Do you want it to take the lock, or not?

3.0 defaults to optimistic transactions and does not support pessimistic mode.
4.0 supports both the optimistic and the pessimistic transaction model. Do you want to switch back to the 3.0 behavior? It only takes a configuration change.
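If switching back is the goal, a minimal sketch would be the following (assuming the standard `tidb_txn_mode` system variable; verify against the docs for your exact version):

```sql
-- Check the current default transaction mode
SHOW VARIABLES LIKE 'tidb_txn_mode';

-- Switch back to the 3.0-style optimistic model
SET GLOBAL tidb_txn_mode = 'optimistic';
```

Note that the GLOBAL setting only applies to sessions created after the change; existing connections keep their current mode.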


3.0 uses pessimistic locking.

So does 4.0.

Right now the UPDATE statement's execution plan contains a `for update` (SelectLock) operator. I'm going through the source code to figure out what's happening; this operator never showed up in 3.0, and the documentation doesn't cover it either.


Got it. This one may need the CTC folks to help out.

Versions from 3.0.8 onward do support pessimistic locking.

One tidb-server parameter to check is txn-local-latches. Is it configured differently between v3 and v4? (It is disabled by default in both versions.)

That parameter was never changed. In v4 it does not show up in either the tidb_config query or the CLUSTER_CONFIG table; in v3 it shows enabled = false.

1. To answer the question itself first: the long latency is caused by lock conflicts, and the real fix is for the application to avoid them.
2. Since v4 and v3 behave differently, I'd suggest checking the retry parameter `tidb_retry_limit` and confirming that both versions are configured the same.
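A quick way to compare is to run the same check on both clusters (standard system variable, nothing version-specific assumed):

```sql
-- Run on both the v3 and v4 clusters and compare the values
SHOW VARIABLES LIKE 'tidb_retry_limit';
SHOW GLOBAL VARIABLES LIKE 'tidb_retry_limit';
```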



Otherwise, how about listing all the information you need and we go through it together?

The situation right now is that 3.0 has no problem at all while 4.0 does, so this shouldn't be directly related to the application; most likely something is off in the settings.

1. Can you explain the root cause of the txnNotFound error?
2. A small number of execution plans contain `└─SelectLock_7 root 1.12 for update`. Is this documented anywhere?

1. For txnNotFound, just search the official docs; there is no documentation for SelectLock.
2. Your statement's normal execution time is about the same in both versions (the v4 runs exceeding 3s are caused by backoff from lock conflicts; the lock conflicts themselves mean the application logic has room to optimize, and that is also the way to fix this for good).
3. As for whether the v3/v4 plan difference is what causes the lock conflicts, that remains to be verified (the parameters I checked above were to rule out lock-related settings on the TiDB side). The keys written and the number of regions touched by your SQL differ between v3 and v4, so I'd suggest not trying to solve this by comparison. Follow the normal troubleshooting path instead: check the tidb-server logs, determine whether the conflict is on the table row or on an index, and once the cause is found, have the application adjusted.

Digging further into the data. First, the table involved in the conflicts:

MySQL [INFORMATION_SCHEMA]> select * from tables where tidb_table_id=4302;
+---------------+--------------+----------------------+------------+--------+---------+------------+------------+----------------+--------------+-----------------+--------------+-----------+----------------+---------------------+-------------+------------+-----------------+----------+----------------+-----------------+---------------+---------------------------+
| TABLE_CATALOG | TABLE_SCHEMA | TABLE_NAME           | TABLE_TYPE | ENGINE | VERSION | ROW_FORMAT | TABLE_ROWS | AVG_ROW_LENGTH | DATA_LENGTH  | MAX_DATA_LENGTH | INDEX_LENGTH | DATA_FREE | AUTO_INCREMENT | CREATE_TIME         | UPDATE_TIME | CHECK_TIME | TABLE_COLLATION | CHECKSUM | CREATE_OPTIONS | TABLE_COMMENT   | TIDB_TABLE_ID | TIDB_ROW_ID_SHARDING_INFO |
+---------------+--------------+----------------------+------------+--------+---------+------------+------------+----------------+--------------+-----------------+--------------+-----------+----------------+---------------------+-------------+------------+-----------------+----------+----------------+-----------------+---------------+---------------------------+
| def           | es_post      | fcbox_pre_post_order | BASE TABLE | InnoDB |      10 | Compact    |  658224607 |            338 | 222916716074 |               0 |  69970441680 |         0 |      659022330 | 2021-07-02 11:26:39 | NULL        | NULL       | utf8mb4_bin     |     NULL |                | xxxx      |          4302 | NOT_SHARDED(PK_IS_HANDLE) |
+---------------+--------------+----------------------+------------+--------+---------+------------+------------+----------------+--------------+-----------------+--------------+-----------+----------------+---------------------+-------------+------------+-----------------+----------+----------------+-----------------+---------------+---------------------------+
1 row in set (0.12 sec)

All of its indexes:

MySQL [INFORMATION_SCHEMA]> select * from tidb_indexes where table_name='fcbox_pre_post_order';
+--------------+----------------------+------------+----------------------+--------------+------------------+----------+---------------+------------+----------+
| TABLE_SCHEMA | TABLE_NAME           | NON_UNIQUE | KEY_NAME             | SEQ_IN_INDEX | COLUMN_NAME      | SUB_PART | INDEX_COMMENT | Expression | INDEX_ID |
+--------------+----------------------+------------+----------------------+--------------+------------------+----------+---------------+------------+----------+
| es_post      | fcbox_pre_post_order |          0 | PRIMARY              |            1 | id               |     NULL |               | NULL       |        0 |
| es_post      | fcbox_pre_post_order |          1 | idx_create_sys_tm    |            1 | create_sys_tm    |     NULL |               | NULL       |        1 |
| es_post      | fcbox_pre_post_order |          1 | idx_update_sys_tm    |            1 | update_sys_tm    |     NULL |               | NULL       |        2 |
| es_post      | fcbox_pre_post_order |          1 | idx_post_id          |            1 | post_id          |     NULL |               | NULL       |        3 |
| es_post      | fcbox_pre_post_order |          1 | idx_session_token_id |            1 | session_token_id |     NULL |               | NULL       |        4 |
| es_post      | fcbox_pre_post_order |          1 | idx_express_id       |            1 | express_id       |     NULL |               | NULL       |        5 |
| es_post      | fcbox_pre_post_order |          1 | idx_order_status     |            1 | order_status     |     NULL |               | NULL       |        6 |
+--------------+----------------------+------------+----------------------+--------------+------------------+----------+---------------+------------+----------+
7 rows in set (0.04 sec)

Lock-related entries seen in the TiDB log:

[2021/07/30 19:13:55.842 +08:00] [WARN] [session.go:717] ["transaction association"] [conn=47236] ["retrying txnStartTS"=426677013267349529] ["original txnStartTS"=426677013254242399] [isPessimistic=true]
[2021/07/30 19:13:56.491 +08:00] [INFO] [2pc.go:858] ["prewrite encounters lock"] [conn=47313] [lock="key: {tableID=4302, handle=659033768}, primary: {tableID=4302, indexID=2, indexValues={1849357888238125056, 659033768, }}, txnStartTS: 426677013437743150, lockForUpdateTS:0, ttl: 3002, type: Put"]
[2021/07/30 19:13:56.491 +08:00] [INFO] [2pc.go:858] ["prewrite encounters lock"] [conn=47313] [lock="key: {tableID=4302, indexID=2, indexValues={1849357888238125056, 659033768, }}, primary: {tableID=4302, indexID=2, indexValues={1849357888238125056, 659033768, }}, txnStartTS: 426677013437743150, lockForUpdateTS:0, ttl: 3002, type: Del"]
[2021/07/30 19:13:56.491 +08:00] [INFO] [2pc.go:858] ["prewrite encounters lock"] [conn=47313] [lock="key: {tableID=4302, indexID=2, indexValues={1849357888254902272, 659033768, }}, primary: {tableID=4302, indexID=2, indexValues={1849357888238125056, 659033768, }}, txnStartTS: 426677013437743150, lockForUpdateTS:0, ttl: 3002, type: Put"]
[2021/07/30 19:13:56.616 +08:00] [WARN] [client_batch.go:638] ["wait response is cancelled"] [to=10.204.11.103:20161] [cause="context canceled"]
[2021/07/30 19:13:56.617 +08:00] [WARN] [session.go:484] [sql] [conn=47313] [label=general] [error="[kv:9007]Write conflict, txnStartTS=426677013437743163, conflictStartTS=426677013437743150, conflictCommitTS=426677013437743176, key={tableID=4302, handle=659033768} primary={tableID=4302, indexID=2, indexValues={1849357888238125056, 659033768, }} [try again later]"] [txn="Txn{state=invalid}"]
[2021/07/30 19:13:56.617 +08:00] [WARN] [session.go:692] [retrying] [conn=47313] [schemaVersion=3075] [retryCnt=0] [queryNum=0] [sql="update fcbox_pre_post_order      SET post_id = 'FC371188420210730191355206237M',                                                                                                    pay_serial_number = '20210730191355851059017622202026',                              pay_status = 4,                       pay_tm = '2021-07-30 19:13:56.405',                                                                        pay_money = 20,                                                   update_sys_tm = '2021-07-30 19:13:56.405'      where  post_id = 'FC371188420210730191355206237M'"] [isPessimistic=true]
[2021/07/30 19:13:56.621 +08:00] [INFO] [2pc.go:1399] ["2PC clean up done"] [conn=47313] [txnStartTS=426677013437743163]

The locks are mostly on tableID=4302, indexID=2, which corresponds to the idx_update_sys_tm index.

Lock counts per index:

[appdeploy@dcn-post-tidb-p-l-1:/data12/delivery/tidb-4000/log]$cat tidb.log |grep -c 'tableID=4302, indexID=0'
0
[appdeploy@dcn-post-tidb-p-l-1:/data12/delivery/tidb-4000/log]$cat tidb.log |grep -c 'tableID=4302, indexID=1'
109
[appdeploy@dcn-post-tidb-p-l-1:/data12/delivery/tidb-4000/log]$cat tidb.log |grep -c 'tableID=4302, indexID=2'
220469
[appdeploy@dcn-post-tidb-p-l-1:/data12/delivery/tidb-4000/log]$cat tidb.log |grep -c 'tableID=4302, indexID=3'
0
[appdeploy@dcn-post-tidb-p-l-1:/data12/delivery/tidb-4000/log]$cat tidb.log |grep -c 'tableID=4302, indexID=4'
0
[appdeploy@dcn-post-tidb-p-l-1:/data12/delivery/tidb-4000/log]$cat tidb.log |grep -c 'tableID=4302, indexID=5'
0
[appdeploy@dcn-post-tidb-p-l-1:/data12/delivery/tidb-4000/log]$cat tidb.log |grep -c 'tableID=4302, indexID=6'
11714
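The seven `grep -c` runs above can be collapsed into a single pass that tallies every indexID at once. A sketch (the sample file below stands in for the real tidb.log; point the final pipeline at the actual log instead):

```shell
# Sample lines standing in for the real tidb.log (hypothetical path)
cat <<'EOF' > /tmp/tidb_sample.log
[INFO] [2pc.go:858] ["prewrite encounters lock"] [lock="key: {tableID=4302, indexID=2, ...}"]
[INFO] [2pc.go:858] ["prewrite encounters lock"] [lock="key: {tableID=4302, indexID=2, ...}"]
[INFO] [2pc.go:858] ["prewrite encounters lock"] [lock="key: {tableID=4302, indexID=6, ...}"]
EOF

# One pass: extract the indexID from every lock line and tally, busiest first
grep -o 'tableID=4302, indexID=[0-9]*' /tmp/tidb_sample.log | sort | uniq -c | sort -rn
```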

Meanwhile TiKV logs a large number of errors like these:

[2021/07/30 19:19:44.302 +08:00] [WARN] [endpoint.rs:537] [error-response] [err="Key is locked (will clean up) primary_lock: 7480000000000010CE5F6980000000000000010419AA3D34EC0000000380000000274894D6 lock_version: 426677104611426337 key: 7480000000000010CE5F698000000000000003014643353934313332FF3532303231303733FF3031393139343432FF31303839394D0000FD0380000000274894D6 lock_ttl: 3000 txn_size: 1"]
[2021/07/30 19:19:46.290 +08:00] [WARN] [endpoint.rs:537] [error-response] [err="Key is locked (will clean up) primary_lock: 7480000000000010CE5F6980000000000000010419AA3D34EE0000000380000000274828CC lock_version: 426677105135714343 key: 7480000000000010CE5F698000000000000003014643353733313037FF3132303231303733FF3031393139343632FF35383331384D0000FD0380000000274828CC lock_ttl: 3000 txn_size: 1"]
[2021/07/30 19:19:47.793 +08:00] [WARN] [endpoint.rs:537] [error-response] [err="Key is locked (will clean up) primary_lock: 7480000000000010CE5F6980000000000000010419AA3D34EF000000038000000027489527 lock_version: 426677105528930382 key: 7480000000000010CE5F698000000000000003014643373535323230FF3236323032313037FF3330313931393437FF3730363132354D00FE038000000027489527 lock_ttl: 3000 txn_size: 1"]
[2021/07/30 19:19:48.225 +08:00] [WARN] [endpoint.rs:537] [error-response] [err="Key is locked (will clean up) primary_lock: 7480000000000010CE5F6980000000000000010419AA3D34F0000000038000000027489531 lock_version: 426677105646895117 key: 7480000000000010CE5F698000000000000003014643373331313935FF3132303231303733FF3031393139343831FF38333437374D0000FD038000000027489531 lock_ttl: 3000 txn_size: 1"]
[2021/07/30 19:19:48.917 +08:00] [WARN] [endpoint.rs:537] [error-response] [err="Key is locked (will clean up) primary_lock: 7480000000000010CE5F6980000000000000010419AA3D34F0000000038000000027482903 lock_version: 426677105817288785 key: 7480000000000010CE5F698000000000000003014643373535313935FF3332303231303733FF3031393139343838FF35323135394D0000FD038000000027482903 lock_ttl: 3000 txn_size: 1"]
[2021/07/30 19:19:50.087 +08:00] [WARN] [endpoint.rs:537] [error-response] [err="Key is locked (will clean up) primary_lock: 7480000000000010CE5F6980000000000000010419AA3D34F2000000038000000027489557 lock_version: 426677106131861523 key: 7480000000000010CE5F698000000000000003014643303230313635FF3932323032313037FF3330313931393530FF3031333030314D00FE038000000027489557 lock_ttl: 3000 txn_size: 1"]
[2021/07/30 19:19:50.092 +08:00] [WARN] [endpoint.rs:537] [error-response] [err="Key is locked (will clean up) primary_lock: 7480000000000010CE5F6980000000000000010419AA3D34F2000000038000000027489557 lock_version: 426677106131861523 key: 7480000000000010CE5F698000000000000003014643303230313635FF3932323032313037FF3330313931393530FF3031333030314D00FE038000000027489557 lock_ttl: 3000 txn_size: 1"]
[2021/07/30 19:19:54.780 +08:00] [WARN] [endpoint.rs:537] [error-response] [err="Key is locked (will clean up) primary_lock: 7480000000000010CE5F6980000000000000010419AA3D34F60000000380000000274895B4 lock_version: 426677107363938325 key: 7480000000000010CE5F698000000000000003014643353132323231FF3132303231303733FF3031393139353437FF31343135304D0000FD0380000000274895B4 lock_ttl: 3000 txn_size: 1"]
[2021/07/30 19:19:59.142 +08:00] [WARN] [endpoint.rs:537] [error-response] [err="Key is locked (will clean up) primary_lock: 7480000000000010CE5F6980000000000000010419AA3D34FB000000038000000027489601 lock_version: 426677108504264714 key: 7480000000000010CE5F698000000000000003014643303230313632FF3032323032313037FF3330313931393539FF3031383135324D00FE038000000027489601 lock_ttl: 3000 txn_size: 1"]
[2021/07/30 19:20:09.102 +08:00] [WARN] [endpoint.rs:537] [error-response] [err="Key is locked (will clean up) primary_lock: 7480000000000010CE5F6980000000000000010419AA3D350900000003800000002748B780 lock_version: 426677111112597569 key: 7480000000000010CE5F698000000000000003014643353735323132FF3832303231303733FF3031393230303930FF35393635364D0000FD03800000002748B780 lock_ttl: 3000 txn_size: 1"]
[2021/07/30 19:20:14.007 +08:00] [WARN] [endpoint.rs:537] [error-response] [err="Key is locked (will clean up) primary_lock: 7480000000000010CE5F6980000000000000010419AA3D350D00000003800000002748B7AE lock_version: 426677112397103149 key: 7480000000000010CE5F698000000000000003014643373535313236FF3938323032313037FF3330313932303133FF3930353535304D00FE03800000002748B7AE lock_ttl: 3000 txn_size: 1"]
[2021/07/30 19:20:18.985 +08:00] [WARN] [endpoint.rs:537] [error-response] [err="Key is locked (will clean up) primary_lock: 7480000000000010CE5F6980000000000000010419AA3D3512000000038000000027489790 lock_version: 426677113707823126 key: 7480000000000010CE5F698000000000000003014643373532313632FF3432303231303733FF3031393230313839FF33323036364D0000FD038000000027489790 lock_ttl: 3000 txn_size: 1"]
[2021/07/30 19:20:27.485 +08:00] [WARN] [endpoint.rs:537] [error-response] [err="Key is locked (will clean up) primary_lock: 7480000000000010CE5F6980000000000000010419AA3D351B000000038000000027489839 lock_version: 426677115936047164 key: 7480000000000010CE5F698000000000000003014643353733313037FF3132303231303733FF3031393230323734FF34363233344D0000FD038000000027489839 lock_ttl: 3000 txn_size: 1"]
[2021/07/30 19:20:32.001 +08:00] [WARN] [endpoint.rs:537] [error-response] [err="Key is locked (will clean up) primary_lock: 7480000000000010CE5F6980000000000000010419AA3D351F0000000380000000274898A4 lock_version: 426677117115695175 key: 7480000000000010CE5F698000000000000003014643353735323132FF3832303231303733FF3031393230333139FF33313038314D0000FD0380000000274898A4 lock_ttl: 3000 txn_size: 1"]

Decoding these keys shows that most of the locked keys are on index 3.

zhengjunbo@zhengjunbodeMacBook-Pro mok % ./mok 7480000000000010CE5F698000000000000003014643373532313632FF3432303231303733FF3031393230313839FF33323036364D0000FD038000000027489790
"7480000000000010CE5F698000000000000003014643373532313632FF3432303231303733FF3031393230313839FF33323036364D0000FD038000000027489790"
└─## decode hex key
  └─"t\200\000\000\000\000\000\020\316_i\200\000\000\000\000\000\000\003\001FC752162\37742021073\37701920189\37732066M\000\000\375\003\200\000\000\000'H\227\220"
    └─## table index key
      ├─table: 4302
      ├─index: 3
      └─"\001FC752162\37742021073\37701920189\37732066M\000\000\375\003\200\000\000\000'H\227\220"
        └─## decode index values
          ├─kind: Bytes, value: FC752162420210730192018932066M
          └─kind: Int64, value: 659068816
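The prefix part of the `mok` decode above can be reproduced in a few lines of Python. This is an illustrative sketch (the function name is ours): it assumes the standard TiDB key layout `'t' + 8-byte table ID + '_i' + 8-byte index ID`, where integers are made memcomparable by flipping the sign bit.

```python
def decode_index_key_prefix(hex_key: str):
    """Extract (table_id, index_id) from a TiDB index-key hex dump."""
    raw = bytes.fromhex(hex_key)
    assert raw[0:1] == b"t", "not a table key"
    # Integers are big-endian with the sign bit flipped (memcomparable encoding)
    table_id = int.from_bytes(raw[1:9], "big") ^ (1 << 63)
    assert raw[9:11] == b"_i", "not an index key"
    index_id = int.from_bytes(raw[11:19], "big") ^ (1 << 63)
    return table_id, index_id

# One of the locked keys from the TiKV log above:
key = ("7480000000000010CE5F698000000000000003"
       "014643373532313632FF3432303231303733FF"
       "3031393230313839FF33323036364D0000FD"
       "038000000027489790")
print(decode_index_key_prefix(key))  # -> (4302, 3), i.e. idx_post_id
```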

Throughout the whole run, 3.0 also showed mostly conflicts on index 3, with hardly any on the others. It is not clear why the TiDB log now reports the lock on the update_sys_tm index instead.

1. SelectLock is forced in when a retry occurs, entering the for-update phase to reduce conflicts.
2. The main issue is that 4.0 has many 3-second timeouts; going live like this would impact the business. What we see now is that the lock-conflict hotspot looks abnormal. Any tuning suggestions? The SQL appears in the logs above.

1. In the information above, TiDB reports the conflicts on index 2 while TiKV shows index 3. Are these two pieces of information from the same version? Both v4, or both v3?

Same version, both v4. It's just that the conflict point TiKV reports differs from what the TiDB log shows. On v3 the conflict volume is small.

OK, got it.

This topic was automatically closed 1 minute after the last reply. New replies are no longer allowed.