Slow SELECT query: time spent in prewrite

[TiDB version] 5.0.2
An internal SQL query against mysql.stats_histograms has become a slow query. The slow log is below; the time is mostly spent in prewrite. Does a SELECT on this table trigger internal data changes?
select table_id, is_index, hist_id, distinct_count, version, null_count, tot_col_size, stats_ver, flag, correlation, last_analyze_pos from mysql.stats_histograms where table_id = 296;

# Time: 2021-06-17T14:51:38.307281364+08:00
# Txn_start_ts: 425698968233836568
# Query_time: 8.192957657000001
# Parse_time: 0
# Compile_time: 0.000549031
# Rewrite_time: 0.000148509
# Optimize_time: 0.000193462
# Wait_TS: 0.000007247

# Prewrite_time: 8.17349929 Commit_time: 0.006987921 Get_commit_ts_time: 0.000822984 Commit_backoff_time: 172.092 Backoff_types: [txnLock txnLock txnLock txnLock txnLock txnLock txnLock txnLock txnLock txnLock txnLock txnLock txnLock txnLock txnLock txnLock txnLock txnLock txnLock txnLock txnLock txnLock txnLock txnLock txnLock txnLock txnLock txnLock txnLock txnLock txnLock txnLock txnLock txnLock txnLock txnLock txnLock txnLock txnLock txnLock txnLock txnLock txnLock txnLock txnLock txnLock txnLock txnLock txnLock txnLock txnLock txnLock txnLock txnLock txnLock txnLock txnLock txnLock txnLock txnLock txnLock txnLock txnLock txnLock txnLock txnLock txnLock txnLock txnLock txnLock txnLock txnLock txnLock txnLock txnLock txnLock txnLock txnLock txnLock txnLock txnLock txnLock txnLock txnLock txnLock txnLock txnLock txnLock txnLock txnLock txnLock txnLock txnLock txnLock txnLock txnLock txnLock txnLock txnLock txnLock txnLock txnLock txnLock txnLock txnLock txnLock txnLock txnLock txnLock txnLock txnLock txnLock txnLock txnLock txnLock txnLock txnLock txnLock txnLock txnLock txnLock txnLock txnLock txnLock txnLock txnLock txnLock txnLock txnLock txnLock txnLock txnLock txnLock txnLock txnLock txnLock txnLock txnLock txnLock txnLock txnLock txnLock txnLock txnLock txnLock txnLock txnLock txnLock txnLock txnLock txnLock txnLock txnLock txnLock txnLock txnLock txnLock txnLock] Resolve_lock_time: 2.271464247 Write_keys: 84 Write_size: 1919366 Prewrite_region: 6

# Is_internal: true
# Digest: 4b1bd9a81f4d47a43392b6e996af9f45d6b5b545e2799ad16623bd38493b2e41
# Num_cop_tasks: 0
# Mem_max: 1997990
# Prepared: false
# Plan_from_cache: false
# Plan_from_binding: false
# Has_more_results: false
# KV_total: 3.866736011
# PD_total: 0.000826715
# Backoff_total: 172.092
# Write_sql_response_total: 0
# Succ: true
# Plan: tidb_decode_plan('qgTwTDAJMjdfM…

id                  	task     	estRows	operator info                                                                                                                                                                                                                                                                                                                                                                              	actRows	execution info                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                   	memory 	disk
Projection_4        	root     	10     	mysql.stats_histograms.table_id, mysql.stats_histograms.is_index, mysql.stats_histograms.hist_id, mysql.stats_histograms.distinct_count, mysql.stats_histograms.version, mysql.stats_histograms.null_count, mysql.stats_histograms.tot_col_size, mysql.stats_histograms.stats_ver, mysql.stats_histograms.flag, mysql.stats_histograms.correlation, mysql.stats_histograms.last_analyze_pos	88     	time:5.09s, loops:2, Concurrency:OFF                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                             	13.5 KB	N/A
└─IndexLookUp_10    	root     	10     	                                                                                                                                                                                                                                                                                                                                                                                           	88     	time:5.09s, loops:2, index_task: {total_time: 459.8µs, fetch_handle: 456.1µs, build: 640ns, wait: 3.13µs}, table_task: {total_time: 5.09s, num: 1, concurrency: 5}                                                                                                                                                                                                                                                                                                                                                                                            	22.9 KB	N/A
  ├─IndexRangeScan_8	cop[tikv]	10     	table:stats_histograms, index:tbl(table_id, is_index, hist_id), range:[296,296], keep order:false, stats:pseudo                                                                                                                                                                                                                                                                            	88     	time:447µs, loops:3, cop_task: {num: 1, max: 465.1µs, proc_keys: 88, rpc_num: 1, rpc_time: 453.5µs, copr_cache_hit_ratio: 0.00}, tikv_task:{time:0s, loops:2}, scan_detail: {total_process_keys: 88, total_keys: 89, rocksdb: {delete_skipped_count: 0, key_skipped_count: 88, block: {cache_hit_count: 15, read_count: 0, read_byte: 0 Bytes}}}                                                                                                                                                                                                              	N/A    	N/A
  └─TableRowIDScan_9	cop[tikv]	10     	table:stats_histograms, keep order:false, stats:pseudo                                                                                                                                                                                                                                                                                                                                     	88     	time:5.09s, loops:2, cop_task: {num: 6, max: 2.27ms, min: 871.2µs, avg: 1.41ms, p95: 2.27ms, max_proc_keys: 25, p95_proc_keys: 25, tot_proc: 4ms, tot_wait: 1ms, rpc_num: 15, rpc_time: 13.8ms, copr_cache_hit_ratio: 0.00}, ResolveLock:{num_rpc:9, total_time:20.3s}, backoff{txnNotFound: 20s}, tikv_task:{proc max:1ms, min:0s, p80:1ms, p95:1ms, iters:6, tasks:6}, scan_detail: {total_process_keys: 88, total_keys: 787, rocksdb: {delete_skipped_count: 117, key_skipped_count: 1551, block: {cache_hit_count: 483, read_count: 0, read_byte: 0 Bytes}}}	N/A    	N/A

There were disk performance issues while this was running.

This needs to be confirmed.

Could you paste the Plan line at the end of the slow log above in full? You can put the slow log content in a code block to keep it from being parsed as markdown.

# Succ: true
# Plan: tidb_decode_plan('qgTwTDAJMjdfMQkwCTAJTi9BCTAJdGltZToxMW1zLCBsb29wczoxLCBwcmVwYXJlOjE0Ny41wrVzLCBjaGVja19pbnNlcnQ6IHt0b3RhbF90ATsQIDEwLjkBPgxtZW1fCSEAXwVUDCAzLjcFVzRwcmVmZXRjaDogNy4xOAErSHJwYzp7QmF0Y2hHZXQ6e251bV8BEww4LCB0GVlQMTkuNm1zfSwgc2Nhbl9kZXRhaWw6EXs8cHJvY2Vzc19rZXlzOiAxNhU7HRGYcm9ja3NkYjoge2RlbGV0ZV9za2lwcGVkX2NvdW50OiAwLCBrZXlfRhYAQGJsb2NrOiB7Y2FjaGVfaGl0ETIgMTQ3MywgcmVhLkQABQ80Ynl0ZTogMCBCeXRlc30BAYAsIGNvbW1pdF90eG46IHtwcmV3cml0ZTo4LjE3cywgZ2UBVgkhFHM6ODIzwiVnBRIQOjYuOTkhKBhiYWNrb2ZmAfZ0aW1lOiAybTUyLjFzLCB0eXBlOiBbdHhuTG9ja119AZUUc29sdmVfCbsIMi4yAWU0cmVnaW9uX251bTo2LCAFgCkhBDg0EQ8FuVAxOTE5MzY2fQkxLjkxIE1CCU4vQQo=')

What tool is used to decode this plan?

Run select tidb_decode_plan(...) directly in TiDB; for example:
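A minimal sketch, run against the truncated string quoted above (paste the complete encoded value in practice):

```sql
-- Decode an execution plan captured in a slow log entry
SELECT tidb_decode_plan('qgTwTDAJMjdfM...');
```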
This one decodes to an insert. Could you check whether the SQL statement and the other fields of the slow log got mixed up? In each slow-query entry, the other fields come first and the SQL statement comes last.

The slow log entries were indeed mixed up. It now looks like the query hits txnNotFound errors; from the TiDB log, the errors on this system table started at [2021/06/15 09:57:52.905 +08:00]. How should this be analyzed and handled next?
# Time: 2021-06-17T14:51:38.306988395+08:00
# Txn_start_ts: 425698968980946959
# Query_time: 5.31243996
# Parse_time: 0
# Compile_time: 0.000269616
# Rewrite_time: 0.000082682
# Optimize_time: 0.000111699
# Wait_TS: 0.000014801
# Cop_time: 5.31172952 Process_time: 0.011 Backoff_time: 30.06 Request_count: 7 Process_keys: 176 Total_keys: 801 Rocksdb_delete_skipped_count: 2 Rocksdb_key_skipped_count: 1477 Rocksdb_block_cache_hit_count: 481
# Index_names: [stats_histograms:tbl]
# Is_internal: true
# Digest: 1772d6feeaef69bc4752bc98819036e7f725f5e752a8756f269ddcdfe147b24a
# Stats: stats_histograms:pseudo
# Num_cop_tasks: 7
# Cop_proc_avg: 0.001571428 Cop_proc_p90: 0.003 Cop_proc_max: 0.003 Cop_proc_addr: 10.161.67.85:20160
# Cop_wait_avg: 0 Cop_wait_p90: 0 Cop_wait_max: 0 Cop_wait_addr: 10.161.67.85:20160
# Cop_backoff_txnNotFound_total_times: 102 Cop_backoff_txnNotFound_total_time: 30.06 Cop_backoff_txnNotFound_max_time: 5.01 Cop_backoff_txnNotFound_max_addr: 10.161.67.85:20160 Cop_backoff_txnNotFound_avg_time: 5.01 Cop_backoff_txnNotFound_p90_time: 5.01
# Mem_max: 37232
# Prepared: false
# Plan_from_cache: false
# Plan_from_binding: false
# Has_more_results: false
# KV_total: 1.780037186
# PD_total: 0.000015171
# Backoff_total: 30.06
# Write_sql_response_total: 0
# Succ: true
# Plan: tidb_decode_plan('vA2wMAkzXzQJMAkxMAlteXNxbC5zdGF0c19oaXN0b2dyYW1zLnRhYmxlX2lkLCBtViEAIGlzX2luZGV4LF4hAAFNbkEANGRpc3RpbmN0X2NvdW50YkcAGHZlcnNpb25iIAAMbnVsbHpDAAR0bwFlFGxfc2l6ZWJIAABzJQIIdmVyYiIADGZsYWdiHQAcY29ycmVsYXRuqwDwTGxhc3RfYW5hbHl6ZV9wb3MJODgJdGltZTo1LjMxcywgbG9vcHM6MiwgQ29uY3VycmVuY3k6T0ZGCTEzLjUgS0IJTi9BCjEJMzBfMTAJJb1iQgAlmzRfdGFzazoge3RvdGFsXwVqWCA0ODguNsK1cywgZmV0Y2hfaGFuZGxlARgINy4zBRhwYnVpbGQ6IDU4Mm5zLCB3YWl0OiA3Mzduc30sIHRFFkpXAA3CJG51bTogMSwgY28ZwRwgNX0JMjIuORHBIDIJNDdfOAkxXwXCAHQBUgA6KX1BMkl/ACwJwRA6dGJsKAUiRUlZcwhoaXNBXJgpLCByYW5nZTpbMjk2LDI5Nl0sIGtlZXAgb3JkZXI6ZmFsc2UsIHMh2hg6cHNldWRvNTEQNDc3LjEF8Cl2ADMBtQBwEdsRyBxtYXg6IDQ2MC0zSHByb2Nfa2V5czogODgsIHJwY18RKgEMLWIENDgFWDxjb3ByX2NhY2hlX2hpdF9yQRwUOiAwLjAwIUkIaWt2CW0AewU2ADA5/zR9LCBzY2FuX2RldGFpbDW/AXwIZXNzHX8p1w2PaDksIHJvY2tzZGI6IHtkZWxldGVfc2tpcHBlZGkeCDogMCEdAHk+FgAByQxibG9jQSkZrmWUIDogMTQsIHJlYS5DAAUPTGJ5dGU6IDAgQnl0ZXN9fX0JTi9BAQQh3Qg1Xzl+3QGapAFNVSmhADJGoQEENiwpoUAyLjg5bXMsIG1pbjogMS4yNQENCGF2ZwENBDc2AQ0McDk1OhknBGF4JU4tOwgyNSwBID4TAIE1IXQMOiAxMQFDIeEp7QA1KfoluBAgMTcuOAEfAGNu7gEcUmVzb2x2ZUwhZkFcGF9ycGM6OSwBZHGZDDMxLjhhYmBiYWNrb2Zme3R4bk5vdEZvdW5kOiAzMC4xAR1dNgGbBf4AMwF+AfoFphBwODA6MgESAfIFGxhpdGVyczo2YbwQc2tzOjbSXwIINzEyhmACADJSYAIEMTNBnGJiAgg0Njd6YwI=')
# Plan_digest: 6fa677a433652b855df12b5c197df0c72fd5a4bc516e08a2cd4f0ea160492a3c
select table_id, is_index, hist_id, distinct_count, version, null_count, tot_col_size, stats_ver, flag, correlation, last_analyze_pos from mysql.stats_histograms where table_id = 296;

TiDB log:
[2021/06/17 00:46:58.142 +08:00] [WARN] [lock_resolver.go:481] ["lock txn not found, lock has expired"] [CallerStartTs=425685683293388815] ["lock str"="key: {tableID=23, handle=480239}, primary: {tableID=23, handle=480204}, txnStartTS: 425685683293388824, lockForUpdateTS:0, ttl: 5973, type: Put"]
[2021/06/17 03:39:59.975 +08:00] [WARN] [session.go:484] [sql] [label=internal] [error="[kv:8022]Error: KV error safe to retry Txn(Mvcc(TxnLockNotFound { start_ts: TimeStamp(425688404361216016), commit_ts: TimeStamp(425688406405939207), key: [116, 128, 0, 0, 0, 0, 0, 0, 23, 95, 114, 128, 0, 0, 0, 0, 7, 83, 202] })) {tableID=23, handle=480202} [try again later]"] [txn="Txn{state=invalid}"]
[2021/06/17 03:40:00.204 +08:00] [WARN] [lock_resolver.go:481] ["lock txn not found, lock has expired"] [CallerStartTs=425688405134540811] ["lock str"="key: {tableID=23, handle=480216}, primary: {tableID=23, handle=480202}, txnStartTS: 425688404361216016, lockForUpdateTS:0, ttl: 7547, type: Put"]
[2021/06/17 03:41:59.681 +08:00] [WARN] [session.go:484] [sql] [label=internal] [error="[kv:8022]Error: KV error safe to retry Txn(Mvcc(TxnLockNotFound { start_ts: TimeStamp(425688435805388823), commit_ts: TimeStamp(425688437797683201), key: [116, 128, 0, 0, 0, 0, 0, 0, 23, 95, 114, 128, 0, 0, 0, 0, 7, 83, 202] })) {tableID=23, handle=480202} [try again later]"] [txn="Txn{state=invalid}"]
[2021/06/17 03:41:59.688 +08:00] [WARN] [lock_resolver.go:481] ["lock txn not found, lock has expired"] [CallerStartTs=425688436591820808] ["lock str"="key: {tableID=23, handle=480204}, primary: {tableID=23, handle=480202}, txnStartTS: 425688435805388823, lockForUpdateTS:0, ttl: 7398, type: Put"]
… the lines in between are all ["lock txn not found, lock has expired"] errors
[2021/06/17 10:52:59.705 +08:00] [WARN] [lock_resolver.go:481] ["lock txn not found, lock has expired"] [CallerStartTs=425695214849228816] ["lock str"="key: {tableID=23, handle=480211}, primary: {tableID=23, handle=480202}, txnStartTS: 425695214849228822, lockForUpdateTS:0, ttl: 7594, type: Put"]
[2021/06/17 10:52:59.705 +08:00] [WARN] [lock_resolver.go:481] ["lock txn not found, lock has expired"] [CallerStartTs=425695214849228816] ["lock str"="key: {tableID=23, handle=480214}, primary: {tableID=23, handle=480202}, txnStartTS: 425695214849228822, lockForUpdateTS:0, ttl: 7594, type: Put"]
[2021/06/17 10:53:00.106 +08:00] [WARN] [lock_resolver.go:481] ["lock txn not found, lock has expired"] [CallerStartTs=425695215635660813] ["lock str"="key: {tableID=23, handle=480208}, primary: {tableID=23, handle=480202}, txnStartTS: 425695214849228822, lockForUpdateTS:0, ttl: 7594, type: Put"]
[2021/06/17 15:04:38.232 +08:00] [WARN] [lock_resolver.go:523] ["lock txn not found, lock has expired"] [CallerStartTs=425699173518803011] ["lock str"="key: {tableID=23, handle=480250}, primary: {tableID=23, handle=480202}, txnStartTS: 425699172705894438, lockForUpdateTS:0, minCommitTs:425699172705894439, ttl: 7818, type: Put, UseAsyncCommit: false"]
[2021/06/17 15:04:38.232 +08:00] [WARN] [lock_resolver.go:523] ["lock txn not found, lock has expired"] [CallerStartTs=425699173518803011] ["lock str"="key: {tableID=23, handle=480220}, primary: {tableID=23, handle=480202}, txnStartTS: 425699172705894438, lockForUpdateTS:0, minCommitTs:425699172705894439, ttl: 7818, type: Put, UseAsyncCommit: false"]
[2021/06/17 15:04:38.232 +08:00] [WARN] [lock_resolver.go:523] ["lock txn not found, lock has expired"] [CallerStartTs=425699173518803011] ["lock str"="key: {tableID=23, handle=480229}, primary: {tableID=23, handle=480202}, txnStartTS: 425699172705894438, lockForUpdateTS:0, minCommitTs:425699172705894439, ttl: 7818, type: Put, UseAsyncCommit: false"]


Analyzing the earlier TiDB logs: ["prewrite encounters lock"] first appears at [2021/06/15 09:57:52.905 +08:00], and ["lock txn not found, lock has expired"] first appears at [2021/06/15 10:41:00.607 +08:00] (excerpts below; a SQL-based log search sketch follows them).
$ vi tidb-2021-06-16T06-17-39.393.log

[2021/06/15 09:57:52.905 +08:00] [INFO] [2pc.go:858] ["prewrite encounters lock"] [conn=0] [lock="key: {tableID=23, handle=420012}, primary: {tableID=23, handle=420002}, txnStartTS: 425649051303936104, lockForUpdateTS:0, ttl: 9361, type: Put"]
[2021/06/15 09:57:52.905 +08:00] [INFO] [2pc.go:858] ["prewrite encounters lock"] [conn=0] [lock="key: {tableID=23, handle=420002}, primary: {tableID=23, handle=420002}, txnStartTS: 425649051303936104, lockForUpdateTS:0, ttl: 9361, type: Put"]
[2021/06/15 09:57:52.905 +08:00] [INFO] [2pc.go:858] ["prewrite encounters lock"] [conn=0] [lock="key: {tableID=23, handle=420005}, primary: {tableID=23, handle=420002}, txnStartTS: 425649051303936104, lockForUpdateTS:0, ttl: 9361, type: Put"]
[2021/06/15 09:57:52.905 +08:00] [INFO] [2pc.go:858] ["prewrite encounters lock"] [conn=0] [lock="key: {tableID=23, handle=420016}, primary: {tableID=23, handle=420002}, txnStartTS: 425649051303936104, lockForUpdateTS:0, ttl: 9361, type: Put"]
[2021/06/15 09:57:52.905 +08:00] [INFO] [2pc.go:858] ["prewrite encounters lock"] [conn=0] [lock="key: {tableID=23, handle=420015}, primary: {tableID=23, handle=420002}, txnStartTS: 425649051303936104, lockForUpdateTS:0, ttl: 9361, type: Put"]

[2021/06/15 10:41:00.282 +08:00] [INFO] [2pc.go:858] ["prewrite encounters lock"] [conn=0] [lock="key: {tableID=23, handle=390263}, primary: {tableID=23, handle=390247}, txnStartTS: 425649727635456067, lockForUpdateTS:0, ttl: 8127, type: Put"]
[2021/06/15 10:41:00.284 +08:00] [INFO] [lock_resolver.go:617] ["resolveLock rollback"] [lock="key: {tableID=23, handle=390263}, primary: {tableID=23, handle=390247}, txnStartTS: 425649727635456067, lockForUpdateTS:0, ttl: 8127, type: Put"]
[2021/06/15 10:41:00.607 +08:00] [WARN] [lock_resolver.go:481] ["lock txn not found, lock has expired"] [CallerStartTs=425649728408780832] ["lock str"="key: {tableID=23, handle=390300}, primary: {tableID=23, handle=390247}, txnStartTS: 425649727648563205, lockForUpdateTS:0, ttl: 8162, type: Put"]
[2021/06/15 10:41:00.607 +08:00] [WARN] [lock_resolver.go:481] ["lock txn not found, lock has expired"] [CallerStartTs=425649728408780832] ["lock str"="key: {tableID=23, handle=390273}, primary: {tableID=23, handle=390247}, txnStartTS: 425649727648563205, lockForUpdateTS:0, ttl: 8162, type: Put"]
[2021/06/15 10:41:00.620 +08:00] [WARN] [lock_resolver.go:481] ["lock txn not found, lock has expired"] [CallerStartTs=425649728408780832] ["lock str"="key: {tableID=23, handle=390248}, primary: {tableID=23, handle=390247}, txnStartTS: 425649727648563205, lockForUpdateTS:0, ttl: 8162, type: Put"]
[2021/06/15 10:41:01.017 +08:00] [INFO] [adapter.go:633] ["pessimistic write conflict, retry statement"] [conn=115] [txn=425649729955430422] [forUpdateTS=425649729955430422] [err="[kv:9007]Write conflict, txnStartTS=425649729955430422, conflictStartTS=425649729955430421, conflictCommitTS=425649729955430436, key={tableID=296, indexID=1, indexValues={7421061559836365, 6, 0, }} primary={tableID=296, indexID=1, indexValues={7421061559836365, 6, 0, }} [try again later]"]
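If it is more convenient than grepping log files, the same search can be run from SQL via the cluster log table; a sketch, assuming information_schema.cluster_log is available in this deployment (adjust the time window as needed):

```sql
-- Find the first occurrences of the prewrite lock warnings across all tidb-servers
SELECT time, instance, level, message
FROM information_schema.cluster_log
WHERE type = 'tidb'
  AND time BETWEEN '2021-06-15 09:50:00' AND '2021-06-15 11:00:00'
  AND message LIKE '%prewrite encounters lock%';
```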

One more thing to report: after changing tidb_auto_analyze_xxx_time with set global, you must exit the session and log in again before show displays the modified value.
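A minimal illustration of that behavior, using tidb_auto_analyze_start_time as a stand-in for the tidb_auto_analyze_xxx_time family (a sketch):

```sql
SET GLOBAL tidb_auto_analyze_start_time = '01:00 +0800';
-- In the same session this may still report the old value:
SHOW VARIABLES LIKE 'tidb_auto_analyze_start_time';
-- After disconnecting and logging in again, it reports the new value.
```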

There are several related threads about stats_histograms slowness worth reading. Also, we suggest checking whether the tidb-server feedback-probability parameter is enabled; if it is, find a time to set it to 0 (disabled).
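One way to check the effective value on every tidb-server without opening config files; a sketch, assuming SHOW CONFIG is available (TiDB 4.0+) and the item name performance.feedback-probability:

```sql
-- Inspect the effective feedback-probability on each tidb-server in the cluster
SHOW CONFIG WHERE type = 'tidb' AND name = 'performance.feedback-probability';
```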



feedback is disabled.

:ok_hand: How do things look now? Are there still many similar slow logs?

No slow SELECTs on this table have appeared so far today. Besides the auto analyze / feedback features, what other operations modify data in stats_histograms? I've found quite a few slow insert into mysql.stats_histograms statements, and they occur outside the auto analyze window.

OK. As for the slow inserts, we'll go through it internally first.

Besides the auto analyze / feedback features, what else would insert into stats_histograms?

Nothing else. Let's also rule out human factors: check the table's statistics change time, search the tidb-server logs, and so on.
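For example, the version column in mysql.stats_meta is a TSO from which the last stats change time can be recovered; a sketch, assuming tidb_parse_tso is available in this version:

```sql
-- When were the statistics for table_id = 296 last updated?
SELECT table_id, modify_count, count,
       tidb_parse_tso(version) AS last_update_time
FROM mysql.stats_meta
WHERE table_id = 296;
```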

Right now only I and one application account can connect to this database, so a manual ANALYZE is unlikely. I've compiled the occurrence times of the slow insert into mysql.stats_histograms statements from the slow log; the table_id follows in parentheses.
# Time: 2021-06-17T14:13:30.913790363+08:00 (296)
# Time: 2021-06-17T14:13:30.956634323+08:00 (281)
# Time: 2021-06-17T14:17:30.814118378+08:00 (296)
# Time: 2021-06-17T14:18:30.97436876+08:00 (296)
# Time: 2021-06-17T14:32:30.786172839+08:00 (296)
# Time: 2021-06-17T14:33:32.539592582+08:00 (296)
# Time: 2021-06-17T14:43:34.901406537+08:00 (296)
# Time: 2021-06-17T14:45:30.49640594+08:00 (296)
# Time: 2021-06-17T14:51:42.921794225+08:00(296)
# Time: 2021-06-17T14:55:32.18373849+08:00(296)
# Time: 2021-06-17T15:04:38.967343957+08:00 (296)
# Time: 2021-06-17T15:09:32.434526162+08:00 (296)
# Time: 2021-06-17T15:23:30.502476587+08:00(296)
# Time: 2021-06-17T15:26:30.628562751+08:00 (296)
# Time: 2021-06-17T15:29:38.051682005+08:00(296)
# Time: 2021-06-17T15:34:38.233561298+08:00(296)
# Time: 2021-06-17T15:36:31.437944874+08:00(281)
# Time: 2021-06-17T16:02:38.050683047+08:00(296)
# Time: 2021-06-17T16:06:43.49472108+08:00(296)
# Time: 2021-06-17T16:12:33.612595831+08:00(296)
# Time: 2021-06-17T16:24:32.591707047+08:00 (261)
# Time: 2021-06-17T16:30:33.307991877+08:00(275)
# Time: 2021-06-17T16:32:30.92252403+08:00(263)
# Time: 2021-06-17T16:32:34.713374577+08:00(292)
# Time: 2021-06-17T16:54:30.417033253+08:00(296)
# Time: 2021-06-17T17:02:50.255203769+08:00(296)
# Time: 2021-06-17T18:07:14.297326973+08:00(296)
# Time: 2021-06-17T18:12:38.07792602+08:00 (296)
# Time: 2021-06-18T01:49:08.826441994+08:00(296)
# Time: 2021-06-18T09:17:32.202844013+08:00(296)
# Time: 2021-06-18T12:41:28.562333029+08:00(296)

insert into mysql.stats_histograms (table_id, is_index, hist_id, distinct_count, tot_col_size) values (281, 0, 12, 0, 32),(281, 0, 23, 0, -5),(281, 0, 21, 0, 80),(281, 0, 17, 0, 118),(281, 0, 11, 0, 128),(281, 0, 5, 0, 48),(281, 0, 9, 0, 16),(281, 0, 15, 0, 399),(281, 0, 4, 0, -8),(281, 0, 16, 0, 16),(281, 0, 3, 0, 192),(281, 0, 29, 0, 2),(281, 0, 13, 0, 58),(281, 0, 19, 0, 4),(281, 0, 26, 0, 22),(281, 0, 27, 0, 24),(281, 0, 24, 0, -92),(281, 0, 28, 0, 34),(281, 0, 20, 0, 48),(281, 0, 14, 0, 48),(281, 0, 1, 0, 272),(281, 0, 2, 0, 126),(281, 0, 18, 0, 288),(281, 0, 6, 0, -8),(281, 0, 8, 0, 112),(281, 0, 22, 0, 24),(281, 0, 7, 0, 194),(281, 0, 10, 0, 322),(281, 0, 25, 0, 80) on duplicate key update tot_col_size = tot_col_size + values(tot_col_size);




At the moment data is only synced in by consuming from Kafka; there are almost no queries yet.

Each TiDB instance records the changes from the DML statements it processes (mainly rows added / rows modified / column width changes), then periodically (20 * stats-lease = 60s) flushes them into the statistics.

If several TiDB instances perform this update at the same time, write-write conflicts occur.
Every TiDB instance does this, on a cycle of stats-lease * 20.
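To verify the cycle length on a given cluster, the lease can be read back; a sketch, assuming SHOW CONFIG is available and the item name performance.stats-lease:

```sql
-- Read the lease that drives the flush cycle on each tidb-server
-- (default 3s, hence 20 * 3s = 60s between flushes)
SHOW CONFIG WHERE type = 'tidb' AND name = 'performance.stats-lease';
```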

insert into mysql.stats_histograms (table_id, is_index, hist_id, distinct_count, tot_col_size)
This insert statement doesn't include the modify_count column, so it doesn't seem related to the number of changed rows.

As stated above, it's (mainly rows added / rows modified / column width changes), not just the change count; this case should be the column width change. In principle these conflicts are expected, and since there are no error logs, the subsequent retries succeeded.

Understood. Why do the official docs only mention row-count changes for this part?



For execution plans, the statistics mainly concern the factors above; the official docs may not describe the rest in much detail.
One more piece of information: the latest versions have optimized the SQL against this table to avoid excessive conflicts and calls.

Thanks! While reading about statistics updates, I noticed a new variable described for version 5.0


But show doesn't find this variable.
How can I determine which statistics version is currently in use?
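Not an authoritative answer, but two places one could look, sketched below: the stats_ver column of mysql.stats_histograms records the version each histogram was built with, and newer releases expose a tidb_analyze_version variable (whether it exists in 5.0.2 is an assumption; the empty show result above suggests it may not):

```sql
-- Which stats version were the existing histograms built with?
SELECT DISTINCT stats_ver FROM mysql.stats_histograms WHERE table_id = 296;
-- If the variable exists in this release (an assumption), this shows it:
SHOW VARIABLES LIKE 'tidb_analyze_version';
```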