Scroll a bit to the right to see which part takes the most time, or just copy the whole plan out here.
id task estRows operator info actRows execution info memory disk
Projection_11 root 1 db.table.image_ref1, db.table.image_ref2, db.table.image_ref3, db.table.confidence_ref1, db.table.confidence_ref2, db.table.confidence_ref3, db.table.ctime 1 time:548.4ms, loops:2, Concurrency:OFF 45.7 KB N/A
└─Limit_14 root 1 offset:0, count:10000 1 time:548.4ms, loops:2 N/A N/A
└─TopN_16 root 1 db.table.ctime:desc, offset:0, count:1 1 time:548.4ms, loops:2 41.9 KB N/A
└─IndexLookUp_27 root 1 2 time:548.4ms, loops:3, index_task: {total_time: 1.56ms, fetch_handle: 1.55ms, build: 702ns, wait: 8.26µs}, table_task: {total_time: 546.7ms, num: 1, concurrency: 5}, next: {wait_index: 1.62ms, wait_table_lookup_build: 72µs, wait_table_lookup_resp: 546.6ms} 42.0 KB N/A
├─IndexRangeScan_24(Build) cop[tikv] 3.04 table:table, index:account_id(account_id, order_id), range:[230331010188546504,230331010188546504], keep order:false 2 time:1.56ms, loops:2, cop_task: {num: 1, max: 1.5ms, proc_keys: 2, tot_proc: 1ms, rpc_num: 1, rpc_time: 1.48ms, copr_cache_hit_ratio: 0.00, distsql_concurrency: 15}, tikv_task:{time:1ms, loops:1}, scan_detail: {total_process_keys: 2, total_process_keys_size: 110, total_keys: 3, get_snapshot_time: 8.54µs, rocksdb: {key_skipped_count: 2, block: {cache_hit_count: 10, read_count: 2, read_byte: 52.2 KB, read_time: 868.7µs}}} N/A N/A
└─TopN_26(Probe) cop[tikv] 1 db.table.ctime:desc, offset:0, count:1 2 time:546.6ms, loops:2, cop_task: {num: 2, max: 545.5ms, min: 1.04ms, avg: 273.3ms, p95: 545.5ms, max_proc_keys: 1, p95_proc_keys: 1, rpc_num: 2, rpc_time: 546.5ms, copr_cache_hit_ratio: 0.00, distsql_concurrency: 15}, tikv_task:{proc max:1ms, min:0s, avg: 500µs, p80:1ms, p95:1ms, iters:2, tasks:2}, scan_detail: {total_process_keys: 2, total_process_keys_size: 309, total_keys: 2, get_snapshot_time: 16.8ms, rocksdb: {block: {cache_hit_count: 18, read_count: 2, read_byte: 24.3 KB, read_time: 796.3µs}}} N/A N/A
└─TableRowIDScan_25 cop[tikv] 3.04 table:table, keep order:false
Normally this should be a Point_Get. Try `ANALYZE TABLE` and see if the plan changes.
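A minimal sketch of that suggestion, assuming the table is the `table` shown in the plan (the real table name is masked in the paste). Refreshing statistics and then checking their health is a standard first step when the optimizer picks an unexpected access path:

```sql
-- Rebuild statistics for the table so row-count and index estimates are current.
ANALYZE TABLE `table`;

-- Check how stale the statistics are (100 = fully healthy in TiDB).
SHOW STATS_HEALTHY WHERE Table_name = 'table';

-- Re-run the query with EXPLAIN ANALYZE afterwards to compare the new plan.
```

Note that in this thread the expectation of Point_Get turns out to be wrong (see the follow-up below): the query filters on `account_id` via a secondary index, not on the primary key, so an IndexLookUp is the expected shape.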
Check whether the network had any jitter during that time window.
The resources are too small for this; after all, it's a table with 40+ million rows.
It's an index lookup: the query filters on `account_id`, not directly on the primary key `id`, so Point_Get doesn't apply.
`rpc_time: 546.5ms` — almost all of the 548ms is spent in the coprocessor RPCs, while the actual TiKV processing time (`tikv_task` proc max) is only 1ms, so the time is not going into scanning.
Better to check the monitoring under Grafana → TiKV-Details → Coprocessor.
Oh, I misread the column.