UPDATE statement fails with "TiKV max timestamp is not synced"

[TiDB Environment] Testing
[TiDB Version] v7.1.0
[Reproduction Steps] None
[Problem Encountered: Symptoms and Impact]
SELECT statements run fine, but UPDATE statements fail with an error.

Error occurred during SQL query execution

Reason:
SQL Error [9011] [HY000]: TiKV max timestamp is not synced

A single server running 1 TiDB, 1 PD, and 1 TiKV, with the replica count set to 3. Has anyone run into this?
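For reference, the topology and replica setting described above can be confirmed from the SQL layer; this is only a minimal sketch, and the actual output depends on the cluster:

```sql
-- How many TiKV stores are up, and how many leaders/regions each one holds
SELECT STORE_ID, ADDRESS, STORE_STATE_NAME, LEADER_COUNT, REGION_COUNT
FROM INFORMATION_SCHEMA.TIKV_STORE_STATUS;

-- The configured replica count (3 by default, even with only one TiKV store)
SHOW CONFIG WHERE type = 'pd' AND name = 'replication.max-replicas';
```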

tidb.log

[2023/12/04 11:11:00.257 +08:00] [WARN] [prewrite.go:291] [“slow prewrite request”] [startTS=446079800581555307] [region=“{ region id: 596255, ver: 12, confVer: 1 }”] [attempts=14]
[2023/12/04 11:11:01.246 +08:00] [INFO] [2pc.go:1195] [“send TxnHeartBeat”] [startTS=446079487489608742] [newTTL=1280050]
[2023/12/04 11:11:03.752 +08:00] [INFO] [2pc.go:1195] [“send TxnHeartBeat”] [startTS=446079810568980924] [newTTL=50100]
[2023/12/04 11:11:05.582 +08:00] [INFO] [2pc.go:1195] [“send TxnHeartBeat”] [startTS=446079800581555307] [newTTL=90049]
[2023/12/04 11:11:11.246 +08:00] [INFO] [2pc.go:1195] [“send TxnHeartBeat”] [startTS=446079487489608742] [newTTL=1290050]
[2023/12/04 11:11:13.752 +08:00] [INFO] [2pc.go:1195] [“send TxnHeartBeat”] [startTS=446079810568980924] [newTTL=60100]
[2023/12/04 11:11:15.271 +08:00] [INFO] [domain.go:2652] [“refreshServerIDTTL succeed”] [serverID=1179668] [“lease id”=7a0d8c32b5e91416]
[2023/12/04 11:11:15.583 +08:00] [INFO] [2pc.go:1195] [“send TxnHeartBeat”] [startTS=446079800581555307] [newTTL=100049]
[2023/12/04 11:11:15.801 +08:00] [WARN] [backoff.go:158] [“maxTsNotSynced backoffer.maxSleep 80000ms is exceeded, errors:\nmax timestamp not synced, ctx: region ID: 596255, meta: id:596255 start_key:"t\200\000\000\000\000\000\000\025" end_key:"t\200\000\000\000\000\000\000\025_i\200\000\000\000\000\000\000\002\003\200\000\000\000\000\007\227O" region_epoch:<conf_ver:1 version:12 > peers:<id:596256 store_id:1 > , peer: id:596256 store_id:1 , addr: 192.168.1.50:20160, idx: 0, reqStoreType: TiKvOnly, runStoreType: tikv at 2023-12-04T11:11:14.297482197+08:00\nmax timestamp not synced, ctx: region ID: 596255, meta: id:596255 start_key:"t\200\000\000\000\000\000\000\025" end_key:"t\200\000\000\000\000\000\000\025_i\200\000\000\000\000\000\000\002\003\200\000\000\000\000\007\227O" region_epoch:<conf_ver:1 version:12 > peers:<id:596256 store_id:1 > , peer: id:596256 store_id:1 , addr: 192.168.1.50:20160, idx: 0, reqStoreType: TiKvOnly, runStoreType: tikv at 2023-12-04T11:11:14.798682359+08:00\nmax timestamp not synced, ctx: region ID: 596255, meta: id:596255 start_key:"t\200\000\000\000\000\000\000\025" end_key:"t\200\000\000\000\000\000\000\025_i\200\000\000\000\000\000\000\002\003\200\000\000\000\000\007\227O" region_epoch:<conf_ver:1 version:12 > peers:<id:596256 store_id:1 > , peer: id:596256 store_id:1 , addr: 192.168.1.50:20160, idx: 0, reqStoreType: TiKvOnly, runStoreType: tikv at 2023-12-04T11:11:15.300027358+08:00\nlongest sleep type: maxTsNotSynced, time: 76010ms”]
[2023/12/04 11:11:15.802 +08:00] [WARN] [session.go:967] [“can not retry txn”] [label=internal] [error=“[tikv:9011]TiKV max timestamp is not synced”] [IsBatchInsert=false] [IsPessimistic=true] [InRestrictedSQL=true] [tidb_retry_limit=10] [tidb_disable_txn_auto_retry=true]
[2023/12/04 11:11:15.802 +08:00] [WARN] [session.go:983] [“commit failed”] [“finished txn”=“Txn{state=invalid}”] [error=“[tikv:9011]TiKV max timestamp is not synced”]
[2023/12/04 11:11:15.802 +08:00] [WARN] [session.go:2239] [“run statement failed”] [schemaVersion=476391] [error=“previous statement: insert into mysql.stats_meta (version, table_id, modify_count, count) values (446079800581555307, 1862, 1, 0) on duplicate key update version = values(version), modify_count = modify_count + values(modify_count), count = count + values(count): [tikv:9011]TiKV max timestamp is not synced”] [session=“{\n "currDBName": "",\n "id": 0,\n "status": 2,\n "strictMode": true,\n "user": null\n}”]
[2023/12/04 11:11:21.247 +08:00] [INFO] [2pc.go:1195] [“send TxnHeartBeat”] [startTS=446079487489608742] [newTTL=1300050]
[2023/12/04 11:11:23.752 +08:00] [INFO] [2pc.go:1195] [“send TxnHeartBeat”] [startTS=446079810568980924] [newTTL=70100]
[2023/12/04 11:11:25.809 +08:00] [INFO] [2pc.go:1195] [“send TxnHeartBeat”] [startTS=446079821618349591] [newTTL=30000]
[2023/12/04 11:11:31.246 +08:00] [INFO] [2pc.go:1195] [“send TxnHeartBeat”] [startTS=446079487489608742] [newTTL=1310050]
[2023/12/04 11:11:33.752 +08:00] [INFO] [2pc.go:1195] [“send TxnHeartBeat”] [startTS=446079810568980924] [newTTL=80100]
[2023/12/04 11:11:33.851 +08:00] [WARN] [expensivequery.go:118] [expensive_query] [cost_time=60.01317402s] [conn=2594117365830517149] [user=root] [database=enjoycrm_cs] [txn_start_ts=446079810568980924] [mem_max=“0 Bytes (0 Bytes)”] [sql=COMMIT]
[2023/12/04 11:11:35.810 +08:00] [INFO] [2pc.go:1195] [“send TxnHeartBeat”] [startTS=446079821618349591] [newTTL=40001]
[2023/12/04 11:11:38.523 +08:00] [WARN] [prewrite.go:291] [“slow prewrite request”] [startTS=446079810568980924] [region=“{ region id: 194781, ver: 618, confVer: 1 }”] [attempts=14]
[2023/12/04 11:11:38.523 +08:00] [WARN] [prewrite.go:291] [“slow prewrite request”] [startTS=446079810568980924] [region=“{ region id: 96760, ver: 1877, confVer: 1 }”] [attempts=14]
[2023/12/04 11:11:38.523 +08:00] [WARN] [prewrite.go:291] [“slow prewrite request”] [startTS=446079810568980924] [region=“{ region id: 351071, ver: 489, confVer: 1 }”] [attempts=14]
[2023/12/04 11:11:38.523 +08:00] [WARN] [prewrite.go:291] [“slow prewrite request”] [startTS=446079810568980924] [region=“{ region id: 194785, ver: 618, confVer: 1 }”] [attempts=14]
[2023/12/04 11:11:38.523 +08:00] [WARN] [prewrite.go:291] [“slow prewrite request”] [startTS=446079810568980924] [region=“{ region id: 93046, ver: 488, confVer: 1 }”] [attempts=14]
[2023/12/04 11:11:41.246 +08:00] [INFO] [2pc.go:1195] [“send TxnHeartBeat”] [startTS=446079487489608742] [newTTL=1320050]
[2023/12/04 11:11:43.752 +08:00] [INFO] [2pc.go:1195] [“send TxnHeartBeat”] [startTS=446079810568980924] [newTTL=90100]
[2023/12/04 11:11:45.809 +08:00] [INFO] [2pc.go:1195] [“send TxnHeartBeat”] [startTS=446079821618349591] [newTTL=50000]
[2023/12/04 11:11:51.246 +08:00] [INFO] [2pc.go:1195] [“send TxnHeartBeat”] [startTS=446079487489608742] [newTTL=1330050]
[2023/12/04 11:11:53.752 +08:00] [INFO] [2pc.go:1195] [“send TxnHeartBeat”] [startTS=446079810568980924] [newTTL=100100]
[2023/12/04 11:11:54.066 +08:00] [WARN] [backoff.go:158] [“maxTsNotSynced backoffer.maxSleep 80000ms is exceeded, errors:\nmax timestamp not synced, ctx: region ID: 93046, meta: id:93046 start_key:"t\200\000\000\000\000\000\006\006_i\200\000\000\000\000\000\000\003\004\031\257\251N\314\000\000\000\003\200\000\000\000\000\nK\341" end_key:"t\200\000\000\000\000\000\006\006_r\200\000\000\000\000\003\346\031" region_epoch:<conf_ver:1 version:488 > peers:<id:93047 store_id:1 > , peer: id:93047 store_id:1 , addr: 192.168.1.50:20160, idx: 0, reqStoreType: TiKvOnly, runStoreType: tikv at 2023-12-04T11:11:52.562985987+08:00\nmax timestamp not synced, ctx: region ID: 93046, meta: id:93046 start_key:"t\200\000\000\000\000\000\006\006_i\200\000\000\000\000\000\000\003\004\031\257\251N\314\000\000\000\003\200\000\000\000\000\nK\341" end_key:"t\200\000\000\000\000\000\006\006_r\200\000\000\000\000\003\346\031" region_epoch:<conf_ver:1 version:488 > peers:<id:93047 store_id:1 > , peer: id:93047 store_id:1 , addr: 192.168.1.50:20160, idx: 0, reqStoreType: TiKvOnly, runStoreType: tikv at 2023-12-04T11:11:53.064229374+08:00\nmax timestamp not synced, ctx: region ID: 93046, meta: id:93046 start_key:"t\200\000\000\000\000\000\006\006_i\200\000\000\000\000\000\000\003\004\031\257\251N\314\000\000\000\003\200\000\000\000\000\nK\341" end_key:"t\200\000\000\000\000\000\006\006_r\200\000\000\000\000\003\346\031" region_epoch:<conf_ver:1 version:488 > peers:<id:93047 store_id:1 > , peer: id:93047 store_id:1 , addr: 192.168.1.50:20160, idx: 0, reqStoreType: TiKvOnly, runStoreType: tikv at 2023-12-04T11:11:53.565749458+08:00\nlongest sleep type: maxTsNotSynced, time: 76010ms”]
[2023/12/04 11:11:54.066 +08:00] [WARN] [backoff.go:158] [“maxTsNotSynced backoffer.maxSleep 80000ms is exceeded, errors:\nmax timestamp not synced, ctx: region ID: 96760, meta: id:96760 start_key:"t\200\000\000\000\000\000\026\355" end_key:"t\200\000\000\000\000\000\026\355_r\200\000\000\000\000\003\n%" region_epoch:<conf_ver:1 version:1877 > peers:<id:96761 store_id:1 > , peer: id:96761 store_id:1 , addr: 192.168.1.50:20160, idx: 0, reqStoreType: TiKvOnly, runStoreType: tikv at 2023-12-04T11:11:52.562926189+08:00\nmax timestamp not synced, ctx: region ID: 96760, meta: id:96760 start_key:"t\200\000\000\000\000\000\026\355" end_key:"t\200\000\000\000\000\000\026\355_r\200\000\000\000\000\003\n%" region_epoch:<conf_ver:1 version:1877 > peers:<id:96761 store_id:1 > , peer: id:96761 store_id:1 , addr: 192.168.1.50:20160, idx: 0, reqStoreType: TiKvOnly, runStoreType: tikv at 2023-12-04T11:11:53.064263806+08:00\nmax timestamp not synced, ctx: region ID: 96760, meta: id:96760 start_key:"t\200\000\000\000\000\000\026\355" end_key:"t\200\000\000\000\000\000\026\355_r\200\000\000\000\000\003\n%" region_epoch:<conf_ver:1 version:1877 > peers:<id:96761 store_id:1 > , peer: id:96761 store_id:1 , addr: 192.168.1.50:20160, idx: 0, reqStoreType: TiKvOnly, runStoreType: tikv at 2023-12-04T11:11:53.565740895+08:00\nlongest sleep type: maxTsNotSynced, time: 76010ms”]
[2023/12/04 11:11:54.066 +08:00] [WARN] [backoff.go:158] [“maxTsNotSynced backoffer.maxSleep 80000ms is exceeded, errors:\nmax timestamp not synced, ctx: region ID: 194781, meta: id:194781 start_key:"t\200\000\000\000\000\000\007\n_i\200\000\000\000\000\000\000\003\0018\000\000\000\000\000\000\000\370\004\031\257x\252(\013\244x\003\200\000\000\000\000\007\204J" end_key:"t\200\000\000\000\000\000\007\n_i\200\000\000\000\000\000\000\005\0018\000\000\000\000\000\000\000\370\00177505798\377-b632-49\37757-a6d0-\377b98f2d1e\377aaea\000\000\000\000\373\003\200\000\000\000\000\010T:" region_epoch:<conf_ver:1 version:618 > peers:<id:194782 store_id:1 > , peer: id:194782 store_id:1 , addr: 192.168.1.50:20160, idx: 0, reqStoreType: TiKvOnly, runStoreType: tikv at 2023-12-04T11:11:52.562932916+08:00\nmax timestamp not synced, ctx: region ID: 194781, meta: id:194781 start_key:"t\200\000\000\000\000\000\007\n_i\200\000\000\000\000\000\000\003\0018\000\000\000\000\000\000\000\370\004\031\257x\252(\013\244x\003\200\000\000\000\000\007\204J" end_key:"t\200\000\000\000\000\000\007\n_i\200\000\000\000\000\000\000\005\0018\000\000\000\000\000\000\000\370\00177505798\377-b632-49\37757-a6d0-\377b98f2d1e\377aaea\000\000\000\000\373\003\200\000\000\000\000\010T:" region_epoch:<conf_ver:1 version:618 > peers:<id:194782 store_id:1 > , peer: id:194782 store_id:1 , addr: 192.168.1.50:20160, idx: 0, reqStoreType: TiKvOnly, runStoreType: tikv at 2023-12-04T11:11:53.064181042+08:00\nmax timestamp not synced, ctx: region ID: 194781, meta: id:194781 start_key:"t\200\000\000\000\000\000\007\n_i\200\000\000\000\000\000\000\003\0018\000\000\000\000\000\000\000\370\004\031\257x\252(\013\244x\003\200\000\000\000\000\007\204J" end_key:"t\200\000\000\000\000\000\007\n_i\200\000\000\000\000\000\000\005\0018\000\000\000\000\000\000\000\370\00177505798\377-b632-49\37757-a6d0-\377b98f2d1e\377aaea\000\000\000\000\373\003\200\000\000\000\000\010T:" region_epoch:<conf_ver:1 version:618 > peers:<id:194782 store_id:1 > , peer: id:194782 store_id:1 , addr: 192.168.1.50:20160, idx: 0, reqStoreType: TiKvOnly, runStoreType: tikv at 2023-12-04T11:11:53.565752933+08:00\nlongest sleep type: maxTsNotSynced, time: 76010ms”]
[2023/12/04 11:11:54.066 +08:00] [WARN] [backoff.go:158] [“maxTsNotSynced backoffer.maxSleep 80000ms is exceeded, errors:\nmax timestamp not synced, ctx: region ID: 194785, meta: id:194785 start_key:"t\200\000\000\000\000\000\007\n" end_key:"t\200\000\000\000\000\000\007\n_i\200\000\000\000\000\000\000\003\0018\000\000\000\000\000\000\000\370\004\031\257x\252(\013\244x\003\200\000\000\000\000\007\204J" region_epoch:<conf_ver:1 version:618 > peers:<id:194786 store_id:1 > , peer: id:194786 store_id:1 , addr: 192.168.1.50:20160, idx: 0, reqStoreType: TiKvOnly, runStoreType: tikv at 2023-12-04T11:11:52.562950543+08:00\nmax timestamp not synced, ctx: region ID: 194785, meta: id:194785 start_key:"t\200\000\000\000\000\000\007\n" end_key:"t\200\000\000\000\000\000\007\n_i\200\000\000\000\000\000\000\003\0018\000\000\000\000\000\000\000\370\004\031\257x\252(\013\244x\003\200\000\000\000\000\007\204J" region_epoch:<conf_ver:1 version:618 > peers:<id:194786 store_id:1 > , peer: id:194786 store_id:1 , addr: 192.168.1.50:20160, idx: 0, reqStoreType: TiKvOnly, runStoreType: tikv at 2023-12-04T11:11:53.064289633+08:00\nmax timestamp not synced, ctx: region ID: 194785, meta: id:194785 start_key:"t\200\000\000\000\000\000\007\n" end_key:"t\200\000\000\000\000\000\007\n_i\200\000\000\000\000\000\000\003\0018\000\000\000\000\000\000\000\370\004\031\257x\252(\013\244x\003\200\000\000\000\000\007\204J" region_epoch:<conf_ver:1 version:618 > peers:<id:194786 store_id:1 > , peer: id:194786 store_id:1 , addr: 192.168.1.50:20160, idx: 0, reqStoreType: TiKvOnly, runStoreType: tikv at 2023-12-04T11:11:53.565740915+08:00\nlongest sleep type: maxTsNotSynced, time: 76010ms”]
[2023/12/04 11:11:54.066 +08:00] [WARN] [backoff.go:158] [“maxTsNotSynced backoffer.maxSleep 80000ms is exceeded, errors:\nmax timestamp not synced, ctx: region ID: 351071, meta: id:351071 start_key:"t\200\000\000\000\000\000\006\006" end_key:"t\200\000\000\000\000\000\006\006_i\200\000\000\000\000\000\000\001\0018\000\000\000\000\000\000\000\370\00199001654\37788\000\000\000\000\000\000\371\001\344\270\273\350\264\246\346\210\377\267\000\000\000\000\000\000\000\370" region_epoch:<conf_ver:1 version:489 > peers:<id:351072 store_id:1 > , peer: id:351072 store_id:1 , addr: 192.168.1.50:20160, idx: 0, reqStoreType: TiKvOnly, runStoreType: tikv at 2023-12-04T11:11:52.562924562+08:00\nmax timestamp not synced, ctx: region ID: 351071, meta: id:351071 start_key:"t\200\000\000\000\000\000\006\006" end_key:"t\200\000\000\000\000\000\006\006_i\200\000\000\000\000\000\000\001\0018\000\000\000\000\000\000\000\370\00199001654\37788\000\000\000\000\000\000\371\001\344\270\273\350\264\246\346\210\377\267\000\000\000\000\000\000\000\370" region_epoch:<conf_ver:1 version:489 > peers:<id:351072 store_id:1 > , peer: id:351072 store_id:1 , addr: 192.168.1.50:20160, idx: 0, reqStoreType: TiKvOnly, runStoreType: tikv at 2023-12-04T11:11:53.064312652+08:00\nmax timestamp not synced, ctx: region ID: 351071, meta: id:351071 start_key:"t\200\000\000\000\000\000\006\006" end_key:"t\200\000\000\000\000\000\006\006_i\200\000\000\000\000\000\000\001\0018\000\000\000\000\000\000\000\370\00199001654\37788\000\000\000\000\000\000\371\001\344\270\273\350\264\246\346\210\377\267\000\000\000\000\000\000\000\370" region_epoch:<conf_ver:1 version:489 > peers:<id:351072 store_id:1 > , peer: id:351072 store_id:1 , addr: 192.168.1.50:20160, idx: 0, reqStoreType: TiKvOnly, runStoreType: tikv at 2023-12-04T11:11:53.565750106+08:00\nlongest sleep type: maxTsNotSynced, time: 76010ms”]
[2023/12/04 11:11:54.067 +08:00] [WARN] [session.go:967] [“can not retry txn”] [conn=2594117365830517149] [label=general] [error=“[tikv:9011]TiKV max timestamp is not synced”] [IsBatchInsert=false] [IsPessimistic=true] [InRestrictedSQL=false] [tidb_retry_limit=10] [tidb_disable_txn_auto_retry=true]
[2023/12/04 11:11:54.067 +08:00] [WARN] [session.go:983] [“commit failed”] [conn=2594117365830517149] [“finished txn”=“Txn{state=invalid}”] [error=“[tikv:9011]TiKV max timestamp is not synced”]
[2023/12/04 11:11:54.067 +08:00] [WARN] [session.go:2239] [“run statement failed”] [conn=2594117365830517149] [schemaVersion=476391] [error=“previous statement: UPDATE tb_card_ex SET c_cardno=‘210415067’ ,c_last_consume_dt=NOW() ,c_last_consume_store=‘11001’ ,c_last_consume_amount=1.0 WHERE c_cardno=‘210415067’: [tikv:9011]TiKV max timestamp is not synced”] [session=“{\n "currDBName": "enjoycrm_cs",\n "id": 2594117365830517149,\n "status": 2,\n "strictMode": true,\n "user": {\n "Username": "root",\n "Hostname": "172.19.0.29",\n "CurrentUser": false,\n "AuthUsername": "root",\n "AuthHostname": "%",\n "AuthPlugin": "mysql_native_password"\n }\n}”]
[2023/12/04 11:11:54.067 +08:00] [INFO] [conn.go:1184] [“command dispatched failed”] [conn=2594117365830517149] [connInfo=“id:2594117365830517149, addr:172.19.0.29:42046 status:10, collation:utf8_general_ci, user:root”] [command=Query] [status=“inTxn:0, autocommit:1”] [sql=COMMIT] [txn_mode=PESSIMISTIC] [timestamp=446079810568980924] [err=“[tikv:9011]TiKV max timestamp is not synced\nprevious statement: UPDATE tb_card_ex SET c_cardno=‘210415067’ ,c_last_consume_dt=NOW() ,c_last_consume_store=‘11001’ ,c_last_consume_amount=1.0 WHERE c_cardno=‘210415067’\ngithub.com/pingcap/tidb/session.autoCommitAfterStmt\n\t/home/jenkins/agent/workspace/build-common/go/src/github.com/pingcap/tidb/session/tidb.go:299\ngithub.com/pingcap/tidb/session.finishStmt\n\t/home/jenkins/agent/workspace/build-common/go/src/github.com/pingcap/tidb/session/tidb.go:259\ngithub.com/pingcap/tidb/session.runStmt\n\t/home/jenkins/agent/workspace/build-common/go/src/github.com/pingcap/tidb/session/session.go:2389\ngithub.com/pingcap/tidb/session.(*session).ExecuteStmt\n\t/home/jenkins/agent/workspace/build-common/go/src/github.com/pingcap/tidb/session/session.go:2227\ngithub.com/pingcap/tidb/server.(*TiDBContext).ExecuteStmt\n\t/home/jenkins/agent/workspace/build-common/go/src/github.com/pingcap/tidb/server/driver_tidb.go:252\ngithub.com/pingcap/tidb/server.(*clientConn).handleStmt\n\t/home/jenkins/agent/workspace/build-common/go/src/github.com/pingcap/tidb/server/conn.go:2094\ngithub.com/pingcap/tidb/server.(*clientConn).handleQuery\n\t/home/jenkins/agent/workspace/build-common/go/src/github.com/pingcap/tidb/server/conn.go:1885\ngithub.com/pingcap/tidb/server.(*clientConn).dispatch\n\t/home/jenkins/agent/workspace/build-common/go/src/github.com/pingcap/tidb/server/conn.go:1372\ngithub.com/pingcap/tidb/server.(*clientConn).Run\n\t/home/jenkins/agent/workspace/build-common/go/src/github.com/pingcap/tidb/server/conn.go:1153\ngithub.com/pingcap/tidb/server.(*Server).onConn\n\t/home/jenkins/agent/workspace/build-common/go/src/github.com/pingcap/tidb/server/server.go:677\nruntime.goexit\n\t/usr/local/go/src/runtime/asm_amd64.s:1598”]
[2023/12/04 11:11:55.810 +08:00] [INFO] [2pc.go:1195] [“send TxnHeartBeat”] [startTS=446079821618349591] [newTTL=60000]
[2023/12/04 11:12:01.246 +08:00] [INFO] [2pc.go:1195] [“send TxnHeartBeat”] [startTS=446079487489608742] [newTTL=1340051]
[2023/12/04 11:12:05.809 +08:00] [INFO] [2pc.go:1195] [“send TxnHeartBeat”] [startTS=446079821618349591] [newTTL=70000]

pd.log

[2023/12/04 10:45:26.500 +08:00] [INFO] [hot_region_config.go:441] [“query supported changed”] [last-query-support=false] [cluster-version=7.1.0] [config=“{"min-hot-byte-rate":100,"min-hot-key-rate":10,"min-hot-query-rate":10,"max-zombie-rounds":3,"max-peer-number":1000,"byte-rate-rank-step-ratio":0.05,"key-rate-rank-step-ratio":0.05,"query-rate-rank-step-ratio":0.05,"count-rank-step-ratio":0.01,"great-dec-ratio":0.95,"minor-dec-ratio":0.99,"src-tolerance-ratio":1.05,"dst-tolerance-ratio":1.05,"write-leader-priorities":["query","byte"],"write-peer-priorities":["byte","key"],"read-priorities":["query","byte"],"strict-picking-store":"true","enable-for-tiflash":"true","rank-formula-version":"v2","forbid-rw-type":"none"}”] [valid-config=“{"min-hot-byte-rate":100,"min-hot-key-rate":10,"min-hot-query-rate":10,"max-zombie-rounds":3,"max-peer-number":1000,"byte-rate-rank-step-ratio":0.05,"key-rate-rank-step-ratio":0.05,"query-rate-rank-step-ratio":0.05,"count-rank-step-ratio":0.01,"great-dec-ratio":0.95,"minor-dec-ratio":0.99,"src-tolerance-ratio":1.05,"dst-tolerance-ratio":1.05,"write-leader-priorities":["key","byte"],"write-peer-priorities":["byte","key"],"read-priorities":["byte","key"],"strict-picking-store":"true","enable-for-tiflash":"true","rank-formula-version":"v2"}”]
[2023/12/04 10:45:27.263 +08:00] [WARN] [proxy.go:193] [“fail to recv activity from remote, stay inactive and wait to next checking round”] [remote=192.168.1.50:10080] [interval=2s] [error=“dial tcp 192.168.1.50:10080: connect: connection refused”]
[2023/12/04 10:45:29.263 +08:00] [WARN] [proxy.go:193] [“fail to recv activity from remote, stay inactive and wait to next checking round”] [remote=192.168.1.50:10080] [interval=2s] [error=“dial tcp 192.168.1.50:10080: connect: connection refused”]
[2023/12/04 10:45:31.263 +08:00] [WARN] [proxy.go:193] [“fail to recv activity from remote, stay inactive and wait to next checking round”] [remote=192.168.1.50:10080] [interval=2s] [error=“dial tcp 192.168.1.50:10080: connect: connection refused”]
[2023/12/04 10:45:33.263 +08:00] [WARN] [proxy.go:193] [“fail to recv activity from remote, stay inactive and wait to next checking round”] [remote=192.168.1.50:10080] [interval=2s] [error=“dial tcp 192.168.1.50:10080: connect: connection refused”]
[2023/12/04 10:45:33.273 +08:00] [WARN] [tidb.go:65] [“Alive of TiDB has expired, maybe local time in different hosts are not synchronized”] [key=/topology/tidb/192.168.1.50:4000/ttl] [value=1701657887798304957]
[2023/12/04 10:45:35.263 +08:00] [WARN] [proxy.go:193] [“fail to recv activity from remote, stay inactive and wait to next checking round”] [remote=192.168.1.50:10080] [interval=2s] [error=“dial tcp 192.168.1.50:10080: connect: connection refused”]
[2023/12/04 10:45:37.263 +08:00] [WARN] [proxy.go:193] [“fail to recv activity from remote, stay inactive and wait to next checking round”] [remote=192.168.1.50:10080] [interval=2s] [error=“dial tcp 192.168.1.50:10080: connect: connection refused”]
[2023/12/04 10:45:53.250 +08:00] [WARN] [forwarder.go:106] [“Unable to resolve connection address since no alive TiDB instance”]
[2023/12/04 10:45:53.250 +08:00] [ERROR] [tidb_requests.go:64] [“fail to send schema request”] [component=TiDB] [error=error.tidb.no_alive_tidb]
[2023/12/04 10:46:21.314 +08:00] [INFO] [audit.go:126] [“audit log”] [service-info=“{ServiceLabel:SetReplicationModeConfig, Method:HTTP/1.1/POST:/pd/api/v1/config/replication-mode, Component:anonymous, IP:192.168.1.50, StartTime:2023-12-04 10:46:21 +0800 CST, URLParam:{}, BodyParam:{"set":{"leader-schedule-limit":4}}}”]
[2023/12/04 10:46:21.315 +08:00] [INFO] [server.go:1395] [“replication mode config is updated”] [new=“{"replication-mode":"majority","dr-auto-sync":{"label-key":"","primary":"","dr":"","primary-replicas":0,"dr-replicas":0,"wait-store-timeout":"1m0s","pause-region-split":"false"}}”] [old=“{"replication-mode":"majority","dr-auto-sync":{"label-key":"","primary":"","dr":"","primary-replicas":0,"dr-replicas":0,"wait-store-timeout":"1m0s","pause-region-split":"false"}}”]
[2023/12/04 10:46:21.316 +08:00] [INFO] [audit.go:126] [“audit log”] [service-info=“{ServiceLabel:SetReplicationModeConfig, Method:HTTP/1.1/POST:/pd/api/v1/config/replication-mode, Component:anonymous, IP:192.168.1.50, StartTime:2023-12-04 10:46:21 +0800 CST, URLParam:{}, BodyParam:{"set":{"region-schedule-limit":2048}}}”]
[2023/12/04 10:46:21.317 +08:00] [INFO] [server.go:1395] [“replication mode config is updated”] [new=“{"replication-mode":"majority","dr-auto-sync":{"label-key":"","primary":"","dr":"","primary-replicas":0,"dr-replicas":0,"wait-store-timeout":"1m0s","pause-region-split":"false"}}”] [old=“{"replication-mode":"majority","dr-auto-sync":{"label-key":"","primary":"","dr":"","primary-replicas":0,"dr-replicas":0,"wait-store-timeout":"1m0s","pause-region-split":"false"}}”]
[2023/12/04 10:53:16.566 +08:00] [INFO] [grpc_service.go:1345] [“update service GC safe point”] [service-id=gc_worker] [expire-at=-9223372035153117413] [safepoint=446079381413036032]
[2023/12/04 10:57:36.808 +08:00] [INFO] [grpc_service.go:1290] [“updated gc safe point”] [safe-point=446079381413036032]
[2023/12/04 11:08:30.118 +08:00] [INFO] [grpc_service.go:1345] [“update service GC safe point”] [service-id=gc_worker] [expire-at=-9223372035153116499] [safepoint=446079452729312589]

It looks like a server clock problem.

:thinking: A single-node deployment shouldn't have clock inconsistency, should it?

Did the system clock jump backward at some point?

How can I verify that? I'm not sure whether it happened.
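For what it's worth, one rough check from the SQL layer (only a sketch, not a definitive diagnosis) is to compare the physical time embedded in a freshly obtained TSO against the local clock of the TiDB host; a large or negative gap suggests the PD TSO and the OS clock have diverged. Checking NTP/chrony sync status on the host is the other half of the picture.

```sql
-- Compare PD's TSO time with this host's wall clock.
-- TIDB_PARSE_TSO() extracts the physical timestamp from a TSO;
-- @@tidb_current_ts is only populated inside an explicit transaction.
BEGIN;
SELECT
    TIDB_PARSE_TSO(@@tidb_current_ts) AS tso_time,   -- time according to PD
    NOW(6)                            AS local_time; -- time according to this server
ROLLBACK;
```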

Try turning off both tidb_enable_1pc and tidb_enable_async_commit.
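If you try that, both switches are global system variables, so something like the following should be enough (new sessions pick up the change). As far as I understand, the 9011 check sits on the prewrite path that async commit and 1PC depend on, which is why disabling them is a common workaround:

```sql
-- Disable async commit and 1PC cluster-wide (both are ON by default in recent versions)
SET GLOBAL tidb_enable_async_commit = OFF;
SET GLOBAL tidb_enable_1pc = OFF;

-- Confirm the new values
SHOW VARIABLES LIKE 'tidb_enable_async_commit';
SHOW VARIABLES LIKE 'tidb_enable_1pc';
```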

Someone else with a similar error resolved it by restarting.

:joy: That was my colleague, and this time restarting didn't help…
Also, this is a physical server, not a virtual machine.

I'll give it a try~

The PD TSO is abnormal. Check PD and the communication-related parameters.
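A quick way to see which PD/TiKV/TiDB instances the cluster knows about and when each one started (a restart or leader change near the error time would be a clue) is the CLUSTER_INFO system table, for example:

```sql
-- Instance list, versions, and start times as seen from the SQL layer
SELECT TYPE, INSTANCE, STATUS_ADDRESS, VERSION, START_TIME, UPTIME
FROM INFORMATION_SCHEMA.CLUSTER_INFO
WHERE TYPE IN ('pd', 'tikv', 'tidb');
```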

:thinking: Strange, but after changing those settings it does work now. I'll keep observing~


In theory a single node shouldn't hit this; perhaps even in a single-node deployment the internal communication still runs in cluster mode.
