Sysbench stress test against TiDB times out during execution

OS version: CentOS Linux release 7.3.1611 (Core)
TiDB version: 2.1.14
TiDB cluster server list:
192.168.144.199 (ansible + monitor)
10.111.9.248 (kv1)
10.111.22.39 (kv2)
10.111.9.251 (kv3)
10.111.22.49 (kv4)
10.111.10.7 (kv5)
10.111.9.247 (pd1)
10.111.22.40 (pd2)
10.111.9.253 (pd3)
10.111.9.254 (tidb1)
10.111.22.41 (tidb2)

./sysbench --mysql-host=$dbhost --mysql-port=$dbport --mysql-user=$dbuser --mysql-password=$dbpassword --mysql-db=$dbname --report-interval=10 --time=2400 --tables=32 --table-size=100000000 --threads=64 /usr/local/sysbench/share/sysbench/oltp_read_write.lua prepare

After it has been running for a while, the prepare step fails with the error below. I watched the load on the TiDB cluster during the run and nothing was overloaded.

sysbench error output:

FATAL: mysql_drv_query() returned error 2013 (Lost connection to MySQL server during query) for query 'INSERT INTO sbtest15(k, c, pad) VALUES(49911145, '59960019663-25640819853-61127355052-74857592073-81131632320-07476102322-33971442005-85214825626-14476366729-17053738700', '26773999302-14104406580-66002275545-36216041739-20895508858'),(50190943, '08194771588-40397654250-25818157482-69915154010-03537450975-65104531903-14915353382-21891166098-53109800114-55356327675', '92179704258-91919379900-95094846846-37791369837-56258808698'),(49875250, '64180849434-33833696158-73749216091-16785357959-48599870403-51994164071-19642870708-62028591791-55201798802-74717447615', '63350622379-90838743552-86839714846-95050594026-28256695217'),(49787346, '84360222834-94321032988-25338183687-51271199443-35104441788-58926811748-46350938460-47089684067-50453659635-77090618817', '66177782919-06339056287-22378179152-67206322546-96372724226'),(54385749, '60073396516-35681216257-63533051846-21307370284-65686402462-33503324783-55835394397-98139314985-33006992638-44304412273', '01217916953-54564647365-29862869723-45940157855-25472902185'),(50070872, '04570597300-42505056259-45963246780-36044801650-35012145460-79302207157-07773251865-72713854689-52333766255-81897998936', '41558349025-60807047192-82098481522-27071257442-17102687196'),(50171910, '13765434577-79097103383-78376925939-90052317100-03496403571-64534928064-07046116375-42333811988-61608846460-59851681701', '31943384235-98745452919-36076354199-19445303822-81685879909'),(49808423, '66717958490-55471590184-68060847548-33253476140-92449906063-33425406737-96708856849-40242335005-71301477894-33992115412', '59134031900-08788018391-19360392178-49648836166-69756395509'),(50288912, '39573568083-86691807190-06205412702-24733511745-69664530267-07778754524-98886317740-27575874410-60925413345-71069462621', '50015712358-15192926107-91382168815-02817018138-35992125553'),(49817586, '36316297494-99950941947-96019241575-26291276730-01230967629-48664661211-19080505770-00900294260-25309905225-29921352224', '67073862877-60216089244-46790303050-03036575005-07261056369'),(50261186, '28803112694-56641392895-80644598006-28564113781-42129790536-95575258371-62584991555-55592881526-21156926622-61484478537', '12360975816-56863470412-04622285474-23324267570-84404699658'),(50456589, '99971051083-04615472828-70018921521-50973172555-79636149949-24173108254-09426757240-87394306647-90248408596-37952057946', '07474389011-00231286718-74455492680-40680829082-26693878040'),(49880905, '41776040420-14869758717-48478087009-15969562928-16854205483-43938685051-66671437519-32867217965-34382723331-08004863306', '41550577960-67298447551-31282160039-51321395060-11179003403'),(50259826, '38252346491-99988229133-56807414200-30703471426-44785662348-25427293477-90909062032-06294066223-74829424236-06097591139', '28632098669-54276900146-62593213300-62361913728-41550274676'),(50061782, '19999213242-17873218587-32689543080-23267705982-46933376415-84129646875-77591064958-15153835492-94901526694-78100686566', '48508488070-89719031681-06032792388-68060194657-56680821967'),(34746512, '28567932413-77504232041-48047109382-25821828793-48312413651-01703227449-16241275025-15952150074-62161976158-62320004695', '64508649888-08466921001-20271407689-45453429336-92072172955'),(39731862, '21236993739-96247393957-44962973890-87391555182-38599571570-38634477589-33433346168-78244196236-91112946348-52295592030', '58700706177-24741844232-27490039808-39727243651-15725960772'),(50455459, '31368130342-39021642717-46507600455-19375798850-00453532762-05637536075-18775170995-52223117622-80131228549-15599070398', '69041215116-40233089325-84679553514-34079351264-68998010019'),(49772992, '38671544679-52168514215-21082333775-88386027254-35429819205-95293723643-05678960966-03446340421-82134096224-88462720880', '43596348119-26099969752-34053255962-24153165618-55234761623'),(50360322, '09902259434-91433455683-81738911048-70058794846-98154938936-96100110602-64917573223-03918330392-87120699834-51359250433', '33081549529-53372070067-65562125838-88871821995-59577822961'),(33736763, '562670549
FATAL: `sysbench.cmdline.call_command' function failed: /usr/local/sysbench/share/sysbench/oltp_common.lua:230: db_bulk_insert_next() failed
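For reference, a minimal sketch of the connection variables the command above assumes. All values here are hypothetical and need adjusting to the environment; 4000 is the default tidb-server port:

# Hypothetical settings; $dbhost would be one of the two tidb-server nodes
dbhost=10.111.9.254
dbport=4000
dbuser=root
dbpassword=''
dbname=sbtest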

tidb.log output:

[2019/11/06 16:12:43.247 +08:00] [WARN] [txn.go:69] [RunInNewTxn] ["retry txn"=412359842360197127] ["original txn"=412359841639301133] [error="[try again later]: WriteConflict: txnStartTS=412359842360197127, conflictTS=412359842347089923, key=[]byte{0x6d, 0x44, 0x44, 0x4c, 0x4a, 0x6f, 0x62, 0x4c, 0x69, 0xff, 0x73, 0x74, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0xf9, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x4c} primary=[]byte{0x6d, 0x44, 0x44, 0x4c, 0x4a, 0x6f, 0x62, 0x4c, 0x69, 0xff, 0x73, 0x74, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0xf9, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x4c}"] [errorVerbose="WriteConflict: txnStartTS=412359842360197127, conflictTS=412359842347089923, key=[]byte{0x6d, 0x44, 0x44, 0x4c, 0x4a, 0x6f, 0x62, 0x4c, 0x69, 0xff, 0x73, 0x74, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0xf9, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x4c} primary=[]byte{0x6d, 0x44, 0x44, 0x4c, 0x4a, 0x6f, 0x62, 0x4c, 0x69, 0xff, 0x73, 0x74, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0xf9, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x4c}\ngithub.com/pingcap/tidb/store/tikv.extractLockFromKeyErr\n\t/home/jenkins/workspace/release_tidb_2.1-ga/go/src/github.com/pingcap/tidb/store/tikv/snapshot.go:302\ngithub.com/pingcap/tidb/store/tikv.(*twoPhaseCommitter).prewriteSingleBatch\n\t/home/jenkins/workspace/release_tidb_2.1-ga/go/src/github.com/pingcap/tidb/store/tikv/2pc.go:415\ngithub.com/pingcap/tidb/store/tikv.(*twoPhaseCommitter).doActionOnBatches\n\t/home/jenkins/workspace/release_tidb_2.1-ga/go/src/github.com/pingcap/tidb/store/tikv/2pc.go:291\ngithub.com/pingcap/tidb/store/tikv.(*twoPhaseCommitter).doActionOnKeys\n\t/home/jenkins/workspace/release_tidb_2.1-ga/go/src/github.com/pingcap/tidb/store/tikv/2pc.go:271\ngithub.com/pingcap/tidb/store/tikv.(*twoPhaseCommitter).prewriteKeys\n\t/home/jenkins/workspace/release_tidb_2.1-ga/go/src/github.com/pingcap/tidb/store/tikv/2pc.go:594\ngithub.com/pingcap/tidb/store/tikv.(*twoPhaseCommitter).execute\n\t/home/jenkins/workspace/release_tidb_2.1-ga/go/src/github.com/pingcap/tidb/store/tikv/2pc.go:646\ngithub.com/pingcap/tidb/store/tikv.(*twoPhaseCommitter).executeAndWriteFinishBinlog\n\t/home/jenkins/workspace/release_tidb_2.1-ga/go/src/github.com/pingcap/tidb/store/tikv/2pc.go:606\ngithub.com/pingcap/tidb/store/tikv.(*tikvTxn).Commit\n\t/home/jenkins/workspace/release_tidb_2.1-ga/go/src/github.com/pingcap/tidb/store/tikv/txn.go:237\ngithub.com/pingcap/tidb/kv.RunInNewTxn\n\t/home/jenkins/workspace/release_tidb_2.1-ga/go/src/github.com/pingcap/tidb/kv/txn.go:64\ngithub.com/pingcap/tidb/ddl.(*ddl).addDDLJob\n\t/home/jenkins/workspace/release_tidb_2.1-ga/go/src/github.com/pingcap/tidb/ddl/ddl_worker.go:196\ngithub.com/pingcap/tidb/ddl.(*ddl).doDDLJob\n\t/home/jenkins/workspace/release_tidb_2.1-ga/go/src/github.com/pingcap/tidb/ddl/ddl.go:502\ngithub.com/pingcap/tidb/ddl.(*ddl).CreateTable\n\t/home/jenkins/workspace/release_tidb_2.1-ga/go/src/github.com/pingcap/tidb/ddl/ddl_api.go:1170\ngithub.com/pingcap/tidb/executor.(*DDLExec).executeCreateTable\n\t/home/jenkins/workspace/release_tidb_2.1-ga/go/src/github.com/pingcap/tidb/executor/ddl.go:167\ngithub.com/pingcap/tidb/executor.(*DDLExec).Next\n\t/home/jenkins/workspace/release_tidb_2.1-ga/go/src/github.com/pingcap/tidb/executor/ddl.go:93\ngithub.com/pingcap/tidb/executor.Next\n\t/home/jenkins/workspace/release_tidb_2.1-ga/go/src/github.com/pingcap/tidb/executor/executor.go:185\ngithub.com/pingcap/tidb/executor.(*ExecStmt).handleNoDelayExecutor\n\t/home/jenkins/workspace/release_tidb_2.1-ga/go/src/github.com/pingcap/tidb/executor/adapter.go:299\ngithub.com/pingcap/tidb/executor.(*ExecStmt).Exec\n\t/home/jenkins/workspace/release_tidb_2.1-ga/go/src/github.com/pingcap/tidb/executor/adapter.go:245\ngithub.com/pingcap/tidb/session.runStmt\n\t/home/jenkins/workspace/release_tidb_2.1-ga/go/src/github.com/pingcap/tidb/session/tidb.go:198\ngithub.com/pingcap/tidb/session.(*session).executeStatement\n\t/home/jenkins/workspace/release_tidb_2.1-ga/go/src/github.com/pingcap/tidb/session/session.go:831\ngithub.com/pingcap/tidb/session.(*session).execute\n\t/home/jenkins/workspace/release_tidb_2.1-ga/go/src/github.com/pingcap/tidb/session/session.go:901\ngithub.com/pingcap/tidb/session.(*session).Execute\n\t/home/jenkins/workspace/release_tidb_2.1-ga/go/src/github.com/pingcap/tidb/session/session.go:850\ngithub.com/pingcap/tidb/server.(*TiDBContext).Execute\n\t/home/jenkins/workspace/release_tidb_2.1-ga/go/src/github.com/pingcap/tidb/server/driver_tidb.go:242\ngithub.com/pingcap/tidb/server.(*clientConn).handleQuery\n\t/home/jenkins/workspace/release_tidb_2.1-ga/go/src/github.com/pingcap/tidb/server/conn.go:933\ngithub.com/pingcap/tidb/server.(*clientConn).dispatch\n\t/home/jenkins/workspace/release_tidb_2.1-ga/go/src/github.com/pingcap/tidb/server/conn.go:667\ngithub.com/pingcap/tidb/server.(*clientConn).Run\n\t/home/jenkins/workspace/release_tidb_2.1-ga/go/src/github.com/pingcap/tidb/server/conn.go:504\ngithub.com/pingcap/tidb/server.(*Server).onConn\n\t/home/jenkins/workspace/release_tidb_2.1-ga/go/src/github.com/pingcap/tidb/server/server.go:383\nruntime.goexit\n\t/usr/local/go/src/runtime/asm_amd64.s:1337\n[try again later]"]

[2019/11/06 16:12:43.255 +08:00] [INFO] [2pc.go:635] ["2PC clean up done"] [txnStartTS=412359842360197127]

After reducing the benchmark thread count to 32, the problem no longer occurs.
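For example, the lower-concurrency run only changes the --threads value in the original command:

./sysbench --mysql-host=$dbhost --mysql-port=$dbport --mysql-user=$dbuser --mysql-password=$dbpassword --mysql-db=$dbname --report-interval=10 --time=2400 --tables=32 --table-size=100000000 --threads=32 /usr/local/sysbench/share/sysbench/oltp_read_write.lua prepare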

Could you export a copy of the TiDB Grafana monitoring for us to review? Also, please upload the log and stderr output of the tidb-server that the benchmark connects to.

Cloud-Cluster-TiDB-1573031498411.json (186.7 KB) tidb.zip (547.7 KB)

I've attached error.log (the tidb-server log) and the Grafana data; the stderr file is empty.

Not the raw Grafana data; please export the TiDB panel from the Grafana monitoring and save it as a PDF file ~~~

I'm not sure how to do that :joy:

Do you mean a screenshot of Grafana?

A screenshot works too, as long as it's of the TiDB panel ~~~

It started around 4:50 PM.

From the logs, TiKV server timeout errors occurred during the benchmark, along with drop region cache [store=11990] ["store addr"=10.111.9.248:20160], so 10.111.9.248 may have hit a performance bottleneck; check that TiKV's monitoring for hotspots. Also, v2.1.14 does not update the region cache promptly enough, which can leave all connections to that TiDB stuck; the log contains [error] connection was bad errors. This was improved in v2.1.17 [Improve RegionCache: when a Region becomes invalid, it is removed from the RegionCache more quickly, reducing the number of requests sent to that Region #11931]. Consider upgrading to a newer 2.1 release and testing again.
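To check that store for a write hotspot, pd-ctl can be queried against one of the PD nodes. A sketch, assuming pd-ctl is available and PD serves clients on the default port 2379 at 10.111.9.247 (pd1):

# Write-hotspot statistics: hot regions and the stores they sit on
./pd-ctl -u http://10.111.9.247:2379 -d hot write

# Details for the suspect store (store_id 11990 from the log above)
./pd-ctl -u http://10.111.9.247:2379 -d store 11990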

Hi, I have upgraded to 2.1.17, but the timeout still occurs while the benchmark is preparing the initial data.

TiKV error:

[ERROR] [process.rs:179] ["get snapshot failed for cid=3635087, error Request(message: "region epoch is not match" epoch_not_match {current_regions {id: 8145 start_key: "t\200\000\000\000\000\000\000\377\204_r\200\000\000\000\002\377r\277\277\000\000\000\000\000\372" end_key: "t\200\000\000\000\000\000\000\377\205\000\000\000\000\000\000\000\370" region_epoch {conf_ver: 9 version: 177} peers {id: 8146 store_id: 1} peers {id: 8147 store_id: 4} peers {id: 8148 store_id: 7} peers {id: 8149 store_id: 5} peers {id: 8150 store_id: 6}} current_regions {id: 27783 start_key: "t\200\000\000\000\000\000\000\377\204_r\200\000\000\000\002\377m3\332\000\000\000\000\000\372" end_key: "t\200\000\000\000\000\000\000\377\204_r\200\000\000\000\002\377r\277\277\000\000\000\000\000\372" region_epoch {conf_ver: 9 version: 177} peers {id: 27784 store_id: 1} peers {id: 27785 store_id: 4} peers {id: 27786 store_id: 7} peers {id: 27787 store_id: 5} peers {id: 27788 store_id: 6}}})"]

[2019/11/11 19:25:35.731 +08:00] [ERROR] [process.rs:179] ["get snapshot failed for cid=3635090, error Request(message: "region epoch is not match" epoch_not_match {current_regions {id: 8145 start_key: "t\200\000\000\000\000\000\000\377\204_r\200\000\000\000\002\377r\277\277\000\000\000\000\000\372" end_key: "t\200\000\000\000\000\000\000\377\205\000\000\000\000\000\000\000\370" region_epoch {conf_ver: 9 version: 177} peers {id: 8146 store_id: 1} peers {id: 8147 store_id: 4} peers {id: 8148 store_id: 7} peers {id: 8149 store_id: 5} peers {id: 8150 store_id: 6}} current_regions {id: 27783 start_key: "t\200\000\000\000\000\000\000\377\204_r\200\000\000\000\002\377m3\332\000\000\000\000\000\372" end_key: "t\200\000\000\000\000\000\000\377\204_r\200\000\000\000\002\377r\277\277\000\000\000\000\000\372" region_epoch {conf_ver: 9 version: 177} peers {id: 27784 store_id: 1} peers {id: 27785 store_id: 4} peers {id: 27786 store_id: 7} peers {id: 27787 store_id: 5} peers {id: 27788 store_id: 6}}})"]

tidb.log output:

[2019/11/08 16:11:34.764 +08:00] [WARN] [txn.go:69] [RunInNewTxn] ["retry txn"=412405122897018911] ["original txn"=412405122897018911] [error="[try again later]: WriteConflict: txnStartTS=412405122897018911, conflictTS=412405122897018908, key=[]byte{0x6d, 0x4e, 0x65, 0x78, 0x74, 0x47, 0x6c, 0x6f, 0x62, 0xff, 0x61, 0x6c, 0x49, 0x44, 0x0, 0x0, 0x0, 0x0, 0xfb, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x73} primary=[]byte{0x6d, 0x4e, 0x65, 0x78, 0x74, 0x47, 0x6c, 0x6f, 0x62, 0xff, 0x61, 0x6c, 0x49, 0x44, 0x0, 0x0, 0x0, 0x0, 0xfb, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x73}"] [errorVerbose="WriteConflict: txnStartTS=412405122897018911, conflictTS=412405122897018908, key=[]byte{0x6d, 0x4e, 0x65, 0x78, 0x74, 0x47, 0x6c, 0x6f, 0x62, 0xff, 0x61, 0x6c, 0x49, 0x44, 0x0, 0x0, 0x0, 0x0, 0xfb, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x73} primary=[]byte{0x6d, 0x4e, 0x65, 0x78, 0x74, 0x47, 0x6c, 0x6f, 0x62, 0xff, 0x61, 0x6c, 0x49, 0x44, 0x0, 0x0, 0x0, 0x0, 0xfb, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x73}\ngithub.com/pingcap/tidb/store/tikv.extractLockFromKeyErr\n\t/home/jenkins/workspace/release_tidb_2.1/go/src/github.com/pingcap/tidb/store/tikv/snapshot.go:303\ngithub.com/pingcap/tidb/store/tikv.(*twoPhaseCommitter).prewriteSingleBatch\n\t/home/jenkins/workspace/release_tidb_2.1/go/src/github.com/pingcap/tidb/store/tikv/2pc.go:416\ngithub.com/pingcap/tidb/store/tikv.(*twoPhaseCommitter).doActionOnBatches\n\t/home/jenkins/workspace/release_tidb_2.1/go/src/github.com/pingcap/tidb/store/tikv/2pc.go:291\ngithub.com/pingcap/tidb/store/tikv.(*twoPhaseCommitter).doActionOnKeys\n\t/home/jenkins/workspace/release_tidb_2.1/go/src/github.com/pingcap/tidb/store/tikv/2pc.go:271\ngithub.com/pingcap/tidb/store/tikv.(*twoPhaseCommitter).prewriteKeys\n\t/home/jenkins/workspace/release_tidb_2.1/go/src/github.com/pingcap/tidb/store/tikv/2pc.go:595\ngithub.com/pingcap/tidb/store/tikv.(*twoPhaseCommitter).execute\n\t/home/jenkins/workspace/release_tidb_2.1/go/src/github.com/pingcap/tidb/store/tikv/2pc.go:647\ngithub.com/pingcap/tidb/store/tikv.(*twoPhaseCommitter).executeAndWriteFinishBinlog\n\t/home/jenkins/workspace/release_tidb_2.1/go/src/github.com/pingcap/tidb/store/tikv/2pc.go:607\ngithub.com/pingcap/tidb/store/tikv.(*tikvTxn).Commit\n\t/home/jenkins/workspace/release_tidb_2.1/go/src/github.com/pingcap/tidb/store/tikv/txn.go:237\ngithub.com/pingcap/tidb/kv.RunInNewTxn\n\t/home/jenkins/workspace/release_tidb_2.1/go/src/github.com/pingcap/tidb/kv/txn.go:64\ngithub.com/pingcap/tidb/ddl.(*ddl).genGlobalID\n\t/home/jenkins/workspace/release_tidb_2.1/go/src/github.com/pingcap/tidb/ddl/ddl.go:463\ngithub.com/pingcap/tidb/ddl.buildTableInfo\n\t/home/jenkins/workspace/release_tidb_2.1/go/src/github.com/pingcap/tidb/ddl/ddl_api.go:904\ngithub.com/pingcap/tidb/ddl.(*ddl).CreateTable\n\t/home/jenkins/workspace/release_tidb_2.1/go/src/github.com/pingcap/tidb/ddl/ddl_api.go:1135\ngithub.com/pingcap/tidb/executor.(*DDLExec).executeCreateTable\n\t/home/jenkins/workspace/release_tidb_2.1/go/src/github.com/pingcap/tidb/executor/ddl.go:167\ngithub.com/pingcap/tidb/executor.(*DDLExec).Next\n\t/home/jenkins/workspace/release_tidb_2.1/go/src/github.com/pingcap/tidb/executor/ddl.go:93\ngithub.com/pingcap/tidb/executor.Next\n\t/home/jenkins/workspace/release_tidb_2.1/go/src/github.com/pingcap/tidb/executor/executor.go:186\ngithub.com/pingcap/tidb/executor.(*ExecStmt).handleNoDelayExecutor\n\t/home/jenkins/workspace/release_tidb_2.1/go/src/github.com/pingcap/tidb/executor/adapter.go:312\ngithub.com/pingcap/tidb/executor.(*ExecStmt).Exec\n\t/home/jenkins/workspace/release_tidb_2.1/go/src/github.com/pingcap/tidb/executor/adapter.go:258\ngithub.com/pingcap/tidb/session.runStmt\n\t/home/jenkins/workspace/release_tidb_2.1/go/src/github.com/pingcap/tidb/session/tidb.go:207\ngithub.com/pingcap/tidb/session.(*session).executeStatement\n\t/home/jenkins/workspace/release_tidb_2.1/go/src/github.com/pingcap/tidb/session/session.go:831\ngithub.com/pingcap/tidb/session.(*session).execute\n\t/home/jenkins/workspace/release_tidb_2.1/go/src/github.com/pingcap/tidb/session/session.go:901\ngithub.com/pingcap/tidb/session.(*session).Execute\n\t/home/jenkins/workspace/release_tidb_2.1/go/src/github.com/pingcap/tidb/session/session.go:850\ngithub.com/pingcap/tidb/server.(*TiDBContext).Execute\n\t/home/jenkins/workspace/release_tidb_2.1/go/src/github.com/pingcap/tidb/server/driver_tidb.go:242\ngithub.com/pingcap/tidb/server.(*clientConn).handleQuery\n\t/home/jenkins/workspace/release_tidb_2.1/go/src/github.com/pingcap/tidb/server/conn.go:934\ngithub.com/pingcap/tidb/server.(*clientConn).dispatch\n\t/home/jenkins/workspace/release_tidb_2.1/go/src/github.com/pingcap/tidb/server/conn.go:668\ngithub.com/pingcap/tidb/server.(*clientConn).Run\n\t/home/jenkins/workspace/release_tidb_2.1/go/src/github.com/pingcap/tidb/server/conn.go:505\ngithub.com/pingcap/tidb/server.(*Server).onConn\n\t/home/jenkins/workspace/release_tidb_2.1/go/src/github.com/pingcap/tidb/server/server.go:385\nruntime.goexit\n\t/usr/local/go/src/runtime/asm_amd64.s:1337\n[try again later]"]

The error in the TiKV log means that while acquiring a snapshot it saw a stale version of the region (epoch_not_match), most likely caused by region splits. The WriteConflict error in the TiDB log is a conflict while fetching the auto-increment ID, which can occur when multiple TiDB servers run the benchmark at high concurrency.
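If frequent region splits during prepare are suspected, the scheduling operators PD is running can be sampled while the load is in progress. A sketch, under the same pd-ctl assumptions as above; many split entries here while the tables are loading would support the region-split explanation for epoch_not_match:

# List the operators PD is currently executing (splits, balances, ...)
./pd-ctl -u http://10.111.9.247:2379 -d operator show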

We suggest changing the sysbench table schema to the following form before benchmarking again, to avoid write hotspots and related problems.

CREATE TABLE sbtest1 (
  id int(11) NOT NULL,
  k int(11) NOT NULL DEFAULT '0',
  c char(120) NOT NULL DEFAULT '',
  pad char(60) NOT NULL DEFAULT ''
) SHARD_ROW_ID_BITS = 4;
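If the table is rebuilt this way (no integer primary key acting as the row ID), the scattering effect can be checked on the hidden row ID, assuming the TiDB version in use exposes the _tidb_rowid column:

-- With SHARD_ROW_ID_BITS = 4, the high bits of _tidb_rowid are shard bits,
-- so consecutive inserts land in different regions instead of one hot region
SELECT id, _tidb_rowid FROM sbtest1 LIMIT 5;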

Recreating the table hits this error: ERROR 1105 (HY000): unsupported shard_row_id_bits for table with primary key as row id. Also, the TiDB cluster under test uses 5 replicas; would changing to 3 replicas and increasing the region size make things better?

Hello:
1. SHARD_ROW_ID_BITS cannot be applied to a table whose primary key serves as the row ID. Could you try changing the primary key to a unique index, and then scatter the rows? (A sketch follows below.)
2. Changing to 3 replicas should reduce write latency, since a successful write needs acknowledgement from fewer replicas; changing the region size should make little difference.
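A sketch of point 1: keep id unique through a secondary unique index so that SHARD_ROW_ID_BITS applies to the hidden row ID. The index names here are illustrative, and sysbench would need to run with --auto_inc=off so that it generates explicit id values:

CREATE TABLE sbtest1 (
  id int(11) NOT NULL,
  k int(11) NOT NULL DEFAULT '0',
  c char(120) NOT NULL DEFAULT '',
  pad char(60) NOT NULL DEFAULT '',
  UNIQUE KEY uk_id (id),
  KEY k_1 (k)
) SHARD_ROW_ID_BITS = 4;

For point 2, the replica count is a PD configuration item; assuming pd-ctl as in the earlier sketches:

./pd-ctl -u http://10.111.9.247:2379 -d config set max-replicas 3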