TPC-H at 100 GB takes over 5,000 seconds in total, which seems too slow (per-query times below, in seconds):

Q1 263.7
Q2 287.37
Q3 236.78
Q4 240.2
Q5 261.2
Q6 356.91
Q7 270.07
Q8 261.64
Q9 257.6
Q10 259
Q11 18.46
Q12 266.63
Q13 258.2
Q14 263.57
Q15 260.6
Q16 68.75
Q17 507.64
Q18 2.03
Q19 269.96
Q20 265.18
Q21 240.02
Q22 260.01
sum 5375.52
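As a quick arithmetic check (a standalone sketch, not part of the benchmark harness), the per-query times above do add up to the reported total:

```python
# Per-query TPC-H times in seconds, copied from the table above.
times = {
    "Q1": 263.7,   "Q2": 287.37,  "Q3": 236.78,  "Q4": 240.2,
    "Q5": 261.2,   "Q6": 356.91,  "Q7": 270.07,  "Q8": 261.64,
    "Q9": 257.6,   "Q10": 259,    "Q11": 18.46,  "Q12": 266.63,
    "Q13": 258.2,  "Q14": 263.57, "Q15": 260.6,  "Q16": 68.75,
    "Q17": 507.64, "Q18": 2.03,   "Q19": 269.96, "Q20": 265.18,
    "Q21": 240.02, "Q22": 260.01,
}

total = sum(times.values())
print(f"total: {total:.2f} s")  # → total: 5375.52 s
```

One thing that stands out from the numbers themselves: most queries cluster close to ~260 s while Q11, Q16, and Q18 are far faster, which may be worth keeping in mind when comparing against reference runs.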

The whole run took over 5,000 seconds. Is that normal? I don't know; please run it yourselves and see whether you get roughly the same numbers. If you do, then frankly TiDB is a bit disappointing.
The topology and configuration are listed below, and the way I ran it is shown in the log excerpt below.
I connected to TiDB and issued the SQL there, not directly against TiFlash, and TiFlash is installed.
Also, "TiKV server timeout" errors kept appearing during the run. Is TiKV especially prone to timing out on large queries?
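Given how uniform the per-query times are, it may be worth confirming that the TPC-H tables actually have usable TiFlash replicas. In TiDB 4.0 this can be checked from `information_schema.tiflash_replica` (a suggested check, run through the same mysql client as in the log excerpt below; the schema name `tpch1000` is taken from that excerpt):

```sql
-- Suggested check: TiFlash replica status for the benchmark schema.
-- AVAILABLE = 1 (and PROGRESS = 1) means the replica is fully built;
-- otherwise queries silently fall back to TiKV.
SELECT TABLE_NAME, REPLICA_COUNT, AVAILABLE, PROGRESS
FROM information_schema.tiflash_replica
WHERE TABLE_SCHEMA = 'tpch1000';
```

Running `EXPLAIN` on one of the slow queries should likewise show whether the scan task is `cop[tiflash]` or `cop[tikv]`.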

2020-06-18 18:24:21,354 INFO tpch.py:437 Begin to query 17.sql 0
2020-06-18 18:32:48,997 INFO tpch.py:441 mysql -htidb001 -P4000 -uroot -p'xxxxx' -Dtpch1000 -e"source tidb-sql/17.sql" result :0, output:
ERROR 9002 (HY000) at line 4 in file: 'tidb-sql/17.sql': TiKV server timeout
2020-06-18 18:32:48,998 INFO tpch.py:450 mysql -htidb001 -P4000 -uroot -p'xxxx' -Dtpch1000 -e"source tidb-sql/17.sql" result :0, cost:
507.64
2020-06-18 18:32:48,998 INFO tpch.py:437 Begin to query 18.sql 0

/////////////////////////////////////////////////////////////////////////////////////
The earlier problem: loading TPC-H 100 GB with tiup kept failing
/////////////////////////////////////////////////////////////////////////////////////

Following https://docs.pingcap.com/zh/tidb/v4.0/v4.0-performance-benchmarking-with-tpch

Loading 10 GB with tiup works every time, but loading 100 GB always fails with:
execute prepare failed, err fail to generate customer.tbl, err: Error 9002: TiKV server timeout

There are also some errors in the TiKV logs:

[endpoint.rs:540] [error-response] [err="Key is locked (will clean up) primary_lock: 7480000000000000155F698000000000000002038000000000000A74 lock_version: 417415986885165057 key: 7480000000000000175F698000000000000001038000000000000A74038000000000000000038000000000000010 lock_ttl: 20050 txn_size: 3 lock_for_update_ts: 417415986898272258"]
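As an aside, the key in that lock message can be decoded by hand. TiDB keys use a mem-comparable encoding: an ASCII 't' prefix, an 8-byte big-endian table ID with the sign bit flipped, and for index keys '_i' followed by an index ID encoded the same way. A small standalone sketch (an illustrative script, not an official tool) that pulls the table and index IDs out of the primary_lock above:

```python
def decode_i64(hex8: str) -> int:
    """Decode an 8-byte mem-comparable int64 (big-endian, sign bit flipped)."""
    return int(hex8, 16) - (1 << 63)

def decode_prefix(key_hex: str):
    """Extract (table_id, index_id) from a TiDB index key: t<id>_i<id>..."""
    key_hex = key_hex.lower()
    assert key_hex[0:2] == "74"           # ASCII 't'
    table_id = decode_i64(key_hex[2:18])  # 8-byte table ID
    index_id = None
    if key_hex[18:22] == "5f69":          # ASCII '_i' marks an index key
        index_id = decode_i64(key_hex[22:38])
    return table_id, index_id

lock = "7480000000000000155F698000000000000002038000000000000A74"
print(decode_prefix(lock))  # → (21, 2)
```

So the lock sits on index 2 of the table with internal ID 21; in TiDB, `SELECT table_name FROM information_schema.tables WHERE tidb_table_id = 21` should map that ID back to a table name. Note also that "Key is locked (will clean up)" is a resolvable conflict, not a corruption error.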

pd_servers:
  - host: 172.19.32.173

tidb_servers:
  - host: 172.19.32.223
  - host: 172.19.32.224

tikv_servers:
  - host: 172.19.32.226
  - host: 172.19.32.225
  - host: 172.19.32.227

tiflash_servers:
  - host: 172.19.32.223
  - host: 172.19.32.224

monitoring_servers:
  - host: 172.19.32.173

grafana_servers:
  - host: 172.19.32.173

alertmanager_servers:
  - host: 172.19.32.173

The tidb/tikv nodes are all 16 cores / 64 GB RAM, and tikv uses ESSD disks.
pd is 8 cores / 32 GB.

server_configs:
  tidb:
    log.slow-threshold: 300
    binlog.enable: false
    binlog.ignore-error: false
    log.level: "error"
    performance.max-procs: 20
    prepared-plan-cache.enabled: true
    tikv-client.max-batch-wait-time: 2000000
  tikv:
    # server.grpc-concurrency: 4
    # raftstore.apply-pool-size: 2
    # raftstore.store-pool-size: 2
    # rocksdb.max-sub-compactions: 1
    # storage.block-cache.capacity: "16GB"
    readpool.unify-read-pool: true
    readpool.unified.min-thread-count: 5
    readpool.unified.max-thread-count: 20
    readpool.storage.normal-concurrency: 10
    readpool.storage.use-unified-pool: false
    readpool.coprocessor.use-unified-pool: true
    storage.block-cache.shared: true
    storage.block-cache.capacity: "10GB"
    storage.scheduler-worker-pool-size: 5
    raftstore.store-pool-size: 3
    raftstore.apply-pool-size: 3
    rocksdb.max-background-jobs: 3
    raftdb.max-background-jobs: 3
    raftdb.allow-concurrent-memtable-write: true
    server.request-batch-enable-cross-command: false
    server.grpc-concurrency: 6
    pessimistic-txn.pipelined: true
  pd:
    schedule.leader-schedule-limit: 4
    schedule.region-schedule-limit: 2048
    schedule.replica-schedule-limit: 64
    replication.enable-placement-rules: true
  tiflash:
    logger.level: "info"
  # pump:
  #   gc: 7

Hello, could you describe in detail the exact commands you ran at this step?

tiup bench tpch prepare \
  --host tidb001 --port 4000 --db tpch100 --password xxxxx \
  --sf 100 \
  --tiflash \
  --analyze --tidb_build_stats_concurrency 8 --tidb_distsql_scan_concurrency 30

Please upload tikv.log so we can check whether there is a more detailed error.