PD cluster outage: suspected etcd performance degradation preventing TSO service

While running a Flink program that CDC-syncs a large volume of MySQL data into TiDB, the TiDB connections were repeatedly dropped. The logs show that none of the TiDB nodes could obtain TSO, TiKV/TiFlash could not report heartbeats, and TiProxy timed out connecting to PD. The root cause is severe I/O latency in the etcd layer underlying the PD cluster, which broke leader election and interrupted the TSO service.

Only the TiKV and TiFlash nodes use SSDs. The PD nodes also host Kafka. How should this problem be diagnosed?
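
A quick first pass over the PD logs already narrows the fault domain: every PD node reports `slow fdatasync`, `apply request took too long`, and `etcd endpoint is unhealthy` (see the log excerpts below), which points at etcd's disk rather than the network. The following is a minimal triage sketch, not any official tooling: it counts those warnings in a PD log and extracts the slowest recorded `took=` duration. The log path is taken from this cluster's PD config (`/opt/tidb/pd-2379/log/pd.log`); adjust it to your deployment.

```python
#!/usr/bin/env python3
"""Triage sketch: scan a PD log for etcd slow-disk symptoms."""
import re
import sys
from collections import Counter

# Keywords taken from the warnings observed in this cluster's PD logs.
PATTERNS = [
    "slow fdatasync",
    "apply request took too long",
    "etcd endpoint is unhealthy",
    "kv gets too slow",
    "lease keep alive failed",
    "etcd leader not found",
]

TOOK_RE = re.compile(r'\[took=([0-9.]+)(m?s)\]')

def main(path: str = "/opt/tidb/pd-2379/log/pd.log") -> None:
    hits = Counter()
    slowest = 0.0
    with open(path, errors="replace") as f:
        for line in f:
            for p in PATTERNS:
                if p in line:
                    hits[p] += 1
                    m = TOOK_RE.search(line)
                    if m:
                        v = float(m.group(1))
                        secs = v / 1000 if m.group(2) == "ms" else v
                        slowest = max(slowest, secs)
    for p, n in hits.most_common():
        print(f"{n:6d}  {p}")
    print(f"slowest recorded 'took' duration: {slowest:.3f}s")

if __name__ == "__main__":
    main(*sys.argv[1:])
```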

Production cluster details

Host address CPU CPU arch CPU usage Physical memory Memory usage Instance
192.168.10.147 16 (16 vCore) arm64 62.6 GiB 1 PD
192.168.10.171 16 (16 vCore) arm64 Very low 62.6 GiB 1 PD
192.168.10.75 16 (16 vCore) arm64 62.6 GiB 1 PD
192.168.10.170 16 (16 vCore) arm64 62.6 GiB 1 TiDB
192.168.10.200 16 (16 vCore) arm64 Very low 30.6 GiB Medium 1 TiDB
192.168.10.150 16 (16 vCore) arm64 Very low 30.6 GiB Medium 1 TiFlash
192.168.10.11 8 (8 vCore) arm64 30.6 GiB 1 TiKV
192.168.10.158 8 (8 vCore) arm64 30.6 GiB 1 TiKV
192.168.10.204 8 (8 vCore) arm64 Very low 30.6 GiB 1 TiKV
192.168.10.87 8 (8 vCore) arm64 30.6 GiB 1 TiKV

```
Cluster type: tidb
Cluster name: kylin
Cluster version: v8.5.4
Deploy user: tidb
SSH type: builtin
Dashboard URL: http://192.168.10.75:2379/dashboard
Dashboard URLs: http://192.168.10.75:2379/dashboard
Grafana URL: http://192.168.10.150:3000
ID Role Host Ports OS/Arch Status Data Dir Deploy Dir


192.168.10.150:9093 alertmanager 192.168.10.150 9093/9094 linux/aarch64 Up /data/tidb/alertmanager-9093 /opt/tidb/alertmanager-9093
192.168.10.150:3000 grafana 192.168.10.150 3000 linux/aarch64 Up - /opt/tidb/grafana-3000
192.168.10.147:2379 pd 192.168.10.147 2379/2380 linux/aarch64 Up /data/tidb/pd-2379 /opt/tidb/pd-2379
192.168.10.171:2379 pd 192.168.10.171 2379/2380 linux/aarch64 Up /data/tidb/pd-2379 /opt/tidb/pd-2379
192.168.10.75:2379 pd 192.168.10.75 2379/2380 linux/aarch64 Up|L|UI /data/tidb/pd-2379 /opt/tidb/pd-2379
192.168.10.150:9090 prometheus 192.168.10.150 9090/9115/9100/12020 linux/aarch64 Up /data/tidb/prometheus-9090 /opt/tidb/prometheus-9090
192.168.10.170:4000 tidb 192.168.10.170 4000/10080 linux/aarch64 Up - /opt/tidb/tidb-4000
192.168.10.200:4000 tidb 192.168.10.200 4000/10080 linux/aarch64 Up - /opt/tidb/tidb-4000
192.168.10.150:9000 tiflash 192.168.10.150 9000/3930/20170/20292/8234/8123 linux/aarch64 Up /data/tidb/tiflash-9000 /opt/tidb/tiflash-9000
192.168.10.11:20160 tikv 192.168.10.11 20160/20180 linux/aarch64 Up /data/tidb/tikv-20160 /opt/tidb/tikv-20160
192.168.10.158:20160 tikv 192.168.10.158 20160/20180 linux/aarch64 Up /data/tidb/tikv-20160 /opt/tidb/tikv-20160
192.168.10.204:20160 tikv 192.168.10.204 20160/20180 linux/aarch64 Up /data/tidb/tikv-20160 /opt/tidb/tikv-20160
192.168.10.87:20160 tikv 192.168.10.87 20160/20180 linux/aarch64 Up /data/tidb/tikv-20160 /opt/tidb/tikv-20160
192.168.10.170:6000 tiproxy 192.168.10.170 6000/3080 linux/aarch64 Up - /opt/tidb/tiproxy-6000
192.168.10.200:6000 tiproxy 192.168.10.200 6000/3080 linux/aarch64 Up - /opt/tidb/tiproxy-6000
Total nodes: 15
```

Deduplicated logs (grouped by component)

TiProxy component

tiproxy_192.168.10.170_6000

[Warn] [main.etcd.etcdcli] [v3@v3.5.6/retry_interceptor.go:62] [retrying of unary invoker failed] [target=etcd-endpoints://0x40006a01c0/192.168.10.171:2379] [attempt=0] [error="rpc error: code = DeadlineExceeded desc = context deadline exceeded"]

tiproxy_192.168.10.200_6000

[Warn] [main.etcd.etcdcli] [v3@v3.5.6/retry_interceptor.go:62] [retrying of unary invoker failed] [target=etcd-endpoints://0x40003728c0/192.168.10.171:2379] [attempt=0] [error="rpc error: code = DeadlineExceeded desc = context deadline exceeded"]

TiKV component

tikv_192.168.10.204_20160

[Warn] [pd.rs:1907] ["report min resolved_ts failed"] [err="Grpc(RpcFailure(RpcStatus { code: 14-UNAVAILABLE, message: \"not leader\", details: [] }))"] [thread_id=26]
[Warn] [util.rs:497] ["request failed, retry"] [err="Grpc(RpcFailure(RpcStatus { code: 14-UNAVAILABLE, message: \"not leader\", details: [] }))"] [thread_id=103]
[Warn] [client.rs:155] ["failed to update PD client"] [error="Other(\"[components/pd_client/src/util.rs:377]: cancel reconnection due to too small interval\")"] [thread_id=12]

tikv_192.168.10.158_20160

[Warn] [util.rs:497] ["request failed, retry"] [err="Grpc(RpcFailure(RpcStatus { code: 4-DEADLINE_EXCEEDED, message: \"Deadline Exceeded\", details: [] }))"] [thread_id=103]
[Warn] [client.rs:155] ["failed to update PD client"] [error="Other(\"[components/pd_client/src/util.rs:377]: cancel reconnection due to too small interval\")"] [thread_id=12]
[Warn] [pd.rs:1907] ["report min resolved_ts failed"] [err="Grpc(RpcFailure(RpcStatus { code: 14-UNAVAILABLE, message: \"not leader\", details: [] }))"] [thread_id=26]
[Warn] [client.rs:655] ["failed to send heartbeat"] [err="Grpc(RpcFinished(Some(RpcStatus { code: 0-OK, message: \"\", details: [] })))"] [thread_id=8]

tikv_192.168.10.87_20160

[Warn] [util.rs:497] ["request failed, retry"] [err="Grpc(RpcFailure(RpcStatus { code: 14-UNAVAILABLE, message: \"not leader\", details: [] }))"] [thread_id=103]
[Warn] [client.rs:155] ["failed to update PD client"] [error="Other(\"[components/pd_client/src/util.rs:377]: cancel reconnection due to too small interval\")"] [thread_id=12]
[Warn] [pd.rs:1907] ["report min resolved_ts failed"] [err="Grpc(RpcFailure(RpcStatus { code: 14-UNAVAILABLE, message: \"not leader\", details: [] }))"] [thread_id=26]
[Warn] [client.rs:655] ["failed to send heartbeat"] [err="Grpc(RpcFinished(Some(RpcStatus { code: 0-OK, message: \"\", details: [] })))"] [thread_id=8]

tikv_192.168.10.11_20160

[Warn] [pd.rs:1907] ["report min resolved_ts failed"] [err="Grpc(RpcFailure(RpcStatus { code: 14-UNAVAILABLE, message: \"not leader\", details: [] }))"] [thread_id=26]
[Warn] [util.rs:497] ["request failed, retry"] [err="Grpc(RpcFailure(RpcStatus { code: 14-UNAVAILABLE, message: \"not leader\", details: [] }))"] [thread_id=103]
[Warn] [client.rs:155] ["failed to update PD client"] [error="Other(\"[components/pd_client/src/util.rs:377]: cancel reconnection due to too small interval\")"] [thread_id=12]

TiFlash component

tiflash_192.168.10.150_3930

[Warn] [<unknown>] ["Receive TsoResponse failed"] [source=pingcap.pd] [thread_id=97]
[Warn] [<unknown>] ["update ts error: Exception: Receive TsoResponse failed"] [source=pd/oracle] [thread_id=97]
[Warn] [pd.rs:1776] ["report min resolved_ts failed"] [err="Grpc(RpcFailure(RpcStatus { code: 14-UNAVAILABLE, message: \"not leader\", details: [] }))"] [thread_id=43]

TiDB component

tidb_192.168.10.200_4000

[Warn] [nodes.go:79] ["generate task executor nodes met error"] [error="context deadline exceeded"]
[Warn] [txn.go:691] ["wait tso failed"] [error="rpc error: code = Unknown desc = [PD:tso:ErrGenerateTimestamp]generate timestamp failed, requested pd is not leader of cluster"]
[Error] [tso_dispatcher.go:438] ["[tso] getTS error after processing requests"] [dc-location=global] [stream-url=http://192.168.10.147:2379] [error="[PD:client:ErrClientGetTSO]get TSO failed, rpc error: code = Unknown desc = [PD:tso:ErrGenerateTimestamp]generate timestamp failed, requested pd is not leader of cluster"]
[Warn] [tso_stream.go:315] ["failed to send RPC request through tsoStream"] [stream=192.168.10.147:2379-11323] [error=EOF]
[Error] [pd.go:484] ["updateTS error"] [txnScope=global] [error="rpc error: code = Unknown desc = [PD:tso:ErrGenerateTimestamp]generate timestamp failed, requested pd is not leader of cluster"]
[Warn] [controller.go:623] ["[resource group controller] token bucket rpc error"] [error="rpc error: code = Unavailable desc = not leader"]

tidb_192.168.10.170_4000

[Warn] [txn.go:691] ["wait tso failed"] [error="rpc error: code = Unknown desc = [PD:tso:ErrGenerateTimestamp]generate timestamp failed, requested pd is not leader of cluster"]
[Error] [tso_dispatcher.go:438] ["[tso] getTS error after processing requests"] [dc-location=global] [stream-url=http://192.168.10.147:2379] [error="[PD:client:ErrClientGetTSO]get TSO failed, rpc error: code = Unknown desc = [PD:tso:ErrGenerateTimestamp]generate timestamp failed, requested pd is not leader of cluster"]
[Warn] [tso_stream.go:315] ["failed to send RPC request through tsoStream"] [stream=192.168.10.147:2379-10981] [error=EOF]
[Error] [pd.go:484] ["updateTS error"] [txnScope=global] [error="rpc error: code = Unknown desc = [PD:tso:ErrGenerateTimestamp]generate timestamp failed, requested pd is not leader of cluster"]

PD component

pd_192.168.10.171_2379

[Warn] [v3_server.go:920] ["waiting for ReadIndex response took too long, retrying"] [sent-request-id=10879178303427950155] [retry-timeout=500ms]
[Warn] [util.go:170] ["apply request took too long"] [took=999.938013ms] [expected-duration=100ms] [prefix="read-only range "] [request="key:\"/tidb/server/info\" range_end:\"/tidb/server/infp\" "] [response=] [error="context canceled"]
[Warn] [interceptor.go:197] ["request stats"] ["start time"=2025/12/16 11:13:00.371 +08:00] ["time spent"=1.000162695s] [remote=192.168.10.200:35266] ["response type"=/etcdserverpb.KV/Range] ["request count"=0] ["request size"=38] ["response count"=0] ["response size"=0] ["request content"="key:\"/tidb/server/info\" range_end:\"/tidb/server/infp\" "]
[Warn] [health_checker.go:194] ["etcd endpoint is unhealthy"] [endpoint=http://192.168.10.75:2379] [took=2.592153211s] [source=election-etcd-client]
[Warn] [etcdutil.go:157] ["kv gets too slow"] [request-key=/pd/7582102668489220810/config] [cost=7.451833882s] []
[Warn] [leadership.go:345] ["the connection maybe unhealthy, retry to watch later"] [revision=367538] [leader-key=/pd/7582102668489220810/leader] [purpose="leader election"]
[Warn] [leadership.go:291] ["the connection maybe unhealthy, retry to watch later"] [revision=367538] [leader-key=/pd/7582102668489220810/leader] [purpose="leader election"]
[Warn] [leadership.go:373] ["required revision has been compacted, use the compact revision"] [required-revision=367538] [compact-revision=381984] [leader-key=/pd/7582102668489220810/leader] [purpose="leader election"]
[Warn] [client.go:170] ["region sync with leader meet error"] [error="[PD:grpc:ErrGRPCRecv]receive response error: rpc error: code = Canceled desc = context canceled"]

pd_192.168.10.147_2379

[Warn] [v3_server.go:920] ["waiting for ReadIndex response took too long, retrying"] [sent-request-id=4305048747374098940] [retry-timeout=500ms]
[Warn] [util.go:170] ["apply request took too long"] [took=999.833317ms] [expected-duration=100ms] [prefix="read-only range "] [request="key:\"/tidb/server/info\" range_end:\"/tidb/server/infp\" "] [response=] [error="context canceled"]
[Warn] [interceptor.go:197] ["request stats"] ["start time"=2025/12/16 11:13:01.570 +08:00] ["time spent"=1.000000338s] [remote=192.168.10.200:39954] ["response type"=/etcdserverpb.KV/Range] ["request count"=0] ["request size"=38] ["response count"=0] ["response size"=0] ["request content"="key:\"/tidb/server/info\" range_end:\"/tidb/server/infp\" "]
[Warn] [health_checker.go:194] ["etcd endpoint is unhealthy"] [endpoint=http://192.168.10.75:2379] [took=2.577287751s] [source=election-etcd-client]
[Warn] [wal.go:805] ["slow fdatasync"] [took=7.167304116s] [expected-duration=1s]
[Warn] [raft.go:416] ["leader failed to send out heartbeat on time; took too long, leader is overloaded likely from slow disk"] [to=9b7ca93817c496fa] [heartbeat-interval=500ms] [expected-duration=1s] [exceeded-duration=6.297016327s]
[Warn] [etcd_kv.go:178] ["txn runs too slow"] [response="{\"header\":{\"cluster_id\":16225759819247731649,\"member_id\":11204016031372973818,\"revision\":385524,\"raft_term\":36},\"succeeded\":true,\"responses\":[{\"Response\":{\"response_put\":{\"header\":{\"revision\":385524}}}}]}"] [cost=7.169645888s] []
[Warn] [lease.go:185] ["lease keep alive failed"] [purpose="leader election"] [start=2025/12/16 11:13:07.933 +08:00] [error="context canceled"]
[Error] [lease.go:97] ["revoke lease failed"] [purpose="leader election"] [error="context deadline exceeded"]
[Warn] [etcdutil.go:157] ["kv gets too slow"] [request-key=/pd/7582102668489220810/timestamp] [cost=3.961635432s] []
[Warn] [etcdutil.go:157] ["kv gets too slow"] [request-key=/pd/7582102668489220810/gc/safe_point] [cost=3.191099973s] []
[Warn] [leadership.go:345] ["the connection maybe unhealthy, retry to watch later"] [revision=367538] [leader-key=/pd/7582102668489220810/leader] [purpose="leader election"]
[Warn] [client.go:170] ["region sync with leader meet error"] [error="[PD:grpc:ErrGRPCRecv]receive response error: rpc error: code = Canceled desc = context canceled"]

pd_192.168.10.75_2379

[Warn] [v3_server.go:920] ["waiting for ReadIndex response took too long, retrying"] [sent-request-id=5183813624664499738] [retry-timeout=500ms]
[Warn] [util.go:170] ["apply request took too long"] [took=999.878058ms] [expected-duration=100ms] [prefix="read-only range "] [request="key:\"/tidb/server/info\" range_end:\"/tidb/server/infp\" "] [response=] [error="context canceled"]
[Warn] [interceptor.go:197] ["request stats"] ["start time"=2025/12/16 11:13:05.171 +08:00] ["time spent"=1.000020318s] [remote=192.168.10.200:39954] ["response type"=/etcdserverpb.KV/Range] ["request count"=0] ["request size"=38] ["response count"=0] ["response size"=0] ["request content"="key:\"/tidb/server/info\" range_end:\"/tidb/server/infp\" "]
[Warn] [health_checker.go:194] ["etcd endpoint is unhealthy"] [endpoint=http://192.168.10.75:2379] [took=2.557590056s] [source=server-etcd-client]
[Warn] [wal.go:805] ["slow fdatasync"] [took=3.542783197s] [expected-duration=1s]
[Warn] [etcdutil.go:157] ["kv gets too slow"] [request-key=/pd/7582102668489220810/config] [cost=10.000828399s] [error="context deadline exceeded"]
[Error] [etcdutil.go:162] ["load from etcd meet error"] [key=/pd/7582102668489220810/config] [error="[PD:etcd:ErrEtcdKVGet]context deadline exceeded: context deadline exceeded"]
[Warn] [manager.go:105] ["failed to reload persist options"]
[Warn] [leadership.go:291] ["the connection maybe unhealthy, retry to watch later"] [revision=367538] [leader-key=/pd/7582102668489220810/leader] [purpose="leader election"]
[Warn] [leadership.go:373] ["required revision has been compacted, use the compact revision"] [required-revision=367538] [compact-revision=381984] [leader-key=/pd/7582102668489220810/leader] [purpose="leader election"]
[Warn] [client.go:170] ["region sync with leader meet error"] [error="[PD:grpc:ErrGRPCRecv]receive response error: rpc error: code = Canceled desc = context canceled"]
[Warn] [v3_server.go:932] ["timed out waiting for read index response (local node might have slow network)"] [timeout=11s]
[Warn] [member.go:234] ["failed to pass pre-check, check pd leader later"] [error="[PD:member:ErrEtcdLeaderNotFound]etcd leader not found"]
[Warn] [v3_server.go:897] ["ignored out-of-date read index response; local node read indexes queueing up and waiting to be in sync with leader"] [sent-request-id=5183813624664499743] [received-request-id=5183813624664499738]
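
The `slow fdatasync` warnings above (7.16 s and 3.54 s against etcd's expected 1 s per sync, and normally single-digit milliseconds on a healthy disk) can be cross-checked directly on the PD hosts. Below is a rough sketch that mimics etcd's WAL write pattern: append a small record, call `fdatasync`, and time it. The file path and record size are assumptions for illustration; on a disk suitable for PD/etcd the p99 should stay in the low milliseconds, and anything approaching hundreds of milliseconds matches the symptoms seen here (these PD disks also serve Kafka, which competes for the same I/O).

```python
#!/usr/bin/env python3
"""Crude fdatasync latency probe, roughly mimicking etcd's WAL write pattern."""
import os
import statistics
import time

PATH = "/data/tidb/pd-2379/fsync_probe.tmp"   # assumed; any file on the same disk works
RECORD = b"x" * 2048                          # ~2 KiB, roughly a small WAL entry
ROUNDS = 200

def main() -> None:
    lat_ms = []
    fd = os.open(PATH, os.O_WRONLY | os.O_CREAT | os.O_APPEND, 0o644)
    try:
        for _ in range(ROUNDS):
            os.write(fd, RECORD)
            t0 = time.perf_counter()
            os.fdatasync(fd)                  # the call etcd waits on for every WAL entry
            lat_ms.append((time.perf_counter() - t0) * 1000)
    finally:
        os.close(fd)
        os.unlink(PATH)
    lat_ms.sort()
    print(f"avg={statistics.mean(lat_ms):.2f} ms "
          f"p99={lat_ms[int(len(lat_ms) * 0.99) - 1]:.2f} ms "
          f"max={lat_ms[-1]:.2f} ms")

if __name__ == "__main__":
    main()
```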

Diagnosis

Diagnostic results

This table shows the results of the automatic diagnosis, i.e. the problems detected in the cluster.

RULE ITEM TYPE INSTANCE STATUS_ADDRESS VALUE REFERENCE SEVERITY DETAILS
config proxy.advertise-addr tiproxy inconsistent consistent warning 192.168.10.170:6000 config value is 192.168.10.170 192.168.10.200:6000 config value is 192.168.10.200
node-load swap-memory-used node 192.168.10.171:9100 12582912.0 0 warning
threshold-check leader-score-balance tikv 192.168.10.150:3930 192.168.10.150:20292 100.00% < 5.00% warning 192.168.10.158:20160 max leader_score is 21.00, much more than 192.168.10.150:3930 min leader_score 0.00
threshold-check region-score-balance tikv 192.168.10.150:3930 192.168.10.150:20292 100.00% < 5.00% warning 192.168.10.11:20160 max region_score is 51962.20, much more than 192.168.10.150:3930 min region_score 0.00
threshold-check data-block-cache-hit tikv 192.168.10.158:20160 192.168.10.158:20180 0.113 > 0.800 warning min data-block-cache-hit rate of 192.168.10.158:20180 tikv is too low
threshold-check rocksdb-write-duration tikv 192.168.10.204:20160 192.168.10.204:20180 0.211 < 0.100 warning max duration of 192.168.10.204:20180 tikv rocksdb-write-duration is too slow
threshold-check scheduler-cmd-duration tikv 192.168.10.158:20160 192.168.10.158:20180 0.105 < 0.100 warning max duration of 192.168.10.158:20180 tikv scheduler-cmd-duration is too slow

Load

Server load information

METRIC_NAME instance AVG MAX MIN
node_cpu_usage 11.83% 33.62% 0.48%
node_mem_usage 46.1% 82.62% 5.13%
node_disk_io_utilization 13% 92% 0%
– node_disk_io_utilization 192.168.10.204:9100,vda 60% 87% 20%
– node_disk_io_utilization 192.168.10.158:9100,vda 60% 92% 16%
– node_disk_io_utilization 192.168.10.11:9100,vda 58% 86% 28%
– node_disk_io_utilization 192.168.10.87:9100,vda 58% 82% 15%
– node_disk_io_utilization 192.168.10.75:9100,vda 6% 10% 4%
– node_disk_io_utilization 192.168.10.147:9100,vda 6% 9% 4%
– node_disk_io_utilization 192.168.10.171:9100,vda 5% 10% 3%
– node_disk_io_utilization 192.168.10.150:9100,vda 0.2% 0.3% 0.2%
– node_disk_io_utilization 192.168.10.170:9100,vda 0.2% 0.4% 0.1%
– node_disk_io_utilization 192.168.10.200:9100,vda 0.1% 0.2% 0.07%
– node_disk_io_utilization 192.168.10.158:9100,sr0 0% 0% 0%
– node_disk_io_utilization 192.168.10.170:9100,sr0 0% 0% 0%
– node_disk_io_utilization 192.168.10.87:9100,sr0 0% 0% 0%
– node_disk_io_utilization 192.168.10.147:9100,sr0 0% 0% 0%
– node_disk_io_utilization 192.168.10.150:9100,sr0 0% 0% 0%
– node_disk_io_utilization 192.168.10.200:9100,sr0 0% 0% 0%
– node_disk_io_utilization 192.168.10.75:9100,sr0 0% 0% 0%
– node_disk_io_utilization 192.168.10.11:9100,sr0 0% 0% 0%
– node_disk_io_utilization 192.168.10.171:9100,sr0 0% 0% 0%
– node_disk_io_utilization 192.168.10.204:9100,sr0 0% 0% 0%
Disk write latency 20 ms 230 ms 500 us
– node_disk_write_latency 192.168.10.75:9100,vda 60 ms 230 ms 9000 us
– node_disk_write_latency 192.168.10.171:9100,vda 60 ms 200 ms 5000 us
– node_disk_write_latency 192.168.10.147:9100,vda 30 ms 80 ms 10000 us
– node_disk_write_latency 192.168.10.200:9100,vda 9000 us 10000 us 2000 us
– node_disk_write_latency 192.168.10.170:9100,vda 9000 us 20 ms 3000 us
– node_disk_write_latency 192.168.10.150:9100,vda 3000 us 4000 us 3000 us
– node_disk_write_latency 192.168.10.87:9100,vda 1000 us 5000 us 500 us
– node_disk_write_latency 192.168.10.158:9100,vda 900 us 2000 us 500 us
– node_disk_write_latency 192.168.10.204:9100,vda 800 us 1000 us 500 us
– node_disk_write_latency 192.168.10.11:9100,vda 600 us 600 us 500 us
– node_disk_write_latency 192.168.10.158:9100,sr0
– node_disk_write_latency 192.168.10.170:9100,sr0
– node_disk_write_latency 192.168.10.87:9100,sr0
– node_disk_write_latency 192.168.10.75:9100,sr0
– node_disk_write_latency 192.168.10.11:9100,sr0
– node_disk_write_latency 192.168.10.171:9100,sr0
– node_disk_write_latency 192.168.10.204:9100,sr0
– node_disk_write_latency 192.168.10.147:9100,sr0
– node_disk_write_latency 192.168.10.150:9100,sr0
– node_disk_write_latency 192.168.10.200:9100,sr0
Disk read latency 3000 us 30 ms 300 us
tikv_disk_read_bytes 273.760 KB 4.377 MB 0.000 KB
tikv_disk_write_bytes 4.799 MB 82.034 MB 0.000 KB
node_network_in_traffic 1.526 MB 7.124 MB 0.017 KB
node_network_out_traffic 2.028 MB 13.903 MB 0.017 KB
node_tcp_in_use 73 524 19
node_tcp_connections 203 1254 60
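
The latency rows above already show the asymmetry: the three PD hosts (.75, .171, .147) average 30-60 ms per write with peaks up to 230 ms, while every TiKV host stays around or below 1 ms. To pull the same numbers directly rather than reading the report, a sketch like the following queries the cluster's Prometheus (192.168.10.150:9090 in the topology above) for average write latency per device. The PromQL assumes the usual node_exporter metric names (`node_disk_write_time_seconds_total`, `node_disk_writes_completed_total`); older exporter versions may name them differently.

```python
#!/usr/bin/env python3
"""Pull per-device disk write latency from Prometheus (sketch)."""
import json
import urllib.parse
import urllib.request

PROM = "http://192.168.10.150:9090"   # Prometheus address from the topology above
# Average time spent per write over the last 5 minutes, per instance/device.
QUERY = (
    "rate(node_disk_write_time_seconds_total[5m]) / "
    "rate(node_disk_writes_completed_total[5m])"
)

def main() -> None:
    url = f"{PROM}/api/v1/query?" + urllib.parse.urlencode({"query": QUERY})
    with urllib.request.urlopen(url, timeout=10) as resp:
        data = json.load(resp)
    rows = []
    for r in data["data"]["result"]:
        value = float(r["value"][1])
        if value == value:  # skip NaN produced by idle devices
            m = r["metric"]
            rows.append((value, m.get("instance", "?"), m.get("device", "?")))
    for value, inst, dev in sorted(rows, reverse=True):
        print(f"{value * 1000:8.2f} ms/write  {inst}  {dev}")

if __name__ == "__main__":
    main()
```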

CPU usage by instance

INSTANCE JOB AVG MAX MIN
192.168.10.170:10080 tidb 234% 358% 92%
192.168.10.200:10080 tidb 226% 355% 84%
192.168.10.204:20180 tikv 107% 184% 56%
192.168.10.87:20180 tikv 96% 149% 47%
192.168.10.11:20180 tikv 92% 160% 44%
192.168.10.158:20180 tikv 91% 182% 16%
192.168.10.200:3080 tiproxy 74% 126% 18%
192.168.10.147:2379 pd 37% 51% 22%
192.168.10.171:2379 pd 4% 10% 2%
192.168.10.75:2379 pd 2% 2% 2%
192.168.10.170:3080 tiproxy 2% 4% 1%
192.168.10.150:12020 ng-monitoring 0.6% 0.7% 0.5%

PD component

Time consumed by PD events

This table shows how long each event in the PD component took. METRIC_NAME is the event name; LABEL is the event label, such as instance or event type; TIME_RATIO is the event's total time divided by the total time of the event whose TIME_RATIO is 1; TOTAL_TIME is the event's total time; TOTAL_COUNT is the event's total count; P999, P99, P90, and P80 are the maximum times at the 0.999, 0.99, 0.90, and 0.80 quantiles respectively.

METRIC_NAME LABEL TIME_RATIO TOTAL_TIME TOTAL_COUNT P999 P99 P90 P80
PD client commands 1 802.11 1590636 0.01 0.008 0.008 0.007
pd_client_request_rpc 1 122.82 365199 0.004 0.002 0.0008 0.0005
pd_grpc_completed_commands 1 1340795.04 45919 10 10 10 10
pd_operator_finish 1 8.99 1 8 7.96 7.6 7.2
pd_operator_step_finish 1 8.99 5 8 7.96 7.6 7.2
etcd transactions 1 21.73 153 2.05 2.05 0.18 0.007
pd_region_heartbeat 1 0 395 0 0 0 0
etcd_wal_fsync 1 15.95 804 7.91 5.41 0.005 0.003
Network latency 1 0.08 107 0.003 0.003 0.003 0.002
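
The row that matters most here is `etcd_wal_fsync`: P999 of 7.91 s and P99 of 5.41 s, which matches the `slow fdatasync` warnings in the PD logs. PD embeds etcd and, as far as I can tell, exposes etcd's histograms on its /metrics endpoint, so the same figure can be read live. A rough sketch against one of this cluster's PD client URLs, assuming the standard `etcd_disk_wal_fsync_duration_seconds` histogram is present:

```python
#!/usr/bin/env python3
"""Approximate the WAL fsync p99 from a PD /metrics endpoint (sketch)."""
import re
import urllib.request

ENDPOINT = "http://192.168.10.147:2379/metrics"  # one of this cluster's PD client URLs
BUCKET_RE = re.compile(
    r'^etcd_disk_wal_fsync_duration_seconds_bucket\{le="([^"]+)"\} (\S+)', re.M
)

def main() -> None:
    text = urllib.request.urlopen(ENDPOINT, timeout=10).read().decode()
    buckets = [(float("inf") if le == "+Inf" else float(le), float(c))
               for le, c in BUCKET_RE.findall(text)]
    buckets.sort()
    total = buckets[-1][1]              # the +Inf bucket holds the total count
    target = total * 0.99
    for le, cumulative in buckets:      # histogram buckets are cumulative
        if cumulative >= target:
            print(f"~p99 fsync latency <= {le}s ({cumulative:.0f}/{total:.0f} samples)")
            break

if __name__ == "__main__":
    main()
```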

Leader/Region scheduling counts

METRIC_NAME LABEL TOTAL_COUNT
blance-leader-in
blance-leader-out
blance-region-in
blance-region-out

Cluster status

TYPE MAX MIN
store_serving_count 5 5
store_unhealth_count 0 0
store_up_count 5 5
leader_count 104 104
storage_capacity 976.466 GB 976.466 GB
store_removed_count 0 0
storage_size 61.882 GB 60.804 GB
store_low_space_count 0 0
store_preparing_count 0 0
learner_count 0 0
region_count 312 312
store_disconnected_count 0 0
store_removing_count 0 0
store_tombstone_count 0 0
witness_count 0 0
store_down_count 0 0
store_offline_count 0 0
store_slow_count 0 0

Storage status of TiKV nodes

METRIC_NAME INSTANCE AVG MAX MIN
region_score 41251.39 52316 0
leader_score 20.8 31 0
region_count 62.4 80 0
leader_count 20.8 31 0
region_size 15378.36 19495 0
leader_size 5126.12 8104 0

etcd status

TYPE MAX MIN
appliedIndex 386038 385815
committedIndex 386038 385815
term 38 36

TiKV component

Time consumed by TiKV events

This table shows how long each event in the TiKV component took. The columns have the same meaning as in the PD event table above.

METRIC_NAME LABEL TIME_RATIO TOTAL_TIME TOTAL_COUNT P999 P99 P90 P80
tikv_grpc_message 1 6510.22 1958331 0.37 0.11 0.009 0.006
Coprocessor read requests 0.0003 1.91 7000 0.005 0.001 0.0006 0.0005
Coprocessor requests 0.0002 1.02 7000 0.001 0.001 0.0003 0.0003
tikv_cop_wait 0.0001 0.89 21000 0.005 0.0006 0.0001 0.0001
tikv_scheduler_command 0.98 6370.95 1953987 0.42 0.1 0.009 0.008
tikv_scheduler_latch_wait 0.002 15.25 1512561 0.005 0.002 0.001 0.001
tikv_storage_async_request 0.93 6066.05 5306351 0.35 0.08 0.009 0.008
tikv_scheduler_processing_read 0.02 123.08 1953360 0.005 0.005 0.002 0.002
tikv_raft_propose_wait 0.006 36.33 1332409 0.002 0.0005 0.0001 0.00006
tikv_raft_process 0.02 116.15 4523504 0.0003 0.0002 0.00007 0.00005
tikv_raft_append_log 0.08 506.66 330384 0.21 0.01 0.002 0.002
tikv_raft_commit_log 0.29 1860.63 647121 0.03 0.01 0.005 0.004
tikv_raft_apply_wait 0.02 108.16 1737984 0.14 0.0006 0.0001 0.00007
tikv_raft_apply_log 0.03 163.51 1335307 0.01 0.007 0.000009 0.000008
tikv_raft_store_events 0.008 49.08 7783616 0.01 0.009 0.0009 0.0008
Snapshot processing 0.001 7.45 3 10.48 10.43 9.96 9.44
Snapshot sending 0.0002 1.48 1 1.6 1.59 1.52 1.44
tikv_check_split 0.0008 4.96 4 2.62 2.61 2.49 2.36
tikv_ingest_sst
tikv_gc_tasks 0 0 0
tikv_pd_request 0.002 12.53 2804 4.99 4.9 4 3
tikv_lock_manager_deadlock_detect 0 0 0
tikv_lock_manager_waiter_lifetime 0 0 0
tikv_backup_range
tikv_backup

RocksDB event durations

METRIC_NAME LABEL AVG MAX P99 P95
get duration 10 us 1069 us 157 us 80 us
seek duration 33 us 4326 us 2496 us 1870 us
write duration 68 us 211 ms 374 us 195 us
WAL sync duration 2940 us 140 ms 140 ms 140 ms
compaction duration 642 ms 7231 ms 7231 ms 7231 ms
SST read duration 17 us 4279 us 862 us 723 us
write stall duration 0 us 0 us 0 us 0 us

TiKV errors

METRIC_NAME LABEL TOTAL_COUNT
Total number of failed gRPC messages
Total number of TiKV critical errors
tikv_scheduler_is_busy_total_count
tikv_channel_full_total_count
Total number of Coprocessor errors 0
tikv_engine_write_stall 0
tikv_server_report_failures_total_count 0
tikv_storage_async_request_error 13
– tikv_storage_async_request_error 192.168.10.87:20180,err_stale_command,snapshot 7
– tikv_storage_async_request_error 192.168.10.87:20180,err_not_leader,snapshot 4
– tikv_storage_async_request_error 192.168.10.87:20180,err_epoch_not_match,snapshot 3
tikv_lock_manager_detect_error_total_count 0
tikv_backup_errors_total_count

TiKV instance store size

METRIC_NAME LABEL TOTAL_COUNT
store size 118.557 GB

Approximate Region size

METRIC_NAME LABEL P99 P90 P80 P50
Approximate Region size 509.440 MB 486.400 MB 460.800 MB 384.000 MB

Coprocessor information

METRIC_NAME LABEL TOTAL_VALUE
Total number of keys scanned by TiKV Coprocessor 171126.67
Total size of TiKV Coprocessor responses 577.367 KB
Total number of TiKV Coprocessor scan operations 189069.33

TiKV scheduler information

METRIC_NAME LABEL TOTAL_VALUE TOTAL_COUNT P999 P99 P90 P80
tikv_scheduler_keys_read 305573 440969 16 15 10 4
tikv_scheduler_keys_written 1392256 1383963 64 63 54 45
tikv_scheduler_scan_details_total_num 4263774.67
Total number of scheduler statuses 9087057.33

Raft information

METRIC_NAME LABEL TOTAL_VALUE
Total number of Raft messages sent 2806624
Total number of Raft messages persisted 3647493.33
Total number of Raft messages received 2878125.33
Total number of Raft messages dropped 4
Total number of Raft proposals 2013950.67

TiKV snapshot information

METRIC_NAME LABEL TOTAL_VALUE TOTAL_COUNT P999 P99 P90 P80
Number of KV entries in snapshots 1852072 1 1637581 1630208 1556480 1474560
Snapshot size 147549357 1 134150619 133546639 127506842 120795955
tikv_snapshot_state_total_count 0

GC information

METRIC_NAME LABEL TOTAL_VALUE
tikv_gc_keys_total_num 0
tidb_gc_worker_action_total_num 19

TiKV task information

METRIC_NAME LABEL TOTAL_VALUE
Total number of tasks handled by worker 587317
Total number of pending and running worker tasks 3
Total number of tasks handled by future_pool 1520912
Total number of pending and running future_pool tasks 10157

Cache hit rate

METRIC_NAME INSTANCE AVG MAX MIN
memtable hit rate 14% 23% 3%
all block cache hit rate 92% 100% 22%
index block cache hit rate 100% 100% 100%
filter block cache hit rate 100% 100% 100%
data block cache hit rate 64% 92% 11%
bloom_prefix block cache hit rate 0.7% 2% 0%

Configuration

Scheduler initial configuration

Initial configuration values of the PD scheduler. The initial time is the start time of the report.

CONFIG_ITEM VALUE CURRENT_VALUE DIFF_WITH_CURRENT
max-replicas 3 3 0
region-max-keys 3840000 3840000 0
region-schedule-limit 2048 2048 0
enable-makeup-replica 1 1 0
enable-replace-offline-replica 1 1 0
max-merge-region-size 54 54 0
region-max-size 384 384 0
low-space-ratio 0.8 0.8 0
max-merge-region-keys 540000 540000 0
merge-schedule-limit 8 8 0
hot-region-schedule-limit 4 4 0
leader-schedule-limit 4 4 0
region-split-keys 2560000 2560000 0
region-split-size 256 256 0
enable-remove-down-replica 1 1 0
enable-remove-extra-replica 1 1 0
high-space-ratio 0.7 0.7 0
hot-region-cache-hits-threshold 3 3 0
max-pending-peer-count 64 64 0
max-snapshot-count 64 64 0
replica-schedule-limit 64 64 0

Scheduler configuration change history

Configuration change history of the PD scheduler. APPROXIMATE_CHANGE_TIME is the most recent effective change time.

APPROXIMATE_CHANGE_TIME CONFIG_ITEM VALUE

TiDB GC initial configuration

Initial configuration values of TiDB GC. The initial time is the start time of the report.

CONFIG_ITEM VALUE CURRENT_VALUE DIFF_WITH_CURRENT
tikv_gc_life_time 600 600 0
tikv_gc_run_interval 600 600 0

TiDB GC configuration change history

Configuration change history of TiDB GC. APPROXIMATE_CHANGE_TIME is the most recent effective change time.

APPROXIMATE_CHANGE_TIME CONFIG_ITEM VALUE

TiKV RocksDB initial configuration

Initial configuration values of TiKV RocksDB. The initial time is the start time of the report.

CONFIG_ITEM INSTANCE VALUE CURRENT_VALUE DIFF_WITH_CURRENT DISTINCT_VALUES_IN_INSTANCE
block_based_bloom_filter , default 0 0 0 1
block_based_bloom_filter , lock 0 0 0 1
block_based_bloom_filter , raft 0 0 0 1
block_based_bloom_filter , write 0 0 0 1
block_cache_size , default 0 0 0 1
block_cache_size , lock 0 0 0 1
block_cache_size , raft 0 0 0 1
block_cache_size , write 0 0 0 1
block_size , default 32768 32768 0 1
block_size , lock 16384 16384 0 1
block_size , raft 16384 16384 0 1
block_size , write 32768 32768 0 1
bloom_filter_bits_per_key , default 10 10 0 1
bloom_filter_bits_per_key , lock 10 10 0 1
bloom_filter_bits_per_key , raft 10 10 0 1
bloom_filter_bits_per_key , write 10 10 0 1
cache_index_and_filter_blocks , default 1 1 0 1
cache_index_and_filter_blocks , lock 1 1 0 1
cache_index_and_filter_blocks , raft 1 1 0 1
cache_index_and_filter_blocks , write 1 1 0 1
disable_auto_compactions , default 0 0 0 1
disable_auto_compactions , lock 0 0 0 1
disable_auto_compactions , raft 0 0 0 1
disable_auto_compactions , write 0 0 0 1
disable_block_cache , default 0 0 0 1
disable_block_cache , lock 0 0 0 1
disable_block_cache , raft 0 0 0 1
disable_block_cache , write 0 0 0 1
disable_write_stall , default 1 1 0 1
disable_write_stall , lock 1 1 0 1
disable_write_stall , raft 1 1 0 1
disable_write_stall , write 1 1 0 1
dynamic_level_bytes , default 1 1 0 1
dynamic_level_bytes , lock 1 1 0 1
dynamic_level_bytes , raft 1 1 0 1
dynamic_level_bytes , write 1 1 0 1
enable_doubly_skiplist , default 1 1 0 1
enable_doubly_skiplist , lock 1 1 0 1
enable_doubly_skiplist , raft 1 1 0 1
enable_doubly_skiplist , write 1 1 0 1
force_consistency_checks , default 0 0 0 1
force_consistency_checks , lock 0 0 0 1
force_consistency_checks , raft 0 0 0 1
force_consistency_checks , write 0 0 0 1
format_version , default 2 2 0 1
format_version , lock 2 2 0 1
format_version , raft 2 2 0 1
format_version , write 2 2 0 1
hard_pending_compaction_bytes_limit , default 1099511627776 1099511627776 0 1
hard_pending_compaction_bytes_limit , lock 1099511627776 1099511627776 0 1
hard_pending_compaction_bytes_limit , raft 1099511627776 1099511627776 0 1
hard_pending_compaction_bytes_limit , write 1099511627776 1099511627776 0 1
level0_file_num_compaction_trigger , default 4 4 0 1
level0_file_num_compaction_trigger , lock 1 1 0 1
level0_file_num_compaction_trigger , raft 1 1 0 1
level0_file_num_compaction_trigger , write 4 4 0 1
level0_slowdown_writes_trigger , default 20 20 0 1
level0_slowdown_writes_trigger , lock 20 20 0 1
level0_slowdown_writes_trigger , raft 20 20 0 1
level0_slowdown_writes_trigger , write 20 20 0 1
level0_stop_writes_trigger , default 20 20 0 1
level0_stop_writes_trigger , lock 20 20 0 1
level0_stop_writes_trigger , raft 20 20 0 1
level0_stop_writes_trigger , write 20 20 0 1
max_bytes_for_level_base , default 536870912 536870912 0 1
max_bytes_for_level_base , lock 134217728 134217728 0 1
max_bytes_for_level_base , raft 134217728 134217728 0 1
max_bytes_for_level_base , write 536870912 536870912 0 1
max_bytes_for_level_multiplier , default 10 10 0 1
max_bytes_for_level_multiplier , lock 10 10 0 1
max_bytes_for_level_multiplier , raft 10 10 0 1
max_bytes_for_level_multiplier , write 10 10 0 1
max_compaction_bytes , default 2147483648 2147483648 0 1
max_compaction_bytes , lock 2147483648 2147483648 0 1
max_compaction_bytes , raft 2147483648 2147483648 0 1
max_compaction_bytes , write 2147483648 2147483648 0 1
max_write_buffer_number , default 5 5 0 1
max_write_buffer_number , lock 5 5 0 1
max_write_buffer_number , raft 5 5 0 1
max_write_buffer_number , write 5 5 0 1
min_write_buffer_number_to_merge , default 1 1 0 1
min_write_buffer_number_to_merge , lock 1 1 0 1
min_write_buffer_number_to_merge , raft 1 1 0 1
min_write_buffer_number_to_merge , write 1 1 0 1
num_levels , default 7 7 0 1
num_levels , lock 7 7 0 1
num_levels , raft 7 7 0 1
num_levels , write 7 7 0 1
optimize_filters_for_hits , default 1 1 0 1
optimize_filters_for_hits , lock 0 0 0 1
optimize_filters_for_hits , raft 1 1 0 1
optimize_filters_for_hits , write 0 0 0 1
optimize_filters_for_memory , default 0 0 0 1
optimize_filters_for_memory , lock 0 0 0 1
optimize_filters_for_memory , raft 0 0 0 1
optimize_filters_for_memory , write 0 0 0 1
pin_l0_filter_and_index_blocks , default 1 1 0 1
pin_l0_filter_and_index_blocks , lock 1 1 0 1
pin_l0_filter_and_index_blocks , raft 1 1 0 1
pin_l0_filter_and_index_blocks , write 1 1 0 1
read_amp_bytes_per_bit , default 0 0 0 1
read_amp_bytes_per_bit , lock 0 0 0 1
read_amp_bytes_per_bit , raft 0 0 0 1
read_amp_bytes_per_bit , write 0 0 0 1
soft_pending_compaction_bytes_limit , default 206158430208 206158430208 0 1
soft_pending_compaction_bytes_limit , lock 206158430208 206158430208 0 1
soft_pending_compaction_bytes_limit , raft 206158430208 206158430208 0 1
soft_pending_compaction_bytes_limit , write 206158430208 206158430208 0 1
target_file_size_base , default 8388608 8388608 0 1
target_file_size_base , lock 8388608 8388608 0 1
target_file_size_base , raft 8388608 8388608 0 1
target_file_size_base , write 8388608 8388608 0 1
titan_discardable_ratio , default 0.5 0.5 0 1
titan_discardable_ratio , lock 0.5 0.5 0 1
titan_discardable_ratio , raft 0.5 0.5 0 1
titan_discardable_ratio , write 0.5 0.5 0 1
titan_max_gc_batch_size , default 67108864 67108864 0 1
titan_max_gc_batch_size , lock 67108864 67108864 0 1
titan_max_gc_batch_size , raft 67108864 67108864 0 1
titan_max_gc_batch_size , write 67108864 67108864 0 1
titan_merge_small_file_threshold , default 8388608 8388608 0 1
titan_merge_small_file_threshold , lock 8388608 8388608 0 1
titan_merge_small_file_threshold , raft 8388608 8388608 0 1
titan_merge_small_file_threshold , write 8388608 8388608 0 1
titan_min_blob_size , default 32768 32768 0 1
titan_min_blob_size , lock 0 0 0 1
titan_min_blob_size , raft 0 0 0 1
titan_min_blob_size , write 0 0 0 1
titan_min_gc_batch_size , default 16777216 16777216 0 1
titan_min_gc_batch_size , lock 16777216 16777216 0 1
titan_min_gc_batch_size , raft 16777216 16777216 0 1
titan_min_gc_batch_size , write 16777216 16777216 0 1
use_bloom_filter , default 1 1 0 1
use_bloom_filter , lock 1 1 0 1
use_bloom_filter , raft 1 1 0 1
use_bloom_filter , write 1 1 0 1
whole_key_filtering , default 1 1 0 1
whole_key_filtering , lock 1 1 0 1
whole_key_filtering , raft 1 1 0 1
whole_key_filtering , write 0 0 0 1
write_buffer_size , default 134217728 134217728 0 1
write_buffer_size , lock 33554432 33554432 0 1
write_buffer_size , raft 134217728 134217728 0 1
write_buffer_size , write 134217728 134217728 0 1

TiKV RocksDB configuration change history

Configuration change history of TiKV RocksDB. APPROXIMATE_CHANGE_TIME is the most recent effective change time.

APPROXIMATE_CHANGE_TIME CONFIG_ITEM INSTANCE VALUE

TiKV RaftStore initial configuration

Initial configuration values of TiKV RaftStore. The initial time is the start time of the report.

CONFIG_ITEM INSTANCE VALUE CURRENT_VALUE DIFF_WITH_CURRENT DISTINCT_VALUES_IN_INSTANCE
abnormal_leader_missing_duration 600 600 0 1
apply_max_batch_size 256 256 0 1
apply_pool_size 2 2 0 1
apply_yield_write_size 32768 32768 0 1
capacity 0 0 0 1
cleanup_import_sst_interval 600 600 0 1
cmd_batch 1 1 0 1
cmd_batch_concurrent_ready_max_count 1 1 0 1
consistency_check_interval_seconds 0 0 0 1
future_poll_size 1 1 0 1
gc_peer_check_interval 60 60 0 1
hibernate_regions 1 1 0 1
io_reschedule_concurrent_max_count 4 4 0 1
io_reschedule_hotpot_duration 5 5 0 1
leader_transfer_max_log_lag 128 128 0 1
local_read_batch_size 1024 1024 0 1
lock_cf_compact_bytes_threshold 268435456 268435456 0 1
lock_cf_compact_interval 600 600 0 1
max_apply_unpersisted_log_limit 1024 1024 0 1
max_leader_missing_duration 7200 7200 0 1
max_manual_flush_rate 3 3 0 1
max_peer_down_duration 600 600 0 1
merge_check_tick_interval 2 2 0 1
merge_max_log_gap 10 10 0 1
messages_per_tick 4096 4096 0 1
notify_capacity 40960 40960 0 1
pd_heartbeat_tick_interval 60 60 0 1
pd_report_min_resolved_ts_interval 1 1 0 1
pd_store_heartbeat_tick_interval 10 10 0 1
peer_stale_state_check_interval 300 300 0 1
prevote 1 1 0 1
raft_base_tick_interval 1 1 0 1
raft_election_timeout_ticks 10 10 0 1
raft_engine_purge_interval 10 10 0 1
raft_entry_cache_life_time 30 30 0 1
raft_entry_max_size 8388608 8388608 0 1
raft_heartbeat_ticks 2 2 0 1
raft_log_compact_sync_interval 2 2 0 1
raft_log_gc_count_limit 196608 196608 0 1
raft_log_gc_size_limit 201326592 201326592 0 1
raft_log_gc_threshold 50 50 0 1
raft_log_gc_tick_interval 3 3 0 1
raft_log_reserve_max_ticks 6 6 0 1
raft_max_election_timeout_ticks 20 20 0 1
raft_max_inflight_msgs 256 256 0 1
raft_max_size_per_msg 1048576 1048576 0 1
raft_min_election_timeout_ticks 10 10 0 1
raft_read_index_retry_interval_ticks 4 4 0 1
raft_store_max_leader_lease 9 9 0 1
raft_write_batch_size_hint 8192 8192 0 1
raft_write_size_limit 1048576 1048576 0 1
raft_write_wait_duration 20 20 0 1
region_split_check_diff 16777216 16777216 0 1
report_region_flow_interval 60 60 0 1
request_voter_replicated_index_interval 300 300 0 1
right_derive_when_split 1 1 0 1
snap_apply_batch_size 10485760 10485760 0 1
snap_gc_timeout 14400 14400 0 1
snap_generator_pool_size 2 2 0 1
snap_mgr_gc_tick_interval 60 60 0 1
split_region_check_tick_interval 10 10 0 1
store_io_notify_capacity 40960 40960 0 1
store_io_pool_size 1 1 0 1
store_max_batch_size 256 256 0 1
store_pool_size 2 2 0 1
use_delete_range 0 0 0 1
waterfall_metrics 1 1 0 1

TiKV RaftStore configuration change history

Configuration change history of TiKV RaftStore. APPROXIMATE_CHANGE_TIME is the most recent effective change time.

APPROXIMATE_CHANGE_TIME CONFIG_ITEM INSTANCE VALUE

TiDB current configuration

KEY VALUE
advertise-address 192.168.10.170
advertise-address 192.168.10.200
alter-primary-key false
autoscaler-addr tiflash-autoscale-lb.tiflash-autoscale.svc.cluster.local:8081
autoscaler-cluster-id
autoscaler-type aws
ballast-object-size 0
compatible-kill-query false
cors
delay-clean-table-lock 0
deprecate-integer-display-length true
disaggregated-tiflash false
enable-32bits-connection-id true
enable-enum-length-limit true
enable-forwarding false
enable-global-kill true
enable-table-lock false
enable-tcp4-only false
enable-telemetry false
experimental.allow-expression-index false
graceful-wait-before-shutdown 30
host 0.0.0.0
in-mem-slow-query-recent-num 500
in-mem-slow-query-topn-num 30
index-limit 64
initialize-sql-file
instance.ddl_slow_threshold 300
instance.max_connections 0
instance.plugin_audit_log_buffer_size 0
instance.plugin_audit_log_flush_interval 30
instance.plugin_dir /data/deploy/plugin
instance.plugin_load
instance.tidb_check_mb4_value_in_utf8 true
instance.tidb_enable_collect_execution_info true
instance.tidb_enable_ddl true
instance.tidb_enable_slow_log true
instance.tidb_enable_stats_owner true
instance.tidb_expensive_query_time_threshold 60
instance.tidb_expensive_txn_time_threshold 600
instance.tidb_force_priority NO_PRIORITY
instance.tidb_general_log false
instance.tidb_pprof_sql_cpu false
instance.tidb_rc_read_check_ts false
instance.tidb_record_plan_in_slow_log 1
instance.tidb_service_scope
instance.tidb_slow_log_threshold 300
instance.tidb_stmt_summary_enable_persistent false
instance.tidb_stmt_summary_file_max_backups 0
instance.tidb_stmt_summary_file_max_days 3
instance.tidb_stmt_summary_file_max_size 64
instance.tidb_stmt_summary_filename tidb-statements.log
is-tiflashcompute-fixed-pool false
isolation-read.engines ["tikv","tiflash","tidb"]
keyspace-name
lease 45s
log.disable-error-stack null
log.disable-timestamp null
log.enable-error-stack null
log.enable-timestamp null
log.file.buffer-flush-interval 0
log.file.buffer-size 0
log.file.compression
log.file.filename /opt/tidb/tidb-4000/log/tidb.log
log.file.is-buffered false
log.file.max-backups 0
log.file.max-days 7
log.file.max-size 300
log.format text
log.general-log-file
log.level warn
log.slow-query-file /opt/tidb/tidb-4000/log/tidb_slow_query.log
log.timeout 0
max-ballast-object-size 0
max-index-length 6144
new_collations_enabled_on_first_bootstrap true
opentracing.enable false
opentracing.reporter.buffer-flush-interval 0
opentracing.reporter.local-agent-host-port
opentracing.reporter.log-spans false
opentracing.reporter.queue-size 0
opentracing.rpc-metrics false
opentracing.sampler.max-operations 0
opentracing.sampler.param 1
opentracing.sampler.sampling-refresh-interval 0
opentracing.sampler.sampling-server-url
opentracing.sampler.type const
path 192.168.10.171:2379,192.168.10.75:2379,192.168.10.147:2379
pd-client.pd-server-timeout 3
performance.analyze-partition-concurrency-quota 16
performance.bind-info-lease 3s
performance.concurrently-init-stats true
performance.cross-join true
performance.distinct-agg-push-down false
performance.enable-load-fmsketch false
performance.enable-stats-cache-mem-quota true
performance.enforce-mpp false
performance.force-init-stats true
performance.gogc 100
performance.lite-init-stats true
performance.max-procs 0
performance.max-txn-ttl 3600000
performance.plan-replayer-dump-worker-concurrency 1
performance.plan-replayer-gc-lease 10m
performance.projection-push-down true
performance.pseudo-estimate-ratio 0.8
performance.server-memory-quota 0
performance.stats-lease 3s
performance.stats-load-concurrency 0
performance.stats-load-queue-size 1000
performance.stmt-count-limit 5000
performance.tcp-keep-alive true
performance.tcp-no-delay true
performance.txn-entry-size-limit 6291456
performance.txn-total-size-limit 104857600
pessimistic-txn.constraint-check-in-place-pessimistic true
pessimistic-txn.deadlock-history-capacity 10
pessimistic-txn.deadlock-history-collect-retryable false
pessimistic-txn.max-retry-count 256
pessimistic-txn.pessimistic-auto-commit false
port 4000
proxy-protocol.fallbackable false
proxy-protocol.header-timeout 5
proxy-protocol.networks
repair-mode false
repair-table-list
security.auth-token-jwks
security.auth-token-refresh-interval 1h0m0s
security.auto-tls false
security.cluster-ssl-ca
security.cluster-ssl-cert
security.cluster-ssl-key
security.cluster-verify-cn null
security.disconnect-on-expired-password true
security.enable-sem false
security.rsa-key-size 4096
security.secure-bootstrap false
security.session-token-signing-cert /opt/tidb/tidb-4000/tls/tiproxy-session.crt
security.session-token-signing-key /opt/tidb/tidb-4000/tls/tiproxy-session.key
security.skip-grant-table false
security.spilled-file-encryption-method plaintext
security.ssl-ca
security.ssl-cert
security.ssl-key
security.tls-version
server-version
skip-register-to-dashboard false
socket /tmp/tidb-4000.sock
split-region-max-num 1000
split-table true
status.grpc-concurrent-streams 1024
status.grpc-initial-window-size 2097152
status.grpc-keepalive-time 10
status.grpc-keepalive-timeout 3
status.grpc-max-send-msg-size 2147483647
status.metrics-addr
status.metrics-interval 15
status.record-db-label false
status.record-db-qps false
status.report-status true
status.status-host 0.0.0.0
status.status-port 10080
store tikv
stores-refresh-interval 60
table-column-count-limit 1017
temp-dir /tmp/tidb
tidb-edition
tidb-enable-exit-check false
tidb-max-reuse-chunk 64
tidb-max-reuse-column 256
tidb-release-version
tikv-client.async-commit.allowed-clock-drift 500000000
tikv-client.async-commit.keys-limit 256
tikv-client.async-commit.safe-window 2000000000
tikv-client.async-commit.total-key-size-limit 4096
tikv-client.batch-policy standard
tikv-client.batch-wait-size 8
tikv-client.commit-timeout 41s
tikv-client.copr-cache.capacity-mb 1000
tikv-client.copr-req-timeout 60000000000
tikv-client.enable-chunk-rpc true
tikv-client.enable-replica-selector-v2 true
tikv-client.grpc-compression-type none
tikv-client.grpc-connection-count 4
tikv-client.grpc-initial-conn-window-size 134217728
tikv-client.grpc-initial-window-size 134217728
tikv-client.grpc-keepalive-time 10
tikv-client.grpc-keepalive-timeout 3
tikv-client.grpc-shared-buffer-pool false
tikv-client.max-batch-size 128
tikv-client.max-batch-wait-time 0
tikv-client.max-concurrency-request-limit 9223372036854776000
tikv-client.overload-threshold 200
tikv-client.region-cache-ttl 600
tikv-client.resolve-lock-lite-threshold 16
tikv-client.store-limit 0
tikv-client.store-liveness-timeout 1s
tikv-client.ttl-refreshed-txn-size 33554432
tmp-storage-path /tmp/1001_tidb/MC4wLjAuMDo0MDAwLzAuMC4wLjA6MTAwODA=/tmp-storage
tmp-storage-quota -1
token-limit 1000
top-sql.receiver-address
transaction-summary.transaction-id-digest-min-duration 2147483647
transaction-summary.transaction-summary-capacity 500
treat-old-version-utf8-as-utf8mb4 true
use-autoscaler false
version-comment

PD current configuration

KEY VALUE
advertise-client-urls http://192.168.10.171:2379
advertise-client-urls http://192.168.10.75:2379
advertise-client-urls http://192.168.10.147:2379
advertise-peer-urls http://192.168.10.75:2380
advertise-peer-urls http://192.168.10.171:2380
advertise-peer-urls http://192.168.10.147:2380
auto-compaction-mode periodic
auto-compaction-retention-v2 1h
client-urls http://0.0.0.0:2379
cluster-version 8.5.4
controller.degraded-mode-wait-duration 0s
controller.enable-controller-trace-log false
controller.ltb-max-wait-duration 30s
controller.ltb-token-rpc-max-delay 1s
controller.request-unit.read-base-cost 0.125
controller.request-unit.read-cost-per-byte 0.0000152587890625
controller.request-unit.read-cpu-ms-cost 0.3333333333333333
controller.request-unit.read-per-batch-base-cost 0.5
controller.request-unit.write-base-cost 1
controller.request-unit.write-cost-per-byte 0.0009765625
controller.request-unit.write-per-batch-base-cost 1
dashboard.disable-custom-prom-addr false
dashboard.enable-experimental false
dashboard.enable-telemetry false
dashboard.internal-proxy false
dashboard.public-path-prefix
dashboard.tidb-cacert-path
dashboard.tidb-cert-path
dashboard.tidb-key-path
data-dir /data/tidb/pd-2379
election-interval 3s
enable-grpc-gateway true
enable-local-tso false
enable-prevote true
force-new-cluster false
initial-cluster pd-192.168.10.171-2379=http://192.168.10.171:2380,pd-192.168.10.75-2379=http://192.168.10.75:2380,pd-192.168.10.147-2379=http://192.168.10.147:2380
initial-cluster-state new
initial-cluster-token pd-cluster
join
keyspace.check-region-split-interval 50ms
keyspace.pre-alloc null
keyspace.wait-region-split true
keyspace.wait-region-split-timeout 30s
lease 5
log.development false
log.disable-caller false
log.disable-error-verbose true
log.disable-stacktrace false
log.disable-timestamp false
log.error-output-path
log.file.filename /opt/tidb/pd-2379/log/pd.log
log.file.max-backups 0
log.file.max-days 7
log.file.max-size 300
log.format text
log.level warn
log.sampling null
max-concurrent-tso-proxy-streamings 5000
max-request-bytes 157286400
metric.address
metric.interval 15s
metric.job pd-192.168.10.147-2379
metric.job pd-192.168.10.171-2379
metric.job pd-192.168.10.75-2379
micro-service.enable-scheduling-fallback true
name pd-192.168.10.147-2379
name pd-192.168.10.75-2379
name pd-192.168.10.171-2379
pd-server.block-safe-point-v1 false
pd-server.dashboard-address http://192.168.10.75:2379
pd-server.enable-gogc-tuner false
pd-server.flow-round-by-digit 3
pd-server.gc-tuner-threshold 0.6
pd-server.key-type table
pd-server.max-gap-reset-ts 24h0m0s
pd-server.metric-storage
pd-server.min-resolved-ts-persistence-interval 1s
pd-server.runtime-services
pd-server.server-memory-limit 0
pd-server.server-memory-limit-gc-trigger 0.7
pd-server.use-region-storage true
peer-urls http://0.0.0.0:2380
quota-backend-bytes 8GiB
replication-mode.dr-auto-sync.dr
replication-mode.dr-auto-sync.dr-replicas 0
replication-mode.dr-auto-sync.label-key
replication-mode.dr-auto-sync.pause-region-split false
replication-mode.dr-auto-sync.primary
replication-mode.dr-auto-sync.primary-replicas 0
replication-mode.dr-auto-sync.wait-recover-timeout 0s
replication-mode.dr-auto-sync.wait-store-timeout 1m0s
replication-mode.replication-mode majority
replication.enable-placement-rules true
replication.enable-placement-rules-cache false
replication.isolation-level
replication.location-labels
replication.max-replicas 3
replication.strictly-match-label false
schedule.enable-cross-table-merge true
schedule.enable-debug-metrics false
schedule.enable-diagnostic true
schedule.enable-heartbeat-breakdown-metrics true
schedule.enable-heartbeat-concurrent-runner true
schedule.enable-joint-consensus true
schedule.enable-location-replacement true
schedule.enable-make-up-replica true
schedule.enable-one-way-merge false
schedule.enable-remove-down-replica true
schedule.enable-remove-extra-replica true
schedule.enable-replace-offline-replica true
schedule.enable-tikv-split-region true
schedule.enable-witness false
schedule.high-space-ratio 0.7
schedule.hot-region-cache-hits-threshold 3
schedule.hot-region-schedule-limit 4
schedule.hot-regions-reserved-days 7
schedule.hot-regions-write-interval 10m0s
schedule.leader-schedule-limit 4
schedule.leader-schedule-policy count
schedule.low-space-ratio 0.8
schedule.max-merge-region-keys 540000
schedule.max-merge-region-size 54
schedule.max-movable-hot-peer-size 512
schedule.max-pending-peer-count 64
schedule.max-snapshot-count 64
schedule.max-store-down-time 30m0s
schedule.max-store-preparing-time 48h0m0s
schedule.merge-schedule-limit 8
schedule.patrol-region-interval 10ms
schedule.patrol-region-worker-count 1
schedule.region-schedule-limit 2048
schedule.region-score-formula-version v2
schedule.replica-schedule-limit 64
schedule.scheduler-max-waiting-operator 5
schedule.schedulers-v2 [{"args":null,"args-payload":"","disable":false,"type":"balance-region"},{"args":null,"args-payload":"","disable":false,"type":"balance-leader"},{"args":null,"args-payload":"","disable":false,"type":"hot-region"},{"args":null,"args-payload":"","disable":false,"type":"evict-slow-store"}]
schedule.slow-store-evicting-affected-store-ratio-threshold 0.3
schedule.split-merge-interval 1h0m0s
schedule.store-limit-version v1
schedule.store-limit.1.add-peer 15
schedule.store-limit.1.remove-peer 15
schedule.store-limit.137.add-peer 30
schedule.store-limit.137.remove-peer 30
schedule.store-limit.4.add-peer 15
schedule.store-limit.4.remove-peer 15
schedule.store-limit.5.add-peer 15
schedule.store-limit.5.remove-peer 15
schedule.store-limit.7.add-peer 15
schedule.store-limit.7.remove-peer 15
schedule.switch-witness-interval 1h0m0s
schedule.tolerant-size-ratio 0
schedule.witness-schedule-limit 4
security.SSLCABytes null
security.SSLCertBytes null
security.SSLKEYBytes null
security.cacert-path
security.cert-allowed-cn null
security.cert-path
security.encryption.data-encryption-method plaintext
security.encryption.data-key-rotation-period 168h0m0s
security.encryption.master-key.endpoint
security.encryption.master-key.key-id
security.encryption.master-key.path
security.encryption.master-key.region
security.encryption.master-key.type plaintext
security.key-path
security.redact-info-log false
tick-interval 500ms
tso-proxy-recv-from-client-timeout 1h0m0s
tso-save-interval 5s
tso-update-physical-interval 50ms

TiKV current configuration

KEY VALUE
abort-on-panic false
backup.auto-tune-refresh-interval 1m
backup.auto-tune-remain-threads 2
backup.batch-size 8
backup.enable-auto-tune true
backup.hadoop.home
backup.hadoop.linux-user
backup.io-thread-size 2
backup.num-threads 4
backup.s3-multi-part-size 5MiB
backup.sst-max-size 384MiB
causal-ts.alloc-ahead-buffer 3s
causal-ts.renew-batch-max-size 8192
causal-ts.renew-batch-min-size 100
causal-ts.renew-interval 100ms
cdc.hibernate-regions-compatible true
cdc.incremental-fetch-speed-limit 512MiB
cdc.incremental-scan-concurrency 6
cdc.incremental-scan-concurrency-limit 10000
cdc.incremental-scan-speed-limit 128MiB
cdc.incremental-scan-threads 4
cdc.incremental-scan-ts-filter-ratio 0.2
cdc.min-ts-interval 1s
cdc.old-value-cache-memory-quota 512MiB
cdc.sink-memory-quota 512MiB
cdc.tso-worker-threads 1
coprocessor-v2.coprocessor-plugin-directory null
coprocessor.batch-split-limit 10
coprocessor.consistency-check-method mvcc
coprocessor.enable-region-bucket null
coprocessor.prefer-approximate-bucket true
coprocessor.region-bucket-merge-size-ratio 0.33
coprocessor.region-bucket-size 50MiB
coprocessor.region-max-keys 3840000
coprocessor.region-max-size 384MiB
coprocessor.region-size-threshold-for-approximate 750MiB
coprocessor.region-split-keys 2560000
coprocessor.region-split-size 256MiB
coprocessor.split-region-on-table false
gc.auto-compaction.bottommost-level-force false
gc.auto-compaction.check-interval 5m
gc.auto-compaction.redundant-rows-percent-threshold 20
gc.auto-compaction.redundant-rows-threshold 50000
gc.auto-compaction.tombstones-num-threshold 10000
gc.auto-compaction.tombstones-percent-threshold 30
gc.batch-keys 512
gc.compaction-filter-skip-version-check false
gc.enable-compaction-filter true
gc.max-write-bytes-per-sec 0KiB
gc.num-threads 1
gc.ratio-threshold 1.1
import.import-mode-timeout 10m
import.memory-use-ratio 0.3
import.num-threads 8
import.stream-channel-window 128
in-memory-engine.capacity null
in-memory-engine.cross-check-interval 0s
in-memory-engine.enable false
in-memory-engine.evict-threshold null
in-memory-engine.gc-run-interval 3m
in-memory-engine.load-evict-interval 5m
in-memory-engine.mvcc-amplification-threshold 10
in-memory-engine.stop-load-threshold null
log-backup.enable true
log-backup.file-size-limit 256MiB
log-backup.initial-scan-concurrency 6
log-backup.initial-scan-pending-memory-quota 512MiB
log-backup.initial-scan-rate-limit 60MiB
log-backup.max-flush-interval 3m
log-backup.min-ts-interval 10s
log-backup.num-threads 4
log-backup.temp-path /data/tidb/tikv-20160/log-backup-temp
log.enable-timestamp true
log.file.filename /opt/tidb/tikv-20160/log/tikv.log
log.file.max-backups 0
log.file.max-days 7
log.file.max-size 300
log.format text
log.level warn
memory-usage-high-water 0.9
memory-usage-limit 24654200831B
memory.enable-heap-profiling true
memory.enable-thread-exclusive-arena true
memory.profiling-sample-per-bytes 512KiB
pd.enable-forwarding false
pd.endpoints ["192.168.10.171:2379","192.168.10.75:2379","192.168.10.147:2379"]
pd.retry-interval 300ms
pd.retry-log-every 10
pd.retry-max-count 9223372036854776000
pd.update-interval 10m
pessimistic-txn.in-memory true
pessimistic-txn.in-memory-instance-size-limit 100MiB
pessimistic-txn.in-memory-peer-size-limit 512KiB
pessimistic-txn.pipelined true
pessimistic-txn.wait-for-lock-timeout 1s
pessimistic-txn.wake-up-delay-duration 20ms
quota.background-cpu-time 0
quota.background-read-bandwidth 0KiB
quota.background-write-bandwidth 0KiB
quota.enable-auto-tune false
quota.foreground-cpu-time 0
quota.foreground-read-bandwidth 0KiB
quota.foreground-write-bandwidth 0KiB
quota.max-delay-duration 500ms
raft-engine.batch-compression-threshold 4KiB
raft-engine.bytes-per-sync null
raft-engine.compression-level null
raft-engine.dir /data/tidb/tikv-20160/raft-engine
raft-engine.enable true
raft-engine.enable-log-recycle true
raft-engine.format-version 2
raft-engine.memory-limit 4930840166B
raft-engine.prefill-for-recycle false
raft-engine.prefill-limit null
raft-engine.purge-rewrite-garbage-ratio 0.6
raft-engine.purge-rewrite-threshold 1GiB
raft-engine.purge-threshold 10GiB
raft-engine.recovery-mode tolerate-corrupted-tail-records
raft-engine.recovery-read-block-size 16KiB
raft-engine.recovery-threads 4
raft-engine.spill-dir null
raft-engine.target-file-size 128MiB
raftdb.allow-concurrent-memtable-write true
raftdb.bytes-per-sync 1MiB
raftdb.compaction-readahead-size 0KiB
raftdb.create-if-missing true
raftdb.defaultcf.block-based-bloom-filter false
raftdb.defaultcf.block-cache-size null
raftdb.defaultcf.block-size 64KiB
raftdb.defaultcf.bloom-filter-bits-per-key 10
raftdb.defaultcf.bottommost-level-compression disable
raftdb.defaultcf.bottommost-zstd-compression-dict-size 0
raftdb.defaultcf.bottommost-zstd-compression-sample-size 0
raftdb.defaultcf.cache-index-and-filter-blocks true
raftdb.defaultcf.checksum crc32c
raftdb.defaultcf.compaction-guard-max-output-file-size 128MiB
raftdb.defaultcf.compaction-guard-min-output-file-size 8MiB
raftdb.defaultcf.compaction-pri 0
raftdb.defaultcf.compaction-style 0
raftdb.defaultcf.compression-per-level ["no","no","lz4","lz4","lz4","zstd","zstd"]
raftdb.defaultcf.disable-auto-compactions false
raftdb.defaultcf.disable-block-cache false
raftdb.defaultcf.disable-write-stall false
raftdb.defaultcf.dynamic-level-bytes true
raftdb.defaultcf.enable-compaction-guard null
raftdb.defaultcf.enable-doubly-skiplist true
raftdb.defaultcf.force-consistency-checks false
raftdb.defaultcf.format-version 2
raftdb.defaultcf.hard-pending-compaction-bytes-limit 1TiB
raftdb.defaultcf.level0-file-num-compaction-trigger 4
raftdb.defaultcf.level0-slowdown-writes-trigger 20
raftdb.defaultcf.level0-stop-writes-trigger 20
raftdb.defaultcf.max-bytes-for-level-base 512MiB
raftdb.defaultcf.max-bytes-for-level-multiplier 10
raftdb.defaultcf.max-compaction-bytes 2GiB
raftdb.defaultcf.max-compactions null
raftdb.defaultcf.max-write-buffer-number 5
raftdb.defaultcf.min-write-buffer-number-to-merge 1
raftdb.defaultcf.num-levels 7
raftdb.defaultcf.optimize-filters-for-hits true
raftdb.defaultcf.optimize-filters-for-memory false
raftdb.defaultcf.periodic-compaction-seconds null
raftdb.defaultcf.pin-l0-filter-and-index-blocks true
raftdb.defaultcf.prepopulate-block-cache disabled
raftdb.defaultcf.prop-keys-index-distance 40960
raftdb.defaultcf.prop-size-index-distance 4194304
raftdb.defaultcf.read-amp-bytes-per-bit 0
raftdb.defaultcf.ribbon-filter-above-level null
raftdb.defaultcf.soft-pending-compaction-bytes-limit 192GiB
raftdb.defaultcf.target-file-size-base null
raftdb.defaultcf.titan.blob-cache-size 0KiB
raftdb.defaultcf.titan.blob-file-compression zstd
raftdb.defaultcf.titan.blob-run-mode normal
raftdb.defaultcf.titan.discardable-ratio 0.5
raftdb.defaultcf.titan.level-merge false
raftdb.defaultcf.titan.max-gc-batch-size 64MiB
raftdb.defaultcf.titan.max-sorted-runs 20
raftdb.defaultcf.titan.merge-small-file-threshold 8MiB
raftdb.defaultcf.titan.min-blob-size null
raftdb.defaultcf.titan.min-gc-batch-size 16MiB
raftdb.defaultcf.titan.range-merge true
raftdb.defaultcf.titan.shared-blob-cache true
raftdb.defaultcf.titan.zstd-dict-size 0KiB
raftdb.defaultcf.ttl null
raftdb.defaultcf.use-bloom-filter false
raftdb.defaultcf.whole-key-filtering true
raftdb.defaultcf.write-buffer-limit null
raftdb.defaultcf.write-buffer-size 128MiB
raftdb.enable-pipelined-write true
raftdb.enable-unordered-write false
raftdb.info-log-dir
raftdb.max-background-flushes 1
raftdb.max-background-jobs 4
raftdb.max-manifest-file-size 20MiB
raftdb.max-open-files 40960
raftdb.max-sub-compactions 2
raftdb.max-total-wal-size 4GiB
raftdb.stats-dump-period 10m
raftdb.titan.dirname
raftdb.titan.disable-gc false
raftdb.titan.enabled false
raftdb.titan.max-background-gc 1
raftdb.titan.purge-obsolete-files-period 10s
raftdb.use-direct-io-for-flush-and-compaction false
raftdb.wal-bytes-per-sync 512KiB
raftdb.wal-dir
raftdb.wal-recovery-mode 2
raftdb.wal-size-limit 0KiB
raftdb.wal-ttl-seconds 0
raftdb.writable-file-max-buffer-size 1MiB
raftstore.abnormal-leader-missing-duration 10m
raftstore.apply-low-priority-pool-size 1
raftstore.apply-max-batch-size 256
raftstore.apply-pool-size 2
raftstore.apply-reschedule-duration 5s
raftstore.apply-yield-write-size 32KiB
raftstore.capacity 0KiB
raftstore.check-leader-lease-interval 2s250ms
raftstore.check-long-uncommitted-interval 10s
raftstore.clean-stale-ranges-tick 10
raftstore.cleanup-import-sst-interval 10m
raftstore.cmd-batch true
raftstore.cmd-batch-concurrent-ready-max-count 1
raftstore.consistency-check-interval 0s
raftstore.evict-cache-on-memory-ratio 0.1
raftstore.follower-read-max-log-gap 100
raftstore.future-poll-size 1
raftstore.gc-peer-check-interval 1m
raftstore.hibernate-regions true
raftstore.io-reschedule-concurrent-max-count 4
raftstore.io-reschedule-hotpot-duration 5s
raftstore.local-read-batch-size 1024
raftstore.lock-cf-compact-bytes-threshold 256MiB
raftstore.lock-cf-compact-interval 10m
raftstore.long-uncommitted-base-threshold 20s
raftstore.max-apply-unpersisted-log-limit 1024
raftstore.max-entry-cache-warmup-duration 1s
raftstore.max-leader-missing-duration 2h
raftstore.max-peer-down-duration 10m
raftstore.max-snapshot-file-raw-size 100MiB
raftstore.merge-check-tick-interval 2s
raftstore.messages-per-tick 4096
raftstore.notify-capacity 40960
raftstore.pd-heartbeat-tick-interval 1m
raftstore.pd-report-min-resolved-ts-interval 1s
raftstore.pd-store-heartbeat-tick-interval 10s
raftstore.peer-stale-state-check-interval 5m
raftstore.perf-level 0
raftstore.periodic-full-compact-start-max-cpu 0.1
raftstore.periodic-full-compact-start-times
raftstore.prevote true
raftstore.raft-engine-purge-interval 10s
raftstore.raft-entry-cache-life-time 30s
raftstore.raft-entry-max-size 8MiB
raftstore.raft-log-compact-sync-interval 2s
raftstore.raft-log-gc-count-limit 196608
raftstore.raft-log-gc-size-limit 192MiB
raftstore.raft-log-gc-threshold 50
raftstore.raft-log-gc-tick-interval 3s
raftstore.raft-max-inflight-msgs 256
raftstore.raft-max-size-per-msg 1MiB
raftstore.raft-read-index-retry-interval-ticks 4
raftstore.raft-store-max-leader-lease 9s
raftstore.raft-write-batch-size-hint 8KiB
raftstore.raft-write-size-limit 1MiB
raftstore.raft-write-wait-duration 20us
raftstore.raftdb-path /data/tidb/tikv-20160/raft
raftstore.reactive-memory-lock-tick-interval 2s
raftstore.reactive-memory-lock-timeout-tick 5
raftstore.region-split-check-diff 16MiB
raftstore.region-worker-tick-interval 1s
raftstore.renew-leader-lease-advance-duration 2s250ms
raftstore.report-region-buckets-tick-interval 10s
raftstore.request-voter-replicated-index-interval 5m
raftstore.snap-apply-batch-size 10MiB
raftstore.snap-apply-copy-symlink false
raftstore.snap-gc-timeout 4h
raftstore.snap-generator-pool-size 2
raftstore.snap-mgr-gc-tick-interval 1m
raftstore.snap-wait-split-duration 34s
raftstore.split-region-check-tick-interval 10s
raftstore.store-io-notify-capacity 40960
raftstore.store-io-pool-size 1
raftstore.store-low-priority-pool-size 0
raftstore.store-max-batch-size 256
raftstore.store-pool-size 2
raftstore.store-reschedule-duration 5s
raftstore.unreachable-backoff 10s
raftstore.waterfall-metrics true
readpool.coprocessor.high-concurrency 6
readpool.coprocessor.low-concurrency 6
readpool.coprocessor.max-tasks-per-worker-high 2000
readpool.coprocessor.max-tasks-per-worker-low 2000
readpool.coprocessor.max-tasks-per-worker-normal 2000
readpool.coprocessor.normal-concurrency 6
readpool.coprocessor.stack-size 10MiB
readpool.coprocessor.use-unified-pool true
readpool.storage.high-concurrency 4
readpool.storage.low-concurrency 4
readpool.storage.max-tasks-per-worker-high 2000
readpool.storage.max-tasks-per-worker-low 2000
readpool.storage.max-tasks-per-worker-normal 2000
readpool.storage.normal-concurrency 4
readpool.storage.stack-size 10MiB
readpool.storage.use-unified-pool true
readpool.unified.auto-adjust-pool-size false
readpool.unified.max-tasks-per-worker 2000
readpool.unified.max-thread-count 6
readpool.unified.min-thread-count 1
readpool.unified.stack-size 10MiB
resolved-ts.advance-ts-interval 20s
resolved-ts.enable true
resolved-ts.incremental-scan-concurrency 6
resolved-ts.memory-quota 256MiB
resolved-ts.scan-lock-pool-size 2
resource-control.enabled true
resource-control.priority-ctl-strategy moderate
resource-metering.max-resource-groups 100
resource-metering.precision 1s
resource-metering.receiver-address
resource-metering.report-receiver-interval 1m
rocksdb.allow-concurrent-memtable-write true
rocksdb.bytes-per-sync 1MiB
rocksdb.compaction-readahead-size 0KiB
rocksdb.create-if-missing true
rocksdb.defaultcf.block-based-bloom-filter false
rocksdb.defaultcf.block-cache-size null
rocksdb.defaultcf.block-size 32KiB
rocksdb.defaultcf.bloom-filter-bits-per-key 10
rocksdb.defaultcf.bottommost-level-compression zstd
rocksdb.defaultcf.bottommost-zstd-compression-dict-size 0
rocksdb.defaultcf.bottommost-zstd-compression-sample-size 0
rocksdb.defaultcf.cache-index-and-filter-blocks true
rocksdb.defaultcf.checksum crc32c
rocksdb.defaultcf.compaction-guard-max-output-file-size 128MiB
rocksdb.defaultcf.compaction-guard-min-output-file-size 8MiB
rocksdb.defaultcf.compaction-pri 3
rocksdb.defaultcf.compaction-style 0
rocksdb.defaultcf.compression-per-level ["no","no","lz4","lz4","lz4","zstd","zstd"]
rocksdb.defaultcf.disable-auto-compactions false
rocksdb.defaultcf.disable-block-cache false
rocksdb.defaultcf.disable-write-stall true
rocksdb.defaultcf.dynamic-level-bytes true
rocksdb.defaultcf.enable-compaction-guard true
rocksdb.defaultcf.enable-doubly-skiplist true
rocksdb.defaultcf.force-consistency-checks false
rocksdb.defaultcf.format-version null
rocksdb.defaultcf.hard-pending-compaction-bytes-limit 1TiB
rocksdb.defaultcf.level0-file-num-compaction-trigger 4
rocksdb.defaultcf.level0-slowdown-writes-trigger 20
rocksdb.defaultcf.level0-stop-writes-trigger 20
rocksdb.defaultcf.max-bytes-for-level-base 512MiB
rocksdb.defaultcf.max-bytes-for-level-multiplier 10
rocksdb.defaultcf.max-compaction-bytes 2GiB
rocksdb.defaultcf.max-compactions null
rocksdb.defaultcf.max-write-buffer-number 5
rocksdb.defaultcf.min-write-buffer-number-to-merge 1
rocksdb.defaultcf.num-levels 7
rocksdb.defaultcf.optimize-filters-for-hits true
rocksdb.defaultcf.optimize-filters-for-memory false
rocksdb.defaultcf.periodic-compaction-seconds null
rocksdb.defaultcf.pin-l0-filter-and-index-blocks true
rocksdb.defaultcf.prepopulate-block-cache disabled
rocksdb.defaultcf.prop-keys-index-distance 40960
rocksdb.defaultcf.prop-size-index-distance 4194304
rocksdb.defaultcf.read-amp-bytes-per-bit 0
rocksdb.defaultcf.ribbon-filter-above-level null
rocksdb.defaultcf.soft-pending-compaction-bytes-limit 192GiB
rocksdb.defaultcf.target-file-size-base null
rocksdb.defaultcf.titan.blob-cache-size 0KiB
rocksdb.defaultcf.titan.blob-file-compression zstd
rocksdb.defaultcf.titan.blob-run-mode normal
rocksdb.defaultcf.titan.discardable-ratio 0.5
rocksdb.defaultcf.titan.level-merge false
rocksdb.defaultcf.titan.max-gc-batch-size 64MiB
rocksdb.defaultcf.titan.max-sorted-runs 20
rocksdb.defaultcf.titan.merge-small-file-threshold 8MiB
rocksdb.defaultcf.titan.min-blob-size 32KiB
rocksdb.defaultcf.titan.min-gc-batch-size 16MiB
rocksdb.defaultcf.titan.range-merge true
rocksdb.defaultcf.titan.shared-blob-cache true
rocksdb.defaultcf.titan.zstd-dict-size 0KiB
rocksdb.defaultcf.ttl null
rocksdb.defaultcf.use-bloom-filter true
rocksdb.defaultcf.whole-key-filtering true
rocksdb.defaultcf.write-buffer-limit null
rocksdb.defaultcf.write-buffer-size 128MiB
rocksdb.enable-multi-batch-write null
rocksdb.enable-pipelined-write false
rocksdb.enable-unordered-write false
rocksdb.info-log-dir
rocksdb.lockcf.block-based-bloom-filter false
rocksdb.lockcf.block-cache-size null
rocksdb.lockcf.block-size 16KiB
rocksdb.lockcf.bloom-filter-bits-per-key 10
rocksdb.lockcf.bottommost-level-compression disable
rocksdb.lockcf.bottommost-zstd-compression-dict-size 0
rocksdb.lockcf.bottommost-zstd-compression-sample-size 0
rocksdb.lockcf.cache-index-and-filter-blocks true
rocksdb.lockcf.checksum crc32c
rocksdb.lockcf.compaction-guard-max-output-file-size 128MiB
rocksdb.lockcf.compaction-guard-min-output-file-size 8MiB
rocksdb.lockcf.compaction-pri 0
rocksdb.lockcf.compaction-style 0
rocksdb.lockcf.compression-per-level ["no","no","no","no","no","no","no"]
rocksdb.lockcf.disable-auto-compactions false
rocksdb.lockcf.disable-block-cache false
rocksdb.lockcf.disable-write-stall true
rocksdb.lockcf.dynamic-level-bytes true
rocksdb.lockcf.enable-compaction-guard null
rocksdb.lockcf.enable-doubly-skiplist true
rocksdb.lockcf.force-consistency-checks false
rocksdb.lockcf.format-version null
rocksdb.lockcf.hard-pending-compaction-bytes-limit 1TiB
rocksdb.lockcf.level0-file-num-compaction-trigger 1
rocksdb.lockcf.level0-slowdown-writes-trigger 20
rocksdb.lockcf.level0-stop-writes-trigger 20
rocksdb.lockcf.max-bytes-for-level-base 128MiB
rocksdb.lockcf.max-bytes-for-level-multiplier 10
rocksdb.lockcf.max-compaction-bytes 2GiB
rocksdb.lockcf.max-compactions null
rocksdb.lockcf.max-write-buffer-number 5
rocksdb.lockcf.min-write-buffer-number-to-merge 1
rocksdb.lockcf.num-levels 7
rocksdb.lockcf.optimize-filters-for-hits false
rocksdb.lockcf.optimize-filters-for-memory false
rocksdb.lockcf.periodic-compaction-seconds null
rocksdb.lockcf.pin-l0-filter-and-index-blocks true
rocksdb.lockcf.prepopulate-block-cache disabled
rocksdb.lockcf.prop-keys-index-distance 40960
rocksdb.lockcf.prop-size-index-distance 4194304
rocksdb.lockcf.read-amp-bytes-per-bit 0
rocksdb.lockcf.ribbon-filter-above-level null
rocksdb.lockcf.soft-pending-compaction-bytes-limit 192GiB
rocksdb.lockcf.target-file-size-base null
rocksdb.lockcf.titan.blob-cache-size 0KiB
rocksdb.lockcf.titan.blob-file-compression zstd
rocksdb.lockcf.titan.blob-run-mode read-only
rocksdb.lockcf.titan.discardable-ratio 0.5
rocksdb.lockcf.titan.level-merge false
rocksdb.lockcf.titan.max-gc-batch-size 64MiB
rocksdb.lockcf.titan.max-sorted-runs 20
rocksdb.lockcf.titan.merge-small-file-threshold 8MiB
rocksdb.lockcf.titan.min-blob-size null
rocksdb.lockcf.titan.min-gc-batch-size 16MiB
rocksdb.lockcf.titan.range-merge true
rocksdb.lockcf.titan.shared-blob-cache true
rocksdb.lockcf.titan.zstd-dict-size 0KiB
rocksdb.lockcf.ttl null
rocksdb.lockcf.use-bloom-filter true
rocksdb.lockcf.whole-key-filtering true
rocksdb.lockcf.write-buffer-limit null
rocksdb.lockcf.write-buffer-size 32MiB
rocksdb.max-background-flushes 2
rocksdb.max-background-jobs 7
rocksdb.max-manifest-file-size 256MiB
rocksdb.max-open-files 40960
rocksdb.max-sub-compactions 3
rocksdb.max-total-wal-size 4GiB
rocksdb.raftcf.block-based-bloom-filter false
rocksdb.raftcf.block-cache-size null
rocksdb.raftcf.block-size 16KiB
rocksdb.raftcf.bloom-filter-bits-per-key 10
rocksdb.raftcf.bottommost-level-compression disable
rocksdb.raftcf.bottommost-zstd-compression-dict-size 0
rocksdb.raftcf.bottommost-zstd-compression-sample-size 0
rocksdb.raftcf.cache-index-and-filter-blocks true
rocksdb.raftcf.checksum crc32c
rocksdb.raftcf.compaction-guard-max-output-file-size 128MiB
rocksdb.raftcf.compaction-guard-min-output-file-size 8MiB
rocksdb.raftcf.compaction-pri 0
rocksdb.raftcf.compaction-style 0
rocksdb.raftcf.compression-per-level ["no","no","no","no","no","no","no"]
rocksdb.raftcf.disable-auto-compactions false
rocksdb.raftcf.disable-block-cache false
rocksdb.raftcf.disable-write-stall true
rocksdb.raftcf.dynamic-level-bytes true
rocksdb.raftcf.enable-compaction-guard null
rocksdb.raftcf.enable-doubly-skiplist true
rocksdb.raftcf.force-consistency-checks false
rocksdb.raftcf.format-version null
rocksdb.raftcf.hard-pending-compaction-bytes-limit 1TiB
rocksdb.raftcf.level0-file-num-compaction-trigger 1
rocksdb.raftcf.level0-slowdown-writes-trigger 20
rocksdb.raftcf.level0-stop-writes-trigger 20
rocksdb.raftcf.max-bytes-for-level-base 128MiB
rocksdb.raftcf.max-bytes-for-level-multiplier 10
rocksdb.raftcf.max-compaction-bytes 2GiB
rocksdb.raftcf.max-compactions null
rocksdb.raftcf.max-write-buffer-number 5
rocksdb.raftcf.min-write-buffer-number-to-merge 1
rocksdb.raftcf.num-levels 7
rocksdb.raftcf.optimize-filters-for-hits true
rocksdb.raftcf.optimize-filters-for-memory false
rocksdb.raftcf.periodic-compaction-seconds null
rocksdb.raftcf.pin-l0-filter-and-index-blocks true
rocksdb.raftcf.prepopulate-block-cache disabled
rocksdb.raftcf.prop-keys-index-distance 40960
rocksdb.raftcf.prop-size-index-distance 4194304
rocksdb.raftcf.read-amp-bytes-per-bit 0
rocksdb.raftcf.ribbon-filter-above-level null
rocksdb.raftcf.soft-pending-compaction-bytes-limit 192GiB
rocksdb.raftcf.target-file-size-base null
rocksdb.raftcf.titan.blob-cache-size 0KiB
rocksdb.raftcf.titan.blob-file-compression zstd
rocksdb.raftcf.titan.blob-run-mode read-only
rocksdb.raftcf.titan.discardable-ratio 0.5
rocksdb.raftcf.titan.level-merge false
rocksdb.raftcf.titan.max-gc-batch-size 64MiB
rocksdb.raftcf.titan.max-sorted-runs 20
rocksdb.raftcf.titan.merge-small-file-threshold 8MiB
rocksdb.raftcf.titan.min-blob-size null
rocksdb.raftcf.titan.min-gc-batch-size 16MiB
rocksdb.raftcf.titan.range-merge true
rocksdb.raftcf.titan.shared-blob-cache true
rocksdb.raftcf.titan.zstd-dict-size 0KiB
rocksdb.raftcf.ttl null
rocksdb.raftcf.use-bloom-filter true
rocksdb.raftcf.whole-key-filtering true
rocksdb.raftcf.write-buffer-limit null
rocksdb.raftcf.write-buffer-size 128MiB
rocksdb.rate-bytes-per-sec 10GiB
rocksdb.rate-limiter-auto-tuned true
rocksdb.rate-limiter-mode 2
rocksdb.rate-limiter-refill-period 100ms
rocksdb.stats-dump-period 10m
rocksdb.titan.dirname
rocksdb.titan.disable-gc false
rocksdb.titan.enabled true
rocksdb.titan.max-background-gc 1
rocksdb.titan.purge-obsolete-files-period 10s
rocksdb.track-and-verify-wals-in-manifest true
rocksdb.use-direct-io-for-flush-and-compaction false
rocksdb.wal-bytes-per-sync 512KiB
rocksdb.wal-dir
rocksdb.wal-recovery-mode 2
rocksdb.wal-size-limit 0KiB
rocksdb.wal-ttl-seconds 0
rocksdb.writable-file-max-buffer-size 1MiB
rocksdb.write-buffer-limit null
rocksdb.writecf.block-based-bloom-filter false
rocksdb.writecf.block-cache-size null
rocksdb.writecf.block-size 32KiB
rocksdb.writecf.bloom-filter-bits-per-key 10
rocksdb.writecf.bottommost-level-compression zstd
rocksdb.writecf.bottommost-zstd-compression-dict-size 0
rocksdb.writecf.bottommost-zstd-compression-sample-size 0
rocksdb.writecf.cache-index-and-filter-blocks true
rocksdb.writecf.checksum crc32c
rocksdb.writecf.compaction-guard-max-output-file-size 128MiB
rocksdb.writecf.compaction-guard-min-output-file-size 8MiB
rocksdb.writecf.compaction-pri 3
rocksdb.writecf.compaction-style 0
rocksdb.writecf.compression-per-level ["no","no","lz4","lz4","lz4","zstd","zstd"]
rocksdb.writecf.disable-auto-compactions false
rocksdb.writecf.disable-block-cache false
rocksdb.writecf.disable-write-stall true
rocksdb.writecf.dynamic-level-bytes true
rocksdb.writecf.enable-compaction-guard true
rocksdb.writecf.enable-doubly-skiplist true
rocksdb.writecf.force-consistency-checks false
rocksdb.writecf.format-version null
rocksdb.writecf.hard-pending-compaction-bytes-limit 1TiB
rocksdb.writecf.level0-file-num-compaction-trigger 4
rocksdb.writecf.level0-slowdown-writes-trigger 20
rocksdb.writecf.level0-stop-writes-trigger 20
rocksdb.writecf.max-bytes-for-level-base 512MiB
rocksdb.writecf.max-bytes-for-level-multiplier 10
rocksdb.writecf.max-compaction-bytes 2GiB
rocksdb.writecf.max-compactions null
rocksdb.writecf.max-write-buffer-number 5
rocksdb.writecf.min-write-buffer-number-to-merge 1
rocksdb.writecf.num-levels 7
rocksdb.writecf.optimize-filters-for-hits false
rocksdb.writecf.optimize-filters-for-memory false
rocksdb.writecf.periodic-compaction-seconds null
rocksdb.writecf.pin-l0-filter-and-index-blocks true
rocksdb.writecf.prepopulate-block-cache disabled
rocksdb.writecf.prop-keys-index-distance 40960
rocksdb.writecf.prop-size-index-distance 4194304
rocksdb.writecf.read-amp-bytes-per-bit 0
rocksdb.writecf.ribbon-filter-above-level null
rocksdb.writecf.soft-pending-compaction-bytes-limit 192GiB
rocksdb.writecf.target-file-size-base null
rocksdb.writecf.titan.blob-cache-size 0KiB
rocksdb.writecf.titan.blob-file-compression zstd
rocksdb.writecf.titan.blob-run-mode read-only
rocksdb.writecf.titan.discardable-ratio 0.5
rocksdb.writecf.titan.level-merge false
rocksdb.writecf.titan.max-gc-batch-size 64MiB
rocksdb.writecf.titan.max-sorted-runs 20
rocksdb.writecf.titan.merge-small-file-threshold 8MiB
rocksdb.writecf.titan.min-blob-size null
rocksdb.writecf.titan.min-gc-batch-size 16MiB
rocksdb.writecf.titan.range-merge true
rocksdb.writecf.titan.shared-blob-cache true
rocksdb.writecf.titan.zstd-dict-size 0KiB
rocksdb.writecf.ttl null
rocksdb.writecf.use-bloom-filter true
rocksdb.writecf.whole-key-filtering false
rocksdb.writecf.write-buffer-limit null
rocksdb.writecf.write-buffer-size 128MiB
security.ca-path
security.cert-allowed-cn
security.cert-path
security.encryption.data-encryption-method plaintext
security.encryption.data-key-rotation-period 7d
security.encryption.enable-file-dictionary-log true
security.encryption.file-dictionary-rewrite-threshold 1000000
security.encryption.master-key.type plaintext
security.encryption.previous-master-key.type plaintext
security.key-path
security.redact-info-log false
server.addr 0.0.0.0:20160
server.advertise-addr 192.168.10.204:20160
server.advertise-addr 192.168.10.87:20160
server.advertise-addr 192.168.10.11:20160
server.advertise-addr 192.168.10.158:20160
server.advertise-status-addr 192.168.10.11:20180
server.advertise-status-addr 192.168.10.158:20180
server.advertise-status-addr 192.168.10.87:20180
server.advertise-status-addr 192.168.10.204:20180
server.background-thread-count 2
server.concurrent-recv-snap-limit 32
server.concurrent-send-snap-limit 32
server.enable-request-batch true
server.end-point-batch-row-limit 64
server.end-point-enable-batch-if-possible true
server.end-point-max-concurrency 8
server.end-point-memory-quota 4012728KiB
server.end-point-perf-level 0
server.end-point-recursion-limit 1000
server.end-point-request-max-handle-duration 1m
server.end-point-slow-log-threshold 1s
server.end-point-stream-batch-row-limit 128
server.end-point-stream-channel-size 8
server.forward-max-connections-per-address 4
server.grpc-compression-type none
server.grpc-concurrency 5
server.grpc-concurrent-stream 1024
server.grpc-gzip-compression-level 2
server.grpc-keepalive-time 10s
server.grpc-keepalive-timeout 3s
server.grpc-memory-pool-quota 9223372036854775807B
server.grpc-min-message-size-to-compress 4096
server.grpc-raft-conn-num 1
server.grpc-stream-initial-window-size 2MiB
server.health-feedback-interval 1s
server.heavy-load-threshold 75
server.heavy-load-wait-duration null
server.max-grpc-send-msg-len 10485760
server.raft-client-grpc-send-msg-buffer 524288
server.raft-client-queue-size 16384
server.raft-msg-max-batch-size 256
server.reject-messages-on-memory-ratio 0.2
server.simplify-metrics false
server.snap-io-max-bytes-per-sec 100MiB
server.snap-max-total-size 0KiB
server.snap-min-ingest-size 2MiB
server.stats-concurrency 1
server.status-addr 0.0.0.0:20180
server.status-thread-pool-size 1
slow-log-file
slow-log-threshold 1s
split.byte-threshold 31457280
split.detect-times 10
split.grpc-thread-cpu-overload-threshold-ratio 0.5
split.qps-threshold 3000
split.region-cpu-overload-threshold-ratio 0.25
split.sample-num 20
split.sample-threshold 100
split.split-balance-score 0.25
split.split-contained-score 0.5
split.unified-read-pool-thread-cpu-overload-threshold-ratio 0.8
storage.api-version 1
storage.background-error-recovery-window 1h
storage.block-cache.capacity 14792520499B
storage.block-cache.high-pri-pool-ratio 0.8
storage.block-cache.low-pri-pool-ratio 0.2
storage.block-cache.memory-allocator nodump
storage.block-cache.num-shard-bits 6
storage.block-cache.shared null
storage.block-cache.strict-capacity-limit false
storage.data-dir /data/tidb/tikv-20160
storage.enable-async-apply-prewrite false
storage.enable-ttl false
storage.engine raft-kv
storage.flow-control.enable true
storage.flow-control.hard-pending-compaction-bytes-limit 1TiB
storage.flow-control.l0-files-threshold 20
storage.flow-control.memtables-threshold 5
storage.flow-control.soft-pending-compaction-bytes-limit 192GiB
storage.gc-ratio-threshold 1.1
storage.io-rate-limit.compaction-priority low
storage.io-rate-limit.export-priority medium
storage.io-rate-limit.flush-priority high
storage.io-rate-limit.foreground-read-priority high
storage.io-rate-limit.foreground-write-priority high
storage.io-rate-limit.gc-priority high
storage.io-rate-limit.import-priority medium
storage.io-rate-limit.level-zero-compaction-priority medium
storage.io-rate-limit.load-balance-priority high
storage.io-rate-limit.max-bytes-per-sec 0KiB
storage.io-rate-limit.mode write-only
storage.io-rate-limit.other-priority high
storage.io-rate-limit.replication-priority high
storage.io-rate-limit.strict false
storage.max-key-size 8192
storage.memory-quota 256MiB
storage.reserve-raft-space 1GiB
storage.reserve-space 5GiB
storage.scheduler-concurrency 524288
storage.scheduler-pending-write-threshold 100MiB
storage.scheduler-worker-pool-size 4
storage.ttl-check-poll-interval 12h
storage.txn-status-cache-capacity 5120000

From the errors this looks like a disk might be full. Is any node actually out of space? Check the disk space on each node. Judging from the `display` output, the PD status itself is normal.
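
A quick way to rule this out is to check usage on each node; the paths below come from the `display` output of this deployment (adjust them if your data dirs differ):

```bash
# Run on every node; /data/tidb holds the PD/TiKV/TiFlash data dirs here.
df -h /data/tidb /opt/tidb
# Inodes can run out even when space looks free.
df -i /data/tidb
```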

You can check the Blackbox_exporter dashboards in Grafana to see the network latency between the PD nodes.
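
If Grafana is not handy, a rough check from the shell works too. The Prometheus address comes from the `display` output; the `probe_duration_seconds` metric name is the stock blackbox_exporter one and is an assumption about how the probes are configured here:

```bash
# Round-trip latency to the other PD nodes (run from one PD host).
ping -c 20 192.168.10.147
ping -c 20 192.168.10.171

# Or pull the blackbox_exporter probe latencies that Prometheus scrapes
# (Prometheus runs on 192.168.10.150:9090 in this cluster).
curl -s 'http://192.168.10.150:9090/api/v1/query?query=probe_duration_seconds' | head
```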

You could start by getting Kafka off the PD nodes, per the suggestion from @田帅萌7.

Following this thread.

I think so too. Putting the logs and the configuration together, the core problem is the PD nodes being co-located with Kafka plus a disk I/O bottleneck, which tanked etcd performance.

Migrate Kafka off the PD nodes so resource contention stops dragging etcd down, and check the PD node disks (the monitoring already shows write latency up around 230 ms); replacing them with SSDs should be the priority. As a temporary mitigation you can raise etcd's --election-timeout to 5s and --heartbeat-interval to 500ms to ease the election failures.
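
To confirm the disk-latency theory, you can read the embedded etcd's own fsync metrics straight from PD's client port; the rule of thumb from etcd is that p99 WAL fsync should stay below roughly 10 ms. The grep pattern assumes the standard etcd metric names:

```bash
# Check etcd WAL fsync / backend commit latency on each PD node.
for pd in 192.168.10.75 192.168.10.147 192.168.10.171; do
  echo "== $pd =="
  curl -s "http://$pd:2379/metrics" \
    | grep -E 'etcd_disk_(wal_fsync|backend_commit)_duration_seconds_(sum|count)'
done
```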

Are the PD nodes on SSD or SATA spinning disks?
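
You can ask the kernel directly instead of guessing; ROTA=1 normally means a spinning disk, though some virtualized disks misreport this:

```bash
# ROTA 0 = non-rotational (SSD/NVMe), 1 = rotational (SATA/SAS HDD).
lsblk -d -o NAME,ROTA,MODEL,SIZE
```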

Check the disk utilization and queue depth on the PD hosts, e.g. with `iostat -x 1`, and see what it shows.
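
For example (column names differ slightly across sysstat versions, and the thresholds below are rough rules of thumb rather than official limits):

```bash
# Extended per-device stats every second on the PD (and Kafka) hosts.
# On the PD data disk, watch for:
#   %util   near 100%                -> the device is saturated
#   aqu-sz  (or avgqu-sz) > 1-2      -> requests are queueing up
#   w_await (or await) in tens of ms -> far too slow for etcd WAL fsync
iostat -x 1
```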

Tune the etcd parameters (set --quota-backend-bytes to 16GiB and enable auto compaction), and deploy PD on dedicated nodes; don't co-locate other components.
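
A sketch of how that could be applied with TiUP; `quota-backend-bytes`, `auto-compaction-mode`, and `auto-compaction-retention` are documented PD configuration items, but verify the exact keys against the v8.5.4 PD config reference before reloading:

```bash
# Edit the cluster config, add the PD settings, then reload only PD.
tiup cluster edit-config kylin
#   server_configs:
#     pd:
#       quota-backend-bytes: "16GiB"
#       auto-compaction-mode: "periodic"
#       auto-compaction-retention: "1h"
tiup cluster reload kylin -R pd
```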

Check the PD cluster status.
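
A minimal way to do that with pd-ctl via tiup (any PD endpoint works):

```bash
# Health of every etcd member behind PD.
tiup ctl:v8.5.4 pd -u http://192.168.10.75:2379 health
# Current members and which one holds the PD/etcd leader.
tiup ctl:v8.5.4 pd -u http://192.168.10.75:2379 member
```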

I went through your logs carefully. The root cause looks like the PD nodes not being on SSDs while also being co-located with high-I/O Kafka, so etcd cannot hold onto its leader because of disk latency, which takes the whole TiDB cluster down. The fix is to migrate PD to dedicated SSD servers and stop co-locating it with other services.
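
Before migrating, the target disk can be validated with the fdatasync-style fio test that the etcd documentation recommends for WAL suitability (parameters mirror that recommendation; point --directory at the intended PD data dir):

```bash
# Simulates etcd WAL writes: small sequential writes, each followed by fdatasync.
# A disk suitable for etcd should keep p99 fdatasync latency well under 10 ms.
fio --name=etcd-wal-test --directory=/data/tidb/pd-2379 \
    --rw=write --ioengine=sync --fdatasync=1 \
    --size=22m --bs=2300
```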

Using SSDs would definitely help.
