After deploying TiDB on three Huawei servers, table creation is extremely slow: roughly 0.5 seconds per table, far slower than single-node MySQL or Dameng (DM).
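As a first diagnostic step, TiDB's DDL job history shows how long each CREATE TABLE actually takes server-side. This is a minimal sketch; the host, port, and passwordless root login are assumptions based on the topology below, not confirmed details from this post.

Shell
# Create one throwaway table, then list the 10 most recent DDL jobs
# with their start/end times to see where the ~0.5 s per table goes.
mysql -h 10.0.1.1 -P 4000 -u root -e "
  CREATE DATABASE IF NOT EXISTS ddl_test;
  CREATE TABLE ddl_test.t_probe (id BIGINT PRIMARY KEY, v VARCHAR(64));
  ADMIN SHOW DDL JOBS 10;"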
YAML
# # Global variables are applied to all deployments and used as the default value of
# # the deployments if a specific deployment value is missing.
global:
# User that runs the deployed TiDB cluster
user: "tidb"
ssh_port: 22
# Global deployment directory for all TiDB components
deploy_dir: "/tidb-deploy"
# Global data directory for all TiDB components
data_dir: "/tidb-data"
# # Monitored variables are applied to all the machines.
monitored:
node_exporter_port: 9100
blackbox_exporter_port: 9115
# deploy_dir: "/tidb-deploy/monitored-9100"
# data_dir: "/tidb-data/monitored-9100"
# log_dir: "/tidb-deploy/monitored-9100/log"
# # Server configs are used to specify the runtime configuration of TiDB components.
# # All configuration items can be found in TiDB docs:
# # - TiDB: https://docs.pingcap.com/zh/tidb/stable/tidb-configuration-file
# # - TiKV: https://docs.pingcap.com/zh/tidb/stable/tikv-configuration-file
# # - PD: https://docs.pingcap.com/zh/tidb/stable/pd-configuration-file
# # All configuration items use points to represent the hierarchy, e.g:
# # readpool.storage.use-unified-pool
# #
# # You can overwrite this configuration via the instance-level `config` field.
# Configuration file parameters for each TiDB component
server_configs:
tidb:
log.slow-threshold: 300
binlog.enable: false
binlog.ignore-error: false
tikv:
# server.grpc-concurrency: 4
# raftstore.apply-pool-size: 2
# raftstore.store-pool-size: 2
# rocksdb.max-sub-compactions: 1
# storage.block-cache.capacity: "16GB"
# readpool.unified.max-thread-count: 12
readpool.storage.use-unified-pool: false
readpool.coprocessor.use-unified-pool: true
pd:
schedule.leader-schedule-limit: 4
schedule.region-schedule-limit: 2048
schedule.replica-schedule-limit: 64
replication.max-replicas: 3
# pd_servers: instance count, host IPs, per-instance deploy/data directories, NUMA binding, ports, and instance-level config
pd_servers:
- host: 10.0.1.4
# ssh_port: 22
# name: "pd-1"
# client_port: 2379
# peer_port: 2380
# deploy_dir: "/tidb-deploy/pd-2379"
# data_dir: "/tidb-data/pd-2379"
# log_dir: "/tidb-deploy/pd-2379/log"
# numa_node: "0,1"
# # The following configs are used to overwrite the `server_configs.pd` values.
# config:
# schedule.max-merge-region-size: 20
# schedule.max-merge-region-keys: 200000
- host: 10.0.1.5
- host: 10.0.1.6
# tidb_servers: instance count, host IPs, per-instance deploy directory, NUMA binding, ports, and instance-level config
tidb_servers:
- host: 10.0.1.1
# ssh_port: 22
# port: 4000
# status_port: 10080
# deploy_dir: "/tidb-deploy/tidb-4000"
# log_dir: "/tidb-deploy/tidb-4000/log"
# numa_node: "0,1"
# # The following configs are used to overwrite the `server_configs.tidb` values.
# config:
# log.slow-query-file: tidb-slow-overwrited.log
- host: 10.0.1.2
# tikv_servers: instance count, host IPs, per-instance deploy/data directories, NUMA binding, ports, and instance-level config
tikv_servers:
- host: 10.0.1.7
# ssh_port: 22
# port: 20160
# status_port: 20180
# deploy_dir: "/tidb-deploy/tikv-20160"
# data_dir: "/tidb-data/tikv-20160"
# log_dir: "/tidb-deploy/tikv-20160/log"
# numa_node: "0,1"
# # The following configs are used to overwrite the `server_configs.tikv` values.
# config:
# server.grpc-concurrency: 4
# server.labels: { zone: "zone1", dc: "dc1", host: "host1" }
- host: 10.0.1.8
- host: 10.0.1.9
# Monitoring (Prometheus) host
monitoring_servers:
- host: 10.0.1.10
# ssh_port: 22
# port: 9090
# deploy_dir: "/tidb-deploy/prometheus-8249"
# data_dir: "/tidb-data/prometheus-8249"
# log_dir: "/tidb-deploy/prometheus-8249/log"
# Grafana host
grafana_servers:
- host: 10.0.1.10
# port: 3000
# deploy_dir: /tidb-deploy/grafana-3000
# Alertmanager host
alertmanager_servers:
- host: 10.0.1.10
# ssh_port: 22
# web_port: 9093
# cluster_port: 9094
# deploy_dir: "/tidb-deploy/alertmanager-9093"
# data_dir: "/tidb-data/alertmanager-9093"
# log_dir: "/tidb-deploy/alertmanager-9093/log"
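The server_configs section above can also be adjusted after deployment without rewriting the topology file. A sketch using tiup's standard edit/reload workflow; the cluster name `mycluster` is a placeholder, not from the original post:

Shell
# Open the cluster's live configuration in an editor, then roll the
# change out to the affected component (TiKV shown as an example).
tiup cluster edit-config mycluster
tiup cluster reload mycluster -R tikv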
Tips
Shell
# Replace the example IPs with the actual server IPs
sed -i "s/10.0.1.1/192.168.1.101/g" topo.yaml
sed -i "s/10.0.1.2/192.168.1.102/g" topo.yaml
sed -i "s/10.0.1.3/192.168.1.103/g" topo.yaml
sed -i "s/10.0.1.4/192.168.1.101/g" topo.yaml
sed -i "s/10.0.1.5/192.168.1.102/g" topo.yaml
sed -i "s/10.0.1.6/192.168.1.103/g" topo.yaml
sed -i "s/10.0.1.7/192.168.1.101/g" topo.yaml
sed -i "s/10.0.1.8/192.168.1.102/g" topo.yaml
sed -i "s/10.0.1.9/192.168.1.103/g" topo.yaml
The above is the deployment YAML file. The servers are ARM with 128 GB of RAM and 64 cores each, and the disks are local spinning HDDs. Only 1,800 tables were created in 16 minutes, versus roughly 60 ms per table on MySQL. This is extremely slow. Are there any optimization strategies?
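For reference, a minimal sketch of how the per-table timing can be reproduced (connection parameters are assumptions based on the topology above):

Shell
# Create 100 tables through the TiDB server and time the whole batch;
# elapsed seconds / 100 gives the average cost per CREATE TABLE.
time for i in $(seq 1 100); do
  mysql -h 10.0.1.1 -P 4000 -u root -e \
    "CREATE TABLE test.bench_$i (id BIGINT PRIMARY KEY, v VARCHAR(64));"
done

Note that this loop pays for a new client connection on every statement; issuing all CREATE TABLE statements within a single session isolates the DDL latency itself.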