[TiDB Environment] Test environment
[TiDB Version] v7.5.0
[Reproduction Steps] Deployed a new single-host cluster with tiup (a reconstruction of the topo.yaml used follows the deploy log below):
tiup cluster deploy tidb_dev 7.5.0 topo.yaml
tiup is checking updates for component cluster …
Starting component `cluster`: /root/.tiup/components/cluster/v1.14.0/tiup-cluster deploy tidb_dev 7.5.0 topo.yaml
+ Detect CPU Arch Name
  - Detecting node 192.168.133.139 Arch info … Done
+ Detect CPU OS Name
  - Detecting node 192.168.133.139 OS info … Done
Please confirm your topology:
Cluster type: tidb
Cluster name: tidb_dev
Cluster version: v7.5.0
Role        Host             Ports        OS/Arch       Directories
----        ----             -----        -------       -----------
pd          192.168.133.139  2379/2380    linux/x86_64  /tidb-deploy/pd-2379,/tidb-data/pd-2379
tikv        192.168.133.139  20160/20180  linux/x86_64  /tidb-deploy/tikv-20160,/tidb-data/tikv-20160
tikv        192.168.133.139  20161/20181  linux/x86_64  /tidb-deploy/tikv-20161,/tidb-data/tikv-20161
tikv        192.168.133.139  20162/20182  linux/x86_64  /tidb-deploy/tikv-20162,/tidb-data/tikv-20162
tidb        192.168.133.139  4000/10080   linux/x86_64  /tidb-deploy/tidb-4000
prometheus  192.168.133.139  9090/12020   linux/x86_64  /tidb-deploy/prometheus-9090,/tidb-data/prometheus-9090
grafana     192.168.133.139  3000         linux/x86_64  /tidb-deploy/grafana-3000
Attention:
1. If the topology is not what you expected, check your yaml file.
2. Please confirm there is no port/directory conflicts in same host.
Do you want to continue? [y/N]: (default=N) y
+ Generate SSH keys … Done
+ Download TiDB components
  - Download pd:v7.5.0 (linux/amd64) … Done
  - Download tikv:v7.5.0 (linux/amd64) … Done
  - Download tidb:v7.5.0 (linux/amd64) … Done
  - Download prometheus:v7.5.0 (linux/amd64) … Done
  - Download grafana:v7.5.0 (linux/amd64) … Done
  - Download node_exporter: (linux/amd64) … Done
  - Download blackbox_exporter: (linux/amd64) … Done
+ Initialize target host environments
  - Prepare 192.168.133.139:22 … Done
+ Deploy TiDB instance
  - Copy pd → 192.168.133.139 … Done
  - Copy tikv → 192.168.133.139 … Done
  - Copy tikv → 192.168.133.139 … Done
  - Copy tikv → 192.168.133.139 … Done
  - Copy tidb → 192.168.133.139 … Done
  - Copy prometheus → 192.168.133.139 … Done
  - Copy grafana → 192.168.133.139 … Done
  - Deploy node_exporter → 192.168.133.139 … Done
  - Deploy blackbox_exporter → 192.168.133.139 … Done
+ Copy certificate to remote host
+ Init instance configs
  - Generate config pd → 192.168.133.139:2379 … Done
  - Generate config tikv → 192.168.133.139:20160 … Done
  - Generate config tikv → 192.168.133.139:20161 … Done
  - Generate config tikv → 192.168.133.139:20162 … Done
  - Generate config tidb → 192.168.133.139:4000 … Done
  - Generate config prometheus → 192.168.133.139:9090 … Done
  - Generate config grafana → 192.168.133.139:3000 … Done
+ Init monitor configs
  - Generate config node_exporter → 192.168.133.139 … Done
  - Generate config blackbox_exporter → 192.168.133.139 … Done
Enabling component pd
Enabling instance 192.168.133.139:2379
Enable instance 192.168.133.139:2379 success
Enabling component tikv
Enabling instance 192.168.133.139:20162
Enabling instance 192.168.133.139:20160
Enabling instance 192.168.133.139:20161
Enable instance 192.168.133.139:20161 success
Enable instance 192.168.133.139:20160 success
Enable instance 192.168.133.139:20162 success
Enabling component tidb
Enabling instance 192.168.133.139:4000
Enable instance 192.168.133.139:4000 success
Enabling component prometheus
Enabling instance 192.168.133.139:9090
Enable instance 192.168.133.139:9090 success
Enabling component grafana
Enabling instance 192.168.133.139:3000
Enable instance 192.168.133.139:3000 success
Enabling component node_exporter
Enabling instance 192.168.133.139
Enable 192.168.133.139 success
Enabling component blackbox_exporter
Enabling instance 192.168.133.139
Enable 192.168.133.139 success
Cluster `tidb_dev` deployed successfully, you can start it with command: `tiup cluster start tidb_dev --init`
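For reference, a sketch of the topo.yaml that would produce the topology confirmed above. The original file was not attached; hosts, ports, and directories come from the deploy output, while the global user/ssh_port/deploy_dir/data_dir values are inferred from the Prepare step and the default layout, so treat them as assumptions:

# Hypothetical reconstruction of topo.yaml for the single-host layout above
cat > topo.yaml <<'EOF'
global:
  user: "tidb"            # assumed deploy user (tiup default)
  ssh_port: 22            # matches "Prepare 192.168.133.139:22" above
  deploy_dir: "/tidb-deploy"
  data_dir: "/tidb-data"

pd_servers:
  - host: 192.168.133.139      # 2379/2380 (defaults)

tidb_servers:
  - host: 192.168.133.139      # 4000/10080 (defaults)

tikv_servers:
  - host: 192.168.133.139
    port: 20160
    status_port: 20180
  - host: 192.168.133.139
    port: 20161
    status_port: 20181
  - host: 192.168.133.139
    port: 20162
    status_port: 20182

monitoring_servers:
  - host: 192.168.133.139      # prometheus 9090, ng-monitoring 12020 (defaults)

grafana_servers:
  - host: 192.168.133.139      # 3000 (default)
EOF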
[Problem Encountered: Symptoms and Impact]
The deploy succeeds, but starting the cluster fails: PD and TiKV come up, while the tidb instance on 192.168.133.139:4000 never opens port 4000 within the 2-minute timeout.
tiup cluster start tidb_dev --init
tiup is checking updates for component cluster …
Starting component `cluster`: /root/.tiup/components/cluster/v1.14.0/tiup-cluster start tidb_dev --init
Starting cluster tidb_dev…
- [ Serial ] - SSHKeySet: privateKey=/root/.tiup/storage/cluster/clusters/tidb_dev/ssh/id_rsa, publicKey=/root/.tiup/storage/cluster/clusters/tidb_dev/ssh/id_rsa.pub
- [Parallel] - UserSSH: user=tidb, host=192.168.133.139
- [Parallel] - UserSSH: user=tidb, host=192.168.133.139
- [Parallel] - UserSSH: user=tidb, host=192.168.133.139
- [Parallel] - UserSSH: user=tidb, host=192.168.133.139
- [Parallel] - UserSSH: user=tidb, host=192.168.133.139
- [Parallel] - UserSSH: user=tidb, host=192.168.133.139
- [Parallel] - UserSSH: user=tidb, host=192.168.133.139
- [ Serial ] - StartCluster
Starting component pd
Starting instance 192.168.133.139:2379
Start instance 192.168.133.139:2379 success
Starting component tikv
Starting instance 192.168.133.139:20162
Starting instance 192.168.133.139:20160
Starting instance 192.168.133.139:20161
Start instance 192.168.133.139:20160 success
Start instance 192.168.133.139:20162 success
Start instance 192.168.133.139:20161 success
Starting component tidb
Starting instance 192.168.133.139:4000
Error: failed to start tidb: failed to start: 192.168.133.139 tidb-4000.service, please check the instance’s log(/tidb-deploy/tidb-4000/log) for more detail.: timed out waiting for port 4000 to be started after 2m0s
Verbose debug logs has been written to /root/.tiup/logs/tiup-cluster-debug-2023-12-19-21-44-41.log.
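For anyone reproducing this, a sketch of how more detail can be pulled from the failing instance; the unit name and log directory are taken from the error message above, everything else is standard tooling:

# Cluster-level view of which instances are up
tiup cluster display tidb_dev

# Tail the tidb instance log referenced by the error
tail -n 100 /tidb-deploy/tidb-4000/log/tidb.log

# Check the systemd unit that timed out
systemctl status tidb-4000.service
journalctl -u tidb-4000.service --no-pager | tail -n 50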
[Resource Configuration]
[Attachments: Screenshots/Logs/Monitoring]
Excerpt from the tidb instance log (/tidb-deploy/tidb-4000/log):
[2023/12/19 22:01:45.489 +08:00] [ERROR] [runaway.go:145] ["try to get new runaway watch"] [error="[schema:1146]Table 'mysql.tidb_runaway_watch' doesn't exist"]
[2023/12/19 22:01:45.489 +08:00] [WARN] [runaway.go:172] ["get runaway watch record failed"] [error="[schema:1146]Table 'mysql.tidb_runaway_watch' doesn't exist"]
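The two lines above are only a short excerpt. A sketch for pulling more context out of the same log, assuming the default path from the error message; the grep patterns are just a starting point, not a diagnosis:

# Look for fatal/panic entries and bootstrap progress in the same log
grep -n -i -E 'fatal|panic|welcome|bootstrap' /tidb-deploy/tidb-4000/log/tidb.log | tail -n 40

# Show everything logged by runaway.go around the excerpt above
grep -n 'runaway' /tidb-deploy/tidb-4000/log/tidb.log | head -n 20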