Background: a TiDB cluster is already deployed, using TiUP.
Question: Prometheus needs to be deployed into the TiDB cluster.
Yes, just scale out one more node.
Scaling out is all you need; here is what I tested.
Scale out monitoring:
vi scale-out.yml
monitored:
  node_exporter_port: 9100
  blackbox_exporter_port: 9115
monitoring_servers:
  - host: 127.0.0.1
grafana_servers:
  - host: 127.0.0.1
Scale out:
root@tidb:~# tiup cluster scale-out tidb-test scale-out.yml -u root -p
tiup is checking updates for component cluster ...
Starting component `cluster`: /root/.tiup/components/cluster/v1.14.1/tiup-cluster scale-out tidb-test scale-out.yml -u root -p
You have one or more of ["global", "monitored", "server_configs"] fields configured in
the scale out topology, but they will be ignored during the scaling out process.
If you want to use configs different from the existing cluster, cancel now and
set them in the specification fileds for each host.
Do you want to continue? [y/N]: (default=N) y
Input SSH password:
+ Detect CPU Arch Name
- Detecting node 127.0.0.1 Arch info ... Done
+ Detect CPU OS Name
- Detecting node 127.0.0.1 OS info ... Done
Please confirm your topology:
Cluster type: tidb
Cluster name: tidb-test
Cluster version: v7.6.0
Role Host Ports OS/Arch Directories
---- ---- ----- ------- -----------
prometheus 127.0.0.1 9090/12020 linux/x86_64 /tidb-deploy/prometheus-9090,/tidb-data/prometheus-9090
grafana 127.0.0.1 3000 linux/x86_64 /tidb-deploy/grafana-3000
Attention:
1. If the topology is not what you expected, check your yaml file.
2. Please confirm there is no port/directory conflicts in same host.
Do you want to continue? [y/N]: (default=N) y
+ [ Serial ] - SSHKeySet: privateKey=/root/.tiup/storage/cluster/clusters/tidb-test/ssh/id_rsa, publicKey=/root/.tiup/storage/cluster/clusters/tidb-test/ssh/id_rsa.pub
+ [Parallel] - UserSSH: user=tidb, host=127.0.0.1
+ [Parallel] - UserSSH: user=tidb, host=127.0.0.1
+ [Parallel] - UserSSH: user=tidb, host=127.0.0.1
+ Download TiDB components
- Download prometheus:v7.6.0 (linux/amd64) ... Done
- Download grafana:v7.6.0 (linux/amd64) ... Done
+ Initialize target host environments
+ Deploy TiDB instance
- Deploy instance prometheus -> 127.0.0.1:9090 ... Done
- Deploy instance grafana -> 127.0.0.1:3000 ... Done
+ Copy certificate to remote host
+ Generate scale-out config
- Generate scale-out config prometheus -> 127.0.0.1:9090 ... Done
- Generate scale-out config grafana -> 127.0.0.1:3000 ... Done
+ Init monitor config
Enabling component prometheus
Enabling instance 127.0.0.1:9090
Enable instance 127.0.0.1:9090 success
Enabling component grafana
Enabling instance 127.0.0.1:3000
Enable instance 127.0.0.1:3000 success
Enabling component node_exporter
Enabling instance 127.0.0.1
Enable 127.0.0.1 success
Enabling component blackbox_exporter
Enabling instance 127.0.0.1
Enable 127.0.0.1 success
+ [ Serial ] - Save meta
+ [ Serial ] - Start new instances
Starting component prometheus
Starting instance 127.0.0.1:9090
Start instance 127.0.0.1:9090 success
Starting component grafana
Starting instance 127.0.0.1:3000
Start instance 127.0.0.1:3000 success
Starting component node_exporter
Starting instance 127.0.0.1
Start 127.0.0.1 success
Starting component blackbox_exporter
Starting instance 127.0.0.1
Start 127.0.0.1 success
+ Refresh components conifgs
- Generate config pd -> 127.0.0.1:2379 ... Done
- Generate config tikv -> 127.0.0.1:20160 ... Done
- Generate config tidb -> 127.0.0.1:4000 ... Done
- Generate config prometheus -> 127.0.0.1:9090 ... Done
- Generate config grafana -> 127.0.0.1:3000 ... Done
+ Reload prometheus and grafana
+ [ Serial ] - UpdateTopology: cluster=tidb-test
Scaled cluster `tidb-test` out successfully
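After the scale-out completes, the new monitoring components can be verified with the standard `tiup cluster display` command (cluster name as in the example above):

```shell
# Confirm the new prometheus (9090) and grafana (3000) instances
# appear in the topology with Status "Up"
tiup cluster display tidb-test
```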
It works.
Scaling out is fine. The key question is how to preserve the historical monitoring data: the old Prometheus has been running for a while and has accumulated metrics. After deploying a new Prometheus, how do you migrate the old data into the new instance?
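One common approach (a sketch, not an official TiDB procedure): Prometheus stores its TSDB as plain block directories on disk, so data from an old instance running the same Prometheus version can usually be carried over by stopping the target instance and copying the old block directories into its data path. The new data path below follows the TiUP default from the topology above; the old data directory is a hypothetical placeholder, adjust both to your deployment.

```shell
# Sketch: migrate historical TSDB blocks from an old Prometheus
# into the one deployed by TiUP. Assumes both instances run the
# same major Prometheus version; paths are examples.

OLD_DATA=/path/to/old/prometheus/data   # hypothetical old data dir
NEW_DATA=/tidb-data/prometheus-9090     # TiUP default from the topology above

# Stop the new Prometheus so the TSDB is not being written to
tiup cluster stop tidb-test -R prometheus

# Copy the old block directories into the new data dir;
# skip the old WAL to avoid mixing in-progress segments
rsync -av --exclude 'wal' "$OLD_DATA"/ "$NEW_DATA"/

# Restart; Prometheus loads the copied blocks on startup
tiup cluster start tidb-test -R prometheus
```

Overlapping retention windows between the two instances may produce duplicate blocks for the same time range; Prometheus compaction generally reconciles these, but it is worth testing on a copy first.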
Scale-out/scale-in, the best.
Super impressive.
Scale-out works.
Scaling out works fine.
No need to overthink it, just scale out.
Import and export should be supported.
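Prometheus has no single "export/import" command, but since v2.24 `promtool` can backfill OpenMetrics-format text into TSDB blocks, which can then be dropped into the new instance's data directory. A hedged sketch (file names and paths are examples):

```shell
# Sketch: backfill samples into TSDB blocks with promtool.
# metrics.om must be OpenMetrics text with timestamps and a
# trailing "# EOF" line; exporting it from the old instance
# (e.g. via the query API) is a separate step.
promtool tsdb create-blocks-from openmetrics metrics.om ./blocks

# Then move the generated block directories into the target
# Prometheus data dir (with the instance stopped) and restart it.
```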