When I run the deploy command, it errors out during the step that checks and auto-repairs potential risks in the cluster. Passwordless SSH is already configured.

You're installing as root, right? Apart from the local machine, can root SSH to the other IPs normally?

Yes, all of the machines can SSH to each other normally.
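
A quick way to double-check that non-interactively from the control machine is something like the following (172.17.0.3 is just one of the hosts, as an example):

    # should print the remote hostname without prompting for a password
    ssh -o BatchMode=yes root@172.17.0.3 hostname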

tiup cluster deploy tidb-test v5.2.3 ./topology.yaml --user root
All 6 machines can SSH to each other without a password. One more question: these six machines are containers I created from the Docker CentOS image, and I'm not sure whether that matters. After running the deploy command above, the output is shown below; part of it does succeed.

  • Detect CPU Arch Name

    • Detecting node 172.17.0.2 Arch info … Done
    • Detecting node 172.17.0.3 Arch info … Done
    • Detecting node 172.17.0.4 Arch info … Done
    • Detecting node 172.17.0.5 Arch info … Done
    • Detecting node 172.17.0.6 Arch info … Done
    • Detecting node 172.17.0.7 Arch info … Done
  • Detect CPU OS Name

    • Detecting node 172.17.0.2 OS info … Done
    • Detecting node 172.17.0.3 OS info … Done
    • Detecting node 172.17.0.4 OS info … Done
    • Detecting node 172.17.0.5 OS info … Done
    • Detecting node 172.17.0.6 OS info … Done
    • Detecting node 172.17.0.7 OS info … Done
      Please confirm your topology:
      Cluster type: tidb
      Cluster name: tidb-test
      Cluster version: v5.2.3
Role          Host        Ports        OS/Arch       Directories
pd            172.17.0.2  2379/2380    linux/x86_64  /tidb-deploy/pd-2379,/tidb-data/pd-2379
pd            172.17.0.3  2379/2380    linux/x86_64  /tidb-deploy/pd-2379,/tidb-data/pd-2379
pd            172.17.0.4  2379/2380    linux/x86_64  /tidb-deploy/pd-2379,/tidb-data/pd-2379
tikv          172.17.0.5  20160/20180  linux/x86_64  /tidb-deploy/tikv-20160,/tidb-data/tikv-20160
tikv          172.17.0.6  20160/20180  linux/x86_64  /tidb-deploy/tikv-20160,/tidb-data/tikv-20160
tikv          172.17.0.7  20160/20180  linux/x86_64  /tidb-deploy/tikv-20160,/tidb-data/tikv-20160
tidb          172.17.0.2  4000/10080   linux/x86_64  /tidb-deploy/tidb-4000
tidb          172.17.0.3  4000/10080   linux/x86_64  /tidb-deploy/tidb-4000
tidb          172.17.0.4  4000/10080   linux/x86_64  /tidb-deploy/tidb-4000
prometheus    172.17.0.2  9090         linux/x86_64  /tidb-deploy/prometheus-9090,/tidb-data/prometheus-9090
grafana       172.17.0.2  3000         linux/x86_64  /tidb-deploy/grafana-3000
alertmanager  172.17.0.2  9093/9094    linux/x86_64  /tidb-deploy/alertmanager-9093,/tidb-data/alertmanager-9093
Attention:
1. If the topology is not what you expected, check your yaml file.
2. Please confirm there is no port/directory conflicts in same host.
Do you want to continue? [y/N]: (default=N) y

  • Generate SSH keys … Done
  • Download TiDB components
    • Download pd:v5.2.3 (linux/amd64) … Done
    • Download tikv:v5.2.3 (linux/amd64) … Done
    • Download tidb:v5.2.3 (linux/amd64) … Done
    • Download prometheus:v5.2.3 (linux/amd64) … Done
    • Download grafana:v5.2.3 (linux/amd64) … Done
    • Download alertmanager: (linux/amd64) … Done
    • Download node_exporter: (linux/amd64) … Done
    • Download blackbox_exporter: (linux/amd64) … Done
  • Initialize target host environments
    • Prepare 172.17.0.6:22 … Done
    • Prepare 172.17.0.7:22 … Done
    • Prepare 172.17.0.2:22 … Done
    • Prepare 172.17.0.3:22 … Done
    • Prepare 172.17.0.4:22 … Done
    • Prepare 172.17.0.5:22 … Done
  • Deploy TiDB instance
    • Copy pd → 172.17.0.2 … Done
    • Copy pd → 172.17.0.3 … Done
    • Copy pd → 172.17.0.4 … Done
    • Copy tikv → 172.17.0.5 … Done
    • Copy tikv → 172.17.0.6 … Done
    • Copy tikv → 172.17.0.7 … Done
    • Copy tidb → 172.17.0.2 … Done
    • Copy tidb → 172.17.0.3 … Done
    • Copy tidb → 172.17.0.4 … Done
    • Copy prometheus → 172.17.0.2 … Done
    • Copy grafana → 172.17.0.2 … Done
    • Copy alertmanager → 172.17.0.2 … Done
    • Deploy node_exporter → 172.17.0.5 … Done
    • Deploy node_exporter → 172.17.0.6 … Done
    • Deploy node_exporter → 172.17.0.7 … Done
    • Deploy node_exporter → 172.17.0.2 … Done
    • Deploy node_exporter → 172.17.0.3 … Done
    • Deploy node_exporter → 172.17.0.4 … Done
    • Deploy blackbox_exporter → 172.17.0.4 … Done
    • Deploy blackbox_exporter → 172.17.0.5 … Done
    • Deploy blackbox_exporter → 172.17.0.6 … Done
    • Deploy blackbox_exporter → 172.17.0.7 … Done
    • Deploy blackbox_exporter → 172.17.0.2 … Done
    • Deploy blackbox_exporter → 172.17.0.3 … Done
  • Copy certificate to remote host
  • Init instance configs
    • Generate config pd → 172.17.0.2:2379 … Done
    • Generate config pd → 172.17.0.3:2379 … Done
    • Generate config pd → 172.17.0.4:2379 … Done
    • Generate config tikv → 172.17.0.5:20160 … Done
    • Generate config tikv → 172.17.0.6:20160 … Done
    • Generate config tikv → 172.17.0.7:20160 … Done
    • Generate config tidb → 172.17.0.2:4000 … Done
    • Generate config tidb → 172.17.0.3:4000 … Done
    • Generate config tidb → 172.17.0.4:4000 … Done
    • Generate config prometheus → 172.17.0.2:9090 … Done
    • Generate config grafana → 172.17.0.2:3000 … Done
    • Generate config alertmanager → 172.17.0.2:9093 … Done
  • Init monitor configs
    • Generate config node_exporter → 172.17.0.2 … Done
    • Generate config node_exporter → 172.17.0.3 … Done
    • Generate config node_exporter → 172.17.0.4 … Done
    • Generate config node_exporter → 172.17.0.5 … Done
    • Generate config node_exporter → 172.17.0.6 … Done
    • Generate config node_exporter → 172.17.0.7 … Done
    • Generate config blackbox_exporter → 172.17.0.2 … Done
    • Generate config blackbox_exporter → 172.17.0.3 … Done
    • Generate config blackbox_exporter → 172.17.0.4 … Done
    • Generate config blackbox_exporter → 172.17.0.5 … Done
    • Generate config blackbox_exporter → 172.17.0.6 … Done
    • Generate config blackbox_exporter → 172.17.0.7 … Done
      Enabling component pd
      Enabling instance 172.17.0.4:2379
      Enabling instance 172.17.0.3:2379
      Enabling instance 172.17.0.2:2379
      Failed to get D-Bus connection: Operation not permitted

Failed to get D-Bus connection: Operation not permitted

Failed to get D-Bus connection: Operation not permitted

Error: failed to enable/disable pd: failed to enable: 172.17.0.3 pd-2379.service, please check the instance's log(/tidb-deploy/pd-2379/log) for more detail.: executor.ssh.execute_failed: Failed to execute command over SSH for 'tidb@172.17.0.3:22' {ssh_stderr: Failed to get D-Bus connection: Operation not permitted
, ssh_stdout: , ssh_command: export LANG=C; PATH=$PATH:/bin:/sbin:/usr/bin:/usr/sbin; /usr/bin/sudo -H bash -c "systemctl daemon-reload && systemctl enable pd-2379.service"}, cause: Process exited with status 1

Verbose debug logs has been written to /root/.tiup/logs/tiup-cluster-debug-2024-07-29-03-25-08.log.

Containers created from the Docker CentOS image don't run systemd, so systemctl most likely doesn't work there; you can test it manually.
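
For example, you can rerun the exact command TiUP attempted (it is shown in the ssh_command field of the error above) directly on one of the containers; without systemd running as PID 1 it fails with the same "Failed to get D-Bus connection" message:

    # run inside 172.17.0.3 (or over ssh from the control machine)
    sudo systemctl daemon-reload && sudo systemctl enable pd-2379.service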

TiDB can't be deployed with Docker like that.

Why not? I create multiple containers with Docker, pick one of them as the control machine, and then set up TiKV, PD, and TiDB in the other containers. Doesn't that work?
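
The step that fails is systemctl enable pd-2379.service: TiUP manages every component as a systemd service, and a container started with a plain docker run from the CentOS image has no systemd running as PID 1, which is exactly what the "Failed to get D-Bus connection" error means. If you only want to experiment in containers, one common workaround (a sketch only, not an officially supported setup; the container name is just an example) is to start the CentOS containers with systemd as the init process:

    # privileged CentOS 7 container that boots systemd, so systemctl works inside it;
    # adjust the name, image tag and volumes to your environment
    docker run -d --privileged --name tidb-node1 \
      -v /sys/fs/cgroup:/sys/fs/cgroup:ro \
      centos:7 /usr/sbin/init

Even then, the reply above stands: this is only suitable for a test environment. With systemd running, the deploy may still trip over tools missing from the minimal image, as in the next reply.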

The error says the ss command doesn't exist; install the iproute package and then try again.
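
On CentOS that would be, for example:

    # run inside each container; the ss utility is provided by the iproute package
    yum install -y iproute
    which ss    # confirm the binary is now available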