TiDB Binlog Deployment Topology

To get help more efficiently, please provide the following information; a clearly described problem will be resolved faster:

【TiDB Version】
v4.0.11
【Issue Description】
Running tiup cluster deploy daddylab-tidb-cluster v4.0.11 complex-tidb-binlog.yaml reports the following error:

Attention:
1. If the topology is not what you expected, check your yaml file.
2. Please confirm there is no port/directory conflicts in same host.
Do you want to continue? [y/N]: y

  • Generate SSH keys … Done
  • Download TiDB components
    • Download pd:v4.0.11 (linux/amd64) … Done
    • Download tikv:v4.0.11 (linux/amd64) … Done
    • Download pump:v4.0.11 (linux/amd64) … Done
    • Download tidb:v4.0.11 (linux/amd64) … Done
    • Download drainer:v4.0.11 (linux/amd64) … Done
    • Download prometheus:v4.0.11 (linux/amd64) … Done
    • Download grafana:v4.0.11 (linux/amd64) … Done
    • Download alertmanager:v0.17.0 (linux/amd64) … Done
    • Download node_exporter:v0.17.0 (linux/amd64) … Done
    • Download blackbox_exporter:v0.12.0 (linux/amd64) … Done
  • Initialize target host environments
    • Prepare 172.16.12.159:22 … ⠸ EnvInit: user=tidb, host=172.16.12.159
    • Prepare 172.16.12.173:22 … ⠸ EnvInit: user=tidb, host=172.16.12.173
    • Prepare 172.16.12.211:22 … ⠸ EnvInit: user=tidb, host=172.16.12.211
    • Prepare 172.16.12.214:22 … ⠸ EnvInit: user=tidb, host=172.16.12.214
    • Prepare 172.16.12.131:22 … ⠸ EnvInit: user=tidb, host=172.16.12.131
    • Prepare 172.16.12.146:22 … ⠸ EnvInit: user=tidb, host=172.16.12.146
    • Prepare 172.16.12.165:22 … ⠸ EnvInit: user=tidb, host=172.16.12.165
    • Prepare 172.16.12.218:22 … ⠸ EnvInit: user=tidb, host=172.16.12.218
    • Prepare 172.16.12.204:22 … ⠸ EnvInit: user=tidb, host=172.16.12.204
    • Prepare 172.16.12.203:22 … ⠸ EnvInit: user=tidb, host=172.16.12.203
    • Prepare 172.16.12.175:22 … ⠸ EnvInit: user=tidb, host=172.16.12.175
    • Prepare 172.16.12.156:22 … ⠸ EnvInit: user=tidb, host=172.16.12.156
    • Prepare 172.16.12.141:22 … ⠸ EnvInit: user=tidb, host=172.16.12.141
      panic: send on closed channel

goroutine 623 [running]:
github.com/appleboy/easyssh-proxy.(*MakeConfig).Stream.func1.1(0xc00032d700, 0xc000136660, 0xc000885480)
github.com/appleboy/easyssh-proxy@v1.3.2/easyssh.go:255 +0x7c
created by github.com/appleboy/easyssh-proxy.(*MakeConfig).Stream.func1
github.com/appleboy/easyssh-proxy@v1.3.2/easyssh.go:253 +0x285
Error: run /root/.tiup/components/cluster/v1.3.2/tiup-cluster (wd:/root/.tiup/data/SQeVcxg) failed: exit status 2


If this is a performance-tuning or troubleshooting question, please download and run the diagnostic script, then select all of the terminal output and paste it here.

1. Please check whether network communication between the nodes is normal, and whether the firewall and SELinux are disabled on all hosts (a quick check is sketched below);
2. If convenient, please also share your topology file.
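
For example, assuming root SSH access from the control machine to every node, a loop like the following (adjust the IP list to match your topology) verifies SSH reachability plus the SELinux and firewalld state on each host:

for ip in 172.16.12.159 172.16.12.173 172.16.12.211 172.16.12.214 172.16.12.131 172.16.12.146 \
          172.16.12.165 172.16.12.218 172.16.12.204 172.16.12.203 172.16.12.175 172.16.12.156 172.16.12.141; do
  echo "== $ip =="
  # 5s connect timeout; print SELinux mode, firewalld state, and kernel version
  ssh -o ConnectTimeout=5 root@$ip 'getenforce; systemctl is-active firewalld; uname -r'
done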

Thanks for the reply.

SELinux status:
salt -L "172.16.12.141,172.16.12.218,172.16.12.204,172.16.12.165,172.16.12.156,172.16.12.175,172.16.12.203,172.16.12.211,172.16.12.173,172.16.12.159,172.16.12.146,172.16.12.131,172.16.12.214" cmd.run 'getenforce'
172.16.12.165:
Disabled
172.16.12.214:
Disabled
172.16.12.204:
Disabled
172.16.12.131:
Disabled
172.16.12.173:
Disabled
172.16.12.175:
Disabled
172.16.12.203:
Disabled
172.16.12.141:
Disabled
172.16.12.146:
Disabled
172.16.12.211:
Disabled
172.16.12.156:
Disabled
172.16.12.218:
Disabled
172.16.12.159:
Disabled

firewalld:
salt -L "172.16.12.141,172.16.12.218,172.16.12.204,172.16.12.165,172.16.12.156,172.16.12.175,172.16.12.203,172.16.12.211,172.16.12.173,172.16.12.159,172.16.12.146,172.16.12.131,172.16.12.214" service.status firewalld
172.16.12.165:
False
172.16.12.203:
False
172.16.12.173:
False
172.16.12.204:
False
172.16.12.146:
False
172.16.12.211:
False
172.16.12.131:
False
172.16.12.159:
False
172.16.12.175:
False
172.16.12.141:
False
172.16.12.214:
False
172.16.12.156:
False
172.16.12.218:
False

iptables:
salt -L "172.16.12.141,172.16.12.218,172.16.12.204,172.16.12.165,172.16.12.156,172.16.12.175,172.16.12.203,172.16.12.211,172.16.12.173,172.16.12.159,172.16.12.146,172.16.12.131,172.16.12.214" service.status iptables
172.16.12.214:
False
172.16.12.141:
False
172.16.12.146:
False
172.16.12.211:
False
172.16.12.173:
False
172.16.12.159:
False
172.16.12.131:
False
172.16.12.175:
False
172.16.12.204:
False
172.16.12.165:
False
172.16.12.203:
False
172.16.12.156:
False
172.16.12.218:
False

No firewall software has been installed on these hosts.

########################################################

# cat complex-tidb-binlog.yaml 
# # Global variables are applied to all deployments and used as the default value of
# # the deployments if a specific deployment value is missing.
global:
  user: "tidb"
  ssh_port: 22
  deploy_dir: "/tidb-deploy"
  data_dir: "/tidb-data"

# # Monitored variables are applied to all the machines.
monitored:
  node_exporter_port: 9100
  blackbox_exporter_port: 9115
  # deploy_dir: "/tidb-deploy/monitored-9100"
  # data_dir: "/tidb-data/monitored-9100"
  # log_dir: "/tidb-deploy/monitored-9100/log"

# # Server configs are used to specify the runtime configuration of TiDB components.
# # All configuration items can be found in TiDB docs:
# # - TiDB: https://pingcap.com/docs/stable/reference/configuration/tidb-server/configuration-file/
# # - TiKV: https://pingcap.com/docs/stable/reference/configuration/tikv-server/configuration-file/
# # - PD: https://pingcap.com/docs/stable/reference/configuration/pd-server/configuration-file/
# # All configuration items use points to represent the hierarchy, e.g:
# #   readpool.storage.use-unified-pool
# #      
# # You can overwrite this configuration via the instance-level `config` field.

server_configs:
  tidb:
    log.slow-threshold: 300
    binlog.enable: true
    binlog.ignore-error: true
  tikv:
    # server.grpc-concurrency: 4
    # raftstore.apply-pool-size: 2
    # raftstore.store-pool-size: 2
    # rocksdb.max-sub-compactions: 1
    # storage.block-cache.capacity: "16GB"
    # readpool.unified.max-thread-count: 12
    readpool.storage.use-unified-pool: false
    readpool.coprocessor.use-unified-pool: true
  pd:
    schedule.leader-schedule-limit: 4
    schedule.region-schedule-limit: 2048
    schedule.replica-schedule-limit: 64

pd_servers:
  - host: 172.16.12.159
    # ssh_port: 22
    # name: "pd-1"
    # client_port: 2379
    # peer_port: 2380
    # deploy_dir: "/tidb-deploy/pd-2379"
    # data_dir: "/tidb-data/pd-2379"
    # log_dir: "/tidb-deploy/pd-2379/log"
    # numa_node: "0,1"
    # # The following configs are used to overwrite the `server_configs.pd` values.
    # config:
    #   schedule.max-merge-region-size: 20
    #   schedule.max-merge-region-keys: 200000
  - host: 172.16.12.173
  - host: 172.16.12.211
tidb_servers:
  - host: 172.16.12.203
    # ssh_port: 22
    # port: 4000
    # status_port: 10080
    # deploy_dir: "/tidb-deploy/tidb-4000"
    # log_dir: "/tidb-deploy/tidb-4000/log"
    # numa_node: "0,1"
    # # The following configs are used to overwrite the `server_configs.tidb` values.
    # config:
    #   log.slow-query-file: tidb-slow-overwrited.log
  - host: 172.16.12.175
  - host: 172.16.12.156
tikv_servers:
  - host: 172.16.12.214
    # ssh_port: 22
    # port: 20160
    # status_port: 20180
    # deploy_dir: "/tidb-deploy/tikv-20160"
    # data_dir: "/tidb-data/tikv-20160"
    # log_dir: "/tidb-deploy/tikv-20160/log"
    # numa_node: "0,1"
    # # The following configs are used to overwrite the `server_configs.tikv` values.
    # config:
    #   server.grpc-concurrency: 4
    #   server.labels: { zone: "zone1", dc: "dc1", host: "host1" }
  - host: 172.16.12.131
  - host: 172.16.12.146

pump_servers:
  - host: 172.16.12.165
    ssh_port: 22
    port: 8250
    deploy_dir: "/tidb-deploy/pump-8249"
    data_dir: "/tidb-data/pump-8249"
    # The following configs are used to overwrite the `server_configs.drainer` values.
    config:
      gc: 7
  - host: 172.16.12.218
    ssh_port: 22
    port: 8250
    deploy_dir: "/tidb-deploy/pump-8249"
    data_dir: "/tidb-data/pump-8249"
    # The following configs are used to overwrite the `server_configs.drainer` values.
    config:
      gc: 7
  - host: 172.16.12.204
    ssh_port: 22
    port: 8250
    deploy_dir: "/tidb-deploy/pump-8249"
    data_dir: "/tidb-data/pump-8249"
    # The following configs are used to overwrite the `server_configs.drainer` values.
    config:
      gc: 7
drainer_servers:
  - host: 172.16.12.141
    port: 8249
    data_dir: "/tidb-data/drainer-8249"
    # If drainer doesn't have a checkpoint, use initial commitTS as the initial checkpoint.
    # Will get a latest timestamp from pd if commit_ts is set to -1 (the default value).
    commit_ts: -1
    deploy_dir: "/tidb-deploy/drainer-8249"
    # The following configs are used to overwrite the `server_configs.drainer` values.
    config:
      syncer.db-type: "tidb"
      syncer.to.host: "172.16.12.203"
      syncer.to.user: "root"
      syncer.to.password: ""
      syncer.to.port: 4000

monitoring_servers:
  - host: 172.16.12.141
    # ssh_port: 22
    # port: 9090
    # deploy_dir: "/tidb-deploy/prometheus-8249"
    # data_dir: "/tidb-data/prometheus-8249"
    # log_dir: "/tidb-deploy/prometheus-8249/log"

grafana_servers:
  - host: 172.16.12.141
    # port: 3000
    # deploy_dir: /tidb-deploy/grafana-3000

alertmanager_servers:
  - host: 172.16.12.141
    # ssh_port: 22
    # web_port: 9093
    # cluster_port: 9094
    # deploy_dir: "/tidb-deploy/alertmanager-9093"
    # data_dir: "/tidb-data/alertmanager-9093"
    # log_dir: "/tidb-deploy/alertmanager-9093/log"

sudo privileges:

salt -L "172.16.12.141,172.16.12.218,172.16.12.204,172.16.12.165,172.16.12.156,172.16.12.175,172.16.12.203,172.16.12.211,172.16.12.173,172.16.12.159,172.16.12.146,172.16.12.131,172.16.12.214" cmd.run "sudo -l -U tidb"
172.16.12.211:
    Matching Defaults entries for tidb on tidb-cluster-pd3:
        !visiblepw, always_set_home, match_group_by_gid, always_query_group_plugin, env_reset, env_keep="COLORS DISPLAY HOSTNAME HISTSIZE KDEDIR LS_COLORS", env_keep+="MAIL PS1 PS2 QTDIR USERNAME LANG LC_ADDRESS LC_CTYPE", env_keep+="LC_COLLATE LC_IDENTIFICATION LC_MEASUREMENT LC_MESSAGES", env_keep+="LC_MONETARY LC_NAME LC_NUMERIC LC_PAPER LC_TELEPHONE", env_keep+="LC_TIME LC_ALL LANGUAGE LINGUAS _XKB_CHARSET XAUTHORITY", secure_path=/sbin\:/bin\:/usr/sbin\:/usr/bin
    
    User tidb may run the following commands on tidb-cluster-pd3:
        (ALL) NOPASSWD: ALL
        (root) NOPASSWD: ALL
172.16.12.165:
    Matching Defaults entries for tidb on tidb-cluster-pump1:
        !visiblepw, always_set_home, match_group_by_gid, always_query_group_plugin, env_reset, env_keep="COLORS DISPLAY HOSTNAME HISTSIZE KDEDIR LS_COLORS", env_keep+="MAIL PS1 PS2 QTDIR USERNAME LANG LC_ADDRESS LC_CTYPE", env_keep+="LC_COLLATE LC_IDENTIFICATION LC_MEASUREMENT LC_MESSAGES", env_keep+="LC_MONETARY LC_NAME LC_NUMERIC LC_PAPER LC_TELEPHONE", env_keep+="LC_TIME LC_ALL LANGUAGE LINGUAS _XKB_CHARSET XAUTHORITY", secure_path=/sbin\:/bin\:/usr/sbin\:/usr/bin
    
    User tidb may run the following commands on tidb-cluster-pump1:
        (ALL) NOPASSWD: ALL
        (root) NOPASSWD: ALL
172.16.12.204:
    Matching Defaults entries for tidb on tidb-cluster-pump3:
        !visiblepw, always_set_home, match_group_by_gid, always_query_group_plugin, env_reset, env_keep="COLORS DISPLAY HOSTNAME HISTSIZE KDEDIR LS_COLORS", env_keep+="MAIL PS1 PS2 QTDIR USERNAME LANG LC_ADDRESS LC_CTYPE", env_keep+="LC_COLLATE LC_IDENTIFICATION LC_MEASUREMENT LC_MESSAGES", env_keep+="LC_MONETARY LC_NAME LC_NUMERIC LC_PAPER LC_TELEPHONE", env_keep+="LC_TIME LC_ALL LANGUAGE LINGUAS _XKB_CHARSET XAUTHORITY", secure_path=/sbin\:/bin\:/usr/sbin\:/usr/bin
    
    User tidb may run the following commands on tidb-cluster-pump3:
        (ALL) NOPASSWD: ALL
        (root) NOPASSWD: ALL
172.16.12.214:
    Matching Defaults entries for tidb on tidb-cluster-tikv1:
        !visiblepw, always_set_home, match_group_by_gid, always_query_group_plugin, env_reset, env_keep="COLORS DISPLAY HOSTNAME HISTSIZE KDEDIR LS_COLORS", env_keep+="MAIL PS1 PS2 QTDIR USERNAME LANG LC_ADDRESS LC_CTYPE", env_keep+="LC_COLLATE LC_IDENTIFICATION LC_MEASUREMENT LC_MESSAGES", env_keep+="LC_MONETARY LC_NAME LC_NUMERIC LC_PAPER LC_TELEPHONE", env_keep+="LC_TIME LC_ALL LANGUAGE LINGUAS _XKB_CHARSET XAUTHORITY", secure_path=/sbin\:/bin\:/usr/sbin\:/usr/bin
    
    User tidb may run the following commands on tidb-cluster-tikv1:
        (ALL) NOPASSWD: ALL
        (root) NOPASSWD: ALL
172.16.12.173:
    Matching Defaults entries for tidb on tidb-cluster-pd2:
        !visiblepw, always_set_home, match_group_by_gid, always_query_group_plugin, env_reset, env_keep="COLORS DISPLAY HOSTNAME HISTSIZE KDEDIR LS_COLORS", env_keep+="MAIL PS1 PS2 QTDIR USERNAME LANG LC_ADDRESS LC_CTYPE", env_keep+="LC_COLLATE LC_IDENTIFICATION LC_MEASUREMENT LC_MESSAGES", env_keep+="LC_MONETARY LC_NAME LC_NUMERIC LC_PAPER LC_TELEPHONE", env_keep+="LC_TIME LC_ALL LANGUAGE LINGUAS _XKB_CHARSET XAUTHORITY", secure_path=/sbin\:/bin\:/usr/sbin\:/usr/bin
    
    User tidb may run the following commands on tidb-cluster-pd2:
        (ALL) NOPASSWD: ALL
        (root) NOPASSWD: ALL
172.16.12.159:
    Matching Defaults entries for tidb on tidb-cluster-pd1:
        !visiblepw, always_set_home, match_group_by_gid, always_query_group_plugin, env_reset, env_keep="COLORS DISPLAY HOSTNAME HISTSIZE KDEDIR LS_COLORS", env_keep+="MAIL PS1 PS2 QTDIR USERNAME LANG LC_ADDRESS LC_CTYPE", env_keep+="LC_COLLATE LC_IDENTIFICATION LC_MEASUREMENT LC_MESSAGES", env_keep+="LC_MONETARY LC_NAME LC_NUMERIC LC_PAPER LC_TELEPHONE", env_keep+="LC_TIME LC_ALL LANGUAGE LINGUAS _XKB_CHARSET XAUTHORITY", secure_path=/sbin\:/bin\:/usr/sbin\:/usr/bin
    
    User tidb may run the following commands on tidb-cluster-pd1:
        (ALL) NOPASSWD: ALL
        (root) NOPASSWD: ALL
172.16.12.156:
    Matching Defaults entries for tidb on tidb-cluster-tidb3:
        !visiblepw, always_set_home, match_group_by_gid, always_query_group_plugin, env_reset, env_keep="COLORS DISPLAY HOSTNAME HISTSIZE KDEDIR LS_COLORS", env_keep+="MAIL PS1 PS2 QTDIR USERNAME LANG LC_ADDRESS LC_CTYPE", env_keep+="LC_COLLATE LC_IDENTIFICATION LC_MEASUREMENT LC_MESSAGES", env_keep+="LC_MONETARY LC_NAME LC_NUMERIC LC_PAPER LC_TELEPHONE", env_keep+="LC_TIME LC_ALL LANGUAGE LINGUAS _XKB_CHARSET XAUTHORITY", secure_path=/sbin\:/bin\:/usr/sbin\:/usr/bin
    
    User tidb may run the following commands on tidb-cluster-tidb3:
        (ALL) NOPASSWD: ALL
        (root) NOPASSWD: ALL
172.16.12.131:
    Matching Defaults entries for tidb on tidb-cluster-tikv2:
        !visiblepw, always_set_home, match_group_by_gid, always_query_group_plugin, env_reset, env_keep="COLORS DISPLAY HOSTNAME HISTSIZE KDEDIR LS_COLORS", env_keep+="MAIL PS1 PS2 QTDIR USERNAME LANG LC_ADDRESS LC_CTYPE", env_keep+="LC_COLLATE LC_IDENTIFICATION LC_MEASUREMENT LC_MESSAGES", env_keep+="LC_MONETARY LC_NAME LC_NUMERIC LC_PAPER LC_TELEPHONE", env_keep+="LC_TIME LC_ALL LANGUAGE LINGUAS _XKB_CHARSET XAUTHORITY", secure_path=/sbin\:/bin\:/usr/sbin\:/usr/bin
    
    User tidb may run the following commands on tidb-cluster-tikv2:
        (ALL) NOPASSWD: ALL
        (root) NOPASSWD: ALL
172.16.12.141:
    Matching Defaults entries for tidb on tidb-cluster-drainer:
        !visiblepw, always_set_home, match_group_by_gid, always_query_group_plugin, env_reset, env_keep="COLORS DISPLAY HOSTNAME HISTSIZE KDEDIR LS_COLORS", env_keep+="MAIL PS1 PS2 QTDIR USERNAME LANG LC_ADDRESS LC_CTYPE", env_keep+="LC_COLLATE LC_IDENTIFICATION LC_MEASUREMENT LC_MESSAGES", env_keep+="LC_MONETARY LC_NAME LC_NUMERIC LC_PAPER LC_TELEPHONE", env_keep+="LC_TIME LC_ALL LANGUAGE LINGUAS _XKB_CHARSET XAUTHORITY", secure_path=/sbin\:/bin\:/usr/sbin\:/usr/bin
    
    User tidb may run the following commands on tidb-cluster-drainer:
        (ALL) NOPASSWD: ALL
        (root) NOPASSWD: ALL
172.16.12.146:
    Matching Defaults entries for tidb on tidb-cluster-tikv3:
        !visiblepw, always_set_home, match_group_by_gid, always_query_group_plugin, env_reset, env_keep="COLORS DISPLAY HOSTNAME HISTSIZE KDEDIR LS_COLORS", env_keep+="MAIL PS1 PS2 QTDIR USERNAME LANG LC_ADDRESS LC_CTYPE", env_keep+="LC_COLLATE LC_IDENTIFICATION LC_MEASUREMENT LC_MESSAGES", env_keep+="LC_MONETARY LC_NAME LC_NUMERIC LC_PAPER LC_TELEPHONE", env_keep+="LC_TIME LC_ALL LANGUAGE LINGUAS _XKB_CHARSET XAUTHORITY", secure_path=/sbin\:/bin\:/usr/sbin\:/usr/bin
    
    User tidb may run the following commands on tidb-cluster-tikv3:
        (ALL) NOPASSWD: ALL
        (root) NOPASSWD: ALL
172.16.12.175:
    Matching Defaults entries for tidb on tidb-cluster-tidb2:
        !visiblepw, always_set_home, match_group_by_gid, always_query_group_plugin, env_reset, env_keep="COLORS DISPLAY HOSTNAME HISTSIZE KDEDIR LS_COLORS", env_keep+="MAIL PS1 PS2 QTDIR USERNAME LANG LC_ADDRESS LC_CTYPE", env_keep+="LC_COLLATE LC_IDENTIFICATION LC_MEASUREMENT LC_MESSAGES", env_keep+="LC_MONETARY LC_NAME LC_NUMERIC LC_PAPER LC_TELEPHONE", env_keep+="LC_TIME LC_ALL LANGUAGE LINGUAS _XKB_CHARSET XAUTHORITY", secure_path=/sbin\:/bin\:/usr/sbin\:/usr/bin
    
    User tidb may run the following commands on tidb-cluster-tidb2:
        (ALL) NOPASSWD: ALL
        (root) NOPASSWD: ALL
172.16.12.218:
    Matching Defaults entries for tidb on tidb-cluster-pump2:
        !visiblepw, always_set_home, match_group_by_gid, always_query_group_plugin, env_reset, env_keep="COLORS DISPLAY HOSTNAME HISTSIZE KDEDIR LS_COLORS", env_keep+="MAIL PS1 PS2 QTDIR USERNAME LANG LC_ADDRESS LC_CTYPE", env_keep+="LC_COLLATE LC_IDENTIFICATION LC_MEASUREMENT LC_MESSAGES", env_keep+="LC_MONETARY LC_NAME LC_NUMERIC LC_PAPER LC_TELEPHONE", env_keep+="LC_TIME LC_ALL LANGUAGE LINGUAS _XKB_CHARSET XAUTHORITY", secure_path=/sbin\:/bin\:/usr/sbin\:/usr/bin
    
    User tidb may run the following commands on tidb-cluster-pump2:
        (ALL) NOPASSWD: ALL
        (root) NOPASSWD: ALL
172.16.12.203:
    Matching Defaults entries for tidb on tidb-cluster-tidb1:
        !visiblepw, always_set_home, match_group_by_gid, always_query_group_plugin, env_reset, env_keep="COLORS DISPLAY HOSTNAME HISTSIZE KDEDIR LS_COLORS", env_keep+="MAIL PS1 PS2 QTDIR USERNAME LANG LC_ADDRESS LC_CTYPE", env_keep+="LC_COLLATE LC_IDENTIFICATION LC_MEASUREMENT LC_MESSAGES", env_keep+="LC_MONETARY LC_NAME LC_NUMERIC LC_PAPER LC_TELEPHONE", env_keep+="LC_TIME LC_ALL LANGUAGE LINGUAS _XKB_CHARSET XAUTHORITY", secure_path=/sbin\:/bin\:/usr/sbin\:/usr/bin
    
    User tidb may run the following commands on tidb-cluster-tidb1:
        (ALL) NOPASSWD: ALL
        (root) NOPASSWD: ALL

Could you try specifying the user with --user when running deploy and see if that helps? Refer to:
https://docs.pingcap.com/zh/tidb/stable/production-deployment-using-tiup#第-6-步检查部署的-tidb-集群情况

The same panic occurs:

# tiup cluster deploy --user=root daddylab-tidb-cluster v4.0.11 complex-tidb-binlog.yaml 
Starting component `cluster`: /root/.tiup/components/cluster/v1.3.2/tiup-cluster deploy --user=root daddylab-tidb-cluster v4.0.11 complex-tidb-binlog.yaml
Please confirm your topology:
Cluster type:    tidb
Cluster name:    daddylab-tidb-cluster
Cluster version: v4.0.11
Type          Host           Ports        OS/Arch       Directories
----          ----           -----        -------       -----------
pd            172.16.12.159  2379/2380    linux/x86_64  /tidb-deploy/pd-2379,/tidb-data/pd-2379
pd            172.16.12.173  2379/2380    linux/x86_64  /tidb-deploy/pd-2379,/tidb-data/pd-2379
pd            172.16.12.211  2379/2380    linux/x86_64  /tidb-deploy/pd-2379,/tidb-data/pd-2379
tikv          172.16.12.214  20160/20180  linux/x86_64  /tidb-deploy/tikv-20160,/tidb-data/tikv-20160
tikv          172.16.12.131  20160/20180  linux/x86_64  /tidb-deploy/tikv-20160,/tidb-data/tikv-20160
tikv          172.16.12.146  20160/20180  linux/x86_64  /tidb-deploy/tikv-20160,/tidb-data/tikv-20160
pump          172.16.12.165  8250         linux/x86_64  /tidb-deploy/pump-8249,/tidb-data/pump-8249
pump          172.16.12.218  8250         linux/x86_64  /tidb-deploy/pump-8249,/tidb-data/pump-8249
pump          172.16.12.204  8250         linux/x86_64  /tidb-deploy/pump-8249,/tidb-data/pump-8249
tidb          172.16.12.203  4000/10080   linux/x86_64  /tidb-deploy/tidb-4000
tidb          172.16.12.175  4000/10080   linux/x86_64  /tidb-deploy/tidb-4000
tidb          172.16.12.156  4000/10080   linux/x86_64  /tidb-deploy/tidb-4000
drainer       172.16.12.141  8249         linux/x86_64  /tidb-deploy/drainer-8249,/tidb-data/drainer-8249
prometheus    172.16.12.141  9090         linux/x86_64  /tidb-deploy/prometheus-9090,/tidb-data/prometheus-9090
grafana       172.16.12.141  3000         linux/x86_64  /tidb-deploy/grafana-3000
alertmanager  172.16.12.141  9093/9094    linux/x86_64  /tidb-deploy/alertmanager-9093,/tidb-data/alertmanager-9093
Attention:
    1. If the topology is not what you expected, check your yaml file.
    2. Please confirm there is no port/directory conflicts in same host.
Do you want to continue? [y/N]:  y
+ Generate SSH keys ... Done
+ Download TiDB components
  - Download pd:v4.0.11 (linux/amd64) ... Done
  - Download tikv:v4.0.11 (linux/amd64) ... Done
  - Download pump:v4.0.11 (linux/amd64) ... Done
  - Download tidb:v4.0.11 (linux/amd64) ... Done
  - Download drainer:v4.0.11 (linux/amd64) ... Done
  - Download prometheus:v4.0.11 (linux/amd64) ... Done
  - Download grafana:v4.0.11 (linux/amd64) ... Done
  - Download alertmanager:v0.17.0 (linux/amd64) ... Done
  - Download node_exporter:v0.17.0 (linux/amd64) ... Done
  - Download blackbox_exporter:v0.12.0 (linux/amd64) ... Done
+ Initialize target host environments
  - Prepare 172.16.12.159:22 ... ⠸ EnvInit: user=tidb, host=172.16.12.159
  - Prepare 172.16.12.173:22 ... ⠸ EnvInit: user=tidb, host=172.16.12.173
  - Prepare 172.16.12.211:22 ... ⠸ EnvInit: user=tidb, host=172.16.12.211
  - Prepare 172.16.12.214:22 ... ⠸ EnvInit: user=tidb, host=172.16.12.214
  - Prepare 172.16.12.131:22 ... ⠸ EnvInit: user=tidb, host=172.16.12.131
  - Prepare 172.16.12.146:22 ... ⠸ EnvInit: user=tidb, host=172.16.12.146
  - Prepare 172.16.12.165:22 ... ⠸ EnvInit: user=tidb, host=172.16.12.165
  - Prepare 172.16.12.218:22 ... ⠸ EnvInit: user=tidb, host=172.16.12.218
  - Prepare 172.16.12.204:22 ... ⠸ EnvInit: user=tidb, host=172.16.12.204
  - Prepare 172.16.12.203:22 ... ⠸ EnvInit: user=tidb, host=172.16.12.203
  - Prepare 172.16.12.175:22 ... ⠸ EnvInit: user=tidb, host=172.16.12.175
  - Prepare 172.16.12.156:22 ... ⠸ EnvInit: user=tidb, host=172.16.12.156
  - Prepare 172.16.12.141:22 ... ⠸ EnvInit: user=tidb, host=172.16.12.141
panic: send on closed channel

goroutine 554 [running]:
github.com/appleboy/easyssh-proxy.(*MakeConfig).Stream.func1.1(0xc000653480, 0xc000975320, 0xc0007e7250)
	github.com/appleboy/easyssh-proxy@v1.3.2/easyssh.go:255 +0x7c
created by github.com/appleboy/easyssh-proxy.(*MakeConfig).Stream.func1
	github.com/appleboy/easyssh-proxy@v1.3.2/easyssh.go:253 +0x285
Error: run `/root/.tiup/components/cluster/v1.3.2/tiup-cluster` (wd:/root/.tiup/data/SQfEMYF) failed: exit status 2

During deployment, /var/log/messages on one of the nodes shows the following:

....
Mar  4 13:24:02 tidb-cluster-tikv1 systemd: Started Session 194 of user root.
Mar  4 13:24:02 tidb-cluster-tikv1 systemd-logind: New session 194 of user root.
Mar  4 13:24:02 tidb-cluster-tikv1 systemd-logind: Removed session 194.
Mar  4 13:24:02 tidb-cluster-tikv1 systemd: Started Session 195 of user root.
Mar  4 13:24:02 tidb-cluster-tikv1 systemd-logind: New session 195 of user root.
Mar  4 13:24:02 tidb-cluster-tikv1 su: (to tidb) root on none

Is passwordless SSH login configured between the control machine and the target nodes? If not, please add -p or -i to the deploy command for authentication and try again.
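
For example, with password authentication (-p) or a private key file (-i), along the lines of:

tiup cluster deploy --user=root -p daddylab-tidb-cluster v4.0.11 complex-tidb-binlog.yaml
tiup cluster deploy --user=root -i /root/.ssh/id_rsa daddylab-tidb-cluster v4.0.11 complex-tidb-binlog.yaml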

All hosts use the same SSH key:

salt -L "172.16.12.141,172.16.12.218,172.16.12.204,172.16.12.165,172.16.12.156,172.16.12.175,172.16.12.203,172.16.12.211,172.16.12.173,172.16.12.159,172.16.12.146,172.16.12.131,172.16.12.214" cmd.run "md5sum .ssh/*"
172.16.12.141:
    f7cc71c387ef59e361c0206f741466c3  .ssh/authorized_keys
    eb5ad325ff0c3a4be220a22a33a69378  .ssh/id_rsa
    1bbabcf244c62c9f457ca34bfb4862d5  .ssh/id_rsa.pub
    c655fe347d487d59c0348aed69357664  .ssh/known_hosts
172.16.12.159:
    f7cc71c387ef59e361c0206f741466c3  .ssh/authorized_keys
    eb5ad325ff0c3a4be220a22a33a69378  .ssh/id_rsa
    1bbabcf244c62c9f457ca34bfb4862d5  .ssh/id_rsa.pub
    a4d0645c61acda7f8d38d41cae846097  .ssh/known_hosts
172.16.12.146:
    f7cc71c387ef59e361c0206f741466c3  .ssh/authorized_keys
    eb5ad325ff0c3a4be220a22a33a69378  .ssh/id_rsa
    1bbabcf244c62c9f457ca34bfb4862d5  .ssh/id_rsa.pub
    b89317173d9ef0e45956c43d36d83fd3  .ssh/known_hosts
172.16.12.211:
    f7cc71c387ef59e361c0206f741466c3  .ssh/authorized_keys
    eb5ad325ff0c3a4be220a22a33a69378  .ssh/id_rsa
    1bbabcf244c62c9f457ca34bfb4862d5  .ssh/id_rsa.pub
    c390eb2c73a2891089f519e90fbfd757  .ssh/known_hosts
172.16.12.175:
    f7cc71c387ef59e361c0206f741466c3  .ssh/authorized_keys
    eb5ad325ff0c3a4be220a22a33a69378  .ssh/id_rsa
    1bbabcf244c62c9f457ca34bfb4862d5  .ssh/id_rsa.pub
    5d802fd78c2d0aaea54a2ce25c4aa16f  .ssh/known_hosts
172.16.12.204:
    f7cc71c387ef59e361c0206f741466c3  .ssh/authorized_keys
    eb5ad325ff0c3a4be220a22a33a69378  .ssh/id_rsa
    1bbabcf244c62c9f457ca34bfb4862d5  .ssh/id_rsa.pub
    f1ab57f805d538fc0f9e53379607bbe7  .ssh/known_hosts
172.16.12.214:
    f7cc71c387ef59e361c0206f741466c3  .ssh/authorized_keys
    eb5ad325ff0c3a4be220a22a33a69378  .ssh/id_rsa
    1bbabcf244c62c9f457ca34bfb4862d5  .ssh/id_rsa.pub
    1341c322bc8f65eaf5a97da9c1806c2e  .ssh/known_hosts
172.16.12.203:
    f7cc71c387ef59e361c0206f741466c3  .ssh/authorized_keys
    eb5ad325ff0c3a4be220a22a33a69378  .ssh/id_rsa
    1bbabcf244c62c9f457ca34bfb4862d5  .ssh/id_rsa.pub
    c655fe347d487d59c0348aed69357664  .ssh/known_hosts
172.16.12.218:
    f7cc71c387ef59e361c0206f741466c3  .ssh/authorized_keys
    eb5ad325ff0c3a4be220a22a33a69378  .ssh/id_rsa
    1bbabcf244c62c9f457ca34bfb4862d5  .ssh/id_rsa.pub
    f1ab57f805d538fc0f9e53379607bbe7  .ssh/known_hosts
172.16.12.131:
    f7cc71c387ef59e361c0206f741466c3  .ssh/authorized_keys
    eb5ad325ff0c3a4be220a22a33a69378  .ssh/id_rsa
    1bbabcf244c62c9f457ca34bfb4862d5  .ssh/id_rsa.pub
    aab24916fd35434b174933b254115b6b  .ssh/known_hosts
172.16.12.173:
    f7cc71c387ef59e361c0206f741466c3  .ssh/authorized_keys
    eb5ad325ff0c3a4be220a22a33a69378  .ssh/id_rsa
    1bbabcf244c62c9f457ca34bfb4862d5  .ssh/id_rsa.pub
    7e3db810e7a6db60b0e3d858cb9b5681  .ssh/known_hosts
172.16.12.165:
    f7cc71c387ef59e361c0206f741466c3  .ssh/authorized_keys
    eb5ad325ff0c3a4be220a22a33a69378  .ssh/id_rsa
    1bbabcf244c62c9f457ca34bfb4862d5  .ssh/id_rsa.pub
    1c3ac99b9962adc848755f794cf6d46e  .ssh/known_hosts
172.16.12.156:
    f7cc71c387ef59e361c0206f741466c3  .ssh/authorized_keys
    eb5ad325ff0c3a4be220a22a33a69378  .ssh/id_rsa
    1bbabcf244c62c9f457ca34bfb4862d5  .ssh/id_rsa.pub
    e938af4c16a25f0c77142dbc46c4edee  .ssh/known_hosts

authorized_keys of the root account on all nodes:

salt -L "172.16.12.141,172.16.12.218,172.16.12.204,172.16.12.165,172.16.12.156,172.16.12.175,172.16.12.203,172.16.12.211,172.16.12.173,172.16.12.159,172.16.12.146,172.16.12.131,172.16.12.214" cmd.run "cat .ssh/authorized_keys"
172.16.12.165:
    ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDAX6TSDTEJMGVEJhQEDx0v1iUjE2lGKgQrjvdXy42FzmNwxQ8qw/DUjhEdPqJ28O4b6JtJNmHQphsaP9xutH+hNH5dm6C4t69eP7W7NnFFgrlsP1wYI2F8roH4FrnQcHdFvj2/oINttSqX3L9+4Aau+MbavslaDvVJgCicUIq/Iymd9UJJv3udu8gIG4JQVX8Z8y8ZGjr8O1+w0CQhlmJXWnA+tss2RoLCAnIHIYkdkaFhv7WYFSo6eNUxqv8Jooj0gOe1kcrjXGnysS7Lkii+OXkR+2hmIdj1OBnzuQuL9wtZQb8HMr6Rw+jaKl6axMxhN9m6/m+7s6TMgdSTJIRV ycd@ycd-work
    ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDW2lfxPPpfSPlBNCYaFGnD1oLEjWtCyaDk9auz60dKBVpeguh6ZHiqwqxwifgWm0w4eDxS4IVU+QeDkZ3bINdKQa1yaHDNEJL/8EU93WprtCB7QjZonbTiNTd8y6il9DtxKeiXAZ1uXFpJQfCeD6+QhAskGOHloF6nujlpmAwCJYztItM3HY2hewy+QXFe0eYsdUWkA/MLACO0MvUJd+aDGyZvf+OtBXiasqH6L5E+oAB9FPfggswP65jRaVRY4st9AKJQzVr5D7GqoRXpzTDJYI37N1axDROAsdoqXs5hFP7LerU/+9B6IWx42TX4DmwNnWvNNhdlRcQHdDlm+96D root@tidb-cluster-tidb1.novalocal
172.16.12.159:
    ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDAX6TSDTEJMGVEJhQEDx0v1iUjE2lGKgQrjvdXy42FzmNwxQ8qw/DUjhEdPqJ28O4b6JtJNmHQphsaP9xutH+hNH5dm6C4t69eP7W7NnFFgrlsP1wYI2F8roH4FrnQcHdFvj2/oINttSqX3L9+4Aau+MbavslaDvVJgCicUIq/Iymd9UJJv3udu8gIG4JQVX8Z8y8ZGjr8O1+w0CQhlmJXWnA+tss2RoLCAnIHIYkdkaFhv7WYFSo6eNUxqv8Jooj0gOe1kcrjXGnysS7Lkii+OXkR+2hmIdj1OBnzuQuL9wtZQb8HMr6Rw+jaKl6axMxhN9m6/m+7s6TMgdSTJIRV ycd@ycd-work
    ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDW2lfxPPpfSPlBNCYaFGnD1oLEjWtCyaDk9auz60dKBVpeguh6ZHiqwqxwifgWm0w4eDxS4IVU+QeDkZ3bINdKQa1yaHDNEJL/8EU93WprtCB7QjZonbTiNTd8y6il9DtxKeiXAZ1uXFpJQfCeD6+QhAskGOHloF6nujlpmAwCJYztItM3HY2hewy+QXFe0eYsdUWkA/MLACO0MvUJd+aDGyZvf+OtBXiasqH6L5E+oAB9FPfggswP65jRaVRY4st9AKJQzVr5D7GqoRXpzTDJYI37N1axDROAsdoqXs5hFP7LerU/+9B6IWx42TX4DmwNnWvNNhdlRcQHdDlm+96D root@tidb-cluster-tidb1.novalocal
172.16.12.175:
    ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDAX6TSDTEJMGVEJhQEDx0v1iUjE2lGKgQrjvdXy42FzmNwxQ8qw/DUjhEdPqJ28O4b6JtJNmHQphsaP9xutH+hNH5dm6C4t69eP7W7NnFFgrlsP1wYI2F8roH4FrnQcHdFvj2/oINttSqX3L9+4Aau+MbavslaDvVJgCicUIq/Iymd9UJJv3udu8gIG4JQVX8Z8y8ZGjr8O1+w0CQhlmJXWnA+tss2RoLCAnIHIYkdkaFhv7WYFSo6eNUxqv8Jooj0gOe1kcrjXGnysS7Lkii+OXkR+2hmIdj1OBnzuQuL9wtZQb8HMr6Rw+jaKl6axMxhN9m6/m+7s6TMgdSTJIRV ycd@ycd-work
    ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDW2lfxPPpfSPlBNCYaFGnD1oLEjWtCyaDk9auz60dKBVpeguh6ZHiqwqxwifgWm0w4eDxS4IVU+QeDkZ3bINdKQa1yaHDNEJL/8EU93WprtCB7QjZonbTiNTd8y6il9DtxKeiXAZ1uXFpJQfCeD6+QhAskGOHloF6nujlpmAwCJYztItM3HY2hewy+QXFe0eYsdUWkA/MLACO0MvUJd+aDGyZvf+OtBXiasqH6L5E+oAB9FPfggswP65jRaVRY4st9AKJQzVr5D7GqoRXpzTDJYI37N1axDROAsdoqXs5hFP7LerU/+9B6IWx42TX4DmwNnWvNNhdlRcQHdDlm+96D root@tidb-cluster-tidb1.novalocal
172.16.12.173:
    ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDAX6TSDTEJMGVEJhQEDx0v1iUjE2lGKgQrjvdXy42FzmNwxQ8qw/DUjhEdPqJ28O4b6JtJNmHQphsaP9xutH+hNH5dm6C4t69eP7W7NnFFgrlsP1wYI2F8roH4FrnQcHdFvj2/oINttSqX3L9+4Aau+MbavslaDvVJgCicUIq/Iymd9UJJv3udu8gIG4JQVX8Z8y8ZGjr8O1+w0CQhlmJXWnA+tss2RoLCAnIHIYkdkaFhv7WYFSo6eNUxqv8Jooj0gOe1kcrjXGnysS7Lkii+OXkR+2hmIdj1OBnzuQuL9wtZQb8HMr6Rw+jaKl6axMxhN9m6/m+7s6TMgdSTJIRV ycd@ycd-work
    ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDW2lfxPPpfSPlBNCYaFGnD1oLEjWtCyaDk9auz60dKBVpeguh6ZHiqwqxwifgWm0w4eDxS4IVU+QeDkZ3bINdKQa1yaHDNEJL/8EU93WprtCB7QjZonbTiNTd8y6il9DtxKeiXAZ1uXFpJQfCeD6+QhAskGOHloF6nujlpmAwCJYztItM3HY2hewy+QXFe0eYsdUWkA/MLACO0MvUJd+aDGyZvf+OtBXiasqH6L5E+oAB9FPfggswP65jRaVRY4st9AKJQzVr5D7GqoRXpzTDJYI37N1axDROAsdoqXs5hFP7LerU/+9B6IWx42TX4DmwNnWvNNhdlRcQHdDlm+96D root@tidb-cluster-tidb1.novalocal
172.16.12.203:
    ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDAX6TSDTEJMGVEJhQEDx0v1iUjE2lGKgQrjvdXy42FzmNwxQ8qw/DUjhEdPqJ28O4b6JtJNmHQphsaP9xutH+hNH5dm6C4t69eP7W7NnFFgrlsP1wYI2F8roH4FrnQcHdFvj2/oINttSqX3L9+4Aau+MbavslaDvVJgCicUIq/Iymd9UJJv3udu8gIG4JQVX8Z8y8ZGjr8O1+w0CQhlmJXWnA+tss2RoLCAnIHIYkdkaFhv7WYFSo6eNUxqv8Jooj0gOe1kcrjXGnysS7Lkii+OXkR+2hmIdj1OBnzuQuL9wtZQb8HMr6Rw+jaKl6axMxhN9m6/m+7s6TMgdSTJIRV ycd@ycd-work
    ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDW2lfxPPpfSPlBNCYaFGnD1oLEjWtCyaDk9auz60dKBVpeguh6ZHiqwqxwifgWm0w4eDxS4IVU+QeDkZ3bINdKQa1yaHDNEJL/8EU93WprtCB7QjZonbTiNTd8y6il9DtxKeiXAZ1uXFpJQfCeD6+QhAskGOHloF6nujlpmAwCJYztItM3HY2hewy+QXFe0eYsdUWkA/MLACO0MvUJd+aDGyZvf+OtBXiasqH6L5E+oAB9FPfggswP65jRaVRY4st9AKJQzVr5D7GqoRXpzTDJYI37N1axDROAsdoqXs5hFP7LerU/+9B6IWx42TX4DmwNnWvNNhdlRcQHdDlm+96D root@tidb-cluster-tidb1.novalocal
172.16.12.146:
    ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDAX6TSDTEJMGVEJhQEDx0v1iUjE2lGKgQrjvdXy42FzmNwxQ8qw/DUjhEdPqJ28O4b6JtJNmHQphsaP9xutH+hNH5dm6C4t69eP7W7NnFFgrlsP1wYI2F8roH4FrnQcHdFvj2/oINttSqX3L9+4Aau+MbavslaDvVJgCicUIq/Iymd9UJJv3udu8gIG4JQVX8Z8y8ZGjr8O1+w0CQhlmJXWnA+tss2RoLCAnIHIYkdkaFhv7WYFSo6eNUxqv8Jooj0gOe1kcrjXGnysS7Lkii+OXkR+2hmIdj1OBnzuQuL9wtZQb8HMr6Rw+jaKl6axMxhN9m6/m+7s6TMgdSTJIRV ycd@ycd-work
    ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDW2lfxPPpfSPlBNCYaFGnD1oLEjWtCyaDk9auz60dKBVpeguh6ZHiqwqxwifgWm0w4eDxS4IVU+QeDkZ3bINdKQa1yaHDNEJL/8EU93WprtCB7QjZonbTiNTd8y6il9DtxKeiXAZ1uXFpJQfCeD6+QhAskGOHloF6nujlpmAwCJYztItM3HY2hewy+QXFe0eYsdUWkA/MLACO0MvUJd+aDGyZvf+OtBXiasqH6L5E+oAB9FPfggswP65jRaVRY4st9AKJQzVr5D7GqoRXpzTDJYI37N1axDROAsdoqXs5hFP7LerU/+9B6IWx42TX4DmwNnWvNNhdlRcQHdDlm+96D root@tidb-cluster-tidb1.novalocal
172.16.12.214:
    ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDAX6TSDTEJMGVEJhQEDx0v1iUjE2lGKgQrjvdXy42FzmNwxQ8qw/DUjhEdPqJ28O4b6JtJNmHQphsaP9xutH+hNH5dm6C4t69eP7W7NnFFgrlsP1wYI2F8roH4FrnQcHdFvj2/oINttSqX3L9+4Aau+MbavslaDvVJgCicUIq/Iymd9UJJv3udu8gIG4JQVX8Z8y8ZGjr8O1+w0CQhlmJXWnA+tss2RoLCAnIHIYkdkaFhv7WYFSo6eNUxqv8Jooj0gOe1kcrjXGnysS7Lkii+OXkR+2hmIdj1OBnzuQuL9wtZQb8HMr6Rw+jaKl6axMxhN9m6/m+7s6TMgdSTJIRV ycd@ycd-work
    ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDW2lfxPPpfSPlBNCYaFGnD1oLEjWtCyaDk9auz60dKBVpeguh6ZHiqwqxwifgWm0w4eDxS4IVU+QeDkZ3bINdKQa1yaHDNEJL/8EU93WprtCB7QjZonbTiNTd8y6il9DtxKeiXAZ1uXFpJQfCeD6+QhAskGOHloF6nujlpmAwCJYztItM3HY2hewy+QXFe0eYsdUWkA/MLACO0MvUJd+aDGyZvf+OtBXiasqH6L5E+oAB9FPfggswP65jRaVRY4st9AKJQzVr5D7GqoRXpzTDJYI37N1axDROAsdoqXs5hFP7LerU/+9B6IWx42TX4DmwNnWvNNhdlRcQHdDlm+96D root@tidb-cluster-tidb1.novalocal
172.16.12.141:
    ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDAX6TSDTEJMGVEJhQEDx0v1iUjE2lGKgQrjvdXy42FzmNwxQ8qw/DUjhEdPqJ28O4b6JtJNmHQphsaP9xutH+hNH5dm6C4t69eP7W7NnFFgrlsP1wYI2F8roH4FrnQcHdFvj2/oINttSqX3L9+4Aau+MbavslaDvVJgCicUIq/Iymd9UJJv3udu8gIG4JQVX8Z8y8ZGjr8O1+w0CQhlmJXWnA+tss2RoLCAnIHIYkdkaFhv7WYFSo6eNUxqv8Jooj0gOe1kcrjXGnysS7Lkii+OXkR+2hmIdj1OBnzuQuL9wtZQb8HMr6Rw+jaKl6axMxhN9m6/m+7s6TMgdSTJIRV ycd@ycd-work
    ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDW2lfxPPpfSPlBNCYaFGnD1oLEjWtCyaDk9auz60dKBVpeguh6ZHiqwqxwifgWm0w4eDxS4IVU+QeDkZ3bINdKQa1yaHDNEJL/8EU93WprtCB7QjZonbTiNTd8y6il9DtxKeiXAZ1uXFpJQfCeD6+QhAskGOHloF6nujlpmAwCJYztItM3HY2hewy+QXFe0eYsdUWkA/MLACO0MvUJd+aDGyZvf+OtBXiasqH6L5E+oAB9FPfggswP65jRaVRY4st9AKJQzVr5D7GqoRXpzTDJYI37N1axDROAsdoqXs5hFP7LerU/+9B6IWx42TX4DmwNnWvNNhdlRcQHdDlm+96D root@tidb-cluster-tidb1.novalocal
172.16.12.204:
    ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDAX6TSDTEJMGVEJhQEDx0v1iUjE2lGKgQrjvdXy42FzmNwxQ8qw/DUjhEdPqJ28O4b6JtJNmHQphsaP9xutH+hNH5dm6C4t69eP7W7NnFFgrlsP1wYI2F8roH4FrnQcHdFvj2/oINttSqX3L9+4Aau+MbavslaDvVJgCicUIq/Iymd9UJJv3udu8gIG4JQVX8Z8y8ZGjr8O1+w0CQhlmJXWnA+tss2RoLCAnIHIYkdkaFhv7WYFSo6eNUxqv8Jooj0gOe1kcrjXGnysS7Lkii+OXkR+2hmIdj1OBnzuQuL9wtZQb8HMr6Rw+jaKl6axMxhN9m6/m+7s6TMgdSTJIRV ycd@ycd-work
    ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDW2lfxPPpfSPlBNCYaFGnD1oLEjWtCyaDk9auz60dKBVpeguh6ZHiqwqxwifgWm0w4eDxS4IVU+QeDkZ3bINdKQa1yaHDNEJL/8EU93WprtCB7QjZonbTiNTd8y6il9DtxKeiXAZ1uXFpJQfCeD6+QhAskGOHloF6nujlpmAwCJYztItM3HY2hewy+QXFe0eYsdUWkA/MLACO0MvUJd+aDGyZvf+OtBXiasqH6L5E+oAB9FPfggswP65jRaVRY4st9AKJQzVr5D7GqoRXpzTDJYI37N1axDROAsdoqXs5hFP7LerU/+9B6IWx42TX4DmwNnWvNNhdlRcQHdDlm+96D root@tidb-cluster-tidb1.novalocal
172.16.12.211:
    ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDAX6TSDTEJMGVEJhQEDx0v1iUjE2lGKgQrjvdXy42FzmNwxQ8qw/DUjhEdPqJ28O4b6JtJNmHQphsaP9xutH+hNH5dm6C4t69eP7W7NnFFgrlsP1wYI2F8roH4FrnQcHdFvj2/oINttSqX3L9+4Aau+MbavslaDvVJgCicUIq/Iymd9UJJv3udu8gIG4JQVX8Z8y8ZGjr8O1+w0CQhlmJXWnA+tss2RoLCAnIHIYkdkaFhv7WYFSo6eNUxqv8Jooj0gOe1kcrjXGnysS7Lkii+OXkR+2hmIdj1OBnzuQuL9wtZQb8HMr6Rw+jaKl6axMxhN9m6/m+7s6TMgdSTJIRV ycd@ycd-work
    ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDW2lfxPPpfSPlBNCYaFGnD1oLEjWtCyaDk9auz60dKBVpeguh6ZHiqwqxwifgWm0w4eDxS4IVU+QeDkZ3bINdKQa1yaHDNEJL/8EU93WprtCB7QjZonbTiNTd8y6il9DtxKeiXAZ1uXFpJQfCeD6+QhAskGOHloF6nujlpmAwCJYztItM3HY2hewy+QXFe0eYsdUWkA/MLACO0MvUJd+aDGyZvf+OtBXiasqH6L5E+oAB9FPfggswP65jRaVRY4st9AKJQzVr5D7GqoRXpzTDJYI37N1axDROAsdoqXs5hFP7LerU/+9B6IWx42TX4DmwNnWvNNhdlRcQHdDlm+96D root@tidb-cluster-tidb1.novalocal
172.16.12.131:
    ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDAX6TSDTEJMGVEJhQEDx0v1iUjE2lGKgQrjvdXy42FzmNwxQ8qw/DUjhEdPqJ28O4b6JtJNmHQphsaP9xutH+hNH5dm6C4t69eP7W7NnFFgrlsP1wYI2F8roH4FrnQcHdFvj2/oINttSqX3L9+4Aau+MbavslaDvVJgCicUIq/Iymd9UJJv3udu8gIG4JQVX8Z8y8ZGjr8O1+w0CQhlmJXWnA+tss2RoLCAnIHIYkdkaFhv7WYFSo6eNUxqv8Jooj0gOe1kcrjXGnysS7Lkii+OXkR+2hmIdj1OBnzuQuL9wtZQb8HMr6Rw+jaKl6axMxhN9m6/m+7s6TMgdSTJIRV ycd@ycd-work
    ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDW2lfxPPpfSPlBNCYaFGnD1oLEjWtCyaDk9auz60dKBVpeguh6ZHiqwqxwifgWm0w4eDxS4IVU+QeDkZ3bINdKQa1yaHDNEJL/8EU93WprtCB7QjZonbTiNTd8y6il9DtxKeiXAZ1uXFpJQfCeD6+QhAskGOHloF6nujlpmAwCJYztItM3HY2hewy+QXFe0eYsdUWkA/MLACO0MvUJd+aDGyZvf+OtBXiasqH6L5E+oAB9FPfggswP65jRaVRY4st9AKJQzVr5D7GqoRXpzTDJYI37N1axDROAsdoqXs5hFP7LerU/+9B6IWx42TX4DmwNnWvNNhdlRcQHdDlm+96D root@tidb-cluster-tidb1.novalocal
172.16.12.156:
    ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDAX6TSDTEJMGVEJhQEDx0v1iUjE2lGKgQrjvdXy42FzmNwxQ8qw/DUjhEdPqJ28O4b6JtJNmHQphsaP9xutH+hNH5dm6C4t69eP7W7NnFFgrlsP1wYI2F8roH4FrnQcHdFvj2/oINttSqX3L9+4Aau+MbavslaDvVJgCicUIq/Iymd9UJJv3udu8gIG4JQVX8Z8y8ZGjr8O1+w0CQhlmJXWnA+tss2RoLCAnIHIYkdkaFhv7WYFSo6eNUxqv8Jooj0gOe1kcrjXGnysS7Lkii+OXkR+2hmIdj1OBnzuQuL9wtZQb8HMr6Rw+jaKl6axMxhN9m6/m+7s6TMgdSTJIRV ycd@ycd-work
    ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDW2lfxPPpfSPlBNCYaFGnD1oLEjWtCyaDk9auz60dKBVpeguh6ZHiqwqxwifgWm0w4eDxS4IVU+QeDkZ3bINdKQa1yaHDNEJL/8EU93WprtCB7QjZonbTiNTd8y6il9DtxKeiXAZ1uXFpJQfCeD6+QhAskGOHloF6nujlpmAwCJYztItM3HY2hewy+QXFe0eYsdUWkA/MLACO0MvUJd+aDGyZvf+OtBXiasqH6L5E+oAB9FPfggswP65jRaVRY4st9AKJQzVr5D7GqoRXpzTDJYI37N1axDROAsdoqXs5hFP7LerU/+9B6IWx42TX4DmwNnWvNNhdlRcQHdDlm+96D root@tidb-cluster-tidb1.novalocal
172.16.12.218:
    ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDAX6TSDTEJMGVEJhQEDx0v1iUjE2lGKgQrjvdXy42FzmNwxQ8qw/DUjhEdPqJ28O4b6JtJNmHQphsaP9xutH+hNH5dm6C4t69eP7W7NnFFgrlsP1wYI2F8roH4FrnQcHdFvj2/oINttSqX3L9+4Aau+MbavslaDvVJgCicUIq/Iymd9UJJv3udu8gIG4JQVX8Z8y8ZGjr8O1+w0CQhlmJXWnA+tss2RoLCAnIHIYkdkaFhv7WYFSo6eNUxqv8Jooj0gOe1kcrjXGnysS7Lkii+OXkR+2hmIdj1OBnzuQuL9wtZQb8HMr6Rw+jaKl6axMxhN9m6/m+7s6TMgdSTJIRV ycd@ycd-work
    ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDW2lfxPPpfSPlBNCYaFGnD1oLEjWtCyaDk9auz60dKBVpeguh6ZHiqwqxwifgWm0w4eDxS4IVU+QeDkZ3bINdKQa1yaHDNEJL/8EU93WprtCB7QjZonbTiNTd8y6il9DtxKeiXAZ1uXFpJQfCeD6+QhAskGOHloF6nujlpmAwCJYztItM3HY2hewy+QXFe0eYsdUWkA/MLACO0MvUJd+aDGyZvf+OtBXiasqH6L5E+oAB9FPfggswP65jRaVRY4st9AKJQzVr5D7GqoRXpzTDJYI37N1axDROAsdoqXs5hFP7LerU/+9B6IWx42TX4DmwNnWvNNhdlRcQHdDlm+96D root@tidb-cluster-tidb1.novalocal

All hosts use the same SSH key and can log in to any node normally; the same panic still occurs:

# tiup cluster deploy --user=root daddylab-tidb-cluster v4.0.11 complex-tidb-binlog.yaml
Starting component `cluster`: /root/.tiup/components/cluster/v1.3.2/tiup-cluster deploy --user=root daddylab-tidb-cluster v4.0.11 complex-tidb-binlog.yaml
Please confirm your topology:
Cluster type:    tidb
Cluster name:    daddylab-tidb-cluster
Cluster version: v4.0.11
Type          Host           Ports        OS/Arch       Directories
----          ----           -----        -------       -----------
pd            172.16.12.159  2379/2380    linux/x86_64  /tidb-deploy/pd-2379,/tidb-data/pd-2379
pd            172.16.12.173  2379/2380    linux/x86_64  /tidb-deploy/pd-2379,/tidb-data/pd-2379
pd            172.16.12.211  2379/2380    linux/x86_64  /tidb-deploy/pd-2379,/tidb-data/pd-2379
tikv          172.16.12.214  20160/20180  linux/x86_64  /tidb-deploy/tikv-20160,/tidb-data/tikv-20160
tikv          172.16.12.131  20160/20180  linux/x86_64  /tidb-deploy/tikv-20160,/tidb-data/tikv-20160
tikv          172.16.12.146  20160/20180  linux/x86_64  /tidb-deploy/tikv-20160,/tidb-data/tikv-20160
pump          172.16.12.165  8250         linux/x86_64  /tidb-deploy/pump-8249,/tidb-data/pump-8249
pump          172.16.12.218  8250         linux/x86_64  /tidb-deploy/pump-8249,/tidb-data/pump-8249
pump          172.16.12.204  8250         linux/x86_64  /tidb-deploy/pump-8249,/tidb-data/pump-8249
tidb          172.16.12.203  4000/10080   linux/x86_64  /tidb-deploy/tidb-4000
tidb          172.16.12.175  4000/10080   linux/x86_64  /tidb-deploy/tidb-4000
tidb          172.16.12.156  4000/10080   linux/x86_64  /tidb-deploy/tidb-4000
drainer       172.16.12.141  8249         linux/x86_64  /tidb-deploy/drainer-8249,/tidb-data/drainer-8249
prometheus    172.16.12.141  9090         linux/x86_64  /tidb-deploy/prometheus-9090,/tidb-data/prometheus-9090
grafana       172.16.12.141  3000         linux/x86_64  /tidb-deploy/grafana-3000
alertmanager  172.16.12.141  9093/9094    linux/x86_64  /tidb-deploy/alertmanager-9093,/tidb-data/alertmanager-9093
Attention:
    1. If the topology is not what you expected, check your yaml file.
    2. Please confirm there is no port/directory conflicts in same host.
Do you want to continue? [y/N]:  y
+ Generate SSH keys ... Done
+ Download TiDB components
  - Download pd:v4.0.11 (linux/amd64) ... Done
  - Download tikv:v4.0.11 (linux/amd64) ... Done
  - Download pump:v4.0.11 (linux/amd64) ... Done
  - Download tidb:v4.0.11 (linux/amd64) ... Done
  - Download drainer:v4.0.11 (linux/amd64) ... Done
  - Download prometheus:v4.0.11 (linux/amd64) ... Done
  - Download grafana:v4.0.11 (linux/amd64) ... Done
  - Download alertmanager:v0.17.0 (linux/amd64) ... Done
  - Download node_exporter:v0.17.0 (linux/amd64) ... Done
  - Download blackbox_exporter:v0.12.0 (linux/amd64) ... Done
+ Initialize target host environments
  - Prepare 172.16.12.159:22 ... ⠸ EnvInit: user=tidb, host=172.16.12.159
  - Prepare 172.16.12.173:22 ... ⠸ EnvInit: user=tidb, host=172.16.12.173
  - Prepare 172.16.12.211:22 ... ⠸ EnvInit: user=tidb, host=172.16.12.211
  - Prepare 172.16.12.214:22 ... ⠸ EnvInit: user=tidb, host=172.16.12.214
  - Prepare 172.16.12.131:22 ... ⠸ EnvInit: user=tidb, host=172.16.12.131
  - Prepare 172.16.12.146:22 ... ⠸ EnvInit: user=tidb, host=172.16.12.146
  - Prepare 172.16.12.165:22 ... ⠸ EnvInit: user=tidb, host=172.16.12.165
  - Prepare 172.16.12.218:22 ... ⠸ EnvInit: user=tidb, host=172.16.12.218
  - Prepare 172.16.12.204:22 ... ⠸ EnvInit: user=tidb, host=172.16.12.204
  - Prepare 172.16.12.203:22 ... ⠸ EnvInit: user=tidb, host=172.16.12.203
  - Prepare 172.16.12.175:22 ... ⠸ EnvInit: user=tidb, host=172.16.12.175
  - Prepare 172.16.12.156:22 ... ⠸ EnvInit: user=tidb, host=172.16.12.156
  - Prepare 172.16.12.141:22 ... ⠸ EnvInit: user=tidb, host=172.16.12.141
panic: send on closed channel

goroutine 709 [running]:
github.com/appleboy/easyssh-proxy.(*MakeConfig).Stream.func1.1(0xc00004f800, 0xc0001e4900, 0xc000670240)
	github.com/appleboy/easyssh-proxy@v1.3.2/easyssh.go:255 +0x7c
created by github.com/appleboy/easyssh-proxy.(*MakeConfig).Stream.func1
	github.com/appleboy/easyssh-proxy@v1.3.2/easyssh.go:253 +0x285
Error: run `/root/.tiup/components/cluster/v1.3.2/tiup-cluster` (wd:/root/.tiup/data/SQfTPd9) failed: exit status 2
  • Using -p gives the same panic:
tiup cluster deploy --user=root -p  daddylab-tidb-cluster v4.0.11 complex-tidb-binlog.yaml
Starting component `cluster`: /root/.tiup/components/cluster/v1.3.2/tiup-cluster deploy --user=root -p daddylab-tidb-cluster v4.0.11 complex-tidb-binlog.yaml
Please confirm your topology:
Cluster type:    tidb
Cluster name:    daddylab-tidb-cluster
Cluster version: v4.0.11
Type          Host           Ports        OS/Arch       Directories
----          ----           -----        -------       -----------
pd            172.16.12.159  2379/2380    linux/x86_64  /tidb-deploy/pd-2379,/tidb-data/pd-2379
pd            172.16.12.173  2379/2380    linux/x86_64  /tidb-deploy/pd-2379,/tidb-data/pd-2379
pd            172.16.12.211  2379/2380    linux/x86_64  /tidb-deploy/pd-2379,/tidb-data/pd-2379
tikv          172.16.12.214  20160/20180  linux/x86_64  /tidb-deploy/tikv-20160,/tidb-data/tikv-20160
tikv          172.16.12.131  20160/20180  linux/x86_64  /tidb-deploy/tikv-20160,/tidb-data/tikv-20160
tikv          172.16.12.146  20160/20180  linux/x86_64  /tidb-deploy/tikv-20160,/tidb-data/tikv-20160
pump          172.16.12.165  8250         linux/x86_64  /tidb-deploy/pump-8249,/tidb-data/pump-8249
pump          172.16.12.218  8250         linux/x86_64  /tidb-deploy/pump-8249,/tidb-data/pump-8249
pump          172.16.12.204  8250         linux/x86_64  /tidb-deploy/pump-8249,/tidb-data/pump-8249
tidb          172.16.12.203  4000/10080   linux/x86_64  /tidb-deploy/tidb-4000
tidb          172.16.12.175  4000/10080   linux/x86_64  /tidb-deploy/tidb-4000
tidb          172.16.12.156  4000/10080   linux/x86_64  /tidb-deploy/tidb-4000
drainer       172.16.12.141  8249         linux/x86_64  /tidb-deploy/drainer-8249,/tidb-data/drainer-8249
prometheus    172.16.12.141  9090         linux/x86_64  /tidb-deploy/prometheus-9090,/tidb-data/prometheus-9090
grafana       172.16.12.141  3000         linux/x86_64  /tidb-deploy/grafana-3000
alertmanager  172.16.12.141  9093/9094    linux/x86_64  /tidb-deploy/alertmanager-9093,/tidb-data/alertmanager-9093
Attention:
    1. If the topology is not what you expected, check your yaml file.
    2. Please confirm there is no port/directory conflicts in same host.
Do you want to continue? [y/N]:  y
Input SSH password: 
+ Generate SSH keys ... Done
+ Download TiDB components
  - Download pd:v4.0.11 (linux/amd64) ... Done
  - Download tikv:v4.0.11 (linux/amd64) ... Done
  - Download pump:v4.0.11 (linux/amd64) ... Done
  - Download tidb:v4.0.11 (linux/amd64) ... Done
  - Download drainer:v4.0.11 (linux/amd64) ... Done
  - Download prometheus:v4.0.11 (linux/amd64) ... Done
  - Download grafana:v4.0.11 (linux/amd64) ... Done
  - Download alertmanager:v0.17.0 (linux/amd64) ... Done
  - Download node_exporter:v0.17.0 (linux/amd64) ... Done
  - Download blackbox_exporter:v0.12.0 (linux/amd64) ... Done
+ Initialize target host environments
  - Prepare 172.16.12.159:22 ... ⠼ EnvInit: user=tidb, host=172.16.12.159
  - Prepare 172.16.12.173:22 ... ⠼ EnvInit: user=tidb, host=172.16.12.173
  - Prepare 172.16.12.211:22 ... ⠼ EnvInit: user=tidb, host=172.16.12.211
  - Prepare 172.16.12.214:22 ... ⠼ EnvInit: user=tidb, host=172.16.12.214
  - Prepare 172.16.12.131:22 ... ⠼ EnvInit: user=tidb, host=172.16.12.131
  - Prepare 172.16.12.146:22 ... ⠼ EnvInit: user=tidb, host=172.16.12.146
  - Prepare 172.16.12.165:22 ... ⠼ EnvInit: user=tidb, host=172.16.12.165
  - Prepare 172.16.12.218:22 ... ⠼ EnvInit: user=tidb, host=172.16.12.218
  - Prepare 172.16.12.204:22 ... ⠼ EnvInit: user=tidb, host=172.16.12.204
  - Prepare 172.16.12.203:22 ... ⠼ EnvInit: user=tidb, host=172.16.12.203
  - Prepare 172.16.12.175:22 ... ⠼ EnvInit: user=tidb, host=172.16.12.175
  - Prepare 172.16.12.156:22 ... ⠼ EnvInit: user=tidb, host=172.16.12.156
  - Prepare 172.16.12.141:22 ... ⠼ EnvInit: user=tidb, host=172.16.12.141
panic: send on closed channel

goroutine 756 [running]:
github.com/appleboy/easyssh-proxy.(*MakeConfig).Stream.func1.1(0xc000578600, 0xc0008a0360, 0xc0009066d0)
	github.com/appleboy/easyssh-proxy@v1.3.2/easyssh.go:255 +0x7c
created by github.com/appleboy/easyssh-proxy.(*MakeConfig).Stream.func1
	github.com/appleboy/easyssh-proxy@v1.3.2/easyssh.go:253 +0x285
Error: run `/root/.tiup/components/cluster/v1.3.2/tiup-cluster` (wd:/root/.tiup/data/SQfTv0B) failed: exit status 2
  • Using the -i option:
# tiup cluster deploy --user=root -i ~/.ssh/id_rsa  daddylab-tidb-cluster v4.0.11 complex-tidb-binlog.yaml
Starting component `cluster`: /root/.tiup/components/cluster/v1.3.2/tiup-cluster deploy --user=root -i /root/.ssh/id_rsa daddylab-tidb-cluster v4.0.11 complex-tidb-binlog.yaml
Please confirm your topology:
Cluster type:    tidb
Cluster name:    daddylab-tidb-cluster
Cluster version: v4.0.11
Type          Host           Ports        OS/Arch       Directories
----          ----           -----        -------       -----------
pd            172.16.12.159  2379/2380    linux/x86_64  /tidb-deploy/pd-2379,/tidb-data/pd-2379
pd            172.16.12.173  2379/2380    linux/x86_64  /tidb-deploy/pd-2379,/tidb-data/pd-2379
pd            172.16.12.211  2379/2380    linux/x86_64  /tidb-deploy/pd-2379,/tidb-data/pd-2379
tikv          172.16.12.214  20160/20180  linux/x86_64  /tidb-deploy/tikv-20160,/tidb-data/tikv-20160
tikv          172.16.12.131  20160/20180  linux/x86_64  /tidb-deploy/tikv-20160,/tidb-data/tikv-20160
tikv          172.16.12.146  20160/20180  linux/x86_64  /tidb-deploy/tikv-20160,/tidb-data/tikv-20160
pump          172.16.12.165  8250         linux/x86_64  /tidb-deploy/pump-8249,/tidb-data/pump-8249
pump          172.16.12.218  8250         linux/x86_64  /tidb-deploy/pump-8249,/tidb-data/pump-8249
pump          172.16.12.204  8250         linux/x86_64  /tidb-deploy/pump-8249,/tidb-data/pump-8249
tidb          172.16.12.203  4000/10080   linux/x86_64  /tidb-deploy/tidb-4000
tidb          172.16.12.175  4000/10080   linux/x86_64  /tidb-deploy/tidb-4000
tidb          172.16.12.156  4000/10080   linux/x86_64  /tidb-deploy/tidb-4000
drainer       172.16.12.141  8249         linux/x86_64  /tidb-deploy/drainer-8249,/tidb-data/drainer-8249
prometheus    172.16.12.141  9090         linux/x86_64  /tidb-deploy/prometheus-9090,/tidb-data/prometheus-9090
grafana       172.16.12.141  3000         linux/x86_64  /tidb-deploy/grafana-3000
alertmanager  172.16.12.141  9093/9094    linux/x86_64  /tidb-deploy/alertmanager-9093,/tidb-data/alertmanager-9093
Attention:
    1. If the topology is not what you expected, check your yaml file.
    2. Please confirm there is no port/directory conflicts in same host.
Do you want to continue? [y/N]:  y
+ Generate SSH keys ... Done
+ Download TiDB components
  - Download pd:v4.0.11 (linux/amd64) ... Done
  - Download tikv:v4.0.11 (linux/amd64) ... Done
  - Download pump:v4.0.11 (linux/amd64) ... Done
  - Download tidb:v4.0.11 (linux/amd64) ... Done
  - Download drainer:v4.0.11 (linux/amd64) ... Done
  - Download prometheus:v4.0.11 (linux/amd64) ... Done
  - Download grafana:v4.0.11 (linux/amd64) ... Done
  - Download alertmanager:v0.17.0 (linux/amd64) ... Done
  - Download node_exporter:v0.17.0 (linux/amd64) ... Done
  - Download blackbox_exporter:v0.12.0 (linux/amd64) ... Done
+ Initialize target host environments
  - Prepare 172.16.12.159:22 ... ⠸ EnvInit: user=tidb, host=172.16.12.159
  - Prepare 172.16.12.173:22 ... ⠸ EnvInit: user=tidb, host=172.16.12.173
  - Prepare 172.16.12.211:22 ... ⠸ EnvInit: user=tidb, host=172.16.12.211
  - Prepare 172.16.12.214:22 ... ⠸ EnvInit: user=tidb, host=172.16.12.214
  - Prepare 172.16.12.131:22 ... ⠸ EnvInit: user=tidb, host=172.16.12.131
  - Prepare 172.16.12.146:22 ... ⠸ EnvInit: user=tidb, host=172.16.12.146
  - Prepare 172.16.12.165:22 ... ⠸ EnvInit: user=tidb, host=172.16.12.165
  - Prepare 172.16.12.218:22 ... ⠸ EnvInit: user=tidb, host=172.16.12.218
  - Prepare 172.16.12.204:22 ... ⠸ EnvInit: user=tidb, host=172.16.12.204
  - Prepare 172.16.12.203:22 ... ⠸ EnvInit: user=tidb, host=172.16.12.203
  - Prepare 172.16.12.175:22 ... ⠸ EnvInit: user=tidb, host=172.16.12.175
  - Prepare 172.16.12.156:22 ... ⠸ EnvInit: user=tidb, host=172.16.12.156
  - Prepare 172.16.12.141:22 ... ⠸ EnvInit: user=tidb, host=172.16.12.141
panic: send on closed channel

goroutine 560 [running]:
github.com/appleboy/easyssh-proxy.(*MakeConfig).Stream.func1.1(0xc000608980, 0xc0004178c0, 0xc0001e3a10)
	github.com/appleboy/easyssh-proxy@v1.3.2/easyssh.go:255 +0x7c
created by github.com/appleboy/easyssh-proxy.(*MakeConfig).Stream.func1
	github.com/appleboy/easyssh-proxy@v1.3.2/easyssh.go:253 +0x285
Error: run `/root/.tiup/components/cluster/v1.3.2/tiup-cluster` (wd:/root/.tiup/data/SQfVIZu) failed: exit status 2
  • tiup version
[root@tidb-cluster-tidb1 install]# tiup -v
v1.3.2 tiup
Go Version: go1.13
Git Branch: release-1.3
GitHash: 2d88460
[root@tidb-cluster-tidb1 install]# md5sum `which tiup`
45386d62d77be03f36082d56f8e5f5ea  /root/.tiup/bin/tiup

Hi, could you please provide the output of uname -a?

salt -L "172.16.12.141,172.16.12.218,172.16.12.204,172.16.12.165,172.16.12.156,172.16.12.175,172.16.12.203,172.16.12.211,172.16.12.173,172.16.12.159,172.16.12.146,172.16.12.131,172.16.12.214" cmd.run "uname -a"
172.16.12.165:
    Linux tidb-cluster-pump1.novalocal 3.10.0-1160.11.1.el7.x86_64 #1 SMP Fri Dec 18 16:34:56 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux
172.16.12.159:
    Linux tidb-cluster-pd1.novalocal 3.10.0-1160.15.2.el7.x86_64 #1 SMP Wed Feb 3 15:06:38 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux
172.16.12.175:
    Linux tidb-cluster-tidb2.novalocal 3.10.0-1160.15.2.el7.x86_64 #1 SMP Wed Feb 3 15:06:38 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux
172.16.12.131:
    Linux tidb-cluster-tikv2.novalocal 3.10.0-1160.15.2.el7.x86_64 #1 SMP Wed Feb 3 15:06:38 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux
172.16.12.146:
    Linux tidb-cluster-tikv3.novalocal 3.10.0-1160.15.2.el7.x86_64 #1 SMP Wed Feb 3 15:06:38 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux
172.16.12.203:
    Linux tidb-cluster-tidb1.novalocal 3.10.0-1160.15.2.el7.x86_64 #1 SMP Wed Feb 3 15:06:38 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux
172.16.12.211:
    Linux tidb-cluster-pd3.novalocal 3.10.0-1160.15.2.el7.x86_64 #1 SMP Wed Feb 3 15:06:38 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux
172.16.12.141:
    Linux tidb-cluster-drainer.novalocal 3.10.0-1160.15.2.el7.x86_64 #1 SMP Wed Feb 3 15:06:38 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux
172.16.12.173:
    Linux tidb-cluster-pd2.novalocal 3.10.0-1160.15.2.el7.x86_64 #1 SMP Wed Feb 3 15:06:38 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux
172.16.12.156:
    Linux tidb-cluster-tidb3.novalocal 3.10.0-1160.15.2.el7.x86_64 #1 SMP Wed Feb 3 15:06:38 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux
172.16.12.204:
    Linux tidb-cluster-pump3.novalocal 3.10.0-1160.11.1.el7.x86_64 #1 SMP Fri Dec 18 16:34:56 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux
172.16.12.214:
    Linux tidb-cluster-tikv1.novalocal 3.10.0-1160.15.2.el7.x86_64 #1 SMP Wed Feb 3 15:06:38 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux
172.16.12.218:
    Linux tidb-cluster-pump2.novalocal 3.10.0-1160.11.1.el7.x86_64 #1 SMP Fri Dec 18 16:34:56 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux

Could you provide a build that prints debug information?

Could this be caused by an SSH connection-count limit on the control machine?
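
If that is the suspicion, the relevant sshd limits can be checked on the control machine and the targets (MaxStartups and MaxSessions are the standard sshd_config options; the ulimit calls are an extra sanity check on the control machine, not something tiup is known to require):

grep -Ei 'MaxStartups|MaxSessions' /etc/ssh/sshd_config
ulimit -n   # max open file descriptors for the current user
ulimit -u   # max user processes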

  • After raising the SSH connection limit, the same panic occurs:
salt -L "172.16.12.141,172.16.12.218,172.16.12.204,172.16.12.165,172.16.12.156,172.16.12.175,172.16.12.203,172.16.12.211,172.16.12.173,172.16.12.159,172.16.12.146,172.16.12.131,172.16.12.214" cmd.run "grep MaxStartups /etc/ssh/sshd_config"
172.16.12.203:
    MaxStartups 1000
172.16.12.165:
    MaxStartups 1000
172.16.12.173:
    MaxStartups 1000
172.16.12.204:
    MaxStartups 1000
172.16.12.211:
    MaxStartups 1000
172.16.12.131:
    MaxStartups 1000
172.16.12.159:
    MaxStartups 1000
172.16.12.214:
    MaxStartups 1000
172.16.12.156:
    MaxStartups 1000
172.16.12.218:
    MaxStartups 1000
172.16.12.141:
    MaxStartups 1000
172.16.12.175:
    MaxStartups 1000
172.16.12.146:
    MaxStartups 1000
tiup cluster deploy   daddylab-tidb-cluster v4.0.11 complex-tidb-binlog.yaml
Starting component `cluster`: /root/.tiup/components/cluster/v1.3.2/tiup-cluster deploy daddylab-tidb-cluster v4.0.11 complex-tidb-binlog.yaml
Please confirm your topology:
Cluster type:    tidb
Cluster name:    daddylab-tidb-cluster
Cluster version: v4.0.11
Type          Host           Ports        OS/Arch       Directories
----          ----           -----        -------       -----------
pd            172.16.12.159  2379/2380    linux/x86_64  /tidb-deploy/pd-2379,/tidb-data/pd-2379
pd            172.16.12.173  2379/2380    linux/x86_64  /tidb-deploy/pd-2379,/tidb-data/pd-2379
pd            172.16.12.211  2379/2380    linux/x86_64  /tidb-deploy/pd-2379,/tidb-data/pd-2379
tikv          172.16.12.214  20160/20180  linux/x86_64  /tidb-deploy/tikv-20160,/tidb-data/tikv-20160
tikv          172.16.12.131  20160/20180  linux/x86_64  /tidb-deploy/tikv-20160,/tidb-data/tikv-20160
tikv          172.16.12.146  20160/20180  linux/x86_64  /tidb-deploy/tikv-20160,/tidb-data/tikv-20160
pump          172.16.12.165  8250         linux/x86_64  /tidb-deploy/pump-8249,/tidb-data/pump-8249
pump          172.16.12.218  8250         linux/x86_64  /tidb-deploy/pump-8249,/tidb-data/pump-8249
pump          172.16.12.204  8250         linux/x86_64  /tidb-deploy/pump-8249,/tidb-data/pump-8249
tidb          172.16.12.203  4000/10080   linux/x86_64  /tidb-deploy/tidb-4000
tidb          172.16.12.175  4000/10080   linux/x86_64  /tidb-deploy/tidb-4000
tidb          172.16.12.156  4000/10080   linux/x86_64  /tidb-deploy/tidb-4000
drainer       172.16.12.141  8249         linux/x86_64  /tidb-deploy/drainer-8249,/tidb-data/drainer-8249
prometheus    172.16.12.141  9090         linux/x86_64  /tidb-deploy/prometheus-9090,/tidb-data/prometheus-9090
grafana       172.16.12.141  3000         linux/x86_64  /tidb-deploy/grafana-3000
alertmanager  172.16.12.141  9093/9094    linux/x86_64  /tidb-deploy/alertmanager-9093,/tidb-data/alertmanager-9093
Attention:
    1. If the topology is not what you expected, check your yaml file.
    2. Please confirm there is no port/directory conflicts in same host.
Do you want to continue? [y/N]:  y
+ Generate SSH keys ... Done
+ Download TiDB components
  - Download pd:v4.0.11 (linux/amd64) ... Done
  - Download tikv:v4.0.11 (linux/amd64) ... Done
  - Download pump:v4.0.11 (linux/amd64) ... Done
  - Download tidb:v4.0.11 (linux/amd64) ... Done
  - Download drainer:v4.0.11 (linux/amd64) ... Done
  - Download prometheus:v4.0.11 (linux/amd64) ... Done
  - Download grafana:v4.0.11 (linux/amd64) ... Done
  - Download alertmanager:v0.17.0 (linux/amd64) ... Done
  - Download node_exporter:v0.17.0 (linux/amd64) ... Done
  - Download blackbox_exporter:v0.12.0 (linux/amd64) ... Done
+ Initialize target host environments
  - Prepare 172.16.12.159:22 ... ⠸ EnvInit: user=tidb, host=172.16.12.159
  - Prepare 172.16.12.173:22 ... ⠸ EnvInit: user=tidb, host=172.16.12.173
  - Prepare 172.16.12.211:22 ... ⠸ EnvInit: user=tidb, host=172.16.12.211
  - Prepare 172.16.12.214:22 ... ⠸ EnvInit: user=tidb, host=172.16.12.214
  - Prepare 172.16.12.131:22 ... ⠸ EnvInit: user=tidb, host=172.16.12.131
  - Prepare 172.16.12.146:22 ... ⠸ EnvInit: user=tidb, host=172.16.12.146
  - Prepare 172.16.12.165:22 ... ⠸ EnvInit: user=tidb, host=172.16.12.165
  - Prepare 172.16.12.218:22 ... ⠸ EnvInit: user=tidb, host=172.16.12.218
  - Prepare 172.16.12.204:22 ... ⠸ EnvInit: user=tidb, host=172.16.12.204
  - Prepare 172.16.12.203:22 ... ⠸ EnvInit: user=tidb, host=172.16.12.203
  - Prepare 172.16.12.175:22 ... ⠸ EnvInit: user=tidb, host=172.16.12.175
  - Prepare 172.16.12.156:22 ... ⠸ EnvInit: user=tidb, host=172.16.12.156
  - Prepare 172.16.12.141:22 ... ⠸ EnvInit: user=tidb, host=172.16.12.141
panic: send on closed channel

goroutine 734 [running]:
github.com/appleboy/easyssh-proxy.(*MakeConfig).Stream.func1.1(0xc00032b400, 0xc0009cc360, 0xc000674c90)
	github.com/appleboy/easyssh-proxy@v1.3.2/easyssh.go:255 +0x7c
created by github.com/appleboy/easyssh-proxy.(*MakeConfig).Stream.func1
	github.com/appleboy/easyssh-proxy@v1.3.2/easyssh.go:253 +0x285
Error: run `/root/.tiup/components/cluster/v1.3.2/tiup-cluster` (wd:/root/.tiup/data/SQgIeNO) failed: exit status 2

  • The /var/log/secure log is as follows
Mar  4 17:45:04 tidb-cluster-pump2 sshd[17630]: Accepted publickey for root from 172.16.12.203 port 46198 ssh2: RSA SHA256:pZbO5jQxJwhaHs9tQnuVAZrlJrF27T+bQeXfvmy3mEk
Mar  4 17:45:04 tidb-cluster-pump2 sshd[17630]: pam_unix(sshd:session): session opened for user root by (uid=0)
Mar  4 17:45:04 tidb-cluster-pump2 sudo:    root : TTY=unknown ; PWD=/root ; USER=root ; COMMAND=/bin/bash -c id -u tidb > /dev/null 2>&1 || (/usr/sbin/groupadd -f tidb && /usr/sbin/useradd -m -s /bin/bash -g tidb tidb) && echo 'tidb ALL=(ALL) NOPASSWD:ALL' > /etc/sudoers.d/tidb
Mar  4 17:45:04 tidb-cluster-pump2 sudo: pam_unix(sudo:session): session opened for user root by (uid=0)
Mar  4 17:45:04 tidb-cluster-pump2 sudo: pam_unix(sudo:session): session closed for user root
Mar  4 17:45:04 tidb-cluster-pump2 sshd[17630]: pam_unix(sshd:session): session closed for user root
Mar  4 17:45:04 tidb-cluster-pump2 sshd[17644]: Accepted publickey for root from 172.16.12.203 port 46246 ssh2: RSA SHA256:pZbO5jQxJwhaHs9tQnuVAZrlJrF27T+bQeXfvmy3mEk
Mar  4 17:45:04 tidb-cluster-pump2 sshd[17644]: pam_unix(sshd:session): session opened for user root by (uid=0)
Mar  4 17:45:04 tidb-cluster-pump2 sudo:    root : TTY=unknown ; PWD=/root ; USER=root ; COMMAND=/bin/bash -c su - tidb -c 'mkdir -p ~/.ssh && chmod 700 ~/.ssh'
Mar  4 17:45:04 tidb-cluster-pump2 sudo: pam_unix(sudo:session): session opened for user root by (uid=0)
Mar  4 17:45:04 tidb-cluster-pump2 su: pam_unix(su-l:session): session opened for user tidb by (uid=0)
Mar  4 17:45:04 tidb-cluster-pump2 su: pam_unix(su-l:session): session closed for user tidb
Mar  4 17:45:04 tidb-cluster-pump2 sudo: pam_unix(sudo:session): session closed for user root
Mar  4 17:46:04 tidb-cluster-pump2 sshd[17644]: pam_unix(sshd:session): session closed for user root

In that case, try installing as the tidb user: create a tidb user on the control machine, install tiup under that user, and then run tiup cluster
deploy again to see whether it succeeds.
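
For reference, a minimal sketch of that sequence on the control machine, assuming a CentOS-style host; the sudoers line mirrors the one tiup itself runs on the target hosts:

# run as root on the control machine
useradd -m -s /bin/bash tidb
echo 'tidb ALL=(ALL) NOPASSWD:ALL' > /etc/sudoers.d/tidb
passwd tidb
# switch to the tidb user and install tiup under its home directory
su - tidb
curl --proto '=https' --tlsv1.2 -sSf https://tiup-mirrors.pingcap.com/install.sh | sh
source ~/.bash_profile
# then retry the deploy as the tidb user
tiup cluster deploy daddylab-tidb-cluster v4.0.11 complex-tidb-binlog.yaml -p -u tidb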

No luck… this is really tough.

[tidb@tidb-cluster-tidb1 install]$ id 
uid=1002(tidb) gid=1002(tidb) groups=1002(tidb),10(wheel)
[tidb@tidb-cluster-tidb1 install]$ tiup cluster deploy   daddylab-tidb-cluster v4.0.11 complex-tidb-binlog.yaml -p -u tidb
Starting component `cluster`: /home/tidb/.tiup/components/cluster/v1.3.2/tiup-cluster deploy daddylab-tidb-cluster v4.0.11 complex-tidb-binlog.yaml -p -u tidb
Please confirm your topology:
Cluster type:    tidb
Cluster name:    daddylab-tidb-cluster
Cluster version: v4.0.11
Type          Host           Ports        OS/Arch       Directories
----          ----           -----        -------       -----------
pd            172.16.12.159  2379/2380    linux/x86_64  /tidb-deploy/pd-2379,/tidb-data/pd-2379
pd            172.16.12.173  2379/2380    linux/x86_64  /tidb-deploy/pd-2379,/tidb-data/pd-2379
pd            172.16.12.211  2379/2380    linux/x86_64  /tidb-deploy/pd-2379,/tidb-data/pd-2379
tikv          172.16.12.214  20160/20180  linux/x86_64  /tidb-deploy/tikv-20160,/tidb-data/tikv-20160
tikv          172.16.12.131  20160/20180  linux/x86_64  /tidb-deploy/tikv-20160,/tidb-data/tikv-20160
tikv          172.16.12.146  20160/20180  linux/x86_64  /tidb-deploy/tikv-20160,/tidb-data/tikv-20160
pump          172.16.12.165  8250         linux/x86_64  /tidb-deploy/pump-8249,/tidb-data/pump-8249
pump          172.16.12.218  8250         linux/x86_64  /tidb-deploy/pump-8249,/tidb-data/pump-8249
pump          172.16.12.204  8250         linux/x86_64  /tidb-deploy/pump-8249,/tidb-data/pump-8249
tidb          172.16.12.203  4000/10080   linux/x86_64  /tidb-deploy/tidb-4000
tidb          172.16.12.175  4000/10080   linux/x86_64  /tidb-deploy/tidb-4000
tidb          172.16.12.156  4000/10080   linux/x86_64  /tidb-deploy/tidb-4000
drainer       172.16.12.141  8249         linux/x86_64  /tidb-deploy/drainer-8249,/tidb-data/drainer-8249
prometheus    172.16.12.141  9090         linux/x86_64  /tidb-deploy/prometheus-9090,/tidb-data/prometheus-9090
grafana       172.16.12.141  3000         linux/x86_64  /tidb-deploy/grafana-3000
alertmanager  172.16.12.141  9093/9094    linux/x86_64  /tidb-deploy/alertmanager-9093,/tidb-data/alertmanager-9093
Attention:
    1. If the topology is not what you expected, check your yaml file.
    2. Please confirm there is no port/directory conflicts in same host.
Do you want to continue? [y/N]:  y
Input SSH password: 
+ Generate SSH keys ... Done
+ Download TiDB components
  - Download pd:v4.0.11 (linux/amd64) ... Done
  - Download tikv:v4.0.11 (linux/amd64) ... Done
  - Download pump:v4.0.11 (linux/amd64) ... Done
  - Download tidb:v4.0.11 (linux/amd64) ... Done
  - Download drainer:v4.0.11 (linux/amd64) ... Done
  - Download prometheus:v4.0.11 (linux/amd64) ... Done
  - Download grafana:v4.0.11 (linux/amd64) ... Done
  - Download alertmanager:v0.17.0 (linux/amd64) ... Done
  - Download node_exporter:v0.17.0 (linux/amd64) ... Done
  - Download blackbox_exporter:v0.12.0 (linux/amd64) ... Done
+ Initialize target host environments
  - Prepare 172.16.12.159:22 ... ⠙ EnvInit: user=tidb, host=172.16.12.159
  - Prepare 172.16.12.173:22 ... ⠙ EnvInit: user=tidb, host=172.16.12.173
  - Prepare 172.16.12.211:22 ... ⠙ EnvInit: user=tidb, host=172.16.12.211
  - Prepare 172.16.12.214:22 ... ⠙ EnvInit: user=tidb, host=172.16.12.214
  - Prepare 172.16.12.131:22 ... ⠙ EnvInit: user=tidb, host=172.16.12.131
  - Prepare 172.16.12.146:22 ... ⠙ EnvInit: user=tidb, host=172.16.12.146
  - Prepare 172.16.12.165:22 ... ⠙ EnvInit: user=tidb, host=172.16.12.165
  - Prepare 172.16.12.218:22 ... ⠙ EnvInit: user=tidb, host=172.16.12.218
  - Prepare 172.16.12.204:22 ... ⠙ EnvInit: user=tidb, host=172.16.12.204
  - Prepare 172.16.12.203:22 ... ⠙ EnvInit: user=tidb, host=172.16.12.203
  - Prepare 172.16.12.175:22 ... ⠙ EnvInit: user=tidb, host=172.16.12.175
  - Prepare 172.16.12.156:22 ... ⠙ EnvInit: user=tidb, host=172.16.12.156
  - Prepare 172.16.12.141:22 ... ⠙ EnvInit: user=tidb, host=172.16.12.141
panic: send on closed channel

goroutine 582 [running]:
github.com/appleboy/easyssh-proxy.(*MakeConfig).Stream.func1.1(0xc0000db880, 0xc00062c660, 0xc000762af0)
	github.com/appleboy/easyssh-proxy@v1.3.2/easyssh.go:255 +0x7c
created by github.com/appleboy/easyssh-proxy.(*MakeConfig).Stream.func1
	github.com/appleboy/easyssh-proxy@v1.3.2/easyssh.go:253 +0x285
Error: run `/home/tidb/.tiup/components/cluster/v1.3.2/tiup-cluster` (wd:/home/tidb/.tiup/data/SQlwQMg) failed: exit status 2


We have already contacted the development team. In the meantime, please upgrade tiup to the latest version, switch to a different control machine, and deploy as the tidb user to see whether it succeeds:
tiup update --self && tiup update cluster
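
A quick, hedged way to confirm what actually got updated afterwards (the paths match those shown in the deploy output above):

tiup -v                             # version of the tiup binary itself
ls ~/.tiup/components/cluster/      # installed versions of the cluster component, e.g. v1.3.2, v1.3.4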

Doesn't really seem to work.

  • Upgrade or install tiup
[root@tidb-cluster-tidb3 ~]# curl --proto '=https' --tlsv1.2 -sSf https://tiup-mirrors.pingcap.com/install.sh | sh
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100 8700k  100 8700k    0     0  2988k      0  0:00:02  0:00:02 --:--:-- 2988k
WARN: adding root certificate via internet: https://tiup-mirrors.pingcap.com/root.json
You can revoke this by remove /root/.tiup/bin/7b8e153f2e2d0928.root.json
Set mirror to https://tiup-mirrors.pingcap.com success
Detected shell: bash
Shell profile:  /root/.bash_profile
/root/.bash_profile has been modified to add tiup to PATH
open a new terminal or source /root/.bash_profile to use it
Installed path: /root/.tiup/bin/tiup
===============================================
Have a try:     tiup playground
===============================================
[root@tidb-cluster-tidb3 ~]# source  /root/.bash_profile
[root@tidb-cluster-tidb3 ~]# tiup -v
v1.3.4 tiup
Go Version: go1.13
Git Branch: release-1.3
GitHash: b262d05
  • Deploy as the root user
[root@tidb-cluster-tidb3 ~]# cd install/
[root@tidb-cluster-tidb3 install]# tiup cluster deploy   daddylab-tidb-cluster v4.0.11 complex-tidb-binlog.yaml -p
The component `cluster` version  is not installed; downloading from repository.
download https://tiup-mirrors.pingcap.com/cluster-v1.3.4-linux-amd64.tar.gz 10.06 MiB / 10.06 MiB 100.00% 12.85 MiB p/s                                                                                            
Starting component `cluster`: /root/.tiup/components/cluster/v1.3.4/tiup-cluster deploy daddylab-tidb-cluster v4.0.11 complex-tidb-binlog.yaml -p
Please confirm your topology:
Cluster type:    tidb
Cluster name:    daddylab-tidb-cluster
Cluster version: v4.0.11
Type          Host           Ports        OS/Arch       Directories
----          ----           -----        -------       -----------
pd            172.16.12.159  2379/2380    linux/x86_64  /tidb-deploy/pd-2379,/tidb-data/pd-2379
pd            172.16.12.173  2379/2380    linux/x86_64  /tidb-deploy/pd-2379,/tidb-data/pd-2379
pd            172.16.12.211  2379/2380    linux/x86_64  /tidb-deploy/pd-2379,/tidb-data/pd-2379
tikv          172.16.12.214  20160/20180  linux/x86_64  /tidb-deploy/tikv-20160,/tidb-data/tikv-20160
tikv          172.16.12.131  20160/20180  linux/x86_64  /tidb-deploy/tikv-20160,/tidb-data/tikv-20160
tikv          172.16.12.146  20160/20180  linux/x86_64  /tidb-deploy/tikv-20160,/tidb-data/tikv-20160
pump          172.16.12.165  8250         linux/x86_64  /tidb-deploy/pump-8249,/tidb-data/pump-8249
pump          172.16.12.218  8250         linux/x86_64  /tidb-deploy/pump-8249,/tidb-data/pump-8249
pump          172.16.12.204  8250         linux/x86_64  /tidb-deploy/pump-8249,/tidb-data/pump-8249
tidb          172.16.12.203  4000/10080   linux/x86_64  /tidb-deploy/tidb-4000
tidb          172.16.12.175  4000/10080   linux/x86_64  /tidb-deploy/tidb-4000
tidb          172.16.12.156  4000/10080   linux/x86_64  /tidb-deploy/tidb-4000
drainer       172.16.12.141  8249         linux/x86_64  /tidb-deploy/drainer-8249,/tidb-data/drainer-8249
prometheus    172.16.12.141  9090         linux/x86_64  /tidb-deploy/prometheus-9090,/tidb-data/prometheus-9090
grafana       172.16.12.141  3000         linux/x86_64  /tidb-deploy/grafana-3000
alertmanager  172.16.12.141  9093/9094    linux/x86_64  /tidb-deploy/alertmanager-9093,/tidb-data/alertmanager-9093
Attention:
    1. If the topology is not what you expected, check your yaml file.
    2. Please confirm there is no port/directory conflicts in same host.
Do you want to continue? [y/N]:  y
Input SSH password: 
+ Generate SSH keys ... Done
+ Download TiDB components
  - Download pd:v4.0.11 (linux/amd64) ... Done
  - Download tikv:v4.0.11 (linux/amd64) ... Done
  - Download pump:v4.0.11 (linux/amd64) ... Done
  - Download tidb:v4.0.11 (linux/amd64) ... Done
  - Download drainer:v4.0.11 (linux/amd64) ... Done
  - Download prometheus:v4.0.11 (linux/amd64) ... Done
  - Download grafana:v4.0.11 (linux/amd64) ... Done
  - Download alertmanager: (linux/amd64) ... Done
  - Download node_exporter: (linux/amd64) ... Done
  - Download blackbox_exporter: (linux/amd64) ... Done
+ Initialize target host environments
  - Prepare 172.16.12.159:22 ... ⠼ EnvInit: user=tidb, host=172.16.12.159
  - Prepare 172.16.12.173:22 ... ⠼ EnvInit: user=tidb, host=172.16.12.173
  - Prepare 172.16.12.211:22 ... ⠼ EnvInit: user=tidb, host=172.16.12.211
  - Prepare 172.16.12.214:22 ... ⠼ EnvInit: user=tidb, host=172.16.12.214
  - Prepare 172.16.12.131:22 ... ⠼ EnvInit: user=tidb, host=172.16.12.131
  - Prepare 172.16.12.146:22 ... ⠼ EnvInit: user=tidb, host=172.16.12.146
  - Prepare 172.16.12.165:22 ... ⠼ EnvInit: user=tidb, host=172.16.12.165
  - Prepare 172.16.12.218:22 ... ⠼ EnvInit: user=tidb, host=172.16.12.218
  - Prepare 172.16.12.204:22 ... ⠼ EnvInit: user=tidb, host=172.16.12.204
  - Prepare 172.16.12.203:22 ... ⠼ EnvInit: user=tidb, host=172.16.12.203
  - Prepare 172.16.12.175:22 ... ⠼ EnvInit: user=tidb, host=172.16.12.175
  - Prepare 172.16.12.156:22 ... ⠼ EnvInit: user=tidb, host=172.16.12.156
  - Prepare 172.16.12.141:22 ... ⠼ EnvInit: user=tidb, host=172.16.12.141
panic: send on closed channel

goroutine 1066 [running]:
github.com/appleboy/easyssh-proxy.(*MakeConfig).Stream.func1.1(0xc000aa8400, 0xc000544c60, 0xc000874fd0)
	github.com/appleboy/easyssh-proxy@v1.3.2/easyssh.go:255 +0x7c
created by github.com/appleboy/easyssh-proxy.(*MakeConfig).Stream.func1
	github.com/appleboy/easyssh-proxy@v1.3.2/easyssh.go:253 +0x285
Error: run `/root/.tiup/components/cluster/v1.3.4/tiup-cluster` (wd:/root/.tiup/data/SR1YVXd) failed: exit status 2
  • Switching to the tidb user still doesn't work
[root@tidb-cluster-tidb3 install]# su - tidb
上一次登录:一 3月  8 09:00:34 CST 2021
[tidb@tidb-cluster-tidb3 ~]$ curl --proto '=https' --tlsv1.2 -sSf https://tiup-mirrors.pingcap.com/install.sh | sh
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100 8700k  100 8700k    0     0  6350k      0  0:00:01  0:00:01 --:--:-- 6355k
WARN: adding root certificate via internet: https://tiup-mirrors.pingcap.com/root.json
You can revoke this by remove /home/tidb/.tiup/bin/7b8e153f2e2d0928.root.json
Set mirror to https://tiup-mirrors.pingcap.com success
Detected shell: bash
Shell profile:  /home/tidb/.bash_profile
/home/tidb/.bash_profile has been modified to add tiup to PATH
open a new terminal or source /home/tidb/.bash_profile to use it
Installed path: /home/tidb/.tiup/bin/tiup
===============================================
Have a try:     tiup playground
===============================================


[tidb@tidb-cluster-tidb3 ~]$ source /home/tidb/.bash_profile 
[tidb@tidb-cluster-tidb3 ~]$ cd install/
[tidb@tidb-cluster-tidb3 install]$ tiup cluster deploy   daddylab-tidb-cluster v4.0.11 complex-tidb-binlog.yaml -p -u tidb
The component `cluster` version  is not installed; downloading from repository.
download https://tiup-mirrors.pingcap.com/cluster-v1.3.4-linux-amd64.tar.gz 10.06 MiB / 10.06 MiB 100.00% 9.14 MiB p/s                                                                                             
Starting component `cluster`: /home/tidb/.tiup/components/cluster/v1.3.4/tiup-cluster deploy daddylab-tidb-cluster v4.0.11 complex-tidb-binlog.yaml -p -u tidb
Please confirm your topology:
Cluster type:    tidb
Cluster name:    daddylab-tidb-cluster
Cluster version: v4.0.11
Type          Host           Ports        OS/Arch       Directories
----          ----           -----        -------       -----------
pd            172.16.12.159  2379/2380    linux/x86_64  /tidb-deploy/pd-2379,/tidb-data/pd-2379
pd            172.16.12.173  2379/2380    linux/x86_64  /tidb-deploy/pd-2379,/tidb-data/pd-2379
pd            172.16.12.211  2379/2380    linux/x86_64  /tidb-deploy/pd-2379,/tidb-data/pd-2379
tikv          172.16.12.214  20160/20180  linux/x86_64  /tidb-deploy/tikv-20160,/tidb-data/tikv-20160
tikv          172.16.12.131  20160/20180  linux/x86_64  /tidb-deploy/tikv-20160,/tidb-data/tikv-20160
tikv          172.16.12.146  20160/20180  linux/x86_64  /tidb-deploy/tikv-20160,/tidb-data/tikv-20160
pump          172.16.12.165  8250         linux/x86_64  /tidb-deploy/pump-8249,/tidb-data/pump-8249
pump          172.16.12.218  8250         linux/x86_64  /tidb-deploy/pump-8249,/tidb-data/pump-8249
pump          172.16.12.204  8250         linux/x86_64  /tidb-deploy/pump-8249,/tidb-data/pump-8249
tidb          172.16.12.203  4000/10080   linux/x86_64  /tidb-deploy/tidb-4000
tidb          172.16.12.175  4000/10080   linux/x86_64  /tidb-deploy/tidb-4000
tidb          172.16.12.156  4000/10080   linux/x86_64  /tidb-deploy/tidb-4000
drainer       172.16.12.141  8249         linux/x86_64  /tidb-deploy/drainer-8249,/tidb-data/drainer-8249
prometheus    172.16.12.141  9090         linux/x86_64  /tidb-deploy/prometheus-9090,/tidb-data/prometheus-9090
grafana       172.16.12.141  3000         linux/x86_64  /tidb-deploy/grafana-3000
alertmanager  172.16.12.141  9093/9094    linux/x86_64  /tidb-deploy/alertmanager-9093,/tidb-data/alertmanager-9093
Attention:
    1. If the topology is not what you expected, check your yaml file.
    2. Please confirm there is no port/directory conflicts in same host.
Do you want to continue? [y/N]:  y
Input SSH password: 
+ Generate SSH keys ... Done
+ Download TiDB components
  - Download pd:v4.0.11 (linux/amd64) ... Done
  - Download tikv:v4.0.11 (linux/amd64) ... Done
  - Download pump:v4.0.11 (linux/amd64) ... Done
  - Download tidb:v4.0.11 (linux/amd64) ... Done
  - Download drainer:v4.0.11 (linux/amd64) ... Done
  - Download prometheus:v4.0.11 (linux/amd64) ... Done
  - Download grafana:v4.0.11 (linux/amd64) ... Done
  - Download alertmanager: (linux/amd64) ... Done
  - Download node_exporter: (linux/amd64) ... Done
  - Download blackbox_exporter: (linux/amd64) ... Done
+ Initialize target host environments
  - Prepare 172.16.12.159:22 ... ⠙ EnvInit: user=tidb, host=172.16.12.159
  - Prepare 172.16.12.173:22 ... ⠙ EnvInit: user=tidb, host=172.16.12.173
  - Prepare 172.16.12.211:22 ... ⠙ EnvInit: user=tidb, host=172.16.12.211
  - Prepare 172.16.12.214:22 ... ⠙ EnvInit: user=tidb, host=172.16.12.214
  - Prepare 172.16.12.131:22 ... ⠙ EnvInit: user=tidb, host=172.16.12.131
  - Prepare 172.16.12.146:22 ... ⠙ EnvInit: user=tidb, host=172.16.12.146
  - Prepare 172.16.12.165:22 ... ⠙ EnvInit: user=tidb, host=172.16.12.165
  - Prepare 172.16.12.218:22 ... ⠙ EnvInit: user=tidb, host=172.16.12.218
  - Prepare 172.16.12.204:22 ... ⠙ EnvInit: user=tidb, host=172.16.12.204
  - Prepare 172.16.12.203:22 ... ⠙ EnvInit: user=tidb, host=172.16.12.203
  - Prepare 172.16.12.175:22 ... ⠙ EnvInit: user=tidb, host=172.16.12.175
  - Prepare 172.16.12.156:22 ... ⠙ EnvInit: user=tidb, host=172.16.12.156
  - Prepare 172.16.12.141:22 ... ⠙ EnvInit: user=tidb, host=172.16.12.141
panic: send on closed channel

goroutine 828 [running]:
github.com/appleboy/easyssh-proxy.(*MakeConfig).Stream.func1.1(0xc0007ce700, 0xc000401020, 0xc00097a7e0)
	github.com/appleboy/easyssh-proxy@v1.3.2/easyssh.go:255 +0x7c
created by github.com/appleboy/easyssh-proxy.(*MakeConfig).Stream.func1
	github.com/appleboy/easyssh-proxy@v1.3.2/easyssh.go:253 +0x285
Error: run `/home/tidb/.tiup/components/cluster/v1.3.4/tiup-cluster` (wd:/home/tidb/.tiup/data/SR1Zels) failed: exit status 2
  • Tried a different control machine as well, same result…

I'm planning to set up a few fresh virtual machines and try deploying again.
Thanks for your reply. I'll keep the current environment around for now, so tiup can be tested against it.

The panic comes from a bug in a third-party library that tiup uses; we are still investigating the exact trigger conditions. If the test environment is still available, you can upgrade tiup to the nightly version and use it to deploy the cluster to see whether it goes through smoothly.
PR refs:
https://github.com/appleboy/easyssh-proxy/pull/66
https://github.com/pingcap/tiup/pull/1200
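
A sketch of switching the cluster component to the nightly build, using tiup's component:version syntax; please double-check the flags against your tiup version:

tiup update --nightly cluster       # pull the nightly build of the cluster component
tiup cluster:nightly deploy daddylab-tidb-cluster v4.0.11 complex-tidb-binlog.yaml -p -u tidb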

Upgrading tiup to the nightly version seems to have a problem?

  • Download the latest release directly
[root@tidb-cluster-tidb1 ~]# !curl
curl --proto '=https' --tlsv1.2 -sSf https://tiup-mirrors.pingcap.com/install.sh | sh
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100 8700k  100 8700k    0     0  3496k      0  0:00:02  0:00:02 --:--:-- 3498k
WARN: adding root certificate via internet: https://tiup-mirrors.pingcap.com/root.json
You can revoke this by remove /root/.tiup/bin/7b8e153f2e2d0928.root.json
Set mirror to https://tiup-mirrors.pingcap.com success
Detected shell: bash
Shell profile:  /root/.bash_profile
Installed path: /root/.tiup/bin/tiup
===============================================
Have a try:     tiup playground
===============================================
  • Check tiup and its version
[root@tidb-cluster-tidb1 ~]# which tiup 
/root/.tiup/bin/tiup
[root@tidb-cluster-tidb1 ~]# tiup -v
v1.3.4 tiup
Go Version: go1.13
Git Branch: release-1.3
GitHash: b262d05
[root@tidb-cluster-tidb1 ~]# 
  • Upgrade to the nightly version

[root@tidb-cluster-tidb1 ~]# tiup update  --nightly --self
download https://tiup-mirrors.pingcap.com/tiup-v1.3.4-linux-amd64.tar.gz 8.50 MiB / 8.50 MiB 100.00% 12.90 MiB p/s                                                                                                 
Updated successfully!
[root@tidb-cluster-tidb1 ~]# tiup -v
v1.3.4 tiup
Go Version: go1.13
Git Branch: release-1.3
GitHash: b262d05

Is there something wrong with my command? The GitHash is still the same.

  1. Could you tell us more about the virtual machine environment you are deploying on?
  2. Have you tried deploying without binlog, i.e. just the standard pd, tidb, and tikv components? Does that succeed?

OpenStack KVM.
The base image had been security-hardened, which is why the installation kept failing.

After switching to a VM image without the security hardening, the deployment succeeded.

We have another environment where the cluster was deployed first and hardened afterwards; it has been running normally to this day.

This time the hosts were hardened first and the cluster deployed afterwards, and that is when the problem showed up.
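
If it helps narrow down which hardening change matters, one hedged diagnostic (the settings listed are an assumption, not a confirmed trigger) is to dump the effective sshd configuration on a hardened host and on a stock host and compare the limits that tiup's parallel SSH sessions rely on; <hardened-host> and <stock-host> below are placeholders:

salt -L "<hardened-host>,<stock-host>" cmd.run 'sshd -T | egrep -i "maxsessions|maxstartups|logingracetime|allowtcpforwarding"'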

I didn't think of this angle at first and only discovered it later, so sorry for the trouble…

Coming back to the point: if tiup doesn't already have this, it would be useful for tiup to collect logs during deployment, so problems like this are easier to diagnose.
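
For what it's worth, the tiup-cluster versions used here do keep some traces that may already help with diagnosis (a sketch; exact paths can differ by version):

ls ~/.tiup/logs/                    # tiup-cluster debug logs, e.g. tiup-cluster-debug-*.log
tiup cluster audit                  # list previously executed cluster commands and their audit records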