PD node still Down after scaling the TiDB cluster in and out

[TiDB Environment] Production
[TiDB Version] 7.1
[Reproduction Path] Operations performed before the problem appeared
[Problem: symptoms and impact] The file system filled up, so the PD service on one node could not start. I followed the scale-in/scale-out procedure to replace it, but it still will not start.
[Resource Configuration] Go to TiDB Dashboard - Cluster Info - Hosts and take a screenshot of that page
[Attachments: screenshots/logs/monitoring]

deploy_dir: "/data/software/tidb-7.1.0/tidb-deploy/pd-22379"
data_dir: "/data/software/tidb-data/tidb/tidb-data/pd-22379"
log_dir: "/data/software/tidb-7.1.0/tidb-deploy/pd-22379/log"
tiup cluster scale-out tidb scale-out.yml -p -i /home/root/.ssh/gcp_rsa
[FATAL] [main.go:232] ["run server failed"] [error="[PD:server:ErrCancelStartEtcd]etcd start canceled"] [stack="main.start\n\t/home/jenkins/agent/workspace/build-common/go/src/github.com/pingcap/pd/cmd/pd-server/main.go:232\nmain.createServerWrapper\n\t/home/jenkins/agent/workspace/build-common/go/src/github.com/pingcap/pd/cmd/pd-server/main.go:147\ngithub.com/spf13/cobra.(*Command).execute\n\t/go/pkg/mod/github.com/spf13/cobra@v1.0.0/command.go:846\ngithub.com/spf13/cobra.(*Command).ExecuteC\n\t/go/pkg/mod/github.com/spf13/cobra@v1.0.0/command.go:950\ngithub.com/spf13/cobra.(*Command).Execute\n\t/go/pkg/mod/github.com/spf13/cobra@v1.0.0/command.go:887\nmain.main\n\t/home/jenkins/agent/workspace/build-common/go/src/github.com/pingcap/pd/cmd/pd-server/main.go:56\nruntime.main\n\t/usr/local/go/src/runtime/proc.go:250"]
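The `etcd start canceled` fatal usually means the new PD process could not join the existing etcd cluster. Before retrying, it can help to check what the surviving PDs believe the membership is. A minimal sketch, using the healthy PD at 192.168.209.6 from this thread's topology (adjust the tiup control version and URL to your environment):

```shell
# List the current PD/etcd members as seen by a healthy PD.
tiup ctl:v7.1.0 pd -u http://192.168.209.6:22379 member

# If a stale "pd-1" entry is still registered from before the scale-in,
# remove it explicitly before scaling out again.
tiup ctl:v7.1.0 pd -u http://192.168.209.6:22379 member delete name pd-1
```

These commands only query or modify PD membership metadata; they do not touch any data directories.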

[INFO] [etcd.go:305] ["starting an etcd server"] [etcd-version=3.4.21] [git-sha="Not provided (use ./build instead of go build)"] [go-version=go1.20.3] [go-os=linux] [go-arch=amd64] [max-cpu-set=16] [max-cpu-available=16] [member-initialized=true] [name=pd-1] [data-dir=/data/software/tidb-data/tidb/tidb-data/pd-22379] [wal-dir=] [wal-dir-dedicated=] [member-dir=/data/software/tidb-data/tidb/tidb-data/pd-22379/member] [force-new-cluster=false] [heartbeat-interval=500ms] [election-timeout=3s] [initial-election-tick-advance=true] [snapshot-count=100000] [snapshot-catchup-entries=5000] [initial-advertise-peer-urls="[http://192.168.209.5:22380]"] [listen-peer-urls="[http://0.0.0.0:22380]"] [advertise-client-urls="[http://192.168.209.5:22379]"] [listen-client-urls="[http://0.0.0.0:22379]"] [listen-metrics-urls=""] [cors="[]"] [host-whitelist="[]"] [initial-cluster=] [initial-cluster-state=new] [initial-cluster-token=] [quota-backend-bytes=8589934592] [max-request-bytes=157286400] [max-concurrent-streams=4294967295] [pre-vote=true] [initial-corrupt-check=false] [corrupt-check-time-interval=0s] [auto-compaction-mode=periodic] [auto-compaction-retention=1h0m0s] [auto-compaction-interval=1h0m0s] [discovery-url=] [discovery-proxy=]
[2024/04/11 17:39:02.624 +08:00] [WARN] [server.go:297] ["exceeded recommended request limit"] [max-request-bytes=157286400] [max-request-size="157 MB"] [recommended-request-bytes=10485760] [recommended-request-size="10 MB"]
[2024/04/11 17:39:02.624 +08:00] [INFO] [backend.go:80] ["opened backend db"] [path=/data/software/tidb-data/tidb/tidb-data/pd-22379/member/snap/db] [took=173.215µs]
[2024/04/11 17:39:02.624 +08:00] [INFO] [raft.go:586] ["restarting local member"] [cluster-id=2c0580342200cbf5] [local-member-id=b43ecfd4b44129fc] [commit-index=0]
[2024/04/11 17:39:02.624 +08:00] [INFO] [raft.go:1523] ["b43ecfd4b44129fc switched to configuration voters=()"]
[2024/04/11 17:39:02.624 +08:00] [INFO] [raft.go:706] ["b43ecfd4b44129fc became follower at term 2"]
[2024/04/11 17:39:02.624 +08:00] [INFO] [raft.go:389] ["newRaft b43ecfd4b44129fc [peers: , term: 2, commit: 0, applied: 0, lastindex: 0, lastterm: 0]"]
[2024/04/11 17:39:02.627 +08:00] [WARN] [store.go:1379] ["simple token is not cryptographically signed"]

How many PD nodes does your cluster have configured?

[Resource Configuration] Go to TiDB Dashboard - Cluster Info - Hosts and take a screenshot of that page
Please share a screenshot~

Shouldn't you first scale in the problematic PD node?

Has the scale-in already completed?

Please show the cluster topology and the exact operations you performed.

That alone shouldn't cause this. Were there any other operations involved?

I scaled in and out following the steps below, but the PD service is still in Down status:
[root@host-192-168-209-5 tidb-community-server-v7.1.0-linux-amd64]# cat scale-out.yml
pd_servers:
  - host: 192.168.209.5
    ssh_port: 22
    name: pd-1
    client_port: 22379
    peer_port: 22380
    deploy_dir: /data/software/tidb-7.1.0/tidb-deploy/pd-22379
    data_dir: /data/software/tidb-data/tidb/tidb-data/pd-22379
    log_dir: /data/software/tidb-7.1.0/tidb-deploy/pd-22379/log
[root@host-192-168-209-5 tidb-community-server-v7.1.0-linux-amd64]# vi scale-out.yml
[root@host-192-168-209-5 tidb-community-server-v7.1.0-linux-amd64]# cat scale-out.yml
pd_servers:
  - host: 192.168.209.5
    ssh_port: 22
    name: "pd-1"
    client_port: 22379
    peer_port: 22380
    deploy_dir: "/data/software/tidb-7.1.0/tidb-deploy/pd-22379"
    data_dir: "/data/software/tidb-data/tidb/tidb-data/pd-22379"
    log_dir: "/data/software/tidb-7.1.0/tidb-deploy/pd-22379/log"
[root@host-192-168-209-5 tidb-community-server-v7.1.0-linux-amd64]# tiup cluster scale-out tidb scale-out.yml -p
tiup is checking updates for component cluster ...
Starting component cluster: /root/.tiup/components/cluster/v1.12.2/tiup-cluster scale-out tidb scale-out.yml -p
Input SSH password:
+ Detect CPU Arch Name
  - Detecting node 192.168.209.5 Arch info ... Done
+ Detect CPU OS Name
  - Detecting node 192.168.209.5 OS info ... Done
Please confirm your topology:
Cluster type:    tidb
Cluster name:    tidb
Cluster version: v7.1.0
Role  Host           Ports        OS/Arch       Directories
pd    192.168.209.5  22379/22380  linux/x86_64  /data/software/tidb-7.1.0/tidb-deploy/pd-22379,/data/software/tidb-data/tidb/tidb-data/pd-22379
Attention:
    1. If the topology is not what you expected, check your yaml file.
    2. Please confirm there is no port/directory conflicts in same host.
Do you want to continue? [y/N]: (default=N) y

+ [ Serial ] - SSHKeySet: privateKey=/root/.tiup/storage/cluster/clusters/tidb/ssh/id_rsa, publicKey=/root/.tiup/storage/cluster/clusters/tidb/ssh/id_rsa.pub
+ [Parallel] - UserSSH: user=tidb, host=192.168.209.7
+ [Parallel] - UserSSH: user=tidb, host=192.168.209.6
+ [Parallel] - UserSSH: user=tidb, host=192.168.209.6
+ [Parallel] - UserSSH: user=tidb, host=192.168.209.6
+ [Parallel] - UserSSH: user=tidb, host=192.168.209.7
+ [Parallel] - UserSSH: user=tidb, host=192.168.209.5
+ [Parallel] - UserSSH: user=tidb, host=192.168.209.5
+ [Parallel] - UserSSH: user=tidb, host=192.168.209.5
+ [Parallel] - UserSSH: user=tidb, host=192.168.209.5
+ [Parallel] - UserSSH: user=tidb, host=192.168.209.5
+ [Parallel] - UserSSH: user=tidb, host=192.168.209.7
+ Download TiDB components
  - Download pd:v7.1.0 (linux/amd64) ... Done
+ Initialize target host environments
+ Deploy TiDB instance
  - Deploy instance pd -> 192.168.209.5:22379 ... Done
+ Copy certificate to remote host
+ Generate scale-out config
  - Generate scale-out config pd -> 192.168.209.5:22379 ... Done
+ Init monitor config
Enabling component pd
  Enabling instance 192.168.209.5:22379
  Enable instance 192.168.209.5:22379 success
Enabling component node_exporter
  Enabling instance 192.168.209.5
  Enable 192.168.209.5 success
Enabling component blackbox_exporter
  Enabling instance 192.168.209.5
  Enable 192.168.209.5 success
+ [ Serial ] - Save meta
+ [ Serial ] - Start new instances
Starting component pd
  Starting instance 192.168.209.5:22379
  Start instance 192.168.209.5:22379 success
Starting component node_exporter
  Starting instance 192.168.209.5
  Start 192.168.209.5 success
Starting component blackbox_exporter
  Starting instance 192.168.209.5
  Start 192.168.209.5 success
+ Refresh components conifgs
  - Generate config pd -> 192.168.209.6:22379 ... Done
  - Generate config pd -> 192.168.209.7:22379 ... Done
  - Generate config pd -> 192.168.209.5:22379 ... Done
  - Generate config tikv -> 192.168.209.5:20160 ... Done
  - Generate config tikv -> 192.168.209.6:20160 ... Done
  - Generate config tikv -> 192.168.209.7:20160 ... Done
  - Generate config tidb -> 192.168.209.5:4000 ... Done
  - Generate config tidb -> 192.168.209.6:4000 ... Done
  - Generate config tidb -> 192.168.209.7:4000 ... Done
  - Generate config prometheus -> 192.168.209.5:9090 ... Done
  - Generate config grafana -> 192.168.209.5:3000 ... Done
  - Generate config alertmanager -> 192.168.209.5:9093 ... Done
+ Reload prometheus and grafana
  - Reload prometheus -> 192.168.209.5:9090 ... Done
  - Reload grafana -> 192.168.209.5:3000 ... Done
+ [ Serial ] - UpdateTopology: cluster=tidb
Scaled cluster tidb out successfully
[root@host-192-168-209-5 tidb-community-server-v7.1.0-linux-amd64]# tiup cluster display tidb
tiup is checking updates for component cluster ...
Starting component cluster: /root/.tiup/components/cluster/v1.12.2/tiup-cluster display tidb
Cluster type: tidb
Cluster name: tidb
Cluster version: v7.1.0
Deploy user: tidb
SSH type: builtin
Dashboard URL: http://192.168.209.7:22379/dashboard
Grafana URL: http://192.168.209.5:3000
ID Role Host Ports OS/Arch Status Data Dir Deploy Dir

192.168.209.5:9093 alertmanager 192.168.209.5 9093/9094 linux/x86_64 Up /data/software/tidb-data/tidb/tidb-data/alertmanager-9093 /data/software/tidb-7.1.0/tidb-deploy/alertmanager-9093
192.168.209.5:3000 grafana 192.168.209.5 3000 linux/x86_64 Up - /data/software/tidb-7.1.0/tidb-deploy/grafana-3000
192.168.209.5:22379 pd 192.168.209.5 22379/22380 linux/x86_64 Down /data/software/tidb-data/tidb/tidb-data/pd-22379 /data/software/tidb-7.1.0/tidb-deploy/pd-22379
192.168.209.6:22379 pd 192.168.209.6 22379/22380 linux/x86_64 Up /data/software/tidb-data/tidb/tidb-data/pd-22379 /data/software/tidb-7.1.0/tidb-deploy/pd-22379
192.168.209.7:22379 pd 192.168.209.7 22379/22380 linux/x86_64 Up|L|UI /data/software/tidb-data/tidb/tidb-data/pd-22379 /data/software/tidb-7.1.0/tidb-deploy/pd-22379
192.168.209.5:9090 prometheus 192.168.209.5 9090/12020 linux/x86_64 Up /data/software/tidb-data/tidb/tidb-data/prometheus-8249 /data/software/tidb-7.1.0/tidb-deploy/prometheus-8249
192.168.209.5:4000 tidb 192.168.209.5 4000/10080 linux/x86_64 Up - /data/software/tidb-7.1.0/tidb-deploy/tidb-4000
192.168.209.6:4000 tidb 192.168.209.6 4000/10080 linux/x86_64 Up - /data/software/tidb-7.1.0/tidb-deploy/tidb-4000
192.168.209.7:4000 tidb 192.168.209.7 4000/10080 linux/x86_64 Up - /data/software/tidb-7.1.0/tidb-deploy/tidb-4000
192.168.209.5:20160 tikv 192.168.209.5 20160/20180 linux/x86_64 Up /data/software/tidb-data/tidb/tidb-data/tikv-20160 /data/software/tidb-7.1.0/tidb-deploy/tikv-20160
192.168.209.6:20160 tikv 192.168.209.6 20160/20180 linux/x86_64 Up /data/software/tidb-data/tidb/tidb-data/tikv-20160 /data/software/tidb-7.1.0/tidb-deploy/tikv-20160
192.168.209.7:20160 tikv 192.168.209.7 20160/20180 linux/x86_64 Up /data/software/tidb-data/tidb/tidb-data/tikv-20160 /data/software/tidb-7.1.0/tidb-deploy/tikv-20160
Total nodes: 12
[root@host-192-168-209-5 tidb-community-server-v7.1.0-linux-amd64]# tiup cluster start tidb -N 192.168.209.5:22379
tiup is checking updates for component cluster …
Starting component cluster: /root/.tiup/components/cluster/v1.12.2/tiup-cluster start tidb -N 192.168.209.5:22379
Starting cluster tidb…

+ [ Serial ] - SSHKeySet: privateKey=/root/.tiup/storage/cluster/clusters/tidb/ssh/id_rsa, publicKey=/root/.tiup/storage/cluster/clusters/tidb/ssh/id_rsa.pub
+ [Parallel] - UserSSH: user=tidb, host=192.168.209.6
+ [Parallel] - UserSSH: user=tidb, host=192.168.209.6
+ [Parallel] - UserSSH: user=tidb, host=192.168.209.7
+ [Parallel] - UserSSH: user=tidb, host=192.168.209.5
+ [Parallel] - UserSSH: user=tidb, host=192.168.209.7
+ [Parallel] - UserSSH: user=tidb, host=192.168.209.5
+ [Parallel] - UserSSH: user=tidb, host=192.168.209.5
+ [Parallel] - UserSSH: user=tidb, host=192.168.209.6
+ [Parallel] - UserSSH: user=tidb, host=192.168.209.5
+ [Parallel] - UserSSH: user=tidb, host=192.168.209.5
+ [Parallel] - UserSSH: user=tidb, host=192.168.209.7
+ [Parallel] - UserSSH: user=tidb, host=192.168.209.5
+ [ Serial ] - StartCluster
Starting component pd
  Starting instance 192.168.209.5:22379
  Start instance 192.168.209.5:22379 success
Starting component node_exporter
  Starting instance 192.168.209.5
  Start 192.168.209.5 success
Starting component blackbox_exporter
  Starting instance 192.168.209.5
  Start 192.168.209.5 success
+ [ Serial ] - UpdateTopology: cluster=tidb
Started cluster tidb successfully
[root@host-192-168-209-5 tidb-community-server-v7.1.0-linux-amd64]# tiup cluster display tidb
tiup is checking updates for component cluster ...
Starting component cluster: /root/.tiup/components/cluster/v1.12.2/tiup-cluster display tidb
Cluster type: tidb
Cluster name: tidb
Cluster version: v7.1.0
Deploy user: tidb
SSH type: builtin
Dashboard URL: http://192.168.209.7:22379/dashboard
Grafana URL: http://192.168.209.5:3000
ID Role Host Ports OS/Arch Status Data Dir Deploy Dir

192.168.209.5:9093 alertmanager 192.168.209.5 9093/9094 linux/x86_64 Up /data/software/tidb-data/tidb/tidb-data/alertmanager-9093 /data/software/tidb-7.1.0/tidb-deploy/alertmanager-9093
192.168.209.5:3000 grafana 192.168.209.5 3000 linux/x86_64 Up - /data/software/tidb-7.1.0/tidb-deploy/grafana-3000
192.168.209.5:22379 pd 192.168.209.5 22379/22380 linux/x86_64 Down /data/software/tidb-data/tidb/tidb-data/pd-22379 /data/software/tidb-7.1.0/tidb-deploy/pd-22379
192.168.209.6:22379 pd 192.168.209.6 22379/22380 linux/x86_64 Up /data/software/tidb-data/tidb/tidb-data/pd-22379 /data/software/tidb-7.1.0/tidb-deploy/pd-22379
192.168.209.7:22379 pd 192.168.209.7 22379/22380 linux/x86_64 Up|L|UI /data/software/tidb-data/tidb/tidb-data/pd-22379 /data/software/tidb-7.1.0/tidb-deploy/pd-22379
192.168.209.5:9090 prometheus 192.168.209.5 9090/12020 linux/x86_64 Up /data/software/tidb-data/tidb/tidb-data/prometheus-8249 /data/software/tidb-7.1.0/tidb-deploy/prometheus-8249
192.168.209.5:4000 tidb 192.168.209.5 4000/10080 linux/x86_64 Up - /data/software/tidb-7.1.0/tidb-deploy/tidb-4000
192.168.209.6:4000 tidb 192.168.209.6 4000/10080 linux/x86_64 Up - /data/software/tidb-7.1.0/tidb-deploy/tidb-4000
192.168.209.7:4000 tidb 192.168.209.7 4000/10080 linux/x86_64 Up - /data/software/tidb-7.1.0/tidb-deploy/tidb-4000
192.168.209.5:20160 tikv 192.168.209.5 20160/20180 linux/x86_64 Up /data/software/tidb-data/tidb/tidb-data/tikv-20160 /data/software/tidb-7.1.0/tidb-deploy/tikv-20160
192.168.209.6:20160 tikv 192.168.209.6 20160/20180 linux/x86_64 Up /data/software/tidb-data/tidb/tidb-data/tikv-20160 /data/software/tidb-7.1.0/tidb-deploy/tikv-20160
192.168.209.7:20160 tikv 192.168.209.7 20160/20180 linux/x86_64 Up /data/software/tidb-data/tidb/tidb-data/tikv-20160 /data/software/tidb-7.1.0/tidb-deploy/tikv-20160
Total nodes: 12
[root@host-192-168-209-5 tidb-community-server-v7.1.0-linux-amd64]#
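The display output shows the node was re-deployed and "started", yet it immediately shows Down again, which suggests the previous membership state was not fully cleaned up. A hedged recovery sketch, assuming this thread's topology (hostnames, ports, and paths are taken from the post; `--force` removes the node from tiup metadata even when it is unreachable, and deleting the old data_dir discards the stale etcd state left over from the disk-full incident, so only run it on the broken node):

```shell
# 1. Scale in the broken PD and purge its metadata.
tiup cluster scale-in tidb -N 192.168.209.5:22379 --force
tiup cluster prune tidb

# 2. Verify the member really left the etcd cluster before re-adding it.
tiup ctl:v7.1.0 pd -u http://192.168.209.6:22379 member

# 3. On 192.168.209.5, remove the leftover data directory, then scale out.
rm -rf /data/software/tidb-data/tidb/tidb-data/pd-22379
tiup cluster scale-out tidb scale-out.yml -p
```

If step 2 still lists the old member, delete it with pd-ctl (`member delete name pd-1`) before step 3, otherwise the new instance will collide with the stale registration.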

The error log is as follows:

[root@host-192-168-209-5 log]# tail -n 100 pd.log
[2024/04/17 16:31:54.657 +08:00] [INFO] [versioninfo.go:89] [“Welcome to Placement Driver (PD)”]
[2024/04/17 16:31:54.657 +08:00] [INFO] [versioninfo.go:90] [PD] [release-version=v7.1.0]
[2024/04/17 16:31:54.657 +08:00] [INFO] [versioninfo.go:91] [PD] [edition=Community]
[2024/04/17 16:31:54.657 +08:00] [INFO] [versioninfo.go:92] [PD] [git-hash=1ff614d90412396c9ebaad76a30d31e683c34adc]
[2024/04/17 16:31:54.657 +08:00] [INFO] [versioninfo.go:93] [PD] [git-branch=heads/refs/tags/v7.1.0]
[2024/04/17 16:31:54.657 +08:00] [INFO] [versioninfo.go:94] [PD] [utc-build-time=“2023-05-25 02:10:43”]
[2024/04/17 16:31:54.657 +08:00] [INFO] [metricutil.go:86] [“disable Prometheus push client”]
[2024/04/17 16:31:54.657 +08:00] [INFO] [join.go:218] [“failed to open directory, maybe start for the first time”] [error=“open /data/software/tidb-data/tidb/tidb-data/pd-22379/member: no such file or directory”]
[2024/04/17 16:31:54.667 +08:00] [INFO] [server.go:242] [“PD config”] [config=“{"client-urls":"http://0.0.0.0:22379","peer-urls":"http://0.0.0.0:22380","advertise-client-urls":"http://192.168.209.5:22379","advertise-peer-urls":"http://192.168.209.5:22380","name":"pd-1","data-dir":"/data/software/tidb-data/tidb/tidb-data/pd-22379","force-new-cluster":false,"enable-grpc-gateway":true,"initial-cluster":"pd-192.168.209.7-22379=http://192.168.209.7:22380,pd-1=http://192.168.209.5:22380,pd-192.168.209.6-22379=http://192.168.209.6:22380","initial-cluster-state":"existing","initial-cluster-token":"pd-cluster","join":"http://192.168.209.6:22379,http://192.168.209.7:22379","lease":3,"log":{"level":"info","format":"text","disable-timestamp":false,"file":{"filename":"/data/software/tidb-7.1.0/tidb-deploy/pd-22379/log/pd.log","max-size":0,"max-days":0,"max-backups":0},"development":false,"disable-caller":false,"disable-stacktrace":false,"disable-error-verbose":true,"sampling":null,"error-output-path":""},"tso-save-interval":"3s","tso-update-physical-interval":"50ms","enable-local-tso":false,"metric":{"job":"pd-1","address":"","interval":"15s"},"schedule":{"max-snapshot-count":64,"max-pending-peer-count":64,"max-merge-region-size":20,"max-merge-region-keys":0,"split-merge-interval":"1h0m0s","swtich-witness-interval":"1h0m0s","enable-one-way-merge":"false","enable-cross-table-merge":"true","patrol-region-interval":"10ms","max-store-down-time":"30m0s","max-store-preparing-time":"48h0m0s","leader-schedule-limit":4,"leader-schedule-policy":"count","region-schedule-limit":2048,"witness-schedule-limit":4,"replica-schedule-limit":64,"merge-schedule-limit":8,"hot-region-schedule-limit":4,"hot-region-cache-hits-threshold":3,"store-limit":{},"tolerant-size-ratio":0,"low-space-ratio":0.8,"high-space-ratio":0.7,"region-score-formula-version":"v2","scheduler-max-waiting-operator":5,"enable-remove-down-replica":"true","enable-replace-offline-replica":"true","enable-make-up-replica":"true","ena
ble-remove-extra-replica":"true","enable-location-replacement":"true","enable-debug-metrics":"false","enable-joint-consensus":"true","enable-tikv-split-region":"true","schedulers-v2":[{"type":"balance-region","args":null,"disable":false,"args-payload":""},{"type":"balance-leader","args":null,"disable":false,"args-payload":""},{"type":"balance-witness","args":null,"disable":false,"args-payload":""},{"type":"hot-region","args":null,"disable":false,"args-payload":""},{"type":"transfer-witness-leader","args":null,"disable":false,"args-payload":""}],"schedulers-payload":null,"store-limit-mode":"manual","hot-regions-write-interval":"10m0s","hot-regions-reserved-days":7,"enable-diagnostic":"true","enable-witness":"false","slow-store-evicting-affected-store-ratio-threshold":0.3,"store-limit-version":"v1"},"replication":{"max-replicas":3,"location-labels":"","strictly-match-label":"false","enable-placement-rules":"true","enable-placement-rules-cache":"false","isolation-level":""},"pd-server":{"use-region-storage":"true","max-gap-reset-ts":"24h0m0s","key-type":"table","runtime-services":"","metric-storage":"","dashboard-address":"auto","trace-region-flow":"true","flow-round-by-digit":3,"min-resolved-ts-persistence-interval":"1s","server-memory-limit":0,"server-memory-limit-gc-trigger":0.7,"enable-gogc-tuner":"false","gc-tuner-threshold":0.6},"cluster-version":"0.0.0","labels":{},"quota-backend-bytes":"8GiB","auto-compaction-mode":"periodic","auto-compaction-retention-v2":"1h","TickInterval":"500ms","ElectionInterval":"3s","PreVote":true,"max-request-bytes":157286400,"security":{"cacert-path":"","cert-path":"","key-path":"","cert-allowed-cn":null,"SSLCABytes":null,"SSLCertBytes":null,"SSLKEYBytes":null,"redact-info-log":false,"encryption":{"data-encryption-method":"plaintext","data-key-rotation-period":"168h0m0s","master-key":{"type":"plaintext","key-id":"","region":"","endpoint":"","path":""}}},"label-property":null,"WarningMsgs":null,"DisableStrictReconfigCheck":false,"Heart
beatStreamBindInterval":"1m0s","LeaderPriorityCheckInterval":"1m0s","dashboard":{"tidb-cacert-path":"","tidb-cert-path":"","tidb-key-path":"","public-path-prefix":"","internal-proxy":false,"enable-telemetry":false,"enable-experimental":false},"replication-mode":{"replication-mode":"majority","dr-auto-sync":{"label-key":"","primary":"","dr":"","primary-replicas":0,"dr-replicas":0,"wait-store-timeout":"1m0s","pause-region-split":"false"}},"keyspace":{"pre-alloc":null},"controller":{"degraded-mode-wait-duration":"0s","request-unit":{"read-base-cost":0.25,"read-cost-per-byte":0.0000152587890625,"write-base-cost":1,"write-cost-per-byte":0.0009765625,"read-cpu-ms-cost":0.3333333333333333}}}”]
[2024/04/17 16:31:54.673 +08:00] [INFO] [apiutil.go:378] [“register REST path”] [path=/pd/api/v1]
[2024/04/17 16:31:54.673 +08:00] [INFO] [apiutil.go:378] [“register REST path”] [path=/pd/api/v2/]
[2024/04/17 16:31:54.673 +08:00] [INFO] [apiutil.go:378] [“register REST path”] [path=/swagger/]
[2024/04/17 16:31:54.673 +08:00] [INFO] [apiutil.go:378] [“register REST path”] [path=/autoscaling]
[2024/04/17 16:31:54.673 +08:00] [INFO] [distro.go:51] [“using distribution strings”] [strings={}]
[2024/04/17 16:31:54.674 +08:00] [INFO] [apiutil.go:378] [“register REST path”] [path=/dashboard/api/]
[2024/04/17 16:31:54.674 +08:00] [INFO] [apiutil.go:378] [“register REST path”] [path=/dashboard/]
[2024/04/17 16:31:54.674 +08:00] [INFO] [apiutil.go:378] [“register REST path”] [path=/resource-manager/api/v1/]
[2024/04/17 16:31:54.674 +08:00] [INFO] [registry.go:92] [“restful API service registered successfully”] [prefix=pd-1] [service-name=ResourceManager]
[2024/04/17 16:31:54.674 +08:00] [INFO] [registry.go:92] [“restful API service registered successfully”] [prefix=pd-1] [service-name=MetaStorage]
[2024/04/17 16:31:54.675 +08:00] [INFO] [etcd.go:117] [“configuring peer listeners”] [listen-peer-urls=“[http://0.0.0.0:22380]”]
[2024/04/17 16:31:54.675 +08:00] [INFO] [systimemon.go:30] [“start system time monitor”]
[2024/04/17 16:31:54.675 +08:00] [INFO] [etcd.go:127] [“configuring client listeners”] [listen-client-urls=“[http://0.0.0.0:22379]”]
[2024/04/17 16:31:54.675 +08:00] [INFO] [etcd.go:611] [“pprof is enabled”] [path=/debug/pprof]
[2024/04/17 16:31:54.675 +08:00] [INFO] [etcd.go:305] [“starting an etcd server”] [etcd-version=3.4.21] [git-sha=“Not provided (use ./build instead of go build)”] [go-version=go1.20.3] [go-os=linux] [go-arch=amd64] [max-cpu-set=16] [max-cpu-available=16] [member-initialized=false] [name=pd-1] [data-dir=/data/software/tidb-data/tidb/tidb-data/pd-22379] [wal-dir=] [wal-dir-dedicated=] [member-dir=/data/software/tidb-data/tidb/tidb-data/pd-22379/member] [force-new-cluster=false] [heartbeat-interval=500ms] [election-timeout=3s] [initial-election-tick-advance=true] [snapshot-count=100000] [snapshot-catchup-entries=5000] [initial-advertise-peer-urls=“[http://192.168.209.5:22380]”] [listen-peer-urls=“[http://0.0.0.0:22380]”] [advertise-client-urls=“[http://192.168.209.5:22379]”] [listen-client-urls=“[http://0.0.0.0:22379]”] [listen-metrics-urls=“”] [cors=“[]“] [host-whitelist=”[]”] [initial-cluster=“pd-192.168.209.6-22379=http://192.168.209.6:22380,pd-192.168.209.7-22379=http://192.168.209.7:22380,pd-1=http://192.168.209.5:22380”] [initial-cluster-state=existing] [initial-cluster-token=pd-cluster] [quota-backend-bytes=8589934592] [max-request-bytes=157286400] [max-concurrent-streams=4294967295] [pre-vote=true] [initial-corrupt-check=false] [corrupt-check-time-interval=0s] [auto-compaction-mode=periodic] [auto-compaction-retention=1h0m0s] [auto-compaction-interval=1h0m0s] [discovery-url=] [discovery-proxy=]
[2024/04/17 16:31:54.675 +08:00] [WARN] [server.go:297] [“exceeded recommended request limit”] [max-request-bytes=157286400] [max-request-size=“157 MB”] [recommended-request-bytes=10485760] [recommended-request-size=“10 MB”]
[2024/04/17 16:31:54.680 +08:00] [INFO] [backend.go:80] [“opened backend db”] [path=/data/software/tidb-data/tidb/tidb-data/pd-22379/member/snap/db] [took=5.299165ms]
[2024/04/17 16:31:54.691 +08:00] [INFO] [raft.go:536] [“starting local member”] [local-member-id=454254c164d8c6cf] [cluster-id=2c0580342200cbf5]
[2024/04/17 16:31:54.691 +08:00] [INFO] [raft.go:1523] [“454254c164d8c6cf switched to configuration voters=()”]
[2024/04/17 16:31:54.691 +08:00] [INFO] [raft.go:706] [“454254c164d8c6cf became follower at term 0”]
[2024/04/17 16:31:54.691 +08:00] [INFO] [raft.go:389] [“newRaft 454254c164d8c6cf [peers: , term: 0, commit: 0, applied: 0, lastindex: 0, lastterm: 0]”]
[2024/04/17 16:31:54.696 +08:00] [WARN] [store.go:1379] [“simple token is not cryptographically signed”]
[2024/04/17 16:31:54.702 +08:00] [INFO] [quota.go:126] [“enabled backend quota”] [quota-name=v3-applier] [quota-size-bytes=8589934592] [quota-size=“8.6 GB”]
[2024/04/17 16:31:54.705 +08:00] [INFO] [pipeline.go:71] [“started HTTP pipelining with remote peer”] [local-member-id=454254c164d8c6cf] [remote-peer-id=359d5f2f171f90a4]
[2024/04/17 16:31:54.705 +08:00] [INFO] [transport.go:294] [“added new remote peer”] [local-member-id=454254c164d8c6cf] [remote-peer-id=359d5f2f171f90a4] [remote-peer-urls=“[http://192.168.209.7:22380]”]
[2024/04/17 16:31:54.705 +08:00] [INFO] [pipeline.go:71] [“started HTTP pipelining with remote peer”] [local-member-id=454254c164d8c6cf] [remote-peer-id=6abe5923309025a4]
[2024/04/17 16:31:54.705 +08:00] [INFO] [transport.go:294] [“added new remote peer”] [local-member-id=454254c164d8c6cf] [remote-peer-id=6abe5923309025a4] [remote-peer-urls=“[http://192.168.209.6:22380]”]
[2024/04/17 16:31:54.705 +08:00] [INFO] [peer.go:128] [“starting remote peer”] [remote-peer-id=359d5f2f171f90a4]
[2024/04/17 16:31:54.705 +08:00] [INFO] [pipeline.go:71] [“started HTTP pipelining with remote peer”] [local-member-id=454254c164d8c6cf] [remote-peer-id=359d5f2f171f90a4]
[2024/04/17 16:31:54.705 +08:00] [INFO] [stream.go:166] [“started stream writer with remote peer”] [local-member-id=454254c164d8c6cf] [remote-peer-id=359d5f2f171f90a4]
[2024/04/17 16:31:54.706 +08:00] [INFO] [peer.go:134] [“started remote peer”] [remote-peer-id=359d5f2f171f90a4]
[2024/04/17 16:31:54.706 +08:00] [INFO] [transport.go:327] [“added remote peer”] [local-member-id=454254c164d8c6cf] [remote-peer-id=359d5f2f171f90a4] [remote-peer-urls=“[http://192.168.209.7:22380]”]
[2024/04/17 16:31:54.707 +08:00] [INFO] [peer.go:128] [“starting remote peer”] [remote-peer-id=6abe5923309025a4]
[2024/04/17 16:31:54.707 +08:00] [INFO] [pipeline.go:71] [“started HTTP pipelining with remote peer”] [local-member-id=454254c164d8c6cf] [remote-peer-id=6abe5923309025a4]
[2024/04/17 16:31:54.707 +08:00] [INFO] [stream.go:406] [“started stream reader with remote peer”] [stream-reader-type=“stream Message”] [local-member-id=454254c164d8c6cf] [remote-peer-id=359d5f2f171f90a4]
[2024/04/17 16:31:54.705 +08:00] [INFO] [stream.go:166] [“started stream writer with remote peer”] [local-member-id=454254c164d8c6cf] [remote-peer-id=359d5f2f171f90a4]
[2024/04/17 16:31:54.708 +08:00] [INFO] [stream.go:406] [“started stream reader with remote peer”] [stream-reader-type=“stream MsgApp v2”] [local-member-id=454254c164d8c6cf] [remote-peer-id=359d5f2f171f90a4]
[2024/04/17 16:31:54.708 +08:00] [INFO] [stream.go:166] [“started stream writer with remote peer”] [local-member-id=454254c164d8c6cf] [remote-peer-id=6abe5923309025a4]
[2024/04/17 16:31:54.708 +08:00] [INFO] [stream.go:166] [“started stream writer with remote peer”] [local-member-id=454254c164d8c6cf] [remote-peer-id=6abe5923309025a4]
[2024/04/17 16:31:54.709 +08:00] [INFO] [peer.go:134] [“started remote peer”] [remote-peer-id=6abe5923309025a4]
[2024/04/17 16:31:54.709 +08:00] [INFO] [stream.go:406] [“started stream reader with remote peer”] [stream-reader-type=“stream MsgApp v2”] [local-member-id=454254c164d8c6cf] [remote-peer-id=6abe5923309025a4]
[2024/04/17 16:31:54.709 +08:00] [INFO] [transport.go:327] [“added remote peer”] [local-member-id=454254c164d8c6cf] [remote-peer-id=6abe5923309025a4] [remote-peer-urls=“[http://192.168.209.6:22380]”]
[2024/04/17 16:31:54.709 +08:00] [INFO] [server.go:816] [“starting etcd server”] [local-member-id=454254c164d8c6cf] [local-server-version=3.4.21] [cluster-version=to_be_decided]
[2024/04/17 16:31:54.709 +08:00] [INFO] [server.go:704] [“starting initial election tick advance”] [election-ticks=6]
[2024/04/17 16:31:54.710 +08:00] [INFO] [peer_status.go:51] [“peer became active”] [peer-id=359d5f2f171f90a4]
[2024/04/17 16:31:54.710 +08:00] [INFO] [stream.go:425] [“established TCP streaming connection with remote peer”] [stream-reader-type=“stream Message”] [local-member-id=454254c164d8c6cf] [remote-peer-id=359d5f2f171f90a4]
[2024/04/17 16:31:54.710 +08:00] [INFO] [stream.go:425] [“established TCP streaming connection with remote peer”] [stream-reader-type=“stream MsgApp v2”] [local-member-id=454254c164d8c6cf] [remote-peer-id=359d5f2f171f90a4]
[2024/04/17 16:31:54.712 +08:00] [INFO] [etcd.go:247] [“now serving peer/client/metrics”] [local-member-id=454254c164d8c6cf] [initial-advertise-peer-urls=“[http://192.168.209.5:22380]”] [listen-peer-urls=“[http://0.0.0.0:22380]”] [advertise-client-urls=“[http://192.168.209.5:22379]”] [listen-client-urls=“[http://0.0.0.0:22379]”] [listen-metrics-urls=“”]
[2024/04/17 16:31:54.713 +08:00] [INFO] [stream.go:406] [“started stream reader with remote peer”] [stream-reader-type=“stream Message”] [local-member-id=454254c164d8c6cf] [remote-peer-id=6abe5923309025a4]
[2024/04/17 16:31:54.713 +08:00] [INFO] [etcd.go:585] [“serving peer traffic”] [address=“[::]:22380”]
[2024/04/17 16:31:54.714 +08:00] [INFO] [peer_status.go:51] [“peer became active”] [peer-id=6abe5923309025a4]
[2024/04/17 16:31:54.714 +08:00] [INFO] [stream.go:425] [“established TCP streaming connection with remote peer”] [stream-reader-type=“stream MsgApp v2”] [local-member-id=454254c164d8c6cf] [remote-peer-id=6abe5923309025a4]
[2024/04/17 16:31:54.714 +08:00] [INFO] [stream.go:425] [“established TCP streaming connection with remote peer”] [stream-reader-type=“stream Message”] [local-member-id=454254c164d8c6cf] [remote-peer-id=6abe5923309025a4]
[2024/04/17 16:31:54.759 +08:00] [INFO] [server.go:729] [“initialized peer connections; fast-forwarding election ticks”] [local-member-id=454254c164d8c6cf] [forward-ticks=4] [forward-duration=2s] [election-ticks=6] [election-timeout=3s] [active-remote-members=2]
[2024/04/17 16:31:55.046 +08:00] [INFO] [raft.go:865] [“454254c164d8c6cf [term: 0] received a MsgHeartbeat message with higher term from 359d5f2f171f90a4 [term: 4]”]
[2024/04/17 16:31:55.046 +08:00] [INFO] [raft.go:706] [“454254c164d8c6cf became follower at term 4”]
[2024/04/17 16:31:55.046 +08:00] [INFO] [node.go:327] [“raft.node: 454254c164d8c6cf elected leader 359d5f2f171f90a4 at term 4”]
[2024/04/17 16:32:05.711 +08:00] [WARN] [server.go:2098] [“failed to publish local member to cluster through raft”] [local-member-id=454254c164d8c6cf] [local-member-attributes=“{Name:pd-1 ClientURLs:[http://192.168.209.5:22379]}”] [request-path=/0/members/454254c164d8c6cf/attributes] [publish-timeout=11s] [error=“etcdserver: request timed out, possibly due to connection lost”]
[2024/04/17 16:32:16.711 +08:00] [WARN] [server.go:2098] [“failed to publish local member to cluster through raft”] [local-member-id=454254c164d8c6cf] [local-member-attributes=“{Name:pd-1 ClientURLs:[http://192.168.209.5:22379]}”] [request-path=/0/members/454254c164d8c6cf/attributes] [publish-timeout=11s] [error=“etcdserver: request timed out”]
(the same “failed to publish local member to cluster through raft” WARN repeats every ~11 s, from 16:32:27 through 16:36:51)
[2024/04/17 16:36:54.675 +08:00] [FATAL] [main.go:232] [“run server failed”] [error=“[PD:server:ErrCancelStartEtcd]etcd start canceled”] [stack=“main.start\n\t/home/jenkins/agent/workspace/build-common/go/src/github.com/pingcap/pd/cmd/pd-server/main.go:232\nmain.createServerWrapper\n\t/home/jenkins/agent/workspace/build-common/go/src/github.com/pingcap/pd/cmd/pd-server/main.go:147\ngithub.com/spf13/cobra.(*Command).execute\n\t/go/pkg/mod/github.com/spf13/cobra@v1.0.0/command.go:846\ngithub.com/spf13/cobra.(*Command).ExecuteC\n\t/go/pkg/mod/github.com/spf13/cobra@v1.0.0/command.go:950\ngithub.com/spf13/cobra.(*Command).Execute\n\t/go/pkg/mod/github.com/spf13/cobra@v1.0.0/command.go:887\nmain.main\n\t/home/jenkins/agent/workspace/build-common/go/src/github.com/pingcap/pd/cmd/pd-server/main.go:56\nruntime.main\n\t/usr/local/go/src/runtime/proc.go:250”]
[root@host-192-168-209-5 log]#

Bumping this myself.

There are a lot of errors like this — have you looked into the root cause?

Is the firewall (or anything similar) disabled on the server hosting the new PD node? Check for network-related causes.

Was the directory deleted?

That's a lot of logs, hard to read through them all — but I'm still curious how this problem finally gets resolved.

Did you find the cause in the end? I ran into this once before too.

How many PD nodes do you have, and did you scale in first or scale out first? My feeling is that if you only have 3 PDs, scaling one in first may cause other problems. tidb-server can be scaled down to a single instance without issue, but the official recommendation for PD is at least 3.
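To answer that, the current PD membership can be inspected before doing anything else. A minimal sketch, assuming the cluster name is `tidb` (as in the `scale-out` command above) and using the PD client address from the logs:

```shell
# PD nodes and their status as TiUP sees them
tiup cluster display tidb -R pd

# etcd/PD member list via pd-ctl (use a PD that is still reachable)
tiup ctl:v7.1.0 pd -u http://192.168.209.5:22379 member

# per-member health report
tiup ctl:v7.1.0 pd -u http://192.168.209.5:22379 health
```

If `member` still lists the old PD's ID after the scale-in, the etcd membership and the TiUP topology have diverged, which would explain the "failed to publish local member" loop above.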

Is the file system on 192.168.209.5:22379 damaged? What exactly did you do?
Please describe the scale-out and scale-in steps in detail.

For tidb-server, a single instance is enough; for TiKV and PD you should keep at least 3.
If the scale-in isn't purely to save resources, the recommended order is to scale out a new server first, then scale in the problematic one.
For example, if one PD has a problem, first scale out a new PD server,
wait until all node statuses are back to normal, and then scale in the faulty node.
After every PD scale-out/scale-in, refresh the configuration with:
tiup cluster reload <cluster-name> --skip-restart
Reference:
https://docs.pingcap.com/zh/tidb/stable/scale-tidb-using-tiup#1-查看节点-id-信息
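The replace-then-remove procedure above can be sketched as the following command sequence (cluster name `tidb` and key path taken from the original post; the scale-out topology file and the node ID to remove are assumptions — substitute your own):

```shell
# 1. Scale out a replacement PD node; scale-out.yml contains only the new pd_servers entry
tiup cluster scale-out tidb scale-out.yml -p -i /home/root/.ssh/gcp_rsa

# 2. Wait until the new PD shows "Up" before touching the faulty one
tiup cluster display tidb -R pd

# 3. Scale in the faulty PD node (node ID is host:port as shown by display)
tiup cluster scale-in tidb --node 192.168.209.5:22379

# 4. Refresh the configuration on the remaining nodes without restarting them
tiup cluster reload tidb --skip-restart
```

Step 2 matters: removing a PD while the cluster only has the bare quorum minimum risks losing etcd quorum entirely.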

Did the scale-out/scale-in succeed?

Strangely enough, after a while the PD node's Down status recovered on its own.