Upgrading from V5 to V7: upgrade interrupted because node_exporter failed to start

【TiDB Environment】Production
【TiDB Version】Upgrading from V5 to V7.1.2
【Reproduction Path】Reproducible with either a tiup upgrade of the cluster or tiup restart -R prometheus
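
For reference, the full TiUP commands behind that reproduction path look roughly like this (the cluster name is a placeholder, matching the masked name used later in this post):

tiup cluster upgrade xxxxx v7.1.2
tiup cluster restart xxxxx -R prometheus
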
【Problem: Symptoms and Impact】

1. During the cluster upgrade, essentially all components upgraded successfully; at the final step of restarting node_exporter, one node failed and the upgrade exited.
2. The dashboard shows the cluster version and the tidb, pd, and tikv versions as V7.
3. tiup display still shows the cluster version as V5.
4. Repeated attempts at tiup restart -R prometheus all time out while restarting node_exporter.

Error: failed to start: failed to start: a.b.c.x7 node_exporter-9100.service, please check the instance's log() for more detail.: timed out waiting for port 9100 to be started after 2m0s

5. The final error from the upgrade was as follows:

Upgrading component tidb
        Restarting instance a.b.c.x1:4000
        Restart instance a.b.c.x1:4000 success
        Restarting instance a.b.c.x2:4000
        Restart instance a.b.c.x2:4000 success
        Restarting instance a.b.c.x1:4071
        Restart instance a.b.c.x1:4071 success
Upgrading component prometheus
        Restarting instance a.b.c.x7:9090
        Restart instance a.b.c.x7:9090 success
Upgrading component grafana
        Restarting instance a.b.c.x7:3000
        Restart instance a.b.c.x7:3000 success
Upgrading component alertmanager
        Restarting instance a.b.c.x7:9093
        Restart instance a.b.c.x7:9093 success
Stopping component node_exporter
        Stopping instance a.b.c.x1
        Stopping instance a.b.c.x1
        Stopping instance a.b.c.x2
        Stopping instance a.b.c.x4
        Stopping instance a.b.c.x7
        Stopping instance a.b.c.x0
        Stopping instance a.b.c.x6
        Stopping instance a.b.c.x5
        Stopping instance a.b.c.x8
        Stopping instance a.b.c.x3
        Stop a.b.c.x3 success
        Stop a.b.c.x5 success
        Stop a.b.c.x8 success
        Stop a.b.c.x7 success
        Stop a.b.c.x6 success
        Stop a.b.c.x2 success
        Stop a.b.c.x1 success
        Stop a.b.c.x4 success
        Stop a.b.c.x1 success
        Stop a.b.c.x0 success
Stopping component blackbox_exporter
        Stopping instance a.b.c.x1
        Stopping instance a.b.c.x2
        Stopping instance a.b.c.x5
        Stopping instance a.b.c.x0
        Stopping instance a.b.c.x1
        Stopping instance a.b.c.x7
        Stopping instance a.b.c.x3
        Stopping instance a.b.c.x4
        Stopping instance a.b.c.x8
        Stopping instance a.b.c.x6
        Stop a.b.c.x5 success
        Stop a.b.c.x3 success
        Stop a.b.c.x8 success
        Stop a.b.c.x7 success
        Stop a.b.c.x6 success
        Stop a.b.c.x4 success
        Stop a.b.c.x2 success
        Stop a.b.c.x1 success
        Stop a.b.c.x1 success
        Stop a.b.c.x0 success
Starting component node_exporter
        Starting instance a.b.c.x4
        Starting instance a.b.c.x5
        Starting instance a.b.c.x8
        Starting instance a.b.c.x0
        Starting instance a.b.c.x6
        Starting instance a.b.c.x2
        Starting instance a.b.c.x7
        Starting instance a.b.c.x1
        Starting instance a.b.c.x3
        Starting instance a.b.c.x1
        Start a.b.c.x5 success
        Start a.b.c.x3 success
        Start a.b.c.x8 success
        Start a.b.c.x1 success
        Start a.b.c.x6 success
        Start a.b.c.x4 success
        Start a.b.c.x2 success
        Start a.b.c.x1 success
        Start a.b.c.x0 success

Error: failed to start: a.b.c.x7 node_exporter-9100.service, please check the instance's log() for more detail.: timed out waiting for port 9100 to be started after 2m0s

On the node named in the error (a.b.c.x7), running systemctl restart node_exporter-9100 by hand works without any problem.
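
For reference, the manual check on a.b.c.x7 went roughly like this (the status and port checks are added here for illustration and were not quoted in the original post):

systemctl restart node_exporter-9100
systemctl status node_exporter-9100    # show the unit's current status
ss -lntp | grep 9100                   # check whether anything is listening on port 9100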

Suspicion: this cluster was historically deployed with Ansible (starting at V2, going through V3, then TiUP for V4 and V5, up to today's V7), and the bin and script directories on the failing node (x7) are not entirely standard.

Haven't run into this before; following along to learn.

If there are no other logs to analyze, you could compare the file and directory layout against a freshly deployed v5 cluster.
Your upgrade path is fairly convoluted, and it's hard to say whether a bug is involved; if it were me, I'd probably be preparing to migrate and redeploy.

What I'd like to know is whether I can uninstall node_exporter on that node and redeploy it. How would I do that?

The upgrade ultimately failed at the very end. Although the cluster is currently healthy and the core components are already on V7, it still feels like the tiup upgrade never ran to completion; I never saw the final success message, and that is unsettling.

You could remove the monitoring components and add them back later.

You could scale in Prometheus and then scale it back out.
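
A rough sketch of that approach with TiUP, assuming the Prometheus instance is the one on a.b.c.x7:9090 and that scale-out-monitor.yaml is a topology file written for the new instance (both names are placeholders, not from the thread):

tiup cluster scale-in xxxxx --node a.b.c.x7:9090
tiup cluster scale-out xxxxx scale-out-monitor.yaml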

Didn't item 3 above say that tiup display still shows V5?

That approach won't work for me, because I want to keep the Prometheus monitoring data.
Restarting Prometheus itself seems fine at the moment; the whole thing is stuck on node_exporter on that one node. Can I reinstall just that piece?

Either manually repair node_exporter on that node, or modify the configuration in the cluster meta to skip node_exporter for that node.


I took a look at the code; try setting this option on every component on that node.
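
If the option being referred to is the per-host ignore_exporter flag in the TiUP topology (an assumption; the reply above does not name it in the text), editing the cluster meta would look roughly like this:

tiup cluster edit-config xxxxx
# In the editor, for each component instance on the failing host, add something like
# (ignore_exporter is an assumed option name; verify it against your TiUP version):
#   - host: a.b.c.x7
#     ...
#     ignore_exporter: true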


Would reinstalling work? Or migrating Prometheus?

After a full day of wrangling, I solved it.

Symptom: restarting Prometheus via tiup gets stuck at the node_exporter restart step and exits on timeout.

Error: the node_exporter log shows nothing abnormal and the node_exporter process exists, but port 9100 never starts listening.

Troubleshooting: checking the system log /var/log/messages revealed a large number of entries like the following (a quick filter for them is sketched after the excerpt):

Jan  8 20:26:57 xxxxx systemd-logind: Failed to start user slice user-0.slice, ignoring: The maximum number of pending replies per connection has been reached (org.freedesktop.DBus.Error.LimitsExceeded)
Jan  8 20:26:57 xxxxx systemd-logind: Failed to start session scope session-c68372803.scope: The maximum number of pending replies per connection has been reached
Jan  8 20:26:58 xxxxx systemd-logind: Failed to start user slice user-0.slice, ignoring: The maximum number of pending replies per connection has been reached (org.freedesktop.DBus.Error.LimitsExceeded)
Jan  8 20:26:58 xxxxx systemd-logind: Failed to start session scope session-c68372804.scope: The maximum number of pending replies per connection has been reached
Jan  8 20:26:59 xxxxx systemd-logind: Failed to start user slice user-0.slice, ignoring: The maximum number of pending replies per connection has been reached (org.freedesktop.DBus.Error.LimitsExceeded)
Jan  8 20:26:59 xxxxx systemd-logind: Failed to start session scope session-c68372805.scope: The maximum number of pending replies per connection has been reached
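
A filter along these lines surfaces the entries quickly (the exact command is illustrative, not quoted from the post):

grep -E 'systemd-logind.*(LimitsExceeded|maximum number of pending replies)' /var/log/messages | tail -n 20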

Searching for the error turned up a Q&A on the Red Hat site.

Finally, I ran:

systemctl daemon-reexec

At that point node_exporter started up fully and was listening on port 9100 as expected; trying the tiup restart of Prometheus again went through smoothly.

Finally, I used tiup replay to continue the upgrade from the point where the previous upgrade broke off.

tiup cluster replay xxxx
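
For anyone repeating this, the audit ID passed to replay can be looked up first (the ID shown is a placeholder):

tiup cluster audit                # list previous operations and their audit IDs
tiup cluster replay <audit-id>    # re-run the interrupted upgrade recorded under that ID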

Along the way it kept stalling, and I repeatedly ran reload, stop, and start on all nodes (a rough loop over the hosts is sketched after the commands):

systemctl daemon-reload
systemctl stop/start blackbox_exporter
systemctl stop/start node_exporter-9100
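
Doing this by hand across nine hosts is tedious; a loop like the following is one way to script it (the host list, SSH user, and exact service unit names are assumptions based on this cluster, not part of the original post):

for h in a.b.c.x0 a.b.c.x1 a.b.c.x2 a.b.c.x3 a.b.c.x4 a.b.c.x5 a.b.c.x6 a.b.c.x7 a.b.c.x8; do
    ssh root@"$h" 'systemctl daemon-reload && systemctl restart blackbox_exporter node_exporter-9100'
done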

In the end the whole upgrade process ran to completion:

Upgraded cluster `xxxxxx` successfully

tiup display

[root@tidbxxx ~]# tiup cluster display xxxxx
tiup is checking updates for component cluster ...
Starting component `cluster`: /root/.tiup/components/cluster/v1.14.0/tiup-cluster display xxxxx 
Cluster type:       tidb
Cluster name:      xxxxx
Cluster version:    v7.1.2

systemctl daemon-reexec re-executes the systemd manager daemon. So the problem was in the Linux system itself, something like a socket issue? The problem got solved, but faced with the same issue, different people might not all manage to solve it.
