Upgrading TiDB from 2.1 to 3.0.19: failure while rolling-upgrading the monitor components

While upgrading TiDB from 2.1 to 3.0.19, the rolling upgrade of the monitor components failed, leaving the Grafana dashboards incomplete.
Grafana itself upgraded successfully, and so did Prometheus.
Is there any way to fix this?

Problem

[10.204.9.17]: Ansible FAILED! => playbook: rolling_update_monitor.yml; TASK: import grafana dashboards - run import script; message:

changed: true
cmd: "python grafana-config-copy.py dests-10.204.9.17.json"
start: "2020-10-25 23:55:52.944470"
end: "2020-10-25 23:56:53.931952"
delta: "0:01:00.987482"
msg: "non-zero return code"
rc: 1

stderr:

Traceback (most recent call last):
  File "grafana-config-copy.py", line 142, in <module>
    ret = import_dashboard_via_user_pass(dest['url'], dest['user'], dest['password'], dashboard)
  File "grafana-config-copy.py", line 124, in import_dashboard_via_user_pass
    resp = urllib2.urlopen(req)
  File "/usr/lib64/python2.7/urllib2.py", line 154, in urlopen
    return opener.open(url, data, timeout)
  File "/usr/lib64/python2.7/urllib2.py", line 431, in open
    response = self._open(req, data)
  File "/usr/lib64/python2.7/urllib2.py", line 449, in _open
    '_open', req)
  File "/usr/lib64/python2.7/urllib2.py", line 409, in _call_chain
    result = func(*args)
  File "/usr/lib64/python2.7/urllib2.py", line 1244, in http_open
    return self.do_open(httplib.HTTPConnection, req)
  File "/usr/lib64/python2.7/urllib2.py", line 1217, in do_open
    r = h.getresponse(buffering=True)
  File "/usr/lib64/python2.7/httplib.py", line 1089, in getresponse
    response.begin()
  File "/usr/lib64/python2.7/httplib.py", line 444, in begin
    version, status, reason = self._read_status()
  File "/usr/lib64/python2.7/httplib.py", line 408, in _read_status
    raise BadStatusLine(line)
httplib.BadStatusLine: ''

stdout:

[load] from <node.json>:node
[import] <Pay-Cluster-Node_exporter> to [Pay-Cluster] ...
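
The httplib.BadStatusLine: '' at the bottom of the traceback means Grafana closed the connection without returning an HTTP status line, typically because the Grafana process was still restarting (or listening on a different scheme or port) when the import script connected. A quick reachability check, sketched under the assumption that Grafana listens on the default port 3000 and uses the admin/admin credentials from the inventory:

curl -s -u admin:admin http://10.204.9.17:3000/api/health

Once the service is fully up, /api/health returns a small JSON blob including the Grafana version.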

  1. Please check whether your current tidb-ansible version is v3.0.19 (a quick check is sketched right after this list).
  2. Please also upload your inventory file.
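
For example, the version pinned in the inventory can be read directly (assuming the standard tidb-ansible layout, where inventory.ini sits in the repo root):

grep tidb_version inventory.ini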

The version is correct; when I tested, even 3.0.14 worked without any problems.

cat inventory.ini

## TiDB Cluster Part
[tidb_servers]
TIDB_1 ansible_host=10.204.9.11 deploy_dir=/data2/TIDB1
TIDB_2 ansible_host=10.204.9.12 deploy_dir=/data2/TIDB2
TIDB_3 ansible_host=10.204.9.13 deploy_dir=/data2/TIDB3

[tikv_servers]
TiKV1-1 ansible_host=10.204.9.14 deploy_dir=/data1/deploy tikv_port=20171 labels="host=tikv1"
TiKV1-2 ansible_host=10.204.9.14 deploy_dir=/data2/deploy tikv_port=20172 labels="host=tikv1"
TiKV1-3 ansible_host=10.204.9.14 deploy_dir=/data3/deploy tikv_port=20173 labels="host=tikv1"
TiKV1-4 ansible_host=10.204.9.14 deploy_dir=/data4/deploy tikv_port=20174 labels="host=tikv1"


TiKV2-1 ansible_host=10.204.8.15 deploy_dir=/data1/deploy tikv_port=20171 labels="host=tikv2"
TiKV2-2 ansible_host=10.204.9.15 deploy_dir=/data2/deploy tikv_port=20172 labels="host=tikv2"
TiKV2-3 ansible_host=10.204.9.15 deploy_dir=/data3/deploy tikv_port=20173 labels="host=tikv2"
TiKV2-4 ansible_host=10.204.9.15 deploy_dir=/data4/deploy tikv_port=20174 labels="host=tikv2"


TiKV3-1 ansible_host=10.204.9.16 deploy_dir=/data1/deploy tikv_port=20171 labels="host=tikv3"
TiKV3-2 ansible_host=10.204.9.16 deploy_dir=/data2/deploy tikv_port=20172 labels="host=tikv3"
TiKV3-3 ansible_host=10.204.9.16 deploy_dir=/data3/deploy tikv_port=20173 labels="host=tikv3"
TiKV3-4 ansible_host=10.204.9.16 deploy_dir=/data4/deploy tikv_port=20174 labels="host=tikv3"



[pd_servers]
TIPD_1 ansible_host=10.204.9.11 deploy_dir=/data1/TIPD1
TIPD_2 ansible_host=10.204.9.12 deploy_dir=/data1/TIPD2
TIPD_3 ansible_host=10.204.9.13 deploy_dir=/data1/TIPD3

[spark_master]

[spark_slaves]

[lightning_server]

[importer_server]

## Monitoring Part
# prometheus and pushgateway servers
[monitoring_servers]
10.204.9.17 deploy_dir=/data1/pay-monitor

[grafana_servers]
10.204.9.17 deploy_dir=/data1/pay-grafana


# node_exporter and blackbox_exporter servers
[monitored_servers]
10.204.9.11
10.204.9.12
10.204.9.13
10.204.9.14
10.204.9.15
10.204.9.16
10.204.9.17


[alertmanager_servers]
10.204.9.17

[kafka_exporter_servers]

## Binlog Part
[pump_servers]
PUMP_1 ansible_host=10.204.9.11 deploy_dir=/data2/PUMP1
PUMP_2 ansible_host=10.204.9.12 deploy_dir=/data2/PUMP1
PUMP_3 ansible_host=10.204.9.13 deploy_dir=/data2/PUMP1

[drainer_servers]
drainer_kafka ansible_host=10.204.9.11 deploy_dir=/data2/DRAINER1  initial_commit_ts="405383887928688641"

## Group variables
[pd_servers:vars]
location_labels = ["zone","rack","host"]

## Global variables
[all:vars]
deploy_dir = /home/tidb/deploy

## Connection
# ssh via normal user
ansible_user = tidb

cluster_name = pay-cluster

tidb_version = v3.0.19

# process supervision, [systemd, supervise]
process_supervision = systemd

timezone = Asia/Shanghai

enable_firewalld = False
# check NTP service
enable_ntpd = True
set_hostname = False

## binlog trigger
enable_binlog = True

# kafka cluster address for monitoring, example:
# kafka_addrs = "192.168.0.11:9092,192.168.0.12:9092,192.168.0.13:9092"
kafka_addrs = ""

# zookeeper address of kafka cluster for monitoring, example:
# zookeeper_addrs = "192.168.0.11:2181,192.168.0.12:2181,192.168.0.13:2181"
zookeeper_addrs = ""

# enable TLS authentication in the TiDB cluster
enable_tls = False

# KV mode
deploy_without_tidb = False

# wait for region replication complete before start tidb-server.
wait_replication = True

# Optional: Set if you already have a alertmanager server.
# Format: alertmanager_host:alertmanager_port
alertmanager_target = ""

grafana_admin_user = "admin"
grafana_admin_password = "admin"


### Collect diagnosis
collect_log_recent_hours = 2

enable_bandwidth_limit = True
# default: 10Mb/s, unit: Kbit/s
collect_bandwidth_limit = 10000

For the procedure, you can compare it against this SOP to see whether anything diverged.

On my side, it looks like a dashboard simply was not copied in. Is there a guide for copying the Grafana dashboards in separately?

All tidb-ansible operations are guaranteed to be idempotent.
Simply re-run the playbook that failed (see the sketch below).
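
A sketch of the re-run, assuming the default /home/tidb/tidb-ansible working directory; the --tags filter (if your tidb-ansible checkout defines the grafana tag) limits the run to the Grafana tasks and can be dropped to re-run the whole monitor upgrade:

cd /home/tidb/tidb-ansible
ansible-playbook rolling_update_monitor.yml --tags=grafana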

Since tidb-ansible is no longer maintained,
you can also try importing the deployment information into tiup via tiup cluster import,
and then fix the incomplete dashboards by scaling in and then scaling out Grafana (a rough command sequence follows).
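
A sketch of that route, assuming the ansible directory is /home/tidb/tidb-ansible and the Grafana node is 10.204.9.17 on the default port 3000; scale-out-grafana.yaml is a hypothetical topology file declaring only the grafana_servers entry:

tiup cluster import -d /home/tidb/tidb-ansible             # convert the ansible deployment to a tiup-managed cluster
tiup cluster scale-in pay-cluster -N 10.204.9.17:3000      # remove the Grafana node with the broken dashboards
tiup cluster scale-out pay-cluster scale-out-grafana.yaml  # redeploy Grafana, which re-imports the full dashboard set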

Solved this problem by manually copying the dashboard JSON files over.
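
For reference, the copy can also be done against the Grafana HTTP API, which is roughly what grafana-config-copy.py does internally. A sketch, assuming jq is available, node.json is a raw dashboard definition as shipped in tidb-ansible/scripts, and the admin/admin credentials from the inventory:

# wrap the raw dashboard definition in the payload the dashboard API expects
jq '{dashboard: ., overwrite: true}' node.json > payload.json
# POST it to Grafana; repeat for each dashboard JSON that is missing
curl -s -u admin:admin -H 'Content-Type: application/json' \
  -X POST -d @payload.json http://10.204.9.17:3000/api/dashboards/db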

:call_me_hand: