TiDB Binlog does not sync to Kafka; drainer shows no errors

To get help faster, please provide the following information; a clear problem description gets resolved sooner:

【TiDB version】
v4.0.0-beta.2

【Problem description】

The binlog drainer was deployed with tiup and configured to write to Kafka, but the Kafka topic receives no messages, and neither drainer nor pump reports any error.

Drainer log:
[2021/02/02 17:26:36.453 +08:00] [INFO] [version.go:50] ["Welcome to Drainer"] ["Release Version"=v4.0.0-beta.2] ["Git Commit Hash"=598cc3a3a917ab10c6cb5bddce51229c13b4736c] ["Build TS"="2020-03-18 01:25:41"] ["Go Version"=go1.13] ["Go OS/Arch"=linux/amd64]
[2021/02/02 17:26:36.453 +08:00] [INFO] [main.go:46] ["start drainer..."] [config="{"log-level":"info","node-id":"10.10.24.65:8249","addr":"http://10.10.24.65:8249","advertise-addr":"http://10.10.24.65:8249","data-dir":"/data/tidb/data/drainer-8249","detect-interval":5,"pd-urls":"http://10.10.24.67:2379,http://10.10.24.68:2379","log-file":"/data/tidb/deploy/drainer-8249/log/drainer.log","initial-commit-ts":0,"sycner":{"sql-mode":null,"ignore-txn-commit-ts":null,"ignore-schemas":"INFORMATION_SCHEMA,PERFORMANCE_SCHEMA,mysql","ignore-table":null,"txn-batch":20,"loopback-control":false,"sync-ddl":true,"channel-id":0,"worker-count":1,"to":{"host":"","user":"","password":"","security":{"ssl-ca":"","ssl-cert":"","ssl-key":""},"encrypted_password":"","sync-mode":0,"port":0,"checkpoint":{"type":"","schema":"","host":"","user":"","password":"","encrypted_password":"","port":0},"dir":"","retention-time":0,"zookeeper-addrs":"","kafka-addrs":"kafka1-t.vcredit.com.local:9092,kafka2-t.vcredit.com.local:9092,kafka3-t.vcredit.com.local:9092","kafka-version":"2.3.1","kafka-max-messages":1024,"kafka-client-id":"","topic-name":"tidb_binlog_kfk"},"replicate-do-table":null,"replicate-do-db":null,"db-type":"kafka","relay":{"log-dir":"","max-file-size":10485760},"disable-dispatch-flag":null,"enable-dispatch-flag":null,"disable-dispatch":null,"enable-dispatch":null,"safe-mode":false,"disable-detect-flag":null,"enable-detect-flag":null,"disable-detect":null,"enable-detect":null},"security":{"ssl-ca":"","ssl-cert":"","ssl-key":""},"synced-check-time":5,"compressor":"","EtcdTimeout":5000000000,"MetricsAddr":"","MetricsInterval":15}"]
[2021/02/02 17:26:36.454 +08:00] [INFO] [client.go:134] ["[pd] create pd client with endpoints"] [pd-address="[http://10.10.24.67:2379,http://10.10.24.68:2379]"]
[2021/02/02 17:26:36.465 +08:00] [INFO] [base_client.go:226] ["[pd] update member urls"] [old-urls="[http://10.10.24.67:2379,http://10.10.24.68:2379]"] [new-urls="[http://10.10.24.66:2379,http://10.10.24.67:2379,http://10.10.24.68:2379]"]
[2021/02/02 17:26:36.465 +08:00] [INFO] [base_client.go:242] ["[pd] switch leader"] [new-leader=http://10.10.24.68:2379] [old-leader=]
[2021/02/02 17:26:36.465 +08:00] [INFO] [base_client.go:92] ["[pd] init cluster id"] [cluster-id=6830592625788170975]
[2021/02/02 17:26:36.465 +08:00] [INFO] [server.go:119] ["get cluster id from pd"] [id=6830592625788170975]
[2021/02/02 17:26:36.466 +08:00] [INFO] [checkpoint.go:64] ["initialize checkpoint"] [type=file] [checkpoint=422643738770669569] [cfg="{"CheckpointType":"file","Db":null,"Schema":"","Table":"","ClusterID":6830592625788170975,"InitialCommitTS":0,"dir":"/data/tidb/data/drainer-8249/savepoint"}"]
[2021/02/02 17:26:36.466 +08:00] [INFO] [store.go:68] ["new store"] [path="tikv://10.10.24.67:2379,10.10.24.68:2379?disableGC=true"]
[2021/02/02 17:26:36.466 +08:00] [INFO] [client.go:134] ["[pd] create pd client with endpoints"] [pd-address="[10.10.24.67:2379,10.10.24.68:2379]"]
[2021/02/02 17:26:36.468 +08:00] [INFO] [base_client.go:226] ["[pd] update member urls"] [old-urls="[http://10.10.24.67:2379,http://10.10.24.68:2379]"] [new-urls="[http://10.10.24.66:2379,http://10.10.24.67:2379,http://10.10.24.68:2379]"]
[2021/02/02 17:26:36.468 +08:00] [INFO] [base_client.go:242] ["[pd] switch leader"] [new-leader=http://10.10.24.68:2379] [old-leader=]
[2021/02/02 17:26:36.468 +08:00] [INFO] [base_client.go:92] ["[pd] init cluster id"] [cluster-id=6830592625788170975]
[2021/02/02 17:26:36.469 +08:00] [INFO] [store.go:74] ["new store with retry success"]
[2021/02/02 17:26:42.122 +08:00] [INFO] [client.go:127] ["[sarama] Initializing new client"]
[2021/02/02 17:26:42.122 +08:00] [INFO] [client.go:174] ["[sarama] Successfully initialized new client"]
[2021/02/02 17:26:42.123 +08:00] [INFO] [store.go:68] ["new store"] [path="tikv://10.10.24.67:2379,10.10.24.68:2379?disableGC=true"]
[2021/02/02 17:26:42.123 +08:00] [INFO] [client.go:134] ["[pd] create pd client with endpoints"] [pd-address="[10.10.24.67:2379,10.10.24.68:2379]"]
[2021/02/02 17:26:42.124 +08:00] [INFO] [base_client.go:226] ["[pd] update member urls"] [old-urls="[http://10.10.24.67:2379,http://10.10.24.68:2379]"] [new-urls="[http://10.10.24.66:2379,http://10.10.24.67:2379,http://10.10.24.68:2379]"]
[2021/02/02 17:26:42.125 +08:00] [INFO] [base_client.go:242] ["[pd] switch leader"] [new-leader=http://10.10.24.68:2379] [old-leader=]
[2021/02/02 17:26:42.125 +08:00] [INFO] [base_client.go:92] ["[pd] init cluster id"] [cluster-id=6830592625788170975]
[2021/02/02 17:26:42.126 +08:00] [INFO] [store.go:74] ["new store with retry success"]
[2021/02/02 17:26:42.129 +08:00] [INFO] [server.go:263] ["register success"] ["drainer node id"=10.10.24.65:8249]
[2021/02/02 17:26:42.129 +08:00] [INFO] [server.go:324] ["start to server request"] [addr=http://10.10.24.65:8249]
[2021/02/02 17:26:42.130 +08:00] [INFO] [merge.go:222] ["merger add source"] ["source id"=10.10.24.65:8250]
[2021/02/02 17:26:42.130 +08:00] [INFO] [pump.go:138] ["pump create pull binlogs client"] [id=10.10.24.65:8250]
[2021/02/02 17:26:47.740 +08:00] [INFO] [syncer.go:260] ["write save point"] [ts=422643763163168769]
[2021/02/02 17:26:50.745 +08:00] [INFO] [syncer.go:260] ["write save point"] [ts=422643763949600769]
[2021/02/02 17:26:53.750 +08:00] [INFO] [syncer.go:260] ["write save point"] [ts=422643764736032769]
[2021/02/02 17:26:56.754 +08:00] [INFO] [syncer.go:260] ["write save point"] [ts=422643765522464769]
[2021/02/02 17:26:59.759 +08:00] [INFO] [syncer.go:260] ["write save point"] [ts=422643766308896769]
[2021/02/02 17:27:02.764 +08:00] [INFO] [syncer.go:260] ["write save point"] [ts=422643767095328769]
[2021/02/02 17:27:05.769 +08:00] [INFO] [syncer.go:260] ["write save point"] [ts=422643767881760769]
[2021/02/02 17:27:08.774 +08:00] [INFO] [syncer.go:260] ["write save point"] [ts=422643768668192770]
[2021/02/02 17:27:11.779 +08:00] [INFO] [syncer.go:260] ["write save point"] [ts=422643769454624769]
[2021/02/02 17:27:14.783 +08:00] [INFO] [syncer.go:260] ["write save point"] [ts=422643770241056769]
...
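The ts values in the "write save point" lines are TiDB TSOs: the bits above the 18-bit logical counter encode a physical timestamp in milliseconds. Decoding the first one shows the drainer's checkpoint is advancing in real time, i.e. it believes it is consuming binlogs even though nothing shows up in Kafka. A minimal sketch (GNU date assumed):

```shell
# Decode a TiDB TSO: the physical part (milliseconds since the Unix epoch)
# sits above the 18-bit logical counter.
ts=422643763163168769            # first "write save point" ts in the log
ms=$(( ts >> 18 ))               # physical milliseconds
date -u -d "@$(( ms / 1000 ))" +'%Y-%m-%d %H:%M:%S UTC'
# → 2021-02-02 09:26:47 UTC, i.e. 17:26:47 +08:00, matching the log line
```

So the checkpoint timestamps line up with wall-clock time: the syncer loop is running, which points the investigation at what (if anything) is actually reaching the Kafka brokers.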

Drainer config:
[syncer]
db-type = "kafka"
[syncer.to]
kafka-addrs = "kafka1-t.vcredit.com.local:9092,kafka2-t.vcredit.com.local:9092,kafka3-t.vcredit.com.local:9092"
kafka-version = "2.3.1"
topic-name = "tidb_binlog_kfk"

Pump config:
gc = 2
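Given this config, a quick way to narrow the problem down is to check the Kafka side directly. A sketch, assuming the stock Kafka CLI scripts are available on a broker host; the broker address and topic name are taken from the drainer config above:

```shell
broker=kafka1-t.vcredit.com.local:9092   # first kafka-addrs entry from the config
topic=tidb_binlog_kfk                    # topic-name from the config

if command -v kafka-topics.sh >/dev/null 2>&1; then
  # Was the topic created at all? drainer relies on auto-creation unless
  # the topic was created manually.
  kafka-topics.sh --bootstrap-server "$broker" --list | grep -F "$topic"
  # Try to read one message from the beginning. TiDB Binlog writes protobuf,
  # so any payload will look binary, but total silence means nothing was produced.
  kafka-console-consumer.sh --bootstrap-server "$broker" \
    --topic "$topic" --from-beginning --max-messages 1
else
  echo "Kafka CLI not on PATH; run this on a broker host"
fi
```

If the topic does not exist, check the brokers' auto.create.topics.enable setting or create the topic manually before restarting the drainer.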



You can check whether the enable-binlog parameter is set to true.
Confirm the setting with tiup edit-config.
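The suggestion above can be verified from both ends. A sketch, assuming binlogctl from tidb-tools is installed and using the PD/TiDB addresses seen earlier in the thread (adjust to your hosts; the commands are guarded so they only run where the tools exist):

```shell
pd=http://10.10.24.67:2379      # one PD endpoint from the drainer log
tidb_host=10.10.24.61           # a TiDB server from this cluster

# Is binlog actually enabled on the TiDB servers? log_bin reflects it.
if command -v mysql >/dev/null 2>&1; then
  mysql -h "$tidb_host" -P 4000 -u root -e "SHOW VARIABLES LIKE 'log_bin'"
fi

# Are the pump and drainer registered and in "online" state?
if command -v binlogctl >/dev/null 2>&1; then
  binlogctl -pd-urls="$pd" -cmd pumps
  binlogctl -pd-urls="$pd" -cmd drainers
fi
```

If log_bin is OFF, TiDB is not writing binlogs to the pump at all, which would explain an empty topic despite clean drainer logs.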

Thanks, but this led to a whole string of new problems.

I added the following via tiup cluster edit-config test-cluster:

server_configs:
  tidb:
    binlog.enable: true
    binlog.ignore-error: true
    log.slow-threshold: 300

Running tiup cluster reload test-cluster -R tidb then fails with:

Error: init config failed: 10.10.24.61:4000: failed to scp /home/tidb/.tiup/storage/cluster/clusters/test-cluster/config-cache/run_tidb_10.10.24.61_4000.sh to tidb@10.10.24.61:/data/tidb/deploy/scripts/run_tidb.sh: Process exited with status 1

I found a similar issue in the "Troubleshooting common TiUP issues" doc, under the tiup cluster import problems:

5. After import, reloading the cluster with tiup reload fails: failed to scp xxx to xxx Process exited with status 1
(Please confirm whether this is the cause.)

Following those steps:
[root@tidb-61 ~]# systemctl stop node_exporter-
Failed to stop node_exporter-.service: Unit node_exporter-.service not loaded.

[root@tidb-61 ~]# systemctl stop blackbox_exporter-
Failed to stop blackbox_exporter-.service: Unit blackbox_exporter-.service not loaded.

[root@tidb-61 ~]# systemctl stop node_exporter
Failed to stop node_exporter.service: Unit node_exporter.service not loaded.

Then comes the next step, which I don't know how to carry out:
"Modify the monitored deploy directory in meta.yaml to directory B (make a copy of directory A on the deployment machine as B, so that the copied directory matches the directory in meta.yaml)."
Where is ~/.tiup/storage/cluster/clusters/test-cluster/meta.yaml supposed to be copied to? And how exactly do I change monitored?

First, you need to confirm whether this is really the cause. We can't determine that from our side; it depends on how your tidb-ansible deployment was configured at the time.

It looks like you haven't fully understood this FAQ: for machines deployed with tiup, the monitored deploy directory and port must be identical on every machine. A situation where monitored sits under /data1 on machine A but under /data2 on machine B is not allowed; if that's the case, make them consistent under one unified configuration.
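For illustration, what a consistent monitored section might look like, as a hedged sketch of the relevant meta.yaml fragment; the ports and paths here are illustrative, not your actual values, and must match what is really deployed on every host:

```yaml
# ~/.tiup/storage/cluster/clusters/test-cluster/meta.yaml (excerpt, assumed layout)
monitored:
  node_exporter_port: 9100                      # same port on every machine
  blackbox_exporter_port: 9115                  # same port on every machine
  deploy_dir: /data/tidb/deploy/monitor-9100    # same directory on every machine
  data_dir: /data/tidb/data/monitor-9100
```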

The cluster was originally deployed with tidb-ansible, so the monitored locations probably did end up different.
Right now, stopping blackbox_exporter and node_exporter fails:
systemctl stop blackbox_exporter-
Failed to stop blackbox_exporter-.service: Unit blackbox_exporter-.service not loaded

Just look in the systemd unit directory and you'll see: the full unit name is <service>-<port>.service.
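Concretely, the stop command needs the port suffix. A sketch assuming the common default exporter ports 9100/9115; list the units on the host first to get the real names:

```shell
# List the exporter units actually installed on this host.
ls /etc/systemd/system 2>/dev/null | grep -E '(node|blackbox)_exporter' || true

# Units are named <service>-<port>.service; 9100/9115 are the usual
# defaults, but your topology may use different ports.
node_unit="node_exporter-9100.service"
blackbox_unit="blackbox_exporter-9115.service"
echo "systemctl stop $node_unit"
echo "systemctl stop $blackbox_unit"
```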

This topic was automatically closed 1 minute after the last reply. New replies are no longer allowed.