TiDB Binlog fails to write to Kafka: no specific topics to update metadata

TiDB v4.0.0-rc and TiDB Binlog were deployed with TiUP; deployment and startup both succeeded. As a test, one row is inserted into a TiDB table every second, but the downstream Kafka topic configured in drainer receives no messages.

Component status:

[tidb@cm2 tidb-v4.0.0-rc-linux-amd64]$ tiup cluster display tidb-test         
Starting component `cluster`: /home/tidb/.tiup/components/cluster/v0.6.3/cluster display tidb-test
TiDB Cluster: tidb-test
TiDB Version: v4.0.0-rc
ID                     Role          Host             Ports        OS/Arch       Status     Data Dir                            Deploy Dir
--                     ----          ----             -----        -------       ------     --------                            ----------
192.168.234.152:9093   alertmanager  192.168.234.152  9093/9094    linux/x86_64  Up         /data2/tidb-data/alertmanager-9093  /data2/tidb-deploy/alertmanager-9093
192.168.234.153:8249   drainer       192.168.234.153  8249         linux/x86_64  Up         /data2/tidb-data/drainer-8249       /data2/tidb-deploy/drainer-8249
192.168.234.152:3000   grafana       192.168.234.152  3000         linux/x86_64  Up         -                                   /data2/tidb-deploy/grafana-3000
192.168.234.152:2379   pd            192.168.234.152  2379/2380    linux/x86_64  Healthy|L  /data2/tidb-data/pd-2379            /data2/tidb-deploy/pd-2379
192.168.234.152:9090   prometheus    192.168.234.152  9090         linux/x86_64  Up         /data2/tidb-data/prometheus-9090    /data2/tidb-deploy/prometheus-9090
192.168.234.153:8250   pump          192.168.234.153  8250         linux/x86_64  Up         /data2/tidb-data/pump-8249          /data2/tidb-deploy/pump-8249
192.168.234.154:8250   pump          192.168.234.154  8250         linux/x86_64  Up         /data2/tidb-data/pump-8249          /data2/tidb-deploy/pump-8249
192.168.234.155:8250   pump          192.168.234.155  8250         linux/x86_64  Up         /data2/tidb-data/pump-8249          /data2/tidb-deploy/pump-8249
192.168.234.152:4000   tidb          192.168.234.152  4000/10080   linux/x86_64  Up         -                                   /data2/tidb-deploy/tidb-4000
192.168.234.153:20160  tikv          192.168.234.153  20160/20180  linux/x86_64  Up         /data2/tidb-data/tikv-20160         /data2/tidb-deploy/tikv-20160
192.168.234.154:20160  tikv          192.168.234.154  20160/20180  linux/x86_64  Up         /data2/tidb-data/tikv-20160         /data2/tidb-deploy/tikv-20160
192.168.234.155:20160  tikv          192.168.234.155  20160/20180  linux/x86_64  Up         /data2/tidb-data/tikv-20160         /data2/tidb-deploy/tikv-20160

Pump configuration:

# All configuration items you want to change can be added to:
# server_configs:
#   pump:
#     aa.b1.c3: value
#     aa.b2.c4: value
gc = 7

[storage]
sync-log = true

stop-write-at-available-space = "500 MB"

Drainer configuration:

#   drainer:
#     aa.b1.c3: value
#     aa.b2.c4: value
[syncer.to]
kafka-addrs = "192.168.234.153:9092,192.168.234.154:9092,192.168.234.155:9092"
kafka-version = "1.0.1"
topic-name = "tidb"
zookeeper-addrs = "192.168.234.153:2181,192.168.234.154:2181,192.168.234.155:2181"

[syncer]
db-type = "kafka"
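For reference, the assembled Kafka-downstream section of a drainer config should look roughly like the sketch below, with the [syncer.to] sub-table belonging to [syncer] (the addresses and topic name are the ones from the config above; check the exact key names against your TiDB Binlog version):

```toml
[syncer]
db-type = "kafka"

# [syncer.to] is a sub-table of [syncer] and holds the Kafka connection details
[syncer.to]
kafka-addrs = "192.168.234.153:9092,192.168.234.154:9092,192.168.234.155:9092"
kafka-version = "1.0.1"
topic-name = "tidb"
zookeeper-addrs = "192.168.234.153:2181,192.168.234.154:2181,192.168.234.155:2181"
```

TOML does not require a particular table order, so the ordering in the config above is legal; the point is that both tables must be present, which is what was missing in the auto-generated file.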

Pump log:

[2020/06/01 16:30:25.711 +08:00] [WARN] [pd.go:109] ["get timestamp too slow"] ["cost time"=314.312668ms]
[2020/06/01 16:30:25.711 +08:00] [INFO] [storage.go:384] [DBStats] [DBStats="{\"WriteDelayCount\":0,\"WriteDelayDuration\":0,\"WritePaused\":false,\"AliveSnapshots\":0,\"AliveIterators\":0,\"IOWrite\":58605,\"IORead\":132184,\"BlockCacheSize\":161236,\"OpenedTablesCount\":6,\"LevelSizes\":[4346832],\"LevelTablesCounts\":[6],\"LevelRead\":[0],\"LevelWrite\":[0],\"LevelDurations\":[0]}"]
[2020/06/01 16:30:25.714 +08:00] [INFO] [server.go:561] ["server info tick"] [writeBinlogCount=0] [alivePullerCount=1] [MaxCommitTS=417071163028275201]
[2020/06/01 16:30:31.401 +08:00] [WARN] [pd.go:109] ["get timestamp too slow"] ["cost time"=67.530081ms]
[2020/06/01 16:30:35.397 +08:00] [INFO] [storage.go:384] [DBStats] [DBStats="{\"WriteDelayCount\":0,\"WriteDelayDuration\":0,\"WritePaused\":false,\"AliveSnapshots\":0,\"AliveIterators\":0,\"IOWrite\":59182,\"IORead\":132184,\"BlockCacheSize\":161236,\"OpenedTablesCount\":6,\"LevelSizes\":[4346832],\"LevelTablesCounts\":[6],\"LevelRead\":[0],\"LevelWrite\":[0],\"LevelDurations\":[0]}"]
[2020/06/01 16:30:35.403 +08:00] [INFO] [server.go:561] ["server info tick"] [writeBinlogCount=0] [alivePullerCount=1] [MaxCommitTS=417071165387571201]
[2020/06/01 16:30:45.397 +08:00] [INFO] [storage.go:384] [DBStats] [DBStats="{\"WriteDelayCount\":0,\"WriteDelayDuration\":0,\"WritePaused\":false,\"AliveSnapshots\":0,\"AliveIterators\":0,\"IOWrite\":59674,\"IORead\":132184,\"BlockCacheSize\":161236,\"OpenedTablesCount\":6,\"LevelSizes\":[4346832],\"LevelTablesCounts\":[6],\"LevelRead\":[0],\"LevelWrite\":[0],\"LevelDurations\":[0]}"]
[2020/06/01 16:30:45.403 +08:00] [INFO] [server.go:561] ["server info tick"] [writeBinlogCount=0] [alivePullerCount=1] [MaxCommitTS=417071167746867201]
[2020/06/01 16:30:55.610 +08:00] [INFO] [server.go:561] ["server info tick"] [writeBinlogCount=0] [alivePullerCount=1] [MaxCommitTS=417071170905702401]
[2020/06/01 16:30:55.636 +08:00] [INFO] [storage.go:384] [DBStats] [DBStats="{\"WriteDelayCount\":0,\"WriteDelayDuration\":0,\"WritePaused\":false,\"AliveSnapshots\":0,\"AliveIterators\":0,\"IOWrite\":60245,\"IORead\":132184,\"BlockCacheSize\":161236,\"OpenedTablesCount\":6,\"LevelSizes\":[4346832],\"LevelTablesCounts\":[6],\"LevelRead\":[0],\"LevelWrite\":[0],\"LevelDurations\":[0]}"]
[2020/06/01 16:30:57.754 +08:00] [WARN] [pd.go:109] ["get timestamp too slow"] ["cost time"=48.507886ms]
[2020/06/01 16:30:59.775 +08:00] [WARN] [pd.go:109] ["get timestamp too slow"] ["cost time"=368.884025ms]
[2020/06/01 16:31:05.397 +08:00] [INFO] [storage.go:384] [DBStats] [DBStats="{\"WriteDelayCount\":0,\"WriteDelayDuration\":0,\"WritePaused\":false,\"AliveSnapshots\":0,\"AliveIterators\":0,\"IOWrite\":60822,\"IORead\":132184,\"BlockCacheSize\":161236,\"OpenedTablesCount\":6,\"LevelSizes\":[4346832],\"LevelTablesCounts\":[6],\"LevelRead\":[0],\"LevelWrite\":[0],\"LevelDurations\":[0]}"]
[2020/06/01 16:31:05.403 +08:00] [INFO] [server.go:561] ["server info tick"] [writeBinlogCount=0] [alivePullerCount=1] [MaxCommitTS=417071173278367745]

Drainer log (excerpt):

[2020/06/01 16:27:00.222 +08:00] [INFO] [syncer.go:260] ["write save point"] [ts=417071108515168257]
[2020/06/01 16:27:03.280 +08:00] [INFO] [syncer.go:260] ["write save point"] [ts=417071109301600257]
[2020/06/01 16:27:07.056 +08:00] [INFO] [syncer.go:260] ["write save point"] [ts=417071110861619201]
[2020/06/01 16:27:12.252 +08:00] [INFO] [syncer.go:260] ["write save point"] [ts=417071111660896257]
[2020/06/01 16:27:15.349 +08:00] [INFO] [syncer.go:260] ["write save point"] [ts=417071112644198402]
[2020/06/01 16:27:17.292 +08:00] [INFO] [client.go:716] ["[sarama] Client background metadata update:kafka: no specific topics to update metadata"]
[2020/06/01 16:27:19.072 +08:00] [INFO] [syncer.go:260] ["write save point"] [ts=417071114007347201]
[2020/06/01 16:27:24.257 +08:00] [INFO] [syncer.go:260] ["write save point"] [ts=417071114806886404]
[2020/06/01 16:27:27.613 +08:00] [INFO] [syncer.go:260] ["write save point"] [ts=417071115789926401]
[2020/06/01 16:27:31.025 +08:00] [INFO] [syncer.go:260] ["write save point"] [ts=417071117167493121]
[2020/06/01 16:27:34.038 +08:00] [INFO] [syncer.go:260] ["write save point"] [ts=417071117952614401]
[2020/06/01 16:27:39.272 +08:00] [INFO] [syncer.go:260] ["write save point"] [ts=417071118751891457]
[2020/06/01 16:27:42.304 +08:00] [INFO] [syncer.go:260] ["write save point"] [ts=417071119538585602]
[2020/06/01 16:27:45.316 +08:00] [INFO] [syncer.go:260] ["write save point"] [ts=417071120325017601]
[2020/06/01 16:27:48.331 +08:00] [INFO] [syncer.go:260] ["write save point"] [ts=417071121111449604]
[2020/06/01 16:27:51.773 +08:00] [INFO] [syncer.go:260] ["write save point"] [ts=417071121897881601]

A read/write test against the `tidb` topic on the Kafka cluster succeeds.
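For context, that read/write test can be reproduced with the standard Kafka console tools (broker address and topic name are taken from the drainer config above; the script paths depend on the Kafka installation, and these commands need a live broker to run):

```shell
# Write a test message into the topic
echo "test-message" | kafka-console-producer.sh \
  --broker-list 192.168.234.153:9092 --topic tidb

# Read it back from the beginning of the topic; note that messages produced
# by drainer itself are protobuf-encoded binlog, not plain text
kafka-console-consumer.sh \
  --bootstrap-server 192.168.234.153:9092 --topic tidb --from-beginning
```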

Judging from the message, this error comes from sarama. Could you compare against the hint below: were there no topics available to update?

// ErrNoTopicsToUpdateMetadata is returned when Meta.Full is set to false but no specific topics were found to update
// the metadata.
var ErrNoTopicsToUpdateMetadata = errors.New("kafka: no specific topics to update metadata")

https://github.com/Shopify/sarama/blob/master/errors.go

Solution found: after fixing the drainer configuration, restart all TiDB components and the problem goes away. Root cause: when TiUP deploys TiDB v4.0.0-rc, the drainer configuration file it generates is missing the [syncer.to] section, so drainer fails to start. After the drainer config file was edited by hand, starting the drainer component alone with TiUP brought it up normally, but the downstream Kafka still received no data. Restarting the entire TiDB cluster with TiUP resolved the problem.
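For anyone hitting the same symptom, the fix described above corresponds roughly to these TiUP commands (the cluster name tidb-test is the one from the display output; verify the flags against your TiUP version, and note these commands need a deployed cluster to run):

```shell
# Open the topology for editing and add the missing [syncer.to] keys
# under the drainer component's configuration
tiup cluster edit-config tidb-test

# Restarting only drainer was not enough here; restart the whole cluster
tiup cluster restart tidb-test

# Confirm all components, including drainer, report Up
tiup cluster display tidb-test
```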


Thank you for the reply and the solution :+1:

This topic was automatically closed 1 minute after the last reply. New replies are no longer allowed.