TiDB Binlog replication to Kafka: some tables' data is not synchronized

TiDB Binlog is replicating data to Kafka, but data from some tables is not synchronized. Filtering has been ruled out: no filters are configured for the task. Replication also worked fine before and then suddenly stopped, with some tables syncing and others not. The drainer log shows the following error:

time="2019-08-30T17:06:24+08:00" level=info msg="[pd] create pd client with endpoints [http://192.168.10.201:2379]"
time="2019-08-30T17:06:24+08:00" level=info msg="[pd] leader switches to: http://192.168.10.201:2379, previous: "
time="2019-08-30T17:06:24+08:00" level=info msg="[pd] init cluster id 6727923379227328883"
time="2019-08-30T17:06:24+08:00" level=info msg="new store"
time="2019-08-30T17:06:24+08:00" level=info msg="[pd] create pd client with endpoints [192.168.10.201:2379]"
time="2019-08-30T17:06:24+08:00" level=info msg="[pd] leader switches to: http://192.168.10.201:2379, previous: "
time="2019-08-30T17:06:24+08:00" level=info msg="[pd] init cluster id 6727923379227328883"
panic: kafka: Failed to produce message to topic kafka_obinlog: kafka server: Message was too large, server rejected it to avoid allocation error.

goroutine 293 [running]:
github.com/pingcap/tidb-binlog/drainer/executor.(*kafkaExecutor).run(0xc000138780)
	/home/jenkins/workspace/release_tidb_2.1/go/src/github.com/pingcap/tidb-binlog/drainer/executor/kafka.go:187 +0x53d
created by github.com/pingcap/tidb-binlog/drainer/executor.newKafka
	/home/jenkins/workspace/release_tidb_2.1/go/src/github.com/pingcap/tidb-binlog/drainer/executor/kafka.go:88 +0x35d
2019/08/30 17:06:41 config.go:296: [info] get kafka addrs from zookeeper: 192.168.10.202:9092,192.168.10.203:9092,192.168.10.204:9092,192.168.10.201:9092
time="2019-08-30T17:06:41+08:00" level=info msg="[pd] create pd client with endpoints [http://192.168.10.201:2379]"
time="2019-08-30T17:06:41+08:00" level=info msg="[pd] leader switches to: http://192.168.10.201:2379, previous: "
time="2019-08-30T17:06:41+08:00" level=info msg="[pd] init cluster id 6727923379227328883"
time="2019-08-30T17:06:41+08:00" level=info msg="new store"
time="2019-08-30T17:06:41+08:00" level=info msg="[pd] create pd client with endpoints [192.168.10.201:2379]"
time="2019-08-30T17:06:41+08:00" level=info msg="[pd] leader switches to: http://192.168.10.201:2379, previous: "
time="2019-08-30T17:06:41+08:00" level=info msg="[pd] init cluster id 6727923379227328883"

The error says "Message was too large", so that is the direction worth troubleshooting.

OK, thanks. After adjusting the Kafka configuration as follows, replication works again:

#maximum size of a message the broker will accept
message.max.bytes=200000000
#maximum size of a message that can be fetched for replication between brokers
replica.fetch.max.bytes=204857600
#maximum size of a message the consumer can fetch
fetch.message.max.bytes=204857600
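
For context, these three settings raise the size limit at different points in the pipeline and need to stay consistent with each other: message.max.bytes caps what the broker accepts, replica.fetch.max.bytes must be at least as large as message.max.bytes or an oversized message will be accepted by the leader but never copied to the follower replicas, and the consumer-side fetch size must also be at least as large or consumers will stall on the big message. A minimal annotated sketch of that relationship (the 200000000 value is illustrative, and fetch.message.max.bytes applies to the old consumer API; newer consumers use max.partition.fetch.bytes and fetch.max.bytes instead):

#broker-side cap on a single message; a per-topic override also exists as max.message.bytes
message.max.bytes=200000000
#must be >= message.max.bytes, or large messages never reach follower replicas
replica.fetch.max.bytes=200000000
#old-consumer fetch size; must also be >= message.max.bytes
fetch.message.max.bytes=200000000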