TiCDC changefeed to Kafka created successfully, but no CDC data shows up in the Kafka topic

【TiDB Environment】Testing / PoC
【TiDB Version】8.1
【CDC Version】8.1
【Operating System】CentOS 7.6
【Deployment】Virtual machine
【Cluster Nodes】Single-node environment
【Problem: symptoms and impact】The changefeed to Kafka was created successfully, but after running DML on the upstream, not a single CDC record is pushed downstream; the Kafka topic stays empty.

[root@node13 bin]# ./cdc cli changefeed create \
--server=http://192.168.50.68:8300 \
--sink-uri="kafka://192.168.50.85:39093/topic-name?protocol=canal-json&kafka-version=2.1.0&topic-name=yhtest8&partition-num=1&max-message-bytes=67108864&replication-factor=1" \
--changefeed-id="test-cdc-task"

Create changefeed successfully!
ID: test-cdc-task
Info: {"upstream_id":7469677945147784521,"namespace":"default","id":"test-cdc-task","sink_uri":"kafka://192.168.50.85:39093/topic-name?protocol=canal-json\u0026kafka-version=2.1.0\u0026topic-name=yhtest8\u0026partition-num=1\u0026max-message-bytes=67108864\u0026replication-factor=1","create_time":"2025-07-01T14:14:58.84691822+08:00","start_ts":459106025132523526,"config":{"memory_quota":1073741824,"case_sensitive":false,"force_replicate":false,"ignore_ineligible_table":false,"check_gc_safe_point":true,"enable_sync_point":false,"enable_table_monitor":false,"bdr_mode":false,"sync_point_interval":600000000000,"sync_point_retention":86400000000000,"filter":{"rules":["*.*"]},"mounter":{"worker_num":16},"sink":{"protocol":"canal-json","csv":{"delimiter":",","quote":"\"","null":"\\N","include_commit_ts":false,"binary_encoding_method":"base64","output_old_value":false,"output_handle_key":false},"encoder_concurrency":32,"terminator":"\r\n","date_separator":"day","enable_partition_separator":true,"enable_kafka_sink_v2":false,"only_output_updated_columns":false,"delete_only_output_handle_key_columns":false,"content_compatible":false,"advance_timeout":150,"send_bootstrap_interval_in_sec":120,"send_bootstrap_in_msg_count":10000,"send_bootstrap_to_all_partition":true,"debezium_disable_schema":false,"debezium":{"output_old_value":true},"open":{"output_old_value":true}},"consistent":{"level":"none","max_log_size":64,"flush_interval":2000,"meta_flush_interval":200,"encoding_worker_num":16,"flush_worker_num":8,"use_file_backend":false,"memory_usage":{"memory_quota_percentage":50}},"scheduler":{"enable_table_across_nodes":false,"region_threshold":100000,"write_key_threshold":0},"integrity":{"integrity_check_level":"none","corruption_handle_level":"warn"},"changefeed_error_stuck_duration":1800000000000,"synced_status":{"synced_check_interval":300,"checkpoint_interval":15}},"state":"normal","creator_version":"v8.1.0","resolved_ts":459106025132523526,"checkpoint_ts":459106025132523526,"checkpoint_time":"2025-07-01 14:14:58.705"}

[root@node13 bin]# curl -X GET http://192.168.50.68:8300/api/v2/changefeeds?state=normal
{"total":1,"items":[{"upstream_id":7469677945147784521,"namespace":"default","id":"test-cdc-task","state":"normal","checkpoint_tso":459106295651762182,"checkpoint_time":"2025-07-01 14:32:10.654","error":null}]}[root@node13 bin]#

So the changefeed to Kafka was created successfully, its state is normal, and the checkpoint keeps advancing, which suggests TiCDC is pulling change data from the upstream TiDB cluster and processing it. Yet nothing gets written to Kafka.
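To see where events are actually landing, the broker can be checked with the stock Kafka CLI tools. A sketch; script names, paths, and the bootstrap address depend on how Kafka is installed:

# List all topics -- check whether a topic literally named "topic-name"
# exists alongside (or instead of) "yhtest8"
kafka-topics.sh --bootstrap-server 192.168.50.85:39093 --list

# Read the suspect topic from the beginning to see whether the CDC
# events landed there
kafka-console-consumer.sh --bootstrap-server 192.168.50.85:39093 \
  --topic topic-name --from-beginning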

What exactly are you asking?

You need a configuration file that specifies which databases and tables to replicate, for example:

# How long a changefeed may stay in an error state before it is failed
changefeed-error-stuck-duration = "30m"

[sink]
# Route events from each database to its own Kafka topic
dispatchers = [
  {matcher = ['test1.*'], topic = "test1-topic"},
  {matcher = ['test2.*'], topic = "test2-topic"},
  {matcher = ['test3.*'], topic = "test3-topic"}
]

[mounter]
# Number of threads that decode KV changes into row events
worker-num = 16

[filter]
# Only replicate tables in the test1, test2, and test3 databases
rules = ['test1.*','test2.*','test3.*']
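
The file is then passed with --config when creating the changefeed. A minimal sketch, assuming the config above is saved as changefeed.toml and reusing the addresses from your post:

# If the same changefeed ID is reused, remove the old task first
./cdc cli changefeed remove --server=http://192.168.50.68:8300 -c test-cdc-task

./cdc cli changefeed create \
  --server=http://192.168.50.68:8300 \
  --sink-uri="kafka://192.168.50.85:39093/yhtest8?protocol=canal-json" \
  --config changefeed.toml \
  --changefeed-id="test-cdc-task"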

That shouldn't be necessary; by default everything except the system databases is replicated. I also push to Kafka (an Alibaba Cloud managed instance), and as soon as the task was created successfully the data came through.

Specify the databases and tables you actually need to replicate. Your task's filter rules show *.* — I doubt that takes effect.

The topic is specified the wrong way: with this sink URI, data is sent to the topic named topic-name (the path segment), not to yhtest8.
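
For illustration, a corrected command would put yhtest8 in the URI path and drop the topic-name query parameter, which is not a documented TiCDC Kafka sink option (the other parameters are kept from the original post):

# Use a new changefeed ID, or remove the old task first
./cdc cli changefeed create \
  --server=http://192.168.50.68:8300 \
  --sink-uri="kafka://192.168.50.85:39093/yhtest8?protocol=canal-json&kafka-version=2.1.0&partition-num=1&max-message-bytes=67108864&replication-factor=1" \
  --changefeed-id="test-cdc-task-new"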