Q: show variables like "log_bin";

To keep things efficient, please provide the following information when asking; clearly described problems get a faster response.

  • [TiDB version]: 5.7.25-TiDB-v4.0.0
  • [Problem description]: As shown below, is my binlog enabled or not? The docs say a value of "ON" means enabled, so this 0 has me confused.
MySQL [feature]> show variables like "log_bin";
+---------------+-------+
| Variable_name | Value |
+---------------+-------+
| log_bin       | 0     |
+---------------+-------+
1 row in set (0.02 sec)

MySQL [feature]> select @@version;
+--------------------+
| @@version          |
+--------------------+
| 5.7.25-TiDB-v4.0.0 |
+--------------------+
1 row in set (0.00 sec)

MySQL [feature]> show pump status;
+-------------------+-------------------+--------+--------------------+---------------------+
| NodeID            | Address           | State  | Max_Commit_Ts      | Update_Time         |
+-------------------+-------------------+--------+--------------------+---------------------+
| 10.16.16.134:8250 | 10.16.16.134:8250 | online | 419198173690724353 | 2020-09-03 14:22:07 |
| 10.16.16.49:8250  | 10.16.16.49:8250  | online | 419198173087793153 | 2020-09-03 14:22:07 |
| 10.16.16.131:8250 | 10.16.16.131:8250 | online | 419198173743153153 | 2020-09-03 14:22:07 |
+-------------------+-------------------+--------+--------------------+---------------------+
3 rows in set (0.00 sec)

MySQL [feature]> show drainer status;
+------------------+------------------+--------+--------------------+---------------------+
| NodeID           | Address          | State  | Max_Commit_Ts      | Update_Time         |
+------------------+------------------+--------+--------------------+---------------------+
| 10.16.16.49:8249 | 10.16.16.49:8249 | online | 419198175316017154 | 2020-09-03 14:22:17 |
+------------------+------------------+--------+--------------------+---------------------+
1 row in set (0.01 sec)

Could you confirm whether binlog is enabled on the current tidb-server?

How do I check that?

Hi, 0 -> off, 1 -> on. It is currently off. Thanks.

server_configs:
  tidb:
    binlog.enable: true
    binlog.ignore-error: true

pump_servers:
  - host: 10.16.16.49
  - host: 10.16.16.134
  - host: 10.16.16.131
drainer_servers:
  - host: 10.16.16.49
    config:
      syncer.db-type: "kafka"
      syncer.to.zookeeper-addrs: "10.16.12.10:2181,10.16.12.234:2181,10.16.12.78:2181"
      syncer.to.kafka-addrs: "10.16.12.10:9092,10.16.12.234:9092,10.16.12.78:9092"
      syncer.to.kafka-version: "1.1.0"
      syncer.to.kafka-max-messages: 1024
      syncer.to.topic-name: "feature_binlog"  
      syncer.replicate-do-db: ["feature"] 

But I set it to true, didn't I?

  1. After modifying the file, did you reload tidb?
  2. If you did reload, please share the tidb log; the effective parameter values can be seen there. Thanks.
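The edit-and-reload flow above can be sketched with tiup as follows (a sketch; the cluster name `tidb-test2` is taken from a later message in this thread, so substitute your own from `tiup cluster list`):

```shell
# Open the cluster topology/configuration for editing (in $EDITOR);
# "tidb-test2" is the cluster name mentioned later in this thread.
tiup cluster edit-config tidb-test2

# Rolling-reload only the tidb-server instances so they pick up
# binlog.enable; -R restricts the reload to the given role.
tiup cluster reload tidb-test2 -R tidb
```

After the reload finishes, `show variables like "log_bin";` should report 1.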

Ah... after I reloaded tidb, the query does return 1 now. But here's the thing: I remember I didn't reload tidb when adding tikv nodes before, either.

Then the change simply had not taken effect. After modifying the configuration file, remember to reload as prompted. Thanks.

One more question: I inserted a few rows of test data, but Kafka isn't consuming anything.

Confirm whether the tidb-server you are connected to has binlog enabled.

What do you mean? tiup cluster edit-config tidb-test2?

show variables like "log_bin";
Check whether it is 1 or ON.
If it is enabled, execute a statement on tidb and check whether Kafka receives the data.
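One way to watch the topic, assuming the stock Kafka console tools and the broker addresses and topic name from the drainer config earlier in this thread:

```shell
# Tail the drainer's target topic from the beginning.
# Broker list and topic name come from the drainer config in this thread.
kafka-console-consumer.sh \
  --bootstrap-server 10.16.12.10:9092,10.16.12.234:9092,10.16.12.78:9092 \
  --topic feature_binlog \
  --from-beginning
```

Note that the messages are Protobuf-encoded (TiDB's binlog slave protocol), so the console output will look like binary; any bytes arriving after an INSERT already confirm delivery.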

MySQL [feature]> show variables like "log_bin";
+---------------+-------+
| Variable_name | Value |
+---------------+-------+
| log_bin       | 1     |
+---------------+-------+
1 row in set (0.01 sec)

It is already 1, but after an insert Kafka still has no data.

And I just noticed the drainer went down.

OK, check the drainer log. Is any error reported there?

[2020/09/03 14:14:56.183 +08:00] [INFO] [main.go:46] ["start drainer..."] [config="{"log-level":"info","node-id":"10.16.16.49:8249","addr":"http://10.16.16.49:8249","advertise-addr":"http://10.16.16.49:8249","data-dir":"/tidb-data/drainer-8249","detect-interval":5,"pd-urls":"http://10.16.16.134:2379,http://10.16.16.49:2379","log-file":"/tidb-deploy/drainer-8249/log/drainer.log","initial-commit-ts":0,"sycner":{"sql-mode":null,"ignore-txn-commit-ts":null,"ignore-schemas":"INFORMATION_SCHEMA,PERFORMANCE_SCHEMA,mysql","ignore-table":null,"txn-batch":20,"loopback-control":false,"sync-ddl":true,"channel-id":0,"worker-count":1,"to":{"host":"","user":"","password":"","security":{"ssl-ca":"","ssl-cert":"","ssl-key":"","cert-allowed-cn":null},"encrypted_password":"","sync-mode":0,"port":0,"checkpoint":{"type":"","schema":"","host":"","user":"","password":"","encrypted_password":"","port":0},"dir":"","retention-time":0,"zookeeper-addrs":"10.16.12.10:2181,10.16.12.234:2181,10.16.12.78:2181","kafka-addrs":"10.16.12.84:9092,10.16.12.10:9092,10.16.12.249:9092","kafka-version":"1.1.0","kafka-max-messages":1024,"kafka-client-id":"","topic-name":"feature_binlog"},"replicate-do-table":null,"replicate-do-db":["feature"],"db-type":"kafka","relay":{"log-dir":"","max-file-size":10485760},"disable-dispatch-flag":null,"enable-dispatch-flag":null,"disable-dispatch":null,"enable-dispatch":null,"safe-mode":false,"disable-detect-flag":null,"enable-detect-flag":null,"disable-detect":null,"enable-detect":null},"security":{"ssl-ca":"","ssl-cert":"","ssl-key":"","cert-allowed-cn":null},"synced-check-time":5,"compressor":"","EtcdTimeout":5000000000,"MetricsAddr":"","MetricsInterval":15}"]

[2020/09/03 15:44:58.624 +08:00] [INFO] [client.go:716] ["[sarama] Client background metadata update:kafka: no specific topics to update metadata"]

[2020/09/03 15:45:30.105 +08:00] [ERROR] [server.go:289] ["syncer exited abnormal"] [error="filterTable failed: not found table id: 15607"] [errorVerbose="not found table id: 15607\ngithub.com/pingcap/tidb-binlog/drainer.filterTable\n\t/home/jenkins/agent/workspace/build_tidb_binlog_master/go/src/github.com/pingcap/tidb-binlog/drainer/syncer.go:514\ngithub.com/pingcap/tidb-binlog/drainer.(*Syncer).run\n\t/home/jenkins/agent/workspace/build_tidb_binlog_master/go/src/github.com/pingcap/tidb-binlog/drainer/syncer.go:368\ngithub.com/pingcap/tidb-binlog/drainer.(*Syncer).Start\n\t/home/jenkins/agent/workspace/build_tidb_binlog_master/go/src/github.com/pingcap/tidb-binlog/drainer/syncer.go:132\ngithub.com/pingcap/tidb-binlog/drainer.(*Server).Start.func4\n\t/home/jenkins/agent/workspace/build_tidb_binlog_master/go/src/github.com/pingcap/tidb-binlog/drainer/server.go:288\ngithub.com/pingcap/tidb-binlog/drainer.(*taskGroup).start.func1\n\t/home/jenkins/agent/workspace/build_tidb_binlog_master/go/src/github.com/pingcap/tidb-binlog/drainer/util.go:75\nruntime.goexit\n\t/usr/local/go/src/runtime/asm_amd64.s:1357\nfilterTable failed"]

So how do I track down the problem and restart the node?

What is the downstream of Kafka?
From the drainer error, it failed to recognize this table id. This is likely because binlog was not enabled earlier, so the tidb-server DDL owner's schema information was never replicated downstream, and the drainer has no metadata for this table.
A convenient way to recover is to scale in the pump and drainer nodes and then scale them out again.
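The scale-in/scale-out recovery could look roughly like this (a sketch; the cluster name `tidb-test2` and the node addresses are the ones from this thread, and removing an already-down drainer may additionally need `--force`):

```shell
# Remove the drainer node, then the pump nodes.
tiup cluster scale-in tidb-test2 -N 10.16.16.49:8249
tiup cluster scale-in tidb-test2 -N 10.16.16.49:8250,10.16.16.134:8250,10.16.16.131:8250

# Add them back with a topology file containing the same
# pump_servers / drainer_servers sections as before.
tiup cluster scale-out tidb-test2 scale-out.yaml
```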

OK, got it; I'll try the scale-in/scale-out tomorrow.
The downstream of Kafka? I don't follow. What does that have to do with the error?

OK.

I want to confirm that Kafka can be consumed normally, so we have the full picture and avoid Kafka problems after the drainer is recovered.

Yeah, I haven't tested that yet; I'll write a few rows of test data and check.

:ok_hand: