Wander
(Wander)
1
To improve efficiency, please provide the following information; a clear problem description gets resolved faster:
【TiDB environment】
v5.2.2
Data is replicated between the tidb1 cluster and the tidb2 cluster via TiCDC:
tidb1 is the master; tidb2 is tidb1's slave.
CDC-related configuration on tidb1:
172.18.6.101:8300 cdc 172.18.6.101 8300 linux/x86_64 Up /data/deploy/install/data/cdc-8300 /home/tidb/deploy/cdc-8300
The changefeed task is created on tidb2.
On tidb1:
create database a;
use a; create table test(id int);
Database a is replicated to tidb2, but table test is not.
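One cause worth checking (my assumption; nothing in the thread confirms it): in v5.x, TiCDC only replicates tables that have a valid index, meaning a primary key or a unique index on a non-null column, and `test(id int)` has neither, so the changefeed may simply be skipping the table. A variant declared with a primary key would be eligible:

```sql
-- Hypothetical repro on tidb1: same shape as the original table,
-- but with a primary key, which makes it eligible for TiCDC.
CREATE TABLE a.test2 (id INT PRIMARY KEY);
```

If `test2` shows up downstream while `test` does not, the missing valid index is the likely explanation.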
【Overview】 Scenario + problem summary
【Backup and data migration strategy】
【Background】 Operations performed
【Symptoms】 Business and database symptoms
【Problem】 Current issue
【Business impact】
【TiDB version】
【Attachments】
- Relevant logs, configuration files, Grafana monitoring (https://metricstool.pingcap.com/)
- TiUP Cluster Display output
- TiUP Cluster Edit config output
- TiDB-Overview monitoring
- Grafana dashboards for the relevant modules (BR, TiDB-binlog, TiCDC, etc., if any)
- Logs of the relevant modules (covering 1 hour before and after the problem)
For performance tuning or troubleshooting questions, please download and run the diagnostic script. Select all of the terminal output and copy-paste it into your post.
Wander
(Wander)
3
Deployed directly with the TiUP binary package.
The topology file for tidb1 is as follows:
global:
  user: "tidb"

monitored:
  node_exporter_port: 9100
  blackbox_exporter_port: 9115
  deploy_dir: "/tidb-deploy/monitored-9100"
  data_dir: "/tidb-data/monitored-9100"
  log_dir: "/tidb-deploy/monitored-9100/log"

pd_servers:
  - host: 172.16.67.101
    deploy_dir: "/tidb-deploy/pd-2379"
    data_dir: "/tidb-data/pd-2379"
    log_dir: "/tidb-deploy/pd-2379/log"

tidb_servers:
  - host: 172.16.67.101
    deploy_dir: "/tidb-deploy/tidb-4000"
    log_dir: "/tidb-deploy/tidb-4000/log"

tikv_servers:
  - host: 172.16.67.101
    deploy_dir: "/tidb-deploy/tikv-20160"
    data_dir: "/tidb-data/tikv-20160"
    log_dir: "/tidb-deploy/tikv-20160/log"

monitoring_servers:
  - host: 172.16.67.101
    deploy_dir: "/tidb-deploy/prometheus-8249"
    data_dir: "/tidb-data/prometheus-8249"
    log_dir: "/tidb-deploy/prometheus-8249/log"

grafana_servers:
  - host: 172.16.67.101
    deploy_dir: /tidb-deploy/grafana-3000

alertmanager_servers:
  - host: 172.16.67.101
    deploy_dir: "/tidb-deploy/alertmanager-9093"
    data_dir: "/tidb-data/alertmanager-9093"
    log_dir: "/tidb-deploy/alertmanager-9093/log"
The scale-out file used to add CDC to tidb1 is as follows:
cdc_servers:
  - host: 172.16.67.101
    gc-ttl: 86400
    data_dir: /data/deploy/install/data/cdc-8300
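For reference, a scale-out file like the one above is applied with `tiup cluster scale-out`; the cluster name `tidb1` and the file name below are assumptions, and the CDC port defaults to 8300 when not set:

```yaml
# scale-out-cdc.yaml -- hypothetical file name.
# Applied with: tiup cluster scale-out tidb1 scale-out-cdc.yaml
cdc_servers:
  - host: 172.16.67.101
    port: 8300          # default CDC port, made explicit for clarity
    gc-ttl: 86400       # seconds the CDC service-level GC safepoint is held
    data_dir: /data/deploy/install/data/cdc-8300
```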
############
The topology file for tidb2 is as follows:
global:
  user: "tidb"

monitored:
  node_exporter_port: 9100
  blackbox_exporter_port: 9115
  deploy_dir: "/tidb-deploy/monitored-9100"
  data_dir: "/tidb-data/monitored-9100"
  log_dir: "/tidb-deploy/monitored-9100/log"

pd_servers:
  - host: 172.16.67.102
    deploy_dir: "/tidb-deploy/pd-2379"
    data_dir: "/tidb-data/pd-2379"
    log_dir: "/tidb-deploy/pd-2379/log"

tidb_servers:
  - host: 172.16.67.102
    deploy_dir: "/tidb-deploy/tidb-4000"
    log_dir: "/tidb-deploy/tidb-4000/log"

tikv_servers:
  - host: 172.16.67.102
    deploy_dir: "/tidb-deploy/tikv-20160"
    data_dir: "/tidb-data/tikv-20160"
    log_dir: "/tidb-deploy/tikv-20160/log"

monitoring_servers:
  - host: 172.16.67.102
    deploy_dir: "/tidb-deploy/prometheus-8249"
    data_dir: "/tidb-data/prometheus-8249"
    log_dir: "/tidb-deploy/prometheus-8249/log"

grafana_servers:
  - host: 172.16.67.102
    deploy_dir: /tidb-deploy/grafana-3000

alertmanager_servers:
  - host: 172.16.67.102
    deploy_dir: "/tidb-deploy/alertmanager-9093"
    data_dir: "/tidb-data/alertmanager-9093"
    log_dir: "/tidb-deploy/alertmanager-9093/log"
The task is started against tidb2 as follows:
./cdc cli changefeed create --pd=http://172.16.67.101:2379 --sink-uri="tidb://root:@172.16.67.102:4000/" --changefeed-id="simple-replication-task" --sort-engine="unified"
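If the missing table turns out to be ineligible (no valid index), TiCDC's documented `force-replicate` option can make a changefeed carry such tables anyway; whether that is the root cause here is an assumption. The option goes in a configuration file passed to `changefeed create` via `--config`:

```toml
# changefeed.toml (hypothetical file name), passed as:
#   ./cdc cli changefeed create ... --config changefeed.toml
# force-replicate lets TiCDC sync tables without a primary key or
# unique not-null index. Caution: without a unique key, TiCDC cannot
# guarantee idempotent writes to the downstream for those tables.
force-replicate = true
```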
xfworld
(魔幻之翼)
4
Check the status of the simple-replication-task changefeed to see whether it is normal.
Wander
(Wander)
5
./cdc cli changefeed query -s --pd=http://172.16.67.101:2379 --changefeed-id=simple-replication-task
{
  "state": "normal",
  "tso": 431271549299064833,
  "checkpoint": "2022-02-18 15:46:39.853",
  "error": null
}
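The `-s` output above can also be checked programmatically. A minimal sketch that extracts the `state` field from the JSON shown in this thread (parsing with python3, since `jq` may not be installed):

```shell
# Save the changefeed query output shown above and pull out "state".
cat > /tmp/changefeed.json <<'EOF'
{
  "state": "normal",
  "tso": 431271549299064833,
  "checkpoint": "2022-02-18 15:46:39.853",
  "error": null
}
EOF
python3 -c "import json; print(json.load(open('/tmp/changefeed.json'))['state'])"
```

Note that `state: normal` only says the changefeed process is running; it does not show which tables the changefeed considers eligible for replication.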
xfworld
(魔幻之翼)
6
system
(system)
Closed
8
This topic was automatically closed 1 minute after the last reply. New replies are no longer allowed.