Just asking: migrating a TiDB cluster across DCs (cloud regions)

【TiDB environment】Production
【TiDB versions】v4.0.16, v5.0.1, v5.0.4, v5.0.5

【Problem: symptoms and impact】
Background:
We plan to move to a different cloud region, which effectively means migrating the TiDB cluster across DCs. The inter-DC network bandwidth is 40 Mb/s, and all services and databases will be migrated over this same link.

Question 1:
Can the versions above complete the migration via scale-out/scale-in? What risks would there be?

Question 2:
If we build a TiDB primary/secondary pair replicated via TiCDC, will the TSOs of the two clusters be consistent?
The plan is as follows (diagram omitted):

Re: Question 2:
https://docs.pingcap.com/zh/tidb/stable/upstream-downstream-diff
As of TiCDC v6.4.0 there is a ts-map that records pairs of consistent-snapshot timestamps between the upstream and downstream clusters.

As you can see, the TSOs of the primary and secondary TiDB clusters are not identical, so you cannot take tso-1 directly as the start-ts of the new TiCDC changefeed in DC4.
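
For reference, the ts-map lives in the downstream cluster. Assuming the changefeed was created with the syncpoint feature enabled (which, per the linked doc, writes to the downstream's tidb_cdc.syncpoint_v1 table), it can be read with something like:

-- Run on the downstream (secondary) cluster:
SELECT * FROM tidb_cdc.syncpoint_v1;

which returns rows like the following: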

+------------------+----------------+--------------------+--------------------+---------------------+
| ticdc_cluster_id | changefeed     | primary_ts         | secondary_ts       | created_at          |
+------------------+----------------+--------------------+--------------------+---------------------+
| default          | test-2         | 435953225454059520 | 435953235516456963 | 2022-09-13 08:40:15 |
+------------------+----------------+--------------------+--------------------+---------------------+

What you need is to find, in the ts-map, the upstream/downstream TSO pair closest to (and no later than) the point where the DC1 CDC task was paused, use the downstream TSO to create the new CDC changefeed in DC4, and then deduplicate the CDC data on the application side.
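
A hedged sketch of that step (the PD address and sink URI are placeholders; --start-ts takes the downstream secondary_ts found in the ts-map):

# Create the new changefeed on the DC4 TiCDC, starting from the downstream TSO:
cdc cli changefeed create \
  --pd=http://<dc4-pd-host>:2379 \
  --sink-uri="mysql://user:password@<target-host>:4000/" \
  --start-ts=435953235516456963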

Thanks, that approach is understood. But our TiCDC is v5.0.4, which does not seem to have this ts-map.

Our current data format is canal-json. Can the es field be used as a unique identifier?

{"id":0,"database":"sbtest","table":"sbtest1","pkNames":["id"],"isDdl":false,"type":"UPDATE","es":1678698106465,"ts":0,"sql":"","sqlType":{"c":1,"id":-5,"k":-5,"pad":1},"mysqlType":{"c":"char","id":"int","k":"int","pad":"char"},"data":[{"c":"40885458645-32642579611-87780805360-75786734419-72768045065-73753996681-89397932258-52312229413-39775267520-39666045879","id":"2517","k":"2918","pad":"65257871835-02336757793-35547215331-13506539015-36914329313"}],"old":[{"c":"60863650832-99507440173-07738309387-99422695339-12533914802-83346224518-76619046045-53817415661-47267488726-39986665474","id":"2517","k":"2499","pad":"38090510652-07702250434-08975824054-31762704218-35254676445"}]}
{"c":"char","id":"int","k":"int","pad":"char"},"data":[{"c":"36748437506-94556857415-69545915809-09888142405-70283253843-49398631621-75942281182-73213913728-69818887950-29633019257","id":"2492","k":"2521","pad":"90443775370-75210883269-83077322641-39372892294-63319191513"}],"old":[{"c":"98468727765-71098460514-17871547751-31115406523-51850727858-10040790503-10290411769-16980158605-47885080784-68064720857","id":"2492","k":"2521","pad":"90443775370-75210883269-83077322641-39372892294-63319191513"}]}
{"id":0,"database":"sbtest","table":"sbtest1","pkNames":["id"],"isDdl":false,"type":"INSERT","es":1678698106465,"ts":0,"sql":"","sqlType":{"c":1,"id":-5,"k":-5,"pad":1},"mysqlType":{"c":"char","id":"int","k":"int","pad":"char"},"data":[{"c":"09241211417-57691820593-79764562888-73842992267-41262887361-75389434110-77222691084-93085932883-64958287620-62880885482","id":"3082","k":"2525","pad":"01968840133-25459477374-54317852552-80338720400-75459953512"}],"old":[null]}
{"id":0,"database":"sbtest","table":"sbtest1","pkNames":["id"],"isDdl":false,"type":"UPDATE","es":1678698106465,"ts":0,"sql":"","sqlType":{"c":1,"id":-5,"k":-5,"pad":1},"mysqlType":{"c":"char","id":"int","k":"int","pad":"char"},"data":[{"c":"19934403057-34971673456-92467389935-20811426463-07174676987-01687857565-39759477338-48074877637-87372120758-58739047390","id":"2513","k":"2443","pad":"91046045506-19115897563-62460380646-15683524292-24522152238"}],"old":[{"c":"19934403057-34971673456-92467389935-20811426463-07174676987-01687857565-39759477338-48074877637-87372120758-58739047390","id":"2513","k":"2442","pad":"91046045506-19115897563-62460380646-15683524292-24522152238"}]}
{"id":0,"database":"sbtest","table":"sbtest1","pkNames":["id"],"isDdl":false,"type":"UPDATE","es":1678698106465,"ts":0,"sql":"","sqlType":{"c":1,"id":-5,"k":-5,"pad":1},"mysqlType":{"c":"char","id":"int","k":"int","pad":"char"},"data":[{"c":"13691623327-83005145805-83631958316-46123856772-15705105838-81081547402-36969822895-52100011575-05302806548-04773569224","id":"2525","k":"2500","pad":"43502600413-60335724659-82061816160-56772024807-72825945280"}],"old":[{"c":"89539530113-62301570253-03465359037-07700414367-29592947089-72988169921-24405068518-18749371172-06521742235-43439314539","id":"2525","k":"2500","pad":"43502600413-60335724659-82061816160-56772024807-72825945280"}]}
{"id":0,"database":"sbtest","table":"sbtest1","pkNames":["id"],"isDdl":false,"type":"UPDATE","es":1678698106465,"ts":0,"sql":"","sqlType":{"c":1,"id":-5,"k":-5,"pad":1},"mysqlType":{"c":"char","id":"int","k":"int","pad":"char"},"data":[{"c":"31844738662-77280625167-89838779753-82309602385-13504667772-77003535616-92995740179-47081762615-93223864315-86695008012","id":"2526","k":"2512","pad":"43150350394-96332674850-41785125532-36262395640-43793389427"}],"old":[{"c":"53678190762-55451278874-74926587698-38804022752-04443180180-72947618338-99192311629-59893116827-91530418203-30890060984","id":"2526","k":"2443","pad":"92877974684-89527046757-36449185964-90804075956-95335136960"}]}
{"id":0,"database":"sbtest","table":"sbtest1","pkNames":["id"],"isDdl":false,"type":"UPDATE","es":1678698106515,"ts":0,"sql":"","sqlType":{"c":1,"id":-5,"k":-5,"pad":1},"mysqlType":{"c":"char","id":"int","k":"int","pad":"char"},"data":[{"c":"19800640099-72376342866-33213366919-93251112341-11592349316-05428160136-50966466520-62185565371-26685956060-76118892153","id":"2501","k":"2508","pad":"65724646516-72204530063-58424860195-51681546933-07413984546"}],"old":[{"c":"19800640099-72376342866-33213366919-93251112341-11592349316-05428160136-50966466520-62185565371-26685956060-76118892153","id":"2501","k":"2507","pad":"65724646516-72204530063-58424860195-51681546933-07413984546"}]}
{"id":0,"database":"sbtest","table":"sbtest1","pkNames":["id"],"isDdl":false,"type":"UPDATE","es":1678698106515,"ts":0,"sql":"","sqlType":{"c":1,"id":-5,"k":-5,"pad":1},"mysqlType":{"c":"char","id":"int","k":"int","pad":"char"},"data":[{"c":"91841022490-65183271807-21614876311-66664605746-60061001119-83526200880-22194712722-38362928776-47160873623-16284071693","id":"2518","k":"2514","pad":"02776960888-51240075245-26601022106-62361518668-84400300030"}],"old":[{"c":"59692093312-98684014182-92278707377-13901172019-05189745761-74481492881-88246290598-76145176570-80122731207-28078526067","id":"2518","k":"2507","pad":"47391824939-71827390667-04782801007-92236607649-23469695120"}]}
{"id":0,"database":"sbtest","table":"sbtest1","pkNames":["id"],"isDdl":false,"type":"UPDATE","es":1678698106515,"ts":0,"sql":"","sqlType":{"c":1,"id":-5,"k":-5,"pad":1},"mysqlType":{"c":"char","id":"int","k":"int","pad":"char"},"data":[{"c":"76237400836-26598228655-56274420473-72854069621-84065681506-86585863725-29405215281-86584208215-22608850900-23663890960","id":"2515","k":"2524","pad":"70478713515-41875495154-54274875136-79390411375-64064974633"}],"old":[{"c":"19114950417-60274401512-57071685027-84229394553-33395311129-55171071778-06094682076-24592884977-00674909810-84805217223","id":"2515","k":"2524","pad":"70478713515-41875495154-54274875136-79390411375-64064974633"}]}
{"id":0,"database":"sbtest","table":"sbtest1","pkNames":["id"],"isDdl":false,"type":"INSERT","es":1678698106515,"ts":0,"sql":"","sqlType":{"c":1,"id":-5,"k":-5,"pad":1},"mysqlType":{"c":"char","id":"int","k":"int","pad":"char"},"data":[{"c":"51395841833-42708519690-24428794152-53103598315-46414085449-41809516665-92899184184-32358865054-46477233949-52153611433","id":"2700","k":"2491","pad":"41507791580-89054770416-96344423531-24818895616-34057255403"}],"old":[null]}
{"id":0,"database":"sbtest","table":"sbtest1","pkNames":["id"],"isDdl":false,"type":"INSERT","es":1678698106515,"ts":0,"sql":"","sqlType":{"c":1,"id":-5,"k":-5,"pad":1},"mysqlType":{"c":"char","id":"int","k":"int","pad":"char"},"data":[{"c":"83619892387-60417650883-46504242914-13504855781-85532307808-10348344631-18843486287-72948635733-31250818916-04555101442","id":"2793","k":"2990","pad":"92960521900-42498903363-35989631384-69359851013-30787494047"}],"old":[null]}
{"id":0,"database":"sbtest","table":"sbtest1","pkNames":["id"],"isDdl":false,"type":"UPDATE","es":1678698106515,"ts":0,"sql":"","sqlType":{"c":1,"id":-5,"k":-5,"pad":1},"mysqlType":{"c":"char","id":"int","k":"int","pad":"char"},"data":[{"c":"85868054889-07511000969-42911596530-35347725424-91827654051-65008702917-52332602968-51887664247-86684615430-15159347379","id":"2494","k":"2493","pad":"26792467021-01484785863-45100936154-18995444297-88478383488"}],"old":[{"c":"85868054889-07511000969-42911596530-35347725424-91827654051-65008702917-52332602968-51887664247-86684615430-15159347379","id":"2494","k":"2492","pad":"26792467021-01484785863-45100936154-18995444297-88478383488"}]}
{"id":0,"database":"sbtest","table":"sbtest1","pkNames":["id"],"isDdl":false,"type":"UPDATE","es":1678698106515,"ts":0,"sql":"","sqlType":{"c":1,"id":-5,"k":-5,"pad":1},"mysqlType":{"c":"char","id":"int","k":"int","pad":"char"},"data":[{"c":"21783985126-98084808691-43725135017-00387949361-85105783910-92737547188-11283879525-92430796799-13166349163-02664006206","id":"2488","k":"2077","pad":"59643692351-74416335562-47894835839-88859043548-07712245874"}],"old":[{"c":"63349236844-13583054515-08339603696-25291547972-51163040841-59747865941-96948615494-20282525241-79860160710-26595326278","id":"2488","k":"2077","pad":"59643692351-74416335562-47894835839-88859043548-07712245874"}]}
{"id":0,"database":"sbtest","table":"sbtest1","pkNames":["id"],"isDdl":false,"type":"UPDATE","es":1678698106515,"ts":0,"sql":"","sqlType":{"c":1,"id":-5,"k":-5,"pad":1},"mysqlType":{"c":"char","id":"int","k":"int","pad":"char"},"data":[{"c":"16504897463-94629584892-47554908881-67943009449-05387121982-35633085054-15144861214-28184515442-47633639648-85265902260","id":"2490","k":"2503","pad":"89931153172-94712087801-77595487791-16294616439-35316696813"}],"old":[{"c":"16504897463-94629584892-47554908881-67943009449-05387121982-35633085054-15144861214-28184515442-47633639648-85265902260","id":"2490","k":"2502","pad":"89931153172-94712087801-77595487791-16294616439-35316696813"}]}
{"id":0,"database":"sbtest","table":"sbtest1","pkNames":["id"],"isDdl":false,"type":"UPDATE","es":1678698106515,"ts":0,"sql":"","sqlType":{"c":1,"id":-5,"k":-5,"pad":1},"mysqlType":{"c":"char","id":"int","k":"int","pad":"char"},"data":[{"c":"57871650441-34609648749-61556302419-79611357656-50875938359-79757458909-16970262656-33870182681-42720015636-91822851929","id":"2508","k":"2520","pad":"06061504511-94466593959-02881086504-35429971081-06069653615"}],"old":[{"c":"04541088532-06053666798-15525136093-54266388636-51550468207-92515958213-33177020037-54853722639-80847268874-54395882618","id":"2508","k":"3384","pad":"19770731527-28035293328-91085620215-02508971454-90321757220"}]}
{"id":0,"database":"sbtest","table":"sbtest1","pkNames":["id"],"isDdl":false,"type":"UPDATE","es":1678698106515,"ts":0,"sql":"","sqlType":{"c":1,"id":-5,"k":-5,"pad":1},"mysqlType":{"c":"char","id":"int","k":"int","pad":"char"},"data":[{"c":"22328732408-84553755166-56481834696-46824407968-09221029728-65552763549-78652057272-99695904643-09959496299-77757396239","id":"2514","k":"2510","pad":"32495370377-75268095470-89531226896-49712844359-65742255463"}],"old":[{"c":"81494412638-69779651720-15842456299-87653785182-62338904225-70162278213-54689526797-76128833216-27926990918-97065712167","id":"2514","k":"2510","pad":"32495370377-75268095470-89531226896-49712844359-65742255463"}]}

The es field of a canal-json message is not globally unique, only globally increasing, so the consumer still has to deduplicate when it processes the data.
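
One common way to handle that deduplication is to make the replay idempotent instead of filtering on es: apply each canal-json row event keyed on its pkNames, so re-consuming events from before the cut-over point is harmless. A minimal sketch against the sbtest1 samples above (column values shortened):

-- For INSERT/UPDATE events, upsert the full row image carried in "data":
REPLACE INTO sbtest.sbtest1 (id, k, c, pad)
VALUES (2517, 2918, '40885458645-...', '65257871835-...');
-- For DELETE events, delete by the primary key carried in "data":
DELETE FROM sbtest.sbtest1 WHERE id = 2517;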

Got it, so it is a millisecond-level timestamp. If the timestamps come from two different TiDB clusters, could they differ by a lot?

That depends on the replication lag between the TiCDC upstream and downstream; see the TiCDC changefeed monitoring metrics:

https://docs.pingcap.com/zh/tidb/dev/monitor-ticdc

Under normal replication there is no very large lag.

Is this cloud-region migration same-city or cross-city? If same-city, you can complete it directly with scale-out/scale-in and the impact is small. If cross-city, on TiDB 6.x you can consider Placement Rules in SQL: first add learners on the new DC's TiKV nodes for every table, wait until the data has caught up, then pick a time window and switch all table leaders over to the new DC at once (a sketch follows).
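
A hedged sketch of what that looks like on 6.x (the policy name and dc label value are made up; syntax per the Placement Rules in SQL feature):

-- Add two learners in the new DC for a table:
CREATE PLACEMENT POLICY newdc_learners LEARNER_CONSTRAINTS='[+dc=newdc]' LEARNERS=2;
ALTER TABLE sbtest.sbtest1 PLACEMENT POLICY=newdc_learners;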

PS: on 5.x, placement rules can only be modified through pd-ctl, so you will have to work out how to write them. The configuration in this article may be a useful reference:
专栏 - DR Auto-Sync 搭建和灾难恢复手册 | TiDB 社区 (TiDB community column: "DR Auto-Sync setup and disaster recovery handbook")

It is across regions; inter-region bandwidth is 40 Mb/s, with about 7 ms of network latency.

Then you have to treat it as a cross-city migration; the network latency is too high. Look into how to write placement rules and do the migration by adjusting replica roles:
https://docs.pingcap.com/zh/tidb/stable/configure-placement-rules

PS: only 40M of bandwidth between the two clusters is really small. Bandwidth is usually quoted in bits, so 40 Mb/s is just 5 MB/s, and on top of that all your databases replicate over the same link, so there will definitely be lag at replication peaks (how much depends on your workload).
Fixed bandwidth usually costs far more than pay-as-you-go, and pay-as-you-go bandwidth caps are generally much higher. It may be worth estimating the total data volume and the daily increment; pay-as-you-go could turn out cheaper with more headroom.

PPS: everything I said above follows plan 1. Plan 2 is actually cheaper to operate and easier to understand; you just have to lock down permissions on the new cluster so that services deployed in the new DC cannot write to it by mistake during the migration.

Thanks. Measured throughput is indeed about 5 MB/s.

  1. Tested in practice on v5.0.4: placement rules could not migrate data as expected. (Our TiDB clusters run v4.0.16, v5.0.1, v5.0.4, and v5.0.5; I will test the other versions.)

Suppose a cluster holds 20 TB of region data: at the full 5 MB/s, the transfer takes about 48.55 days.
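(20 TiB ≈ 20 × 1024 × 1024 MiB = 20,971,520 MiB; at 5 MiB/s that is ≈ 4,194,304 s ≈ 48.5 days.)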

  2. Plan 2: TiCDC primary/secondary replication. Our clusters are heavily shared, so services must be migrated one by one, and in the shared databases some services cannot tolerate even a short (< 300 s) write freeze.
   PS: for the DBA this plan is low-cost and, most importantly, controllable;
   the only drawback is that some services cannot stop writes during the migration.

One more question: is your data originally written directly to TiDB, or replicated in from MySQL? If it comes from MySQL, it may be worth approaching this from the DM side.

Both.

  • Some services use dedicated clusters with fairly small data volumes (< 1 TB) and cannot stop writes; those should be able to use this approach.
  • Some clusters are shared: DM writes, direct service writes, and Flink writes all mixed together.

If there are Flink writes, you should also check whether every table has a primary key; as I recall, the table schemas Flink creates by default have no primary key. How to do data verification is another open question. :thinking: (A quick check is sketched below.)
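
A sketch of that check via information_schema (adjust the schema filter to your environment):

SELECT t.TABLE_SCHEMA, t.TABLE_NAME
FROM information_schema.TABLES t
LEFT JOIN information_schema.TABLE_CONSTRAINTS c
  ON c.TABLE_SCHEMA = t.TABLE_SCHEMA
 AND c.TABLE_NAME = t.TABLE_NAME
 AND c.CONSTRAINT_TYPE = 'PRIMARY KEY'
WHERE c.CONSTRAINT_NAME IS NULL
  AND t.TABLE_TYPE = 'BASE TABLE'
  AND t.TABLE_SCHEMA NOT IN ('mysql', 'INFORMATION_SCHEMA', 'PERFORMANCE_SCHEMA', 'METRICS_SCHEMA');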

For the data with high real-time requirements that all comes from MySQL, that portion does not need to go through TiCDC at all: replicate only the MySQL data to the new DC, and let the new DC's TiDB replicate from the new DC's MySQL via DM. That saves some bandwidth too, and at the final cut-over you only have to deal with the MySQL switch. A sketch of such a DM task follows.
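
A hedged sketch (DM 2.x task-file format; all names and addresses are placeholders):

name: "mysql-to-newdc-tidb"
task-mode: all                    # full export plus incremental binlog replication
target-database:
  host: "192.168.9.11"            # placeholder: the new DC's TiDB
  port: 4000
  user: "dm_user"
  password: "******"
mysql-instances:
  - source-id: "mysql-newdc-01"   # placeholder: a source registered via dmctl
    block-allow-list: "bw-rule"
block-allow-list:
  bw-rule:
    do-dbs: ["sbtest"]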


For the other clusters that can tolerate a long write freeze, there are far more options to choose from.

The dedicated clusters are written directly by the services; each core service is deployed on its own cluster.

That is a good idea; it opens up the thinking.

Could you post the rule config you wrote? Looking at the 5.0 docs, it also supports adding followers that do not take part in leader election for all tables:
https://docs.pingcap.com/zh/tidb/v5.0/configure-placement-rules#场景二5-副本按-2-2-1-的比例放置在-3-个数据中心且第-3-个中心不产生-leader

Original cluster topology

global:
  user: tidb
  ssh_port: 22
  deploy_dir: /data/tidb-deploy
  data_dir: /data/tidb-data/
  os: linux
  arch: amd64
monitored:
  node_exporter_port: 39100
  blackbox_exporter_port: 39115
  deploy_dir: /data/tidb-deploy/monitor-39100
  data_dir: /data/tidb-data/monitor_data
  log_dir: /data/tidb-deploy/monitor-39100/log
server_configs:
  tidb:
    oom-use-tmp-storage: true
    performance.max-procs: 0
    performance.txn-total-size-limit: 2097152
    prepared-plan-cache.enabled: true
    tikv-client.copr-cache.capacity-mb: 128.0
    tikv-client.max-batch-wait-time: 0
    tmp-storage-path: /data/tidb-data/tmp_oom
    split-table: true
  tikv:
    coprocessor.split-region-on-table: true
    readpool.coprocessor.use-unified-pool: true
    readpool.storage.use-unified-pool: false
    server.grpc-compression-type: none
    storage.block-cache.shared: true
  pd:
    enable-cross-table-merge: false
    replication.enable-placement-rules: true
    schedule.leader-schedule-limit: 4
    schedule.region-schedule-limit: 2048
    schedule.replica-schedule-limit: 64
    replication.location-labels: ["dc","logic","rack","host"]
  tiflash: {}
  tiflash-learner: {}
  pump: {}
  drainer: {}
  cdc: {}
tidb_servers:
- host: 192.168.8.11
  ssh_port: 22
  port: 4000
  status_port: 10080
  deploy_dir: /data/tidb-deploy/tidb_4000
 
 
tikv_servers:
- host: 192.168.8.11
  ssh_port: 22
  port: 20160
  status_port: 20180
  deploy_dir: /data/tidb-deploy/tikv_20160
  data_dir: /data/tidb-data/tikv_20160
 
 
- host: 192.168.8.12
  ssh_port: 22
  port: 20160
  status_port: 20180
  deploy_dir: /data/tidb-deploy/tikv_20160
  data_dir: /data/tidb-data/tikv_20160
   
 
- host: 192.168.8.13
  ssh_port: 22
  port: 20160
  status_port: 20180
  deploy_dir: /data/tidb-deploy/tikv_20160
  data_dir: /data/tidb-data/tikv_20160
   
 
pd_servers:
- host: 192.168.8.11
  ssh_port: 22
  name: pd-192.168.8.11-2379
  client_port: 2379
  peer_port: 2380
  deploy_dir: /data/tidb-deploy/pd_2379
  data_dir: /data/tidb-data/pd_2379

Update the cluster labels

tiup cluster edit-config tidb_placement_rule_remove
# Add a label config to each TiKV node (the label keys must match
# replication.location-labels: ["dc","logic","rack","host"]):
  config:
    server.labels: { dc: "bj1", logic: "1", rack: "1", host: "192.168.8.11_20160" }

  config:
    server.labels: { dc: "bj1", logic: "1", rack: "1", host: "192.168.8.12_20160" }

  config:
    server.labels: { dc: "bj1", logic: "1", rack: "1", host: "192.168.8.13_20160" }
 
 
# Apply the config change
tiup cluster reload tidb_placement_rule_remove -R tikv -y
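
To confirm the labels took effect, the store labels can be checked from SQL (they are also visible via pd-ctl store):

SELECT STORE_ID, ADDRESS, LABEL FROM information_schema.TIKV_STORE_STATUS;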

Scale out TiKV in the new DC

tiup cluster scale-out tidb_placement_rule_remove scale-out-pr-test.yaml -u root -p
  • Config file (scale-out-pr-test.yaml):
tikv_servers:
 - host: 192.168.8.12
   ssh_port: 22
   port: 20161
   status_port: 20181
   deploy_dir: /data/tidb-deploy/tikv_20161
   data_dir: /data/tidb-data/tikv_20161
   config:
     server.labels: { dc: "bj4",logic: "2",rack: "2",host: "192.168.8.12_20161" }
 - host: 192.168.8.13
   ssh_port: 22
   port: 20161
   status_port: 20181
   deploy_dir: /data/tidb-deploy/tikv_20161
   data_dir: /data/tidb-data/tikv_20161
   config:
     server.labels: { dc: "bj4",logic: "2",rack: "2",host: "192.168.8.13_20161" }
 
 - host: 192.168.8.14
   ssh_port: 22
   port: 20161
   status_port: 20181
   deploy_dir: /data/tidb-deploy/tikv_20161
   data_dir: /data/tidb-data/tikv_20161
   config:
     server.labels: { dc: "bj4",logic: "2",rack: "2",host: "192.168.8.14_20161" }
  • After the scale-out, you can see the scheduler place a follower region on the new node 192.168.8.12:20161:
SELECT region.TABLE_NAME, tikv.address,
       case when region.IS_INDEX = 1 then "index" else "data" end as "region-type",
       case when peer.is_leader = 1 then region.region_id end as "leader",
       case when peer.is_leader = 0 then region.region_id end as "follower",
       case when peer.IS_LEARNER = 1 then region.region_id end as "learner"
FROM information_schema.tikv_store_status tikv,
     information_schema.tikv_region_peers peer,
     (SELECT * FROM information_schema.tikv_region_status
      WHERE DB_NAME = 'test' AND TABLE_NAME = 'sbtest1' AND IS_INDEX = 0) region
WHERE region.region_id = peer.region_id
  AND peer.store_id = tikv.store_id
ORDER BY 1, 3;
 
+------------+--------------------+-------------+--------+----------+---------+
| TABLE_NAME | address            | region-type | leader | follower | learner |
+------------+--------------------+-------------+--------+----------+---------+
| sbtest1    | 192.168.8.13:20160 | data        |   NULL |       16 |    NULL |
| sbtest1    | 192.168.8.11:20160 | data        |   NULL |       16 |    NULL |
| sbtest1    | 192.168.8.12:20160 | data        |     16 |     NULL |    NULL |
+------------+--------------------+-------------+--------+----------+---------+
3 rows in set (0.02 sec)

Configure the placement rules

  • DC bj1 keeps 3 voters
  • DC bj4 gets 2 followers
cat > rules.json <<EOF
[{
  "group_id": "pd",
  "group_index": 0,
  "group_override": false,
  "rules": [
    {
        "group_id": "pd",
        "id": "dc-bj1",
        "start_key": "",
        "end_key": "",
        "role": "voter",
        "count": 3,
        "label_constraints": [
            {"key": "dc", "op": "in", "values": ["bj1"]}
        ],
        "location_labels": ["dc"]
    },
    {
        "group_id": "pd",
        "id": "dc-bj4",
        "start_key": "",
        "end_key": "",
        "role": "follower",
        "count": 2,
        "label_constraints": [
            {"key": "dc", "op": "in", "values": ["bj4"]}
        ],
        "location_labels": ["dc"]
    }
]
}
]
EOF

Apply the placement rules

tiup ctl:v5.0.4 pd --pd=http://127.0.0.1:2379 config placement-rules rule-bundle save --in=rules.json

Check the region distribution

Regions are scheduled as expected, and the "bj4" DC has been assigned no leaders; its peers are currently followers.

SELECT region.TABLE_NAME, tikv.address,
       case when region.IS_INDEX = 1 then "index" else "data" end as "region-type",
       case when peer.is_leader = 1 then region.region_id end as "leader",
       case when peer.is_leader = 0 then region.region_id end as "follower",
       case when peer.IS_LEARNER = 1 then region.region_id end as "learner"
FROM information_schema.tikv_store_status tikv,
     information_schema.tikv_region_peers peer,
     (SELECT * FROM information_schema.tikv_region_status
      WHERE DB_NAME = 'test' AND TABLE_NAME = 'sbtest1' AND IS_INDEX = 0) region
WHERE region.region_id = peer.region_id
  AND peer.store_id = tikv.store_id
ORDER BY 1, 3;
+------------+--------------------+-------------+--------+----------+---------+
| TABLE_NAME | address            | region-type | leader | follower | learner |
+------------+--------------------+-------------+--------+----------+---------+
| sbtest1    | 192.168.8.11:20160 | data        |   NULL |        3 |    NULL |
| sbtest1    | 192.168.8.12:20161 | data        |   NULL |        3 |    NULL |
| sbtest1    | 192.168.8.14:20161 | data        |   NULL |        3 |    NULL |
| sbtest1    | 192.168.8.13:20160 | data        |      3 |     NULL |    NULL |
| sbtest1    | 192.168.8.12:20160 | data        |   NULL |        3 |    NULL |
+------------+--------------------+-------------+--------+----------+---------+
  • Placement-rule switchover on the cluster: no region leader is elected in bj4:
[tidb@centos1 deploy]$ tiup ctl:v5.0.4 pd --pd=http://127.0.0.1:2379 config placement-rules rule-bundle save --in=rules.json
[tidb@centos1 deploy]$ tiup ctl:v5.0.4 pd --pd=http://127.0.0.1:2379 config placement-rules show
 
SELECT region.TABLE_NAME, tikv.address,
       case when region.IS_INDEX = 1 then "index" else "data" end as "region-type",
       case when peer.is_leader = 1 then region.region_id end as "leader",
       case when peer.is_leader = 0 then region.region_id end as "follower",
       case when peer.IS_LEARNER = 1 then region.region_id end as "learner"
FROM information_schema.tikv_store_status tikv,
     information_schema.tikv_region_peers peer,
     (SELECT * FROM information_schema.tikv_region_status
      WHERE DB_NAME = 'test' AND TABLE_NAME = 'sbtest1' AND IS_INDEX = 0) region
WHERE region.region_id = peer.region_id
  AND peer.store_id = tikv.store_id
ORDER BY 1, 3;
+------------+--------------------+-------------+--------+----------+---------+
| TABLE_NAME | address            | region-type | leader | follower | learner |
+------------+--------------------+-------------+--------+----------+---------+
| sbtest1    | 192.168.8.13:20160 | data        |      3 |     NULL |    NULL |
| sbtest1    | 192.168.8.12:20160 | data        |   NULL |        3 |    NULL |
| sbtest1    | 192.168.8.11:20160 | data        |   NULL |        3 |    NULL |
| sbtest1    | 192.168.8.12:20161 | data        |   NULL |        3 |    NULL |
| sbtest1    | 192.168.8.14:20161 | data        |   NULL |        3 |    NULL |
+------------+--------------------+-------------+--------+----------+---------+
5 rows in set (0.01 sec)
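
For the eventual leader cut-over, a hedged sketch (not tested in this thread) is to save a flipped rules.json that makes dc-bj4 the voter group and demotes dc-bj1 to follower-only, then drop the bj1 rule entirely once the old nodes are scaled in:

[{
  "group_id": "pd",
  "group_index": 0,
  "group_override": false,
  "rules": [
    {
        "group_id": "pd",
        "id": "dc-bj4",
        "start_key": "",
        "end_key": "",
        "role": "voter",
        "count": 3,
        "label_constraints": [
            {"key": "dc", "op": "in", "values": ["bj4"]}
        ],
        "location_labels": ["dc"]
    },
    {
        "group_id": "pd",
        "id": "dc-bj1",
        "start_key": "",
        "end_key": "",
        "role": "follower",
        "count": 2,
        "label_constraints": [
            {"key": "dc", "op": "in", "values": ["bj1"]}
        ],
        "location_labels": ["dc"]
    }
  ]
}]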