TiDB 6.2.0: ALTER PLACEMENT POLICY reports error 8243

【TiDB Environment】Production / Test / PoC
【TiDB Version】v6.2.0
【Problem】On TiDB 6.2.0, while following the TiDB 6.x in Action exercise for a two-region, three-data-center deployment, ALTER PLACEMENT POLICY reports error 8243
Reference: 基于 TiDB v6.0 部署两地三中心 (Deploying two regions, three DCs based on TiDB v6.0) | TiDB Books
【Reproduction Steps】Operations performed before the problem appeared
【Symptoms and Impact】
Cluster status:


Rules:

Error:

【Attachments】

Please provide version information for each component (e.g. cdc/tikv), obtainable by running cdc version / tikv-server --version.


The leader takes one replica in a non-sjz DC, and the followers are 4 replicas in non-sjz plus one each in sjz, eur, and ame: 7 replicas in total. But there are only 5 TiKV stores and one of them is down, so the configuration cannot be satisfied. Try changing the non-sjz follower count to 1 first (spelled out below).
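For the record, that reduced policy written out in full (a sketch; it matches the last statement tried later in the thread):

ALTER PLACEMENT POLICY northernpolicy
    LEADER_CONSTRAINTS='[+area=northern,-dc=sjz]'
    FOLLOWER_CONSTRAINTS='{"+area=northern,-dc=sjz": 1,+dc=sjz: 1,+dc=europe: 1,+dc=america: 1}';

With 1 leader and 4 followers this needs only 5 matching stores instead of 7.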

With all nodes started (7 TiKV instances) and the follower count changed to 1, it still reports error 8243:
[tidb@centos ~]$ tiup cluster display tidb-test
tiup is checking updates for component cluster ...
Starting component cluster: /home/tidb/.tiup/components/cluster/v1.10.3/tiup-cluster display tidb-test
Cluster type: tidb
Cluster name: tidb-test
Cluster version: v6.2.0
Deploy user: tidb
SSH type: builtin
Dashboard URL: http://192.168.58.133:2379/dashboard
Grafana URL: http://192.168.58.133:3000
ID Role Host Ports OS/Arch Status Data Dir Deploy Dir
--                   ----          ----            -----       -------       ------  --------                                         ----------
192.168.58.133:9093 alertmanager 192.168.58.133 9093/9094 linux/x86_64 Up /home/tidb/cluster/tidb-data/alertmanager-9093 /home/tidb/cluster/tidb-deploy/alertmanager-9093
192.168.58.133:3000 grafana 192.168.58.133 3000 linux/x86_64 Up - /home/tidb/cluster/tidb-deploy/grafana-3000
192.168.58.133:2379 pd 192.168.58.133 2379/2380 linux/x86_64 Up|L|UI /home/tidb/cluster/tidb-data/pd-2379 /home/tidb/cluster/tidb-deploy/pd-2379
192.168.58.133:9090 prometheus 192.168.58.133 9090/12020 linux/x86_64 Up /home/tidb/cluster/tidb-data/prometheus-9090 /home/tidb/cluster/tidb-deploy/prometheus-9090
192.168.58.133:4000 tidb 192.168.58.133 4000/10080 linux/x86_64 Up - /home/tidb/cluster/tidb-deploy/tidb-4000
192.168.58.133:20160 tikv 192.168.58.133 20160/20180 linux/x86_64 Up /home/tidb/cluster/tidb-data/tikv-20160 /home/tidb/cluster/tidb-deploy/tikv-20160
192.168.58.133:20161 tikv 192.168.58.133 20161/20181 linux/x86_64 Up /home/tidb/cluster/tidb-data/tikv-20161 /home/tidb/cluster/tidb-deploy/tikv-20161
192.168.58.133:20162 tikv 192.168.58.133 20162/20182 linux/x86_64 Up /home/tidb/cluster/tidb-data/tikv-20162 /home/tidb/cluster/tidb-deploy/tikv-20162
192.168.58.133:20163 tikv 192.168.58.133 20163/20183 linux/x86_64 Up /home/tidb/cluster/tidb-data/tikv-20163 /home/tidb/cluster/tidb-deploy/tikv-20163
192.168.58.133:20164 tikv 192.168.58.133 20164/20184 linux/x86_64 Up /home/tidb/cluster/tidb-data/tikv-20164 /home/tidb/cluster/tidb-deploy/tikv-20164
192.168.58.133:20165 tikv 192.168.58.133 20165/20185 linux/x86_64 Up /home/tidb/cluster/tidb-data/tikv-20165 /home/tidb/cluster/tidb-deploy/tikv-20165
192.168.58.133:20166 tikv 192.168.58.133 20166/20186 linux/x86_64 Up /home/tidb/cluster/tidb-data/tikv-20166 /home/tidb/cluster/tidb-deploy/tikv-20166
Total nodes: 12
[tidb@centos ~]$

[root@centos ~]# mysql -h192.168.58.133 -P4000 -uroot -p
Enter password:
Welcome to the MariaDB monitor. Commands end with ; or \g.
Your MySQL connection id is 405
Server version: 5.7.25-TiDB-v6.2.0 TiDB Server (Apache License 2.0) Community Edition, MySQL 5.7 compatible

Copyright © 2000, 2018, Oracle, MariaDB Corporation Ab and others.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

MySQL [(none)]> use crm
Reading table information for completion of table and column names
You can turn off this feature to get a quicker startup with -A

Database changed
MySQL [crm]> SHOW PLACEMENT LABELS;
+------+--------------------------------------------------------------------------------+
| Key  | Values                                                                         |
+------+--------------------------------------------------------------------------------+
| area | ["america", "europe", "northern"]                                              |
| dc   | ["bj1", "bj2", "germany", "sjz", "usa"]                                        |
| host | ["host100", "host101", "host102", "host103", "host104", "host105", "host106"] |
| rack | ["r1", "r2"]                                                                   |
+------+--------------------------------------------------------------------------------+
4 rows in set (0.00 sec)

MySQL [crm]> show placement;
+------------------------+---------------------------------------------------------------------------------------------------------------+------------------+
| Target                 | Placement                                                                                                     | Scheduling_State |
+------------------------+---------------------------------------------------------------------------------------------------------------+------------------+
| POLICY northernpolicy  | LEADER_CONSTRAINTS="[+area=northern,-dc=sjz]" FOLLOWER_CONSTRAINTS="{"+area=northern,-dc=sjz": 4,+dc=sjz: 1}" | NULL             |
| DATABASE crm           | LEADER_CONSTRAINTS="[+area=northern,-dc=sjz]" FOLLOWER_CONSTRAINTS="{"+area=northern,-dc=sjz": 4,+dc=sjz: 1}" | PENDING          |
| TABLE crm.m_cust_data  | LEADER_CONSTRAINTS="[+area=northern,-dc=sjz]" FOLLOWER_CONSTRAINTS="{"+area=northern,-dc=sjz": 4,+dc=sjz: 1}" | PENDING          |
| TABLE crm.m_cust_label | LEADER_CONSTRAINTS="[+area=northern,-dc=sjz]" FOLLOWER_CONSTRAINTS="{"+area=northern,-dc=sjz": 4,+dc=sjz: 1}" | PENDING          |
| TABLE crm.m_cust_main  | LEADER_CONSTRAINTS="[+area=northern,-dc=sjz]" FOLLOWER_CONSTRAINTS="{"+area=northern,-dc=sjz": 4,+dc=sjz: 1}" | PENDING          |
| TABLE crm.m_cust_org   | LEADER_CONSTRAINTS="[+area=northern,-dc=sjz]" FOLLOWER_CONSTRAINTS="{"+area=northern,-dc=sjz": 4,+dc=sjz: 1}" | PENDING          |
| TABLE crm.m_seed       | LEADER_CONSTRAINTS="[+area=northern,-dc=sjz]" FOLLOWER_CONSTRAINTS="{"+area=northern,-dc=sjz": 4,+dc=sjz: 1}" | PENDING          |
+------------------------+---------------------------------------------------------------------------------------------------------------+------------------+
7 rows in set (0.00 sec)

MySQL [crm]> select * from information_schema.placement_policies;
+-----------+--------------+----------------+----------------+---------+-------------+--------------------------+------------------------------------------+---------------------+----------+-----------+----------+
| POLICY_ID | CATALOG_NAME | POLICY_NAME    | PRIMARY_REGION | REGIONS | CONSTRAINTS | LEADER_CONSTRAINTS       | FOLLOWER_CONSTRAINTS                     | LEARNER_CONSTRAINTS | SCHEDULE | FOLLOWERS | LEARNERS |
+-----------+--------------+----------------+----------------+---------+-------------+--------------------------+------------------------------------------+---------------------+----------+-----------+----------+
|         2 | def          | northernpolicy |                |         |             | [+area=northern,-dc=sjz] | {"+area=northern,-dc=sjz": 4,+dc=sjz: 1} |                     |          |         2 |        0 |
+-----------+--------------+----------------+----------------+---------+-------------+--------------------------+------------------------------------------+---------------------+----------+-----------+----------+
1 row in set (0.00 sec)

MySQL [crm]> select a.region_id,a.peer_id,a.store_id,a.is_leader,b.address,b.label from INFORMATION_SCHEMA.TIKV_REGION_PEERS a
-> left join INFORMATION_SCHEMA.TIKV_STORE_STATUS b on a.store_id =b.store_id
-> where a.region_id =218;
+-----------+---------+----------+-----------+----------------------+--------------------------------------------------------------------------------------------------------------------------------------------+
| region_id | peer_id | store_id | is_leader | address              | label                                                                                                                                      |
+-----------+---------+----------+-----------+----------------------+--------------------------------------------------------------------------------------------------------------------------------------------+
|       218 |     219 |        1 |         1 | 192.168.58.133:20163 | [{"key": "area", "value": "northern"}, {"key": "rack", "value": "r2"}, {"key": "host", "value": "host103"}, {"key": "dc", "value": "bj2"}] |
|       218 |     220 |        2 |         0 | 192.168.58.133:20160 | [{"key": "area", "value": "northern"}, {"key": "rack", "value": "r1"}, {"key": "host", "value": "host100"}, {"key": "dc", "value": "bj1"}] |
|       218 |     221 |        8 |         0 | 192.168.58.133:20164 | [{"key": "area", "value": "northern"}, {"key": "rack", "value": "r1"}, {"key": "host", "value": "host104"}, {"key": "dc", "value": "sjz"}] |
|       218 |     222 |        9 |         0 | 192.168.58.133:20161 | [{"key": "area", "value": "northern"}, {"key": "rack", "value": "r2"}, {"key": "host", "value": "host101"}, {"key": "dc", "value": "bj1"}] |
|       218 |     223 |        7 |         0 | 192.168.58.133:20162 | [{"key": "area", "value": "northern"}, {"key": "rack", "value": "r1"}, {"key": "host", "value": "host102"}, {"key": "dc", "value": "bj2"}] |
+-----------+---------+----------+-----------+----------------------+--------------------------------------------------------------------------------------------------------------------------------------------+
5 rows in set (0.00 sec)

MySQL [crm]> show create PLACEMENT POLICY northernpolicy;
+----------------+-------------------------------------------------------------------------------------------------------------------------------------------------------+
| Policy         | Create Policy                                                                                                                                         |
+----------------+-------------------------------------------------------------------------------------------------------------------------------------------------------+
| northernpolicy | CREATE PLACEMENT POLICY northernpolicy LEADER_CONSTRAINTS="[+area=northern,-dc=sjz]" FOLLOWER_CONSTRAINTS="{"+area=northern,-dc=sjz": 4,+dc=sjz: 1}" |
+----------------+-------------------------------------------------------------------------------------------------------------------------------------------------------+
1 row in set (0.00 sec)

MySQL [crm]> ALTER PLACEMENT POLICY northernpolicy LEADER_CONSTRAINTS='[+area=northern,-dc=sjz]' FOLLOWER_CONSTRAINTS='{"+area=northern,-dc=sjz": 4,+dc=sjz: 1,+dc=europe: 1,+dc=america: 1}';
ERROR 8243 (HY000): "[PD:placement:ErrRuleContent]invalid rule content, rule 'table_rule_72_3' from rule group 'TiDB_DDL_72' can not match any store"

MySQL [crm]> ALTER PLACEMENT POLICY northernpolicy LEADER_CONSTRAINTS='[+area=northern,-dc=sjz]' FOLLOWER_CONSTRAINTS='{"+area=northern,-dc=sjz": 3,+dc=sjz: 1,+dc=europe: 1,+dc=america: 1}';
ERROR 8243 (HY000): "[PD:placement:ErrRuleContent]invalid rule content, rule 'table_rule_72_3' from rule group 'TiDB_DDL_72' can not match any store"

MySQL [crm]> ALTER PLACEMENT POLICY northernpolicy LEADER_CONSTRAINTS='[+area=northern,-dc=sjz]' FOLLOWER_CONSTRAINTS='{"+area=northern,-dc=sjz": 1,+dc=sjz: 1,+dc=europe: 1,+dc=america: 1}';
ERROR 8243 (HY000): "[PD:placement:ErrRuleContent]invalid rule content, rule 'table_rule_72_2' from rule group 'TiDB_DDL_72' can not match any store"


But the dc values in your labels don't include anything for +dc=europe: 1,+dc=america: 1 to match.

I don't quite understand what you mean. That result is just the state before europe and america were added.

Do your TiKV labels include {"key": "dc", "value": "europe"} or america? If no such TiKV exists, how could it match +dc=europe: 1,+dc=america: 1?
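One way to verify directly is to ask INFORMATION_SCHEMA.TIKV_STORE_STATUS whether any store carries such a label (a sketch; the JSON_CONTAINS match assumes the array-of-objects label format shown in the earlier query):

SELECT STORE_ID, ADDRESS, LABEL
FROM INFORMATION_SCHEMA.TIKV_STORE_STATUS
WHERE JSON_CONTAINS(LABEL, '{"key": "dc", "value": "europe"}')
   OR JSON_CONTAINS(LABEL, '{"key": "dc", "value": "america"}');
-- an empty result means no store can ever satisfy +dc=europe or +dc=america

An empty result set here is exactly what PD's "can not match any store" is complaining about.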

Of course they are set:
[tidb@centos ~]$ tiup cluster show-config tidb-test
tiup is checking updates for component cluster ...
Starting component cluster: /home/tidb/.tiup/components/cluster/v1.10.3/tiup-cluster show-config tidb-test
global:
  user: tidb
  ssh_port: 22
  ssh_type: builtin
  deploy_dir: /home/tidb/cluster/tidb-deploy
  data_dir: /home/tidb/cluster/tidb-data
  os: linux
monitored:
  node_exporter_port: 9100
  blackbox_exporter_port: 9115
  deploy_dir: /home/tidb/cluster/tidb-deploy/monitor-9100
  data_dir: /home/tidb/cluster/tidb-data/monitor-9100
  log_dir: /home/tidb/cluster/tidb-deploy/monitor-9100/log
server_configs:
  tidb:
    binlog.enable: false
    binlog.ignore-error: false
    log.slow-threshold: 300
  tikv:
    readpool.coprocessor.use-unified-pool: true
    readpool.storage.use-unified-pool: false
    server.grpc-compression-type: gzip
  pd:
    replication.location-labels:
    - area
    - dc
    - rack
    - host
    schedule.leader-schedule-limit: 4
    schedule.region-schedule-limit: 2048
    schedule.replica-schedule-limit: null
    schedule.tolerant-size-ratio: 20.0
  tiflash: {}
  tiflash-learner: {}
  pump: {}
  drainer: {}
  cdc: {}
  grafana: {}
tidb_servers:
- host: 192.168.58.133
  ssh_port: 22
  port: 4000
  status_port: 10080
  deploy_dir: /home/tidb/cluster/tidb-deploy/tidb-4000
  log_dir: /home/tidb/cluster/tidb-deploy/tidb-4000/log
  arch: amd64
  os: linux
tikv_servers:
- host: 192.168.58.133
  ssh_port: 22
  port: 20160
  status_port: 20180
  deploy_dir: /home/tidb/cluster/tidb-deploy/tikv-20160
  data_dir: /home/tidb/cluster/tidb-data/tikv-20160
  log_dir: /home/tidb/cluster/tidb-deploy/tikv-20160/log
  config:
    server.labels:
      area: northern
      dc: bj1
      host: host100
      rack: r1
  arch: amd64
  os: linux
- host: 192.168.58.133
  ssh_port: 22
  port: 20161
  status_port: 20181
  deploy_dir: /home/tidb/cluster/tidb-deploy/tikv-20161
  data_dir: /home/tidb/cluster/tidb-data/tikv-20161
  log_dir: /home/tidb/cluster/tidb-deploy/tikv-20161/log
  config:
    server.labels:
      area: northern
      dc: bj1
      host: host101
      rack: r2
  arch: amd64
  os: linux
- host: 192.168.58.133
  ssh_port: 22
  port: 20162
  status_port: 20182
  deploy_dir: /home/tidb/cluster/tidb-deploy/tikv-20162
  data_dir: /home/tidb/cluster/tidb-data/tikv-20162
  log_dir: /home/tidb/cluster/tidb-deploy/tikv-20162/log
  config:
    server.labels:
      area: northern
      dc: bj2
      host: host102
      rack: r1
  arch: amd64
  os: linux
- host: 192.168.58.133
  ssh_port: 22
  port: 20163
  status_port: 20183
  deploy_dir: /home/tidb/cluster/tidb-deploy/tikv-20163
  data_dir: /home/tidb/cluster/tidb-data/tikv-20163
  log_dir: /home/tidb/cluster/tidb-deploy/tikv-20163/log
  config:
    server.labels:
      area: northern
      dc: bj2
      host: host103
      rack: r2
  arch: amd64
  os: linux
- host: 192.168.58.133
  ssh_port: 22
  port: 20164
  status_port: 20184
  deploy_dir: /home/tidb/cluster/tidb-deploy/tikv-20164
  data_dir: /home/tidb/cluster/tidb-data/tikv-20164
  log_dir: /home/tidb/cluster/tidb-deploy/tikv-20164/log
  config:
    server.labels:
      area: northern
      dc: sjz
      host: host104
      rack: r1
  arch: amd64
  os: linux
- host: 192.168.58.133
  ssh_port: 22
  port: 20165
  status_port: 20185
  deploy_dir: /home/tidb/cluster/tidb-deploy/tikv-20165
  data_dir: /home/tidb/cluster/tidb-data/tikv-20165
  log_dir: /home/tidb/cluster/tidb-deploy/tikv-20165/log
  config:
    server.labels:
      area: europe
      dc: germany
      host: host105
      rack: r1
  arch: amd64
  os: linux
- host: 192.168.58.133
  ssh_port: 22
  port: 20166
  status_port: 20186
  deploy_dir: /home/tidb/cluster/tidb-deploy/tikv-20166
  data_dir: /home/tidb/cluster/tidb-data/tikv-20166
  log_dir: /home/tidb/cluster/tidb-deploy/tikv-20166/log
  config:
    server.labels:
      area: america
      dc: usa
      host: host106
      rack: r1
  arch: amd64
  os: linux
tiflash_servers: []
pd_servers:
- host: 192.168.58.133
  ssh_port: 22
  name: pd-192.168.58.133-2379
  client_port: 2379
  peer_port: 2380
  deploy_dir: /home/tidb/cluster/tidb-deploy/pd-2379
  data_dir: /home/tidb/cluster/tidb-data/pd-2379
  log_dir: /home/tidb/cluster/tidb-deploy/pd-2379/log
  arch: amd64
  os: linux
monitoring_servers:
- host: 192.168.58.133
  ssh_port: 22
  port: 9090
  ng_port: 12020
  deploy_dir: /home/tidb/cluster/tidb-deploy/prometheus-9090
  data_dir: /home/tidb/cluster/tidb-data/prometheus-9090
  log_dir: /home/tidb/cluster/tidb-deploy/prometheus-9090/log
  external_alertmanagers: []
  arch: amd64
  os: linux
grafana_servers:
- host: 192.168.58.133
  ssh_port: 22
  port: 3000
  deploy_dir: /home/tidb/cluster/tidb-deploy/grafana-3000
  arch: amd64
  os: linux
  username: admin
  password: admin
  anonymous_enable: false
  root_url: ""
  domain: ""
alertmanager_servers:
- host: 192.168.58.133
  ssh_port: 22
  web_port: 9093
  cluster_port: 9094
  deploy_dir: /home/tidb/cluster/tidb-deploy/alertmanager-9093
  data_dir: /home/tidb/cluster/tidb-data/alertmanager-9093
  log_dir: /home/tidb/cluster/tidb-deploy/alertmanager-9093/log
  arch: amd64
  os: linux
[tidb@centos ~]$

america is not in the dc label; it is in the area label.
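So the fix is to constrain the remote followers by labels that actually exist, either +area=europe/+area=america or the real dc values +dc=germany/+dc=usa. A sketch of the corrected statement, assuming the intent is one follower in each remote region:

ALTER PLACEMENT POLICY northernpolicy
    LEADER_CONSTRAINTS='[+area=northern,-dc=sjz]'
    FOLLOWER_CONSTRAINTS='{"+area=northern,-dc=sjz": 1,+dc=sjz: 1,+area=europe: 1,+area=america: 1}';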

Confirmed: the SQL script in the document does not match the area and dc values in the config.
