TiDB: creating an index fails

To speed things up, please provide the information below; a clear problem description gets resolved faster:

【TiDB version】v4.0.0

【Problem description】The table has about 670 million rows. The first attempt to create an index ran for about 3000s and then reported failure. After running
update mysql.tidb set VARIABLE_VALUE="24h" where VARIABLE_NAME="tikv_gc_life_time";
the index creation still fails.
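Before retrying, it can help to confirm the GC lifetime change actually took effect and to look at why the DDL job was cancelled. A minimal sketch (the `mysql.tidb` table and `ADMIN SHOW DDL JOBS` are standard in v4.0; the exact queries are mine):

```sql
-- Confirm the new GC lifetime is in place (v4.0 stores it in mysql.tidb)
SELECT VARIABLE_NAME, VARIABLE_VALUE
  FROM mysql.tidb
 WHERE VARIABLE_NAME = 'tikv_gc_life_time';

-- Inspect recent DDL jobs: state, row count processed, and cancel reason
ADMIN SHOW DDL JOBS;
```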

Please check the tidb.log for any errors, thanks.

I'm on a single-node deployment. For tables over ten million rows, whenever I add an index I first create a new table, add the index to it while it's empty, import the data from the original table, and then rename at a convenient time.
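The shadow-table workflow described above can be sketched as follows, reusing the table and index names from this thread (the `_new`/`_old` suffixes are illustrative):

```sql
-- 1. Create an empty copy and add the index while the table is empty
CREATE TABLE h_recycle_record_new LIKE h_recycle_record;
CREATE INDEX idx_tenant_product_region_tfsj
    ON h_recycle_record_new (tenantid, productid, regionid, tfsj);

-- 2. Backfill the data from the original table
INSERT INTO h_recycle_record_new SELECT * FROM h_recycle_record;

-- 3. Swap the tables during a quiet period
RENAME TABLE h_recycle_record     TO h_recycle_record_old,
             h_recycle_record_new TO h_recycle_record;
```

Note this only works cleanly if writes to the original table are paused (or captured and replayed) between steps 2 and 3.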

In my setup I first do a full sync of the MySQL data (Dumpling and Lightning), then incremental sync (DM), so the table structures, indexes, and data all come directly from upstream. Since the upstream I sync from is the primary rather than a replica, some indexes are missing, hence adding the index now.


["run statement failed"] [conn=11856] [schemaVersion=1067] [error="[ddl:8214]Cancelled DDL job"] [session="{\n \"currDBName\": \"lyzhhw4\",\n \"id\": 11856,\n \"status\": 2,\n \"strictMode\": true,\n \"user\": {\n \"Username\": \"root\",\n \"Hostname\": \"192.168.8.69\",\n \"CurrentUser\": false,\n \"AuthUsername\": \"root\",\n \"AuthHostname\": \"%\"\n }\n}"]
[2021/01/27 17:19:24.241 +08:00] [ERROR] [conn.go:728] ["command dispatched failed"] [conn=11856] [connInfo="id:11856, addr:192.168.8.69:49954 status:10, collation:utf8_general_ci, user:root"] [command=Query] [status="inTxn:0, autocommit:1"] [sql="create index idx_tenant_product_region_tfsj on h_recycle_record(tenantid, productid, regionid, tfsj)"] [txn_mode=PESSIMISTIC] [err="[ddl:8214]Cancelled DDL job\ngithub.com/pingcap/errors.AddStack\n\t/home/jenkins/agent/workspace/tidb_v4.0.0/go/pkg/mod/github.com/pingcap/errors@v0.11.5-0.20190809092503-95897b64e011/errors.go:174\ngithub.com/pingcap/errors.Trace\n\t/home/jenkins/agent/workspace/tidb_v4.0.0/go/pkg/mod/github.com/pingcap/errors@v0.11.5-0.20190809092503-95897b64e011/juju_adaptor.go:15\ngithub.com/pingcap/tidb/ddl.(*ddl).doDDLJob\n\t/home/jenkins/agent/workspace/tidb_v4.0.0/go/src/github.com/pingcap/tidb/ddl/ddl.go:537\ngithub.com/pingcap/tidb/ddl.(*ddl).CreateIndex\n\t/home/jenkins/agent/workspace/tidb_v4.0.0/go/src/github.com/pingcap/tidb/ddl/ddl_api.go:3928\ngithub.com/pingcap/tidb/executor.(*DDLExec).executeCreateIndex\n\t/home/jenkins/agent/workspace/tidb_v4.0.0/go/src/github.com/pingcap/tidb/executor/ddl.go:206\ngithub.com/pingcap/tidb/executor.(*DDLExec).Next\n\t/home/jenkins/agent/workspace/tidb_v4.0.0/go/src/github.com/pingcap/tidb/executor/ddl.go:88\ngithub.com/pingcap/tidb/executor.Next\n\t/home/jenkins/agent/workspace/tidb_v4.0.0/go/src/github.com/pingcap/tidb/executor/executor.go:248\ngithub.com/pingcap/tidb/executor.(*ExecStmt).handleNoDelayExecutor\n\t/home/jenkins/agent/workspace/tidb_v4.0.0/go/src/github.com/pingcap/tidb/executor/adapter.go:507\ngithub.com/pingcap/tidb/executor.(*ExecStmt).handleNoDelay\n\t/home/jenkins/agent/workspace/tidb_v4.0.0/go/src/github.com/pingcap/tidb/executor/adapter.go:389\ngithub.com/pingcap/tidb/executor.(*ExecStmt).Exec\n\t/home/jenkins/agent/workspace/tidb_v4.0.0/go/src/github.com/pingcap/tidb/executor/adapter.go:357\ngithub.com/pingcap/tidb/session.runStmt\n\t/home/jenkins/agent/workspace/tidb_v4.0.0/go/src/github.com/pingcap/tidb/session/tidb.go:276\ngithub.com/pingcap/tidb/session.(*session).executeStatement\n\t/home/jenkins/agent/workspace/tidb_v4.0.0/go/src/github.com/pingcap/tidb/session/session.go:1037\ngithub.com/pingcap/tidb/session.(*session).execute\n\t/home/jenkins/agent/workspace/tidb_v4.0.0/go/src/github.com/pingcap/tidb/session/session.go:1150\ngithub.com/pingcap/tidb/session.(*session).Execute\n\t/home/jenkins/agent/workspace/tidb_v4.0.0/go/src/github.com/pingcap/tidb/session/session.go:1080\ngithub.com/pingcap/tidb/server.(*TiDBContext).Execute\n\t/home/jenkins/agent/workspace/tidb_v4.0.0/go/src/github.com/pingcap/tidb/server/driver_tidb.go:248\ngithub.com/pingcap/tidb/server.(*clientConn).handleQuery\n\t/home/jenkins/agent/workspace/tidb_v4.0.0/go/src/github.com/pingcap/tidb/server/conn.go:1265\ngithub.com/pingcap/tidb/server.(*clientConn).dispatch\n\t/home/jenkins/agent/workspace/tidb_v4.0.0/go/src/github.com/pingcap/tidb/server/conn.go:899\ngithub.com/pingcap/tidb/server.(*clientConn).Run\n\t/home/jenkins/agent/workspace/tidb_v4.0.0/go/src/github.com/pingcap/tidb/server/conn.go:713\ngithub.com/pingcap/tidb/server.(*Server).onConn\n\t/home/jenkins/agent/workspace/tidb_v4.0.0/go/src/github.com/pingcap/tidb/server/server.go:415\nruntime.goexit\n\t/usr/local/go/src/runtime/asm_amd64.s:1357"]

OK. Could you please confirm that your version is 4.0.0?

  1. Did the import use Lightning's local backend mode? Which Lightning version did you use?
  2. Please share the output of show table regions for this table.

One thing to note: the MySQL dataset was too large to import in a single pass with Dumpling and Lightning (not enough disk space), so I imported data matched by table-name prefix, deleted the already-imported dump files, and imported the next batch; each individual table was still imported in one pass.

tidb-toolkit-v4.0.3-linux-amd64.tar.gz

Is this the complete show regions output? Please share all of it.

Yes, the command I ran was:
show table h_recycle_record REGIONS ;
There are 8495 rows in total, which is a lot. Do you need me to upload all of them?

all-regions.sql (3.0 MB)

  1. This problem is caused by the region splits produced during a Lightning local-backend import. Issue: https://github.com/pingcap/tidb/issues/20727 PR: https://github.com/pingcap/tidb/pull/20908

  2. It is fixed in Lightning v4.0.9 and later.

  3. The current workaround is to merge the problematic regions (they are the ones missing the normal _r or _i key marker), then split them manually.

  4. Use pd-ctl to do the merging and splitting:

>> operator add merge-region 1 2                        // merge Region 1 into Region 2
>> operator add split-region 1 --policy=approximate     // split Region 1 into two halves, based on a rough size estimate
>> operator add split-region 1 --policy=scan            // split Region 1 into two halves, based on a precise scan
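As an alternative to the interactive `-i` session, pd-ctl also accepts the subcommand directly on the command line. A sketch, assuming pd-ctl is in the current directory and PD listens on 127.0.0.1:2379 (substitute your real PD address):

```shell
# Non-interactive form: pass the operator subcommand directly instead of using -i
./pd-ctl -u http://127.0.0.1:2379 operator add merge-region 1 2
./pd-ctl -u http://127.0.0.1:2379 operator add split-region 3 --policy=scan

# List the pending operators to confirm they were scheduled
./pd-ctl -u http://127.0.0.1:2379 operator show
```

The region IDs above are examples; use the IDs of the malformed regions from your show table regions output.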

./pd-ctl -i -u http://127.0.0.1:2379
[screenshot]

Is this the right way to run it? Why does it report that operator does not exist?

Is 127.0.0.1 your actual address? You need to connect to your own PD's address.

No, I changed it; that is the PD address.

Still not working? This thread looks similar to your problem; it should already have been resolved there:
"ROW_COUNT in ddl jobs exceeds the actual number of rows when creating an index"

It looks like it's working now. The index is being created and has already run for over 4000 seconds.