TiSpark: any operation on one database fails with java.lang.IllegalArgumentException: Multiple entries with same key

Bug report
Describe the problem you found clearly and accurately; any steps that may reproduce it will help the developers handle it promptly.
[Impact of the bug]
TiDB 5.2.2
Spark version 2.4.3
[Possible reproduction steps]
use yixintui_operate; # other databases are fine
show tables;
[Unexpected behavior observed]
21/12/14 14:20:28 ERROR SparkSQLDriver: Failed in [show tables]
java.lang.IllegalArgumentException: Multiple entries with same key: material_creative_count=table_id: 29944
columns {
column_id: 1
tp: 8
collation: 63
columnLen: 20
decimal: 0
flag: 4099
default_val: "\000"
pk_handle: true
}
columns {
column_id: 2
tp: 3
collation: 63
columnLen: 11
decimal: 0
flag: 0
default_val: "\000"
pk_handle: false
}
columns {
column_id: 3
tp: 15
collation: 46
columnLen: 100
decimal: 0
flag: 0
default_val: "\000"
pk_handle: false
}
columns {
column_id: 4
tp: 3
collation: 63
columnLen: 11
decimal: 0
flag: 0
default_val: "\000"
pk_handle: false
}
and material_creative_count=table_id: 29909
columns {
column_id: 1
tp: 8
collation: 63
columnLen: 20
decimal: 0
flag: 4099
default_val: "\000"
pk_handle: true
}
columns {
column_id: 2
tp: 3
collation: 63
columnLen: 11
decimal: 0
flag: 0
default_val: "\000"
pk_handle: false
}
columns {
column_id: 3
tp: 15
collation: 46
columnLen: 100
decimal: 0
flag: 0
default_val: "\000"
pk_handle: false
}
columns {
column_id: 4
tp: 3
collation: 63
columnLen: 11
decimal: 0
flag: 0
default_val: "\000"
pk_handle: false
}
[Expected behavior]

[Related components and exact versions]

[Other background information or screenshots]
For example cluster topology, OS and kernel versions, application info; if the problem is SQL-related, provide the SQL statements and the schemas of the related tables; if node logs contain key errors, provide the relevant log contents or files; if some business-sensitive information cannot be shared, leave contact information and we will follow up privately.


CREATE TABLE Material_creative_count (
Id BIGINT(20) NOT NULL /*T![auto_rand] AUTO_RANDOM(5) */ COMMENT '报表ID',
Platform_Type INT(11) DEFAULT NULL COMMENT '媒体平台',
Agent_Material_Id VARCHAR(100) DEFAULT NULL COMMENT '媒体平台素材ID',
creative_count INT(11) DEFAULT NULL COMMENT '创意数',
PRIMARY KEY (Id) /*T![clustered_index] CLUSTERED */
) ENGINE=INNODB DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_bin /*T![auto_rand_base] AUTO_RANDOM_BASE=2734362 */ COMMENT='素材-创意数'

(user:tidbdba time: 14:27)[db: yixintui_operate]select * from information_schema.tables where tidb_table_id=29909;
Empty set (1.97 sec)

(user:tidbdba time: 14:27)[db: yixintui_operate]select * from information_schema.tables where tidb_table_id=29944;
Empty set (2.57 sec)

curl http://xx.xx.17.29:10080/db-table/29949
[schema:1146]Table which ID = 29949 does not exist.

curl http://xx.xx.17.29:10080/db-table/29909
[schema:1146]Table which ID = 29909 does not exist.

It looks like this table uses a feature that TiSpark does not support.

Supports and Limitations

TiSpark (>= 2.4.2) supports writing data to clustered index tables, which is a new feature in TiDB-5.0.0.

TiSpark does not support writing to the following tables:

  • tables with auto random column
  • partition table
  • tables with generated column

https://github.com/pingcap/tispark/blob/master/docs/datasource_api_userguide.md#supports-and-limitations

ALTER TABLE Material_creative_count RENAME TO test.Material_creative_count;
I moved the table away, but after reconnecting spark-sql the same error occurs, and the table_ids shown are unchanged.

I am only querying here; there are no write operations yet.
select CREATE_TIME, TIDB_TABLE_ID from information_schema.tables where table_name ='Material_creative_count';
+---------------------+---------------+
| CREATE_TIME         | TIDB_TABLE_ID |
+---------------------+---------------+
| 2021-11-05 17:47:43 |         36881 |
+---------------------+---------------+

I created a similar table, material_creative_count, in the test database:
(user:tidbdba time: 14:53)[db: test]create table Material_creative_count like yixintui_operate.Material_creative_count ;
Query OK, 0 rows affected (0.53 sec)

(user:tidbdba time: 15:16)[db: test]insert into Material_creative_count select * from yixintui_operate.Material_creative_count limit 10000;
ERROR 8216 (HY000): Invalid auto random: Explicit insertion on auto_random column is disabled. Try to set @@allow_auto_random_explicit_insert = true.
(user:tidbdba time: 15:16)[db: test]set session allow_auto_random_explicit_insert = true;
Query OK, 0 rows affected (0.00 sec)

(user:tidbdba time: 15:17)[db: test]insert into Material_creative_count select * from yixintui_operate.Material_creative_count limit 10000;
Query OK, 10000 rows affected (0.18 sec)
Records: 10000 Duplicates: 0 Warnings: 0

Querying in the test database works fine, so it should not be an unsupported-feature issue. However, show create table test.Material_creative_count; reports that the table does not exist.

I could not find that "Multiple entries" message anywhere in our code, so it is hard to locate which check it comes from. Was the full stack trace printed when you received the error?

spark-sql> show tables;
21/12/14 15:25:52 INFO HiveMetaStore: 0: get_database: yixintui_operate
21/12/14 15:25:52 INFO audit: ugi=root ip=unknown-ip-addr cmd=get_database: yixintui_operate
21/12/14 15:25:52 INFO HiveMetaStore: 0: get_database: default
21/12/14 15:25:52 INFO audit: ugi=root ip=unknown-ip-addr cmd=get_database: default
21/12/14 15:25:52 INFO HiveMetaStore: 0: get_database: default
21/12/14 15:25:52 INFO audit: ugi=root ip=unknown-ip-addr cmd=get_database: default
21/12/14 15:25:52 INFO HiveMetaStore: 0: get_tables: db=default pat=*
21/12/14 15:25:52 INFO audit: ugi=root ip=unknown-ip-addr cmd=get_tables: db=default pat=*
21/12/14 15:25:53 INFO ContextCleaner: Cleaned accumulator 1
21/12/14 15:25:53 INFO ContextCleaner: Cleaned accumulator 18
21/12/14 15:25:53 INFO ContextCleaner: Cleaned accumulator 3
21/12/14 15:25:53 INFO ContextCleaner: Cleaned accumulator 4
21/12/14 15:25:53 INFO ContextCleaner: Cleaned accumulator 16
21/12/14 15:25:53 INFO ContextCleaner: Cleaned accumulator 26
21/12/14 15:25:53 INFO ContextCleaner: Cleaned accumulator 14
21/12/14 15:25:53 INFO ContextCleaner: Cleaned accumulator 17
21/12/14 15:25:53 INFO ContextCleaner: Cleaned accumulator 22
21/12/14 15:25:53 INFO ContextCleaner: Cleaned accumulator 15
21/12/14 15:25:53 INFO ContextCleaner: Cleaned accumulator 27
21/12/14 15:25:53 INFO ContextCleaner: Cleaned accumulator 25
21/12/14 15:25:53 INFO ContextCleaner: Cleaned accumulator 5
21/12/14 15:25:53 INFO ContextCleaner: Cleaned accumulator 6
21/12/14 15:25:53 INFO ContextCleaner: Cleaned accumulator 12
21/12/14 15:25:53 INFO ContextCleaner: Cleaned accumulator 8
21/12/14 15:25:53 INFO ContextCleaner: Cleaned accumulator 30
21/12/14 15:25:53 INFO ContextCleaner: Cleaned accumulator 2
21/12/14 15:25:53 INFO ContextCleaner: Cleaned accumulator 24
21/12/14 15:25:53 INFO ContextCleaner: Cleaned accumulator 9
21/12/14 15:25:53 INFO ContextCleaner: Cleaned accumulator 19
21/12/14 15:25:53 INFO ContextCleaner: Cleaned accumulator 21
21/12/14 15:25:53 INFO ContextCleaner: Cleaned accumulator 29
21/12/14 15:25:53 INFO ContextCleaner: Cleaned accumulator 13
21/12/14 15:25:53 INFO ContextCleaner: Cleaned accumulator 28
21/12/14 15:25:53 INFO ContextCleaner: Cleaned accumulator 10
21/12/14 15:25:53 INFO ContextCleaner: Cleaned accumulator 20
21/12/14 15:25:53 INFO ContextCleaner: Cleaned accumulator 31
21/12/14 15:25:54 INFO BlockManagerInfo: Removed broadcast_0_piece0 on yzdmp006044.yima.cn:35934 in memory (size: 8.2 KB, free: 997.8 MB)
21/12/14 15:25:54 INFO BlockManagerInfo: Removed broadcast_0_piece0 on 123.59.17.46:30001 in memory (size: 8.2 KB, free: 5.2 GB)
21/12/14 15:25:54 INFO ContextCleaner: Cleaned accumulator 7
21/12/14 15:25:54 INFO ContextCleaner: Cleaned accumulator 11
21/12/14 15:25:54 INFO ContextCleaner: Cleaned accumulator 23
21/12/14 15:25:55 ERROR SparkSQLDriver: Failed in [show tables]
java.lang.IllegalArgumentException: Multiple entries with same key: material_creative_count=table_id: 29944
columns {
column_id: 1
tp: 8
collation: 63
columnLen: 20
decimal: 0
flag: 4099
default_val: "\000"
pk_handle: true
}
columns {
column_id: 2
tp: 3
collation: 63
columnLen: 11
decimal: 0
flag: 0
default_val: "\000"
pk_handle: false
}
columns {
column_id: 3
tp: 15
collation: 46
columnLen: 100
decimal: 0
flag: 0
default_val: "\000"
pk_handle: false
}
columns {
column_id: 4
tp: 3
collation: 63
columnLen: 11
decimal: 0
flag: 0
default_val: "\000"
pk_handle: false
}
and material_creative_count=table_id: 29909
columns {
column_id: 1
tp: 8
collation: 63
columnLen: 20
decimal: 0
flag: 4099
default_val: "\000"
pk_handle: true
}
columns {
column_id: 2
tp: 3
collation: 63
columnLen: 11
decimal: 0
flag: 0
default_val: "\000"
pk_handle: false
}
columns {
column_id: 3
tp: 15
collation: 46
columnLen: 100
decimal: 0
flag: 0
default_val: "\000"
pk_handle: false
}
columns {
column_id: 4
tp: 3
collation: 63
columnLen: 11
decimal: 0
flag: 0
default_val: "\000"
pk_handle: false
}

    at com.pingcap.com.google.common.collect.RegularImmutableMap.createHashTable(RegularImmutableMap.java:104)
    at com.pingcap.com.google.common.collect.RegularImmutableMap.create(RegularImmutableMap.java:74)
    at com.pingcap.com.google.common.collect.ImmutableMap$Builder.build(ImmutableMap.java:338)
    at com.pingcap.tikv.catalog.Catalog$CatalogCache.loadTables(Catalog.java:189)
    at com.pingcap.tikv.catalog.Catalog$CatalogCache.listTables(Catalog.java:162)
    at com.pingcap.tikv.catalog.Catalog.listTables(Catalog.java:80)
    at com.pingcap.tispark.MetaManager.getTables(MetaManager.scala:30)
    at org.apache.spark.sql.catalyst.catalog.TiDirectExternalCatalog.listTables(TiDirectExternalCatalog.scala:57)
    at org.apache.spark.sql.catalyst.catalog.TiDirectExternalCatalog.listTables(TiDirectExternalCatalog.scala:53)
    at org.apache.spark.sql.catalyst.catalog.SessionCatalog.listTables(SessionCatalog.scala:765)
    at org.apache.spark.sql.catalyst.catalog.TiCompositeSessionCatalog.listTables(TiCompositeSessionCatalog.scala:158)
    at org.apache.spark.sql.catalyst.catalog.TiCompositeSessionCatalog.listTables(TiCompositeSessionCatalog.scala:143)
    at org.apache.spark.sql.execution.command.TiShowTablesCommand$$anonfun$2.apply(tables.scala:41)
    at org.apache.spark.sql.execution.command.TiShowTablesCommand$$anonfun$2.apply(tables.scala:41)
    at scala.Option.fold(Option.scala:158)
    at org.apache.spark.sql.execution.command.TiShowTablesCommand.run(tables.scala:41)
    at org.apache.spark.sql.execution.command.ExecutedCommandExec.sideEffectResult$lzycompute(commands.scala:70)
    at org.apache.spark.sql.execution.command.ExecutedCommandExec.sideEffectResult(commands.scala:68)
    at org.apache.spark.sql.execution.command.ExecutedCommandExec.executeCollect(commands.scala:79)
    at org.apache.spark.sql.Dataset$$anonfun$6.apply(Dataset.scala:194)
    at org.apache.spark.sql.Dataset$$anonfun$6.apply(Dataset.scala:194)
    at org.apache.spark.sql.Dataset$$anonfun$53.apply(Dataset.scala:3364)
    at org.apache.spark.sql.execution.SQLExecution$$anonfun$withNewExecutionId$1.apply(SQLExecution.scala:78)
    at org.apache.spark.sql.execution.SQLExecution$.withSQLConfPropagated(SQLExecution.scala:125)
    at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:73)
    at org.apache.spark.sql.Dataset.withAction(Dataset.scala:3363)
    at org.apache.spark.sql.Dataset.<init>(Dataset.scala:194)
    at org.apache.spark.sql.Dataset$.ofRows(Dataset.scala:79)
    at org.apache.spark.sql.SparkSession.sql(SparkSession.scala:642)
    at org.apache.spark.sql.SQLContext.sql(SQLContext.scala:694)
    at org.apache.spark.sql.hive.thriftserver.SparkSQLDriver.run(SparkSQLDriver.scala:62)
    at org.apache.spark.sql.hive.thriftserver.SparkSQLCLIDriver.processCmd(SparkSQLCLIDriver.scala:371)
    at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:376)
    at org.apache.spark.sql.hive.thriftserver.SparkSQLCLIDriver$.main(SparkSQLCLIDriver.scala:274)
    at org.apache.spark.sql.hive.thriftserver.SparkSQLCLIDriver.main(SparkSQLCLIDriver.scala)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at org.apache.spark.deploy.JavaMainApplication.start(SparkApplication.scala:52)
    at org.apache.spark.deploy.SparkSubmit.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:849)
    at org.apache.spark.deploy.SparkSubmit.doRunMain$1(SparkSubmit.scala:167)
    at org.apache.spark.deploy.SparkSubmit.submit(SparkSubmit.scala:195)
    at org.apache.spark.deploy.SparkSubmit.doSubmit(SparkSubmit.scala:86)
    at org.apache.spark.deploy.SparkSubmit$$anon$2.doSubmit(SparkSubmit.scala:924)
    at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:933)
    at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)

This message is thrown by Guava; the cause is that two entries have the same key. Looking at the error, both tables are named material_creative_count, but their table ids differ.
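Guava's ImmutableMap.Builder.build() rejects duplicate keys instead of silently overwriting, which is what Catalog$CatalogCache.loadTables hits when two schema entries share a table name. As a minimal sketch (Guava is not assumed on the classpath here), the JDK's java.util.Map.of has the same duplicate-key contract; the table name and ids below are taken from the error message:

```java
import java.util.Map;

public class DuplicateKeyDemo {
    public static void main(String[] args) {
        try {
            // Like Guava's ImmutableMap.Builder.build(), Map.of throws
            // IllegalArgumentException on duplicate keys rather than
            // keeping either entry. Two catalog rows both named
            // material_creative_count (table_ids 29944 and 29909)
            // trigger the same failure mode when the name is used as key.
            Map<String, Long> byName = Map.of(
                    "material_creative_count", 29944L,
                    "material_creative_count", 29909L);
            System.out.println(byName);
        } catch (IllegalArgumentException e) {
            System.out.println("rejected: " + e.getMessage());
        }
    }
}
```

So the map construction itself cannot be made to tolerate the duplicate; the duplicate metadata has to be removed on the TiDB side.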

-rw-r--r-- 1 chenlei chenlei 2.1M Dec 14 11:49 guava-14.0.1.jar
Should I try swapping in this version? # switching to guava-27.0-jre.jar does not help either
-rw-r--r-- 1 hadoop hadoop 2.7M Dec 28 2020 guava-27.0-jre.jar
Also, how can I check in TiDB what you described, two tables both named material_creative_count? Is there a stale cache in PD that was never refreshed, and TiSpark is reading the wrong cached metadata?

select CREATE_TIME, TIDB_TABLE_ID from information_schema.tables where table_name ='Material_creative_count';
+---------------------+---------------+
| CREATE_TIME         | TIDB_TABLE_ID |
+---------------------+---------------+
| 2021-11-05 17:47:43 |         36881 |
+---------------------+---------------+

How can this be fixed, when one table has multiple table ids and causes TiSpark queries to fail?

Indeed, the bad rows can be queried on some TiDB instances. After I dropped the table, only one record disappeared:
| yixintui_operate | 2021-11-05 17:47:43 | 37113 | — can I delete the remaining records directly?

mysql> drop table Material_creative_count ;
Query OK, 0 rows affected (0.52 sec)

mysql> select TABLE_SCHEMA,table_name from information_schema.tables group by TABLE_SCHEMA,table_name having count(1) > 1;
+------------------+-------------------------+
| TABLE_SCHEMA     | table_name              |
+------------------+-------------------------+
| yixintui_operate | Material_creative_count |
+------------------+-------------------------+
1 row in set (1.76 sec)

mysql> select TABLE_SCHEMA,CREATE_TIME, TIDB_TABLE_ID from information_schema.tables where table_name ='Material_creative_count';
+------------------+---------------------+---------------+
| TABLE_SCHEMA     | CREATE_TIME         | TIDB_TABLE_ID |
+------------------+---------------------+---------------+
| test             | 2021-12-14 15:16:10 |         37115 |
| yixintui_operate | 2021-11-05 17:47:43 |         29909 |
| yixintui_operate | 2021-11-05 17:47:43 |         29944 |
| yixintui_operate | 2021-11-05 17:47:43 |         32404 |
+------------------+---------------------+---------------+
4 rows in set (1.42 sec)

Manual deletion does not work either.
mysql> delete from information_schema.tables where TIDB_TABLE_ID =29909;
ERROR 1142 (42000): DELETE command denied to user 'root'@'127.0.0.1' for table 'tables'

May I ask whether this cluster was upgraded or newly deployed?

Also, please share the result of: select * from DDL_JOBS where table_name = 'Material_creative_count' limit 10000;

It was upgraded.
ddl.txt (139.7 KB)

information_schema.tables is a SYSTEM VIEW.
Why do different TiDB instances in the same cluster return inconsistent result sets for information_schema.tables where table_name ='Material_creative_count'?

Also, TiSpark cannot see views, only tables.

After upgrading to Spark 3.0 and TiSpark 2.5 the error is unchanged, so it should be a problem on the TiKV side. How can I manually delete these duplicate table_ids?

curl http://{TiDBIP}:10080/schema?table_id=29909
curl http://{TiDBIP}:10080/schema?table_id=29944
curl http://{TiDBIP}:10080/schema?table_id=37467

If all of these return non-empty results and the table names are the same, we can confirm the table metadata is corrupted. A temporary way to repair the metadata is:

rename table Material_creative_count to Material_creative_count_29909;
rename table Material_creative_count to Material_creative_count_29944;
rename table Material_creative_count to Material_creative_count_37467;

Repeat until TiDB reports that Material_creative_count does not exist.

Then check which of the renamed tables are empty and drop them. Suppose the non-empty one is Material_creative_count_29944; finally:

rename table Material_creative_count_29944 to Material_creative_count;

After dropping 37467, it reports table not found; 29909 and 29944 cannot be cleared.

MySQL [yixintui_operate]> rename table Material_creative_count to Material_creative_count_37467;
Query OK, 0 rows affected (0.52 sec)

MySQL [yixintui_operate]> rename table Material_creative_count to Material_creative_count_29944;
ERROR 1017 (HY000): Can't find file: 'yixintui_operate' (errno: {%!d(string=Material_creative_count) %!d(string=material_creative_count)} - %!s(MISSING))

MySQL [yixintui_operate]> select TABLE_SCHEMA,CREATE_TIME, TIDB_TABLE_ID from information_schema.tables where table_name ='Material_creative_count';
+------------------+---------------------+---------------+
| TABLE_SCHEMA     | CREATE_TIME         | TIDB_TABLE_ID |
+------------------+---------------------+---------------+
| yixintui_operate | 2021-11-05 17:47:43 |         29909 |
| yixintui_operate | 2021-11-05 17:47:43 |         29944 |
+------------------+---------------------+---------------+