Spark 3.0: cannot use TiCatalog as a plugin

To help us resolve issues faster, please provide the following information; a clear problem description speeds things up:

[TiDB Version]
TiDB v5.0.1
TiSpark v2.5.0-SNAPSHOT (2021-04-30)
Spark v3.0.1

[Problem Description]
Using Spark 3.0 in plugin mode with spark.sql.catalog.tidb_catalog = org.apache.spark.sql.catalyst.catalog.TiCatalog,
DDL statements cannot be executed: the TiCatalog class does not implement the createTable() and dropTable() methods, which also causes data writes to fail:

df.write
  .format("tidb")
  .options(options)
  .mode(SaveMode.Append)
  .saveAsTable(tableName)


spark-sql> show tables in default.test;
21/04/30 14:48:13 INFO CodeGenerator: Code generated in 21.659727 ms
test load_data
test datetime_range
test aaa1
test table_de013fabb1d74af881bcfbb5386ed1b6

21/04/30 14:48:26 ERROR SparkSQLDriver: Failed in [drop table test1]
scala.NotImplementedError: an implementation is missing
at scala.Predef$.$qmark$qmark$qmark(Predef.scala:288)
at org.apache.spark.sql.catalyst.catalog.TiCatalog.dropTable(TiCatalog.scala:231)
at org.apache.spark.sql.execution.datasources.v2.DropTableExec.run(DropTableExec.scala:33)
at org.apache.spark.sql.execution.datasources.v2.V2CommandExec.result$lzycompute(V2CommandExec.scala:39)
at org.apache.spark.sql.execution.datasources.v2.V2CommandExec.result(V2CommandExec.scala:39)
at org.apache.spark.sql.execution.datasources.v2.V2CommandExec.executeCollect(V2CommandExec.scala:45)

How did you configure TiCatalog? Could you share the exact configuration file and the steps you followed?

https://github.com/pingcap/tispark#spark-30-catalog

Add the following settings to spark-defaults.conf:
spark.sql.extensions org.apache.spark.sql.TiExtensions
spark.tispark.pd.addresses 192.168.10.3:2379
spark.sql.catalog.tidb_catalog org.apache.spark.sql.catalyst.catalog.TiCatalog
spark.sql.catalog.tidb_catalog.pd.addresses 192.168.10.3:2379
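
With the configuration above, the TiDB catalog is addressed by the name tidb_catalog in SQL. As a sanity check, a spark-sql session along the following lines should list TiDB tables (a sketch; the database name test is taken from the output earlier in this thread):

```sql
-- switch the session to the TiDB catalog registered in spark-defaults.conf
USE tidb_catalog;
-- list databases exposed by TiDB through the catalog
SHOW DATABASES;
-- list tables in the test database
SHOW TABLES IN test;
```

Note that DDL statements (e.g. DROP TABLE) issued through this catalog will still raise scala.NotImplementedError in TiSpark v2.5.0-SNAPSHOT, as the stack trace above shows.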

TiSpark does not currently support the saveAsTable operation. Please use the following reference code instead:

df.write
  .format("tidb")
  .option("tidb.user", "root")
  .option("tidb.password", "")
  .option("database", "tpch_test")
  .option("table", "target_table_orders")
  .mode("append")
  .save()

Documentation: https://github.com/pingcap/tispark/blob/master/docs/datasource_api_userguide.md#use-spark-connector