TiSpark connecting to TiDB 8.1 fails with: ERROR SparkSQLEngine: Failed to instantiate SparkSession: Error scanning data from region.

tidb: 8.1
tispark: tispark-assembly-3.3_2.12-3.2.2.jar
kyuubi: 1.3

My scenario: Kyuubi launches Spark on YARN. The exception occurs while the Kyuubi engine is being created (after the Spark YARN application has started), once the engine connects to PD.
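For reference, the TiSpark-related settings of the engine's Spark session look roughly like the sketch below (a minimal sketch only; the PD addresses and catalog name are placeholders, not my exact values):

```scala
import org.apache.spark.sql.SparkSession

// Minimal sketch of the TiSpark-related configuration the Kyuubi engine's
// Spark session uses; PD addresses and the catalog name are placeholders.
val spark = SparkSession.builder()
  .appName("kyuubi-tispark-engine")
  // Load the TiSpark extensions so Spark SQL can plan against TiKV.
  .config("spark.sql.extensions", "org.apache.spark.sql.TiExtensions")
  // PD endpoints TiSpark connects to (this is where the "init cluster" log below comes from).
  .config("spark.tispark.pd.addresses",
    "192.168.xx.xx:2379,192.168.181.56:2379,192.168.181.55:2379")
  // Expose TiDB as a Spark catalog, addressed as tidb_catalog.db.table.
  .config("spark.sql.catalog.tidb_catalog", "org.apache.spark.sql.catalyst.catalog.TiCatalog")
  .config("spark.sql.catalog.tidb_catalog.pd.addresses", "192.168.xx.xx:2379")
  .getOrCreate()
```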

Problem log:
The following log was captured on the Kyuubi worker, but it is effectively the Spark driver log.

24/07/12 10:54:01 INFO RangerResourceTrie: builderThreadCount is set to [1]
24/07/12 10:54:01 INFO init: builderThreadCount is set to [1]
24/07/12 10:54:01 INFO RangerResourceTrie: builderThreadCount is set to [1]
24/07/12 10:54:01 INFO init: builderThreadCount is set to [1]
24/07/12 10:54:01 INFO RangerBasePlugin: Policies will NOT be reordered based on number of evaluations
24/07/12 10:54:01 INFO TiSparkInfo$: Supported Spark Version: 3.0 3.1 3.2 3.3
Current Spark Version: 3.3.3
Current Spark Major Version: 3.3
24/07/12 10:54:01 INFO SharedState: Setting hive.metastore.warehouse.dir ('/warehouse/tablespace/managed/hive') to the value of spark.sql.warehouse.dir.
24/07/12 10:54:01 INFO SharedState: Warehouse path is 'hdfs://UATCLUSTER/apps/spark/warehouse'.
24/07/12 10:54:01 INFO ServerInfo: Adding filter to /SQL: org.apache.hadoop.yarn.server.webproxy.amfilter.AmIpFilter
24/07/12 10:54:01 INFO ServerInfo: Adding filter to /SQL/json: org.apache.hadoop.yarn.server.webproxy.amfilter.AmIpFilter
24/07/12 10:54:01 INFO ServerInfo: Adding filter to /SQL/execution: org.apache.hadoop.yarn.server.webproxy.amfilter.AmIpFilter
24/07/12 10:54:01 INFO ServerInfo: Adding filter to /SQL/execution/json: org.apache.hadoop.yarn.server.webproxy.amfilter.AmIpFilter
24/07/12 10:54:01 INFO ServerInfo: Adding filter to /static/sql: org.apache.hadoop.yarn.server.webproxy.amfilter.AmIpFilter
24/07/12 10:54:02 WARN TiConfiguration: Unable to find tikv.properties
24/07/12 10:54:02 INFO TiConfiguration: static configurations are:[tikv.kv_client_concurrency=10, tikv.network.mapping=, tikv.grpc.timeout_in_ms=200ms, tikv.rawkv.scan_timeout_in_ms=10000, tikv.batch_put_concurrency=20, tikv.rawkv.scan_slowlog_in_ms=5000, tikv.rawkv.default_backoff_in_ms=20000, tikv.request.isolation.level=SI, tikv.grpc.max_frame_size=536870912, tikv.grpc.health_check_timeout=100, tikv.pd.addresses=127.0.0.1:2379, tikv.metrics.port=3140, tikv.rawkv.batch_write_timeout_in_ms=2000, tikv.api_version=1, tikv.tls_enable=false, tikv.circuit_break.trigger.availability.error_threshold_percentage=100, tikv.db_prefix=, tikv.importer.max_kv_batch_bytes=1048576, tikv.kv_mode=TXN, tikv.replica_read=LEADER, tiflash.enable=false, tikv.table.scan_concurrency=512, tikv.batch_scan_concurrency=5, tikv.scatter_wait_seconds=300, tikv.grpc.idle_timeout=60, tikv.index.scan_batch_size=20000, tikv.grpc.scan_timeout_in_ms=20s, tikv.index.scan_concurrency=5, tikv.health_check_period_duration=300, tikv.importer.max_kv_batch_size=32768, tikv.grpc.warm_up_timeout_in_ms=5000ms, tikv.metrics.enable=false, tikv.rawkv.write_timeout_in_ms=2000, tikv.request.command.priority=LOW, tikv.grpc.forward_timeout_in_ms=300ms, tikv.batch_delete_concurrency=20, tikv.batch_get_concurrency=20, tikv.rawkv.read_timeout_in_ms=2000, tikv.rawkv.clean_timeout_in_ms=600000, tikv.grpc.pd_first_get_member_timeout_in_ms=10000ms, tikv.circuit_break.trigger.availability.request_volumn_threshold=10, tikv.delete_range_concurrency=20, tikv.tls.reload_interval=10s, tikv.enable_grpc_forward=true, tikv.circuit_break.trigger.sleep_window_in_seconds=20, tikv.bo_region_miss_base_in_ms=20, tikv.warm_up.enable=true, tikv.rawkv.batch_read_timeout_in_ms=2000, tikv.show_rowid=false, tikv.circuit_break.trigger.availability.window_in_seconds=60, tikv.grpc.scan_batch_size=10240, tikv.circuit_break.enable=false, tikv.grpc.keepalive_timeout=3, tikv.conn.recycle_time=60s, tikv.grpc.keepalive_time=10, tikv.enable_atomic_for_cas=false, tikv.grpc.ingest_timeout_in_ms=200s, tikv.use_jks=false, tikv.circuit_break.trigger.attempt_request_count=10, tikv.scan_regions_limit=1000]
24/07/12 10:54:02 INFO TiSession: Welcome to TiKV Java Client 7.0.0@null
24/07/12 10:54:03 INFO PDClient: init cluster: start
24/07/12 10:54:03 INFO PDClient: init host mapping: start
24/07/12 10:54:03 INFO PDClient: init host mapping: end
24/07/12 10:54:03 INFO PDClient: get members with pd http://192.168.xx.xx:2379: start
24/07/12 10:54:03 INFO PDClient: get members with pd http://192.168.xx.xx:2379: end
24/07/12 10:54:03 INFO PDClient: init cluster with address: [http://192.168.xx.xx:2379, http://192.168.181.56:2379, http://192.168.181.55:2379, http://192.168.xx.xx:2379]
24/07/12 10:54:03 INFO PDClient: createLeaderClientWrapper with leader 192.168.xx.xx:2379: start
24/07/12 10:54:03 INFO PDClient: Switched to new leader: [leaderInfo: 192.168.xx.xx:2379, storeAddress: 192.168.181.104:2379]
24/07/12 10:54:03 INFO PDClient: createLeaderClientWrapper with leader 192.168.xx.xx:2379: end
24/07/12 10:54:03 INFO PDClient: init cluster: finish
24/07/12 10:54:03 INFO TiSession: TiSession initialized in TXN mode in API version: V1
24/07/12 10:54:13 INFO PDClient: try update leader
24/07/12 10:54:23 INFO PDClient: try update leader
24/07/12 10:54:33 INFO PDClient: try update leader
24/07/12 10:54:33 WARN NettyClientHandler: Received GOAWAY with ENHANCE_YOUR_CALM. Debug data: too_many_pings
24/07/12 10:54:33 WARN AtomicBackoff: Increased keepalive time nanos to 20,000,000,000
24/07/12 10:54:43 INFO PDClient: try update leader
24/07/12 10:54:53 INFO PDClient: try update leader
24/07/12 10:54:57 INFO YarnClientSchedulerBackend: Requesting to kill executor(s) 1
24/07/12 10:54:57 INFO YarnClientSchedulerBackend: Actual list of executor(s) to be killed is 1
24/07/12 10:54:57 INFO ExecutorAllocationManager: Executors 1 removed due to idle timeout.
24/07/12 10:55:01 INFO YarnSchedulerBackend$YarnDriverEndpoint: Disabling executor 1.
24/07/12 10:55:01 INFO DAGScheduler: Executor lost: 1 (epoch 0)
24/07/12 10:55:01 INFO YarnScheduler: Executor 1 on uat04.qys.local killed by driver.
24/07/12 10:55:01 INFO BlockManagerMasterEndpoint: Trying to remove executor 1 from BlockManagerMaster.
24/07/12 10:55:01 INFO BlockManagerMasterEndpoint: Removing block manager BlockManagerId(1, uat04.qys.local, 41139, None)
24/07/12 10:55:01 INFO BlockManagerMaster: Removed 1 successfully in removeExecutor
24/07/12 10:55:01 INFO DAGScheduler: Shuffle files lost for executor: 1 (epoch 0)
24/07/12 10:55:01 INFO ExecutorMonitor: Executor 1 is removed. Remove reason statistics: (gracefully decommissioned: 0, decommision unfinished: 0, driver killed: 1, unexpectedly exited: 0).
24/07/12 10:55:03 INFO PDClient: try update leader
24/07/12 10:55:13 INFO PDClient: try update leader
24/07/12 10:55:23 INFO PDClient: try update leader
24/07/12 10:55:33 INFO PDClient: try update leader
24/07/12 10:55:43 INFO PDClient: try update leader
24/07/12 10:55:53 INFO PDClient: try update leader
24/07/12 10:56:03 INFO PDClient: try update leader
24/07/12 10:56:13 INFO PDClient: try update leader
24/07/12 10:56:23 INFO PDClient: try update leader
24/07/12 10:56:33 INFO PDClient: try update leader
24/07/12 10:56:36 ERROR SparkSQLEngine: Failed to instantiate SparkSession: Error scanning data from region.
org.tikv.common.exception.TiClientInternalException: Error scanning data from region.
       at com.pingcap.tikv.operation.iterator.ScanIterator.cacheLoadFails(ScanIterator.java:108)
        at com.pingcap.tikv.operation.iterator.ConcreteScanIterator.hasNext(ConcreteScanIterator.java:117)
        at com.pingcap.tikv.codec.MetaCodec.hashGetFields(MetaCodec.java:126)
        at com.pingcap.tikv.catalog.CatalogTransaction.getTables(CatalogTransaction.java:95)
        at com.pingcap.tikv.catalog.Catalog$CatalogCache.loadTables(Catalog.java:242)

...

Caused by: org.tikv.common.exception.GrpcException: java.lang.InterruptedException: sleep interrupted
        at org.tikv.common.util.ConcreteBackOffer.canRetryAfterSleep(ConcreteBackOffer.java:230)
        at org.tikv.common.util.ConcreteBackOffer.doBackOffWithMaxSleep(ConcreteBackOffer.java:250)
        at org.tikv.common.util.ConcreteBackOffer.doBackOff(ConcreteBackOffer.java:190)
        at com.pingcap.tikv.operation.iterator.ConcreteScanIterator.loadCurrentRegionToCache(ConcreteScanIterator.java:74)
        at com.pingcap.tikv.operation.iterator.ScanIterator.cacheLoadFails(ScanIterator.java:76)
        ... 64 more
Caused by: java.lang.InterruptedException: sleep interrupted
        at java.lang.Thread.sleep(Native Method)
        at org.tikv.common.util.ConcreteBackOffer.canRetryAfterSleep(ConcreteBackOffer.java:228)
        ... 68 more
24/07/12 10:56:36 INFO YarnSchedulerBackend$YarnDriverEndpoint: Disabling executor 2.
24/07/12 10:56:36 INFO DAGScheduler: Executor lost: 2 (epoch 1)
24/07/12 10:56:36 INFO BlockManagerMasterEndpoint: Trying to remove executor 2 from BlockManagerMaster.
24/07/12 10:56:36 INFO BlockManagerMasterEndpoint: Removing block manager BlockManagerId(2, uat03.qys.local, 37145, None)
24/07/12 10:56:36 INFO BlockManagerMaster: Removed 2 successfully in removeExecutor
24/07/12 10:56:36 INFO DAGScheduler: Shuffle files lost for executor: 2 (epoch 1)
24/07/12 10:56:36 WARN TransportChannelHandler: Exception in connection from /192.168.xx.xx:59296                                                               

Right after this, the engine disconnects from the client (DBeaver).

Searching for this exception turned up nothing relevant.

This TiDB cluster was upgraded to 8.1 two weeks ago, and today is the first time I've hit this problem. Since then I managed to create the engine (submit the Spark job) successfully twice, but most attempts fail.
The other two environments run TiDB 7.5 and have not shown any Spark job submission failures.


Update:
After switching TiSpark to tispark-assembly-3.3_2.12-3.2.3.jar, the error disappeared and Spark jobs can be submitted again.
However:

  1. Submission time grew from about two and a half minutes to three minutes.
  2. A job that reads via a JDBC catalog and writes via the TiDB catalog used to take one minute and now takes eight (cross-tested: the same JDBC-catalog read writing to another environment's TiDB catalog on TiDB 7.5 still finishes in one minute). A sketch of this kind of job is shown below.
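The slow job is roughly of this shape (a sketch only; the catalog, database, and table names are made up for illustration):

```scala
// Sketch of the "read via JDBC catalog, write via TiDB catalog" job that slowed down.
// `spark` is the ambient SparkSession (e.g. in the Kyuubi engine);
// jdbc_catalog / tidb_catalog and the db/table names are illustrative only.
spark.sql(
  """
    |INSERT INTO tidb_catalog.uat_db.target_table
    |SELECT * FROM jdbc_catalog.src_db.source_table
    |""".stripMargin)
```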

The whole cluster has been restarted.
There are no related errors in the TiDB logs.


The execution log of the eight-minute SQL job contains the following errors:

24/07/13 14:48:25 INFO PDClient: updating leader finish
24/07/13 14:48:25 WARN PDClient: updating leader or forward follower
24/07/13 14:48:25 INFO PDClient: Switched to new leader: [leaderInfo: 192.168.181.56:2379, storeAddress: 192.168.181.56:2379]
24/07/13 14:48:25 INFO PDClient: updating leader finish
24/07/13 14:48:25 WARN ConcreteBackOffer: BackOffer.maxSleep 30000ms is exceeded, errors:
24/07/13 14:48:25 WARN ConcreteBackOffer:
64.org.tikv.shade.io.grpc.StatusRuntimeException: UNKNOWN: [PD:common:ErrGetTargetStore]failed to get the target store%!(EXTRA string=no target leader store found, region: &{6 0xc009f55a70 [] [] [0xc010830180 0xc0108301c0 0xc010830200] 0xc010830240 [] [] 0 39 1 0 0 1 0 0 0xc010aac510 <nil> 0xc00a2e5400 9223372036854775808 <nil> 2})
65.org.tikv.shade.io.grpc.StatusRuntimeException: UNKNOWN: [PD:common:ErrGetTargetStore]failed to get the target store%!(EXTRA string=no target leader store found, region: &{6 0xc009f55a70 [] [] [0xc010830180 0xc0108301c0 0xc010830200] 0xc010830240 [] [] 0 39 1 0 0 1 0 0 0xc010aac510 <nil> 0xc00a2e5400 9223372036854775808 <nil> 2})
66.org.tikv.shade.io.grpc.StatusRuntimeException: UNKNOWN: [PD:common:ErrGetTargetStore]failed to get the target store%!(EXTRA string=no target leader store found, region: &{6 0xc009f55a70 [] [] [0xc010830180 0xc0108301c0 0xc010830200] 0xc010830240 [] [] 0 39 1 0 0 1 0 0 0xc010aac510 <nil> 0xc00a2e5400 9223372036854775808 <nil> 2})
24/07/13 14:48:25 WARN TiSession: failed to scatter region: 135765001
org.tikv.common.exception.GrpcException: retry is exhausted.
        at org.tikv.common.util.ConcreteBackOffer.logThrowError(ConcreteBackOffer.java:266)
        at org.tikv.common.util.ConcreteBackOffer.doBackOffWithMaxSleep(ConcreteBackOffer.java:251)
        at org.tikv.common.util.ConcreteBackOffer.doBackOff(ConcreteBackOffer.java:190)
        at org.tikv.common.operation.PDErrorHandler.handleRequestError(PDErrorHandler.java:88)
        at org.tikv.common.policy.RetryPolicy.callWithRetry(RetryPolicy.java:97)
        at org.tikv.common.AbstractGRPCClient.callWithRetry(AbstractGRPCClient.java:88)
        at org.tikv.common.PDClient.scatterRegion(PDClient.java:255)
        at org.tikv.common.TiSession.splitRegionAndScatter(TiSession.java:627)
        at com.pingcap.tispark.write.TiBatchWrite.doWrite(TiBatchWrite.scala:230)
        at com.pingcap.tispark.write.TiBatchWrite.com$pingcap$tispark$write$TiBatchWrite$$write(TiBatchWrite.scala:94)
        at com.pingcap.tispark.write.TiBatchWrite$.write(TiBatchWrite.scala:50)
        at com.pingcap.tispark.write.TiDBWriter$.write(TiDBWriter.scala:41)
...
        at org.apache.kyuubi.engine.spark.operation.ExecuteStatement.$anonfun$executeStatement$1(ExecuteStatement.scala:86)
        at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.java:23)
        at org.apache.kyuubi.engine.spark.operation.SparkOperation.$anonfun$withLocalProperties$1(SparkOperation.scala:155)
        at org.apache.spark.sql.execution.SQLExecution$.withSQLConfPropagated(SQLExecution.scala:169)
        at org.apache.kyuubi.engine.spark.operation.SparkOperation.withLocalProperties(SparkOperation.scala:139)
        at org.apache.kyuubi.engine.spark.operation.ExecuteStatement.executeStatement(ExecuteStatement.scala:81)
        at org.apache.kyuubi.engine.spark.operation.ExecuteStatement$$anon$1.run(ExecuteStatement.scala:103)
        at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
        at java.util.concurrent.FutureTask.run(FutureTask.java:266)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
        at java.lang.Thread.run(Thread.java:748)
Caused by: org.tikv.shade.io.grpc.StatusRuntimeException: UNKNOWN: [PD:common:ErrGetTargetStore]failed to get the target store%!(EXTRA string=no target leader store found, region: &{6 0xc009f55a70 [] [] [0xc010830180 0xc0108301c0 0xc010830200] 0xc010830240 [] [] 0 39 1 0 0 1 0 0 0xc010aac510 <nil> 0xc00a2e5400 9223372036854775808 <nil> 2})
        at org.tikv.shade.io.grpc.stub.ClientCalls.toStatusRuntimeException(ClientCalls.java:287)
        at org.tikv.shade.io.grpc.stub.ClientCalls.getUnchecked(ClientCalls.java:268)
        at org.tikv.shade.io.grpc.stub.ClientCalls.blockingUnaryCall(ClientCalls.java:175)
        at org.tikv.common.AbstractGRPCClient.lambda$callWithRetry$0(AbstractGRPCClient.java:91)
        at org.tikv.common.policy.RetryPolicy.callWithRetry(RetryPolicy.java:88)
        ... 86 more
24/07/13 14:48:25 WARN PDClient: updating leader or forward follower
24/07/13 14:48:25 INFO PDClient: Switched to new leader: [leaderInfo: 192.168.181.56:2379, storeAddress: 192.168.181.56:2379]
24/07/13 14:48:25 INFO PDClient: updating leader finish

This appeared 14 times.

From this error, it looks like something is going wrong in the gRPC interaction between TiSpark and PD. Could the Region information cached by TiSpark be inconsistent with what PD holds, causing TiSpark to try to access the wrong TiKV node?

Is your cluster healthy? :rofl:

Can your workload switch to writing through Spark JDBC? If it can, I'd recommend making that switch.
The error comes mainly from the scatterRegion (region pre-split) step that runs before the write. Since this error only appears on TiDB 8.1, it may well be a compatibility issue; the project does not officially guarantee compatibility with TiDB 8.1 yet.
The longer runtime is probably because many errors are being retried along the way. A rough sketch of the suggested switch follows.
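Something along these lines, replacing TiSpark's write path with Spark's plain JDBC sink (a rough sketch only; host, database, table, and credentials are placeholders):

```scala
import org.apache.spark.sql.SaveMode

// Rough sketch: read the source via the JDBC catalog, then write back to TiDB
// over the MySQL protocol (Spark's JDBC sink) instead of TiSpark's TiBatchWrite path.
// `spark` is the ambient SparkSession; all names and credentials are placeholders.
val df = spark.table("jdbc_catalog.src_db.source_table")

df.write
  .format("jdbc")
  .option("url", "jdbc:mysql://tidb-host:4000/uat_db")  // placeholder TiDB endpoint
  .option("driver", "com.mysql.cj.jdbc.Driver")
  .option("dbtable", "target_table")                    // placeholder target table
  .option("user", "spark_user")
  .option("password", "******")
  .mode(SaveMode.Append)
  .save()
```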


I haven't used it this way; following to learn.

The cluster is healthy. I went through all the monitoring dashboards and manually ran pd-ctl scan, comparing against the other clusters; everything looks normal.

All the symptoms point to a compatibility problem between TiSpark (or the Java TiKV client) and this one cluster after the upgrade to TiDB 8.1.

With TiSpark's JDBC write, only one row is inserted at a time; it splits the write into a huge number of INSERT statements, which is extremely inefficient.

I'd also prefer this to be a problem with my PD, but I don't know what else to investigate.

TiSpark's JDBC write is actually faster than the native write-back once you get to hundreds of millions of rows: TiSpark Benchmark · pingcap/tispark Wiki · GitHub
Maybe it's being used the wrong way?

Following this.

I'm a bit confused too.

When I submit through spark-shell, batch insert works.
Now I'm trying to switch to the spark.sql.catalog approach.
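What I'm trying looks roughly like this, using Spark's built-in JDBCTableCatalog (a sketch only; the catalog name, URL, and credentials are placeholders, and in Kyuubi these settings would normally live in the engine/spark defaults):

```scala
import org.apache.spark.sql.SparkSession

// Sketch: register TiDB as a plain JDBC (DataSource V2) catalog so reads and
// writes go over the MySQL protocol. Catalog name, URL and credentials are placeholders.
val session = SparkSession.builder()
  .config("spark.sql.catalog.tidb_jdbc",
    "org.apache.spark.sql.execution.datasources.v2.jdbc.JDBCTableCatalog")
  .config("spark.sql.catalog.tidb_jdbc.url", "jdbc:mysql://tidb-host:4000")
  .config("spark.sql.catalog.tidb_jdbc.driver", "com.mysql.cj.jdbc.Driver")
  .config("spark.sql.catalog.tidb_jdbc.user", "spark_user")
  .config("spark.sql.catalog.tidb_jdbc.password", "******")
  .getOrCreate()

// SQL can then address it directly, e.g.:
// INSERT INTO tidb_jdbc.uat_db.target_table SELECT ... FROM ...
```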

Ugh, it still inserts one row at a time...

I found a workaround on Stack Overflow:
jdbc:mysql://ip:port?rewriteBatchedStatements=true

Adding rewriteBatchedStatements=true enables batched writes.
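In the Spark JDBC write this ends up looking roughly like the sketch below (host, database, table, and credentials are placeholders; batchsize is Spark's standard JDBC writer option):

```scala
// Sketch: with rewriteBatchedStatements=true in the URL, the MySQL driver rewrites
// the row-by-row INSERTs that Spark sends in JDBC batches into multi-value INSERTs.
// `df` is the DataFrame being written; all names and credentials are placeholders.
df.write
  .format("jdbc")
  .option("url", "jdbc:mysql://tidb-host:4000/uat_db?rewriteBatchedStatements=true")
  .option("driver", "com.mysql.cj.jdbc.Driver")
  .option("dbtable", "target_table")
  .option("user", "spark_user")
  .option("password", "******")
  .option("batchsize", "2000")  // rows per JDBC batch handed to the driver
  .mode("append")
  .save()
```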

Is it fixed now?

For now I'll have the developers switch to the JDBC catalog, and wait for a newer TiSpark release.
