TiSpark: Java application fails with "Initial job has not accepted any resources"

TiSpark is deployed successfully and works fine from spark-shell, but when I build a Java program and submit it with spark-submit, the job never executes. Log output:

2019-08-05 11:11:55 INFO PDClient:315 - Switched to new leader: [leaderInfo: 127.0.0.1:2379]
2019-08-05 11:11:59 WARN Utils:66 - Truncated the string representation of a plan since it was too large. This behavior can be adjusted by setting 'spark.debug.maxToStringFields' in SparkEnv.conf.
2019-08-05 11:11:59 INFO ContextCleaner:54 - Cleaned accumulator 1
2019-08-05 11:11:59 INFO CodeGenerator:54 - Code generated in 268.250792 ms
2019-08-05 11:12:00 INFO CodeGenerator:54 - Code generated in 106.529333 ms
2019-08-05 11:12:00 INFO HashAggregateExec:54 - spark.sql.codegen.aggregate.map.twolevel.enabled is set to true, but current version of codegened fast hashmap does not support this aggregate.
2019-08-05 11:12:00 INFO CodeGenerator:54 - Code generated in 80.55136 ms
2019-08-05 11:12:00 INFO SparkContext:54 - Starting job: collectAsList at App.java:160
2019-08-05 11:12:00 INFO DAGScheduler:54 - Registering RDD 4 (collectAsList at App.java:160)
2019-08-05 11:12:00 INFO DAGScheduler:54 - Got job 0 (collectAsList at App.java:160) with 200 output partitions
2019-08-05 11:12:00 INFO DAGScheduler:54 - Final stage: ResultStage 1 (collectAsList at App.java:160)
2019-08-05 11:12:00 INFO DAGScheduler:54 - Parents of final stage: List(ShuffleMapStage 0)
2019-08-05 11:12:00 INFO DAGScheduler:54 - Missing parents: List(ShuffleMapStage 0)
2019-08-05 11:12:00 INFO DAGScheduler:54 - Submitting ShuffleMapStage 0 (MapPartitionsRDD[4] at collectAsList at App.java:160), which has no missing parents
2019-08-05 11:12:00 INFO MemoryStore:54 - Block broadcast_0 stored as values in memory (estimated size 86.6 KB, free 93.2 MB)
2019-08-05 11:12:00 INFO MemoryStore:54 - Block broadcast_0_piece0 stored as bytes in memory (estimated size 29.4 KB, free 93.2 MB)
2019-08-05 11:12:00 INFO BlockManagerInfo:54 - Added broadcast_0_piece0 in memory on localhost:35457 (size: 29.4 KB, free: 93.3 MB)
2019-08-05 11:12:00 INFO SparkContext:54 - Created broadcast 0 from broadcast at DAGScheduler.scala:1039
2019-08-05 11:12:00 INFO DAGScheduler:54 - Submitting 1 missing tasks from ShuffleMapStage 0 (MapPartitionsRDD[4] at collectAsList at App.java:160) (first 15 tasks are for partitions Vector(0))
2019-08-05 11:12:00 INFO TaskSchedulerImpl:54 - Adding task set 0.0 with 1 tasks
2019-08-05 11:12:15 WARN TaskSchedulerImpl:66 - Initial job has not accepted any resources; check your cluster UI to ensure that workers are registered and have sufficient resources
2019-08-05 11:12:30 WARN TaskSchedulerImpl:66 - Initial job has not accepted any resources; check your cluster UI to ensure that workers are registered and have sufficient resources
2019-08-05 11:12:45 WARN TaskSchedulerImpl:66 - Initial job has not accepted any resources; check your cluster UI to ensure that workers are registered and have sufficient resources
2019-08-05 11:13:00 WARN TaskSchedulerImpl:66 - Initial job has not accepted any resources; check your cluster UI to ensure that workers are registered and have sufficient resources
2019-08-05 11:13:15 WARN TaskSchedulerImpl:66 - Initial job has not accepted any resources; check your cluster UI to ensure that workers are registered and have sufficient resources
2019-08-05 11:13:30 WARN TaskSchedulerImpl:66 - Initial job has not accepted any resources; check your cluster UI to ensure that workers are registered and have sufficient resources

Master web UI status:

  • URL: spark://localhost:7077
  • REST URL: spark://localhost:6066 (cluster mode)
  • Alive Workers: 0
  • Cores in use: 0 Total, 0 Used
  • Memory in use: 0.0 B Total, 0.0 B Used
  • Applications: 1 Running, 3 Completed
  • Drivers: 0 Running, 0 Completed
  • Status: ALIVE

Workers (0)

| Worker Id | Address | State | Cores | Memory |
| --- | --- | --- | --- | --- |

Running Applications (1)

| Application ID | Name | Cores | Memory per Executor | Submitted Time | User | State | Duration |
| --- | --- | --- | --- | --- | --- | --- | --- |
| app-20190805112216-0003 | Test TiSpark | 0 | 512.0 MB | 2019/08/05 11:22:16 | root | WAITING | 16 s |

Completed Applications (3)

| Application ID | Name | Cores | Memory per Executor | Submitted Time | User | State | Duration |
| --- | --- | --- | --- | --- | --- | --- | --- |
| app-20190805111154-0002 | Test TiSpark | 0 | 512.0 MB | 2019/08/05 11:11:54 | root | FINISHED | 2.0 min |
| app-20190805110912-0001 | Test TiSpark | 0 | 2.0 GB | 2019/08/05 11:09:12 | root | FINISHED | 2.6 min |
| app-20190805110639-0000 | Test TiSpark | 0 | 512.0 MB | 2019/08/05 11:06:39 | root | FINISHED | 1.6 min |

Launch command:

./bin/spark-submit --class com.hydee.App \
  --master spark://127.0.0.1:7077 \
  --executor-memory 512M \
  --driver-memory 512M \
  --total-executor-cores 2 \
  --driver-class-path /root/spark-jar/mysql-connector-java-8.0.17.jar \
  /root/spark-jar/TiSpark-1.0-SNAPSHOT.jar 201906 2001
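The WAITING state of the submitted application matches the "Alive Workers: 0" line in the master UI above: the master has no workers registered, so no executor can be allocated. As a sketch, the cluster state can also be checked from the shell before resubmitting; the port 8080 and the /json endpoint are the standalone master web UI defaults, so adjust them if your deployment differs:

```shell
# Query the standalone master's JSON status page (same data as the web UI).
# A healthy cluster shows a non-empty "workers" array with cores/memory available.
curl -s http://localhost:8080/json
```

If the "workers" array is empty, no spark-submit flags will help; the worker processes themselves need to be started first.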

Machine memory:

[root@localhost tidb]# free -m
              total        used        free      shared  buff/cache   available
Mem:           7288        2435        3933           8         919        4561
Swap:          3583           0        3583

Spark configuration in the Java code:

SparkConf conf = new SparkConf()
    .setIfMissing("spark.tispark.write.allow_spark_sql", "true")
    .setIfMissing("spark.master", "127.0.0.1")
    .setIfMissing("spark.sql.extensions", "org.apache.spark.sql.TiExtensions")
    .setIfMissing("spark.tispark.tidb.addr", "hydee")
    .setIfMissing("spark.tispark.tidb.password", "hydee@123")
    .setIfMissing("spark.tispark.tidb.port", "4000")
    .setIfMissing("spark.tispark.tidb.user", "root")
    .setIfMissing("spark.tispark.pd.addresses", "127.0.0.1:2379");
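One thing worth noting in the configuration above: `spark.master` is set to the bare IP "127.0.0.1", which is not a valid Spark master URL (valid forms are `spark://host:port`, `local[*]`, `yarn`, etc.). Because `setIfMissing` is used, the value passed via `--master` on spark-submit takes precedence, so this only bites when the application is run directly. A minimal sketch with a well-formed master URL, assuming the standalone master on port 7077 shown in the web UI above (all other values copied from the post):

```java
import org.apache.spark.SparkConf;

public class ConfSketch {
    public static void main(String[] args) {
        SparkConf conf = new SparkConf()
            // Must be a master URL, not a bare IP; matches spark://127.0.0.1:7077 above.
            .setIfMissing("spark.master", "spark://127.0.0.1:7077")
            .setIfMissing("spark.sql.extensions", "org.apache.spark.sql.TiExtensions")
            .setIfMissing("spark.tispark.pd.addresses", "127.0.0.1:2379")
            .setIfMissing("spark.tispark.write.allow_spark_sql", "true")
            .setIfMissing("spark.tispark.tidb.addr", "hydee")
            .setIfMissing("spark.tispark.tidb.port", "4000")
            .setIfMissing("spark.tispark.tidb.user", "root")
            .setIfMissing("spark.tispark.tidb.password", "hydee@123");
        // Defaults only apply when a property was not already set by spark-submit,
        // so --master on the command line still wins over the value here.
        System.out.println(conf.get("spark.master"));
    }
}
```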

1 like

After deploying TiSpark, please follow the documentation to verify that TiSpark is working correctly.

2 likes

Thanks, I found the cause: the worker had not started successfully.
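For anyone hitting the same symptom: with a standard standalone layout (a sketch; `$SPARK_HOME` and the master URL are assumptions based on the spark-submit command in the post), the worker can be started and verified roughly like this:

```shell
# Start a worker and register it with the running master.
$SPARK_HOME/sbin/start-slave.sh spark://127.0.0.1:7077

# Confirm the Worker JVM is actually running.
jps | grep -i worker
```

Once the worker registers, the master UI should show it under "Alive Workers" and the WAITING application should be scheduled.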

2 likes