Deployed TiDB with docker-compose; getting an error when using pyspark with pytispark

from pyspark.sql import SparkSession
from pytispark.pytispark import TiContext

spark = SparkSession.builder.appName("test") \
    .master("spark://172.18.0.12:7077") \
    .config("spark.tispark.pd.addresses", "172.18.0.12:2379") \
    .getOrCreate()

ti = TiContext(spark)

2020-08-09 14:56:28 WARN Utils:66 - Set SPARK_LOCAL_IP if you need to bind to another address
WARNING: An illegal reflective access operation has occurred
WARNING: Illegal reflective access by org.apache.hadoop.security.authentication.util.KerberosUtil (file:/home/caikunling/.local/lib/python3.6/site-packages/pyspark/jars/hadoop-auth-2.7.3.jar) to method sun.security.krb5.Config.getInstance()
WARNING: Please consider reporting this to the maintainers of org.apache.hadoop.security.authentication.util.KerberosUtil
WARNING: Use --illegal-access=warn to enable warnings of further illegal reflective access operations
WARNING: All illegal access operations will be denied in a future release
2020-08-09 14:56:30 WARN NativeCodeLoader:62 - Unable to load native-hadoop library for your platform… using builtin-java classes where applicable
Setting default log level to "WARN".
To adjust logging level use sc.setLogLevel(newLevel). For SparkR, use setLogLevel(newLevel).
Traceback (most recent call last):
  File "spark_select_test.py", line 222, in <module>
    ti = TiContext(spark)
  File "local/lib/python3.6/site-packages/pytispark/pytispark.py", line 31, in __init__
    self.ti = gw.jvm.TiExtensions.getInstance(sparkSession._jsparkSession).getOrCreateTiContext(sparkSession._jsparkSession)
TypeError: 'JavaPackage' object is not callable

  1. Which version are you using?
  2. Is the installation and deployment complete? Can you access it from Scala?
  3. There are similar issues; please check whether you are using the wrong package:

https://github.com/JohnSnowLabs/spark-nlp/issues/232

https://docs.pingcap.com/zh/tidb/dev/deploy-test-cluster-using-docker-compose
Deployed following the steps in this official doc; the Spark version is 2.4.3.
Scala access and queries work fine, so it looks like inside Docker I can import

import org.apache.spark.sql.TiContext

a package like this. With Python, however, I found only pyspark is available and pytispark is not. Does the docker-compose deployment properly support access from Python?

Following the docs, can't you access Spark this way?

I can access it with pyspark, but as the question says: can I also access it via pytispark? Is there any difference between going through pytispark and going through pyspark? Or does Docker simply not support pytispark access?

from pytispark.pytispark import TiContext
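For comparison: with the TiSpark extension configured, plain pyspark can query TiDB directly, and pytispark's `TiContext` is just a thin py4j wrapper over the same JVM classes. A minimal sketch of the session configuration this thread uses (the helper name `tispark_conf` is ours; the commented lines need a live cluster and the TiSpark jar on the classpath):

```python
def tispark_conf(pd_addresses):
    """Spark config entries TiSpark needs (keys taken from this thread)."""
    return {
        "spark.tispark.pd.addresses": pd_addresses,
        "spark.sql.extensions": "org.apache.spark.sql.TiExtensions",
    }

# With a live cluster (requires pyspark and the TiSpark jar):
# from pyspark.sql import SparkSession
# builder = SparkSession.builder.appName("tidb-test").master("spark://172.18.0.12:7077")
# for key, value in tispark_conf("172.18.0.12:2379").items():
#     builder = builder.config(key, value)
# spark = builder.getOrCreate()
# spark.sql("show databases").show()
```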

According to this doc
https://github.com/pingcap/tispark/tree/master/python#via-spark-submit
please confirm that pytispark was installed before use.
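A quick, generic way to verify the install from the same interpreter pyspark runs under (nothing here is TiSpark-specific):

```python
import importlib.util

def is_installed(package_name):
    """True if the package can be imported from this interpreter."""
    return importlib.util.find_spec(package_name) is not None

for name in ("pyspark", "py4j", "pytispark"):
    print(name, "OK" if is_installed(name) else "MISSING")
```

If pytispark shows as MISSING here, pyspark was likely started with a different Python than the one pip installed into.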

Yes, it is installed. It feels like pytispark just can't find something.

Python 3.6.9 (default, Jul 17 2020, 12:50:27) 
[GCC 8.4.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
20/08/14 01:01:43 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Setting default log level to "WARN".
To adjust logging level use sc.setLogLevel(newLevel). For SparkR, use setLogLevel(newLevel).
Welcome to
      ____              __
     / __/__  ___ _____/ /__
    _\ \/ _ \/ _ `/ __/  '_/
   /__ / .__/\_,_/_/ /_/\_\   version 2.4.6
      /_/

Using Python version 3.6.9 (default, Jul 17 2020 12:50:27)
SparkSession available as 'spark'.
>>> import pytispark
>>> import pytispark.pytispark as pti
>>> from pyspark.sql import SparkSession
>>> spark = SparkSession.builder.getOrCreate()
>>> ti = pti.TiContext(spark)
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/home/tidb/.local/lib/python3.6/site-packages/pytispark/pytispark.py", line 31, in __init__
    self.ti = gw.jvm.TiExtensions.getInstance(sparkSession._jsparkSession).getOrCreateTiContext(sparkSession._jsparkSession)
  File "/home/tidb/spark-2.4.6-bin-hadoop2.7/python/lib/py4j-0.10.7-src.zip/py4j/java_gateway.py", line 1487, in __getattr__
py4j.protocol.Py4JError: org.apache.spark.sql.TiExtensions.getInstance does not exist in the JVM
>>> 

It looks like pytispark can't find org.apache.spark.sql.TiExtensions.

According to the doc https://github.com/pingcap/tispark/blob/master/python/README.md

do I need to make those changes?

You can try those changes. Also, this error happens fairly late in the process; only TiExtensions can't be found. When you started pyspark, did you pass --jars to point at the TiSpark jar?
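One way to confirm the jar actually reached the driver is to inspect the session's `spark.jars` value and, more directly, ask the JVM for the class. A hedged sketch (the helper name `find_tispark_jars` is ours; the commented lines need a live session):

```python
def find_tispark_jars(spark_jars_value):
    """Pick TiSpark entries out of a comma-separated spark.jars value."""
    jars = [j for j in (spark_jars_value or "").split(",") if j]
    # match on the jar filename only, case-insensitively
    return [j for j in jars if "tispark" in j.rsplit("/", 1)[-1].lower()]

# With a live session:
# print(find_tispark_jars(spark.sparkContext.getConf().get("spark.jars", "")))
# # If the jar is really on the driver classpath, this should not raise:
# spark.sparkContext._jvm.java.lang.Class.forName("org.apache.spark.sql.TiExtensions")
```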

I made the changes but nothing changed...
The command used is indeed pyspark --jars jars/tispark-assembly-2.3.1.jar
I don't know where to start investigating anymore.

Can you query TiDB tables directly in the pyspark shell?

I can query with pyspark, but I don't know how that differs from pytispark. Is TiSpark actually being used?

It really seems like the Python interface can't find the dependency it needs when calling TiSpark.

So I suspect the install pytispark step didn't succeed.

Do you get the same error if you write a .py file and submit it with spark-submit?

Strangely, querying directly in the pyspark shell and querying via the pyspark package in plain Python give different results; some databases appear to be missing.

20/08/14 12:41:56 WARN SparkContext: Using an existing SparkContext; some configuration may not take effect.
Welcome to
      ____              __
     / __/__  ___ _____/ /__
    _\ \/ _ \/ _ `/ __/  '_/
   /__ / .__/\_,_/_/ /_/\_\   version 2.4.6
      /_/

Using Python version 3.6.9 (default, Jul 17 2020 12:50:27)
SparkSession available as 'spark'.
>>> sql('show databases')
20/08/14 12:42:10 INFO ReflectionUtil$: tispark class url: file:/home/caikunling/spark-2.4.6-bin-hadoop2.7/jars/tispark-assembly-2.3.1.jar
20/08/14 12:42:10 INFO ReflectionUtil$: spark wrapper class url: jar:file:/home/caikunling/spark-2.4.6-bin-hadoop2.7/jars/tispark-assembly-2.3.1.jar!/resources/spark-wrapper-spark-2.4/
DataFrame[databaseName: string]
>>> sql('show databases').show()
+-----------------+
|     databaseName|
+-----------------+
|          default|
|face_world_office|
|             test|
|         tpch_001|
|            mysql|
+-----------------+

>>>
[caikunling@FACE_DB ~]20-08-14 12:42$ ipython
Python 3.6.9 (default, Jul 17 2020, 12:50:27)
Type 'copyright', 'credits' or 'license' for more information
IPython 7.16.1 -- An enhanced Interactive Python. Type '?' for help.

In [1]: from pyspark.sql import SparkSession

In [2]: spark = SparkSession.builder.appName("tidb-test") \
   ...:         .master("spark://127.0.0.1:7077") \
   ...:         .config("spark.tispark.pd.addresses", "127.0.0.1:2379") \
   ...:         .config('spark.sql.extensions','org.apache.spark.sql.TiExtensions') \
   ...:         .getOrCreate()
20/08/14 12:42:36 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Setting default log level to "WARN".
To adjust logging level use sc.setLogLevel(newLevel). For SparkR, use setLogLevel(newLevel).

In [3]: spark.sql('show databases').show()
+------------+
|databaseName|
+------------+
|     default|
+------------+
Defaulting to user installation because normal site-packages is not writeable
Looking in indexes: http://mirrors.i.brainpp.cn/pypi/simple/, http://pypi.i.brainpp.cn/brain/dev/+simple
Collecting pytispark
  Downloading http://mirrors.i.brainpp.cn/pypi/packages/d2/c5/b09c69ddc6dab157347cd6fb9577d856a89c814ede0ccd38080b3bf9509d/pytispark-2.0-py3-none-any.whl (2.6 kB)
Requirement already satisfied: pyspark==2.3.3 in ./.local/lib/python3.6/site-packages (from pytispark) (2.3.3)
Requirement already satisfied: py4j==0.10.7 in ./.local/lib/python3.6/site-packages (from pytispark) (0.10.7)
Installing collected packages: pytispark
Successfully installed pytispark-2.0

spark-submit with the py script gives the same error.

[tidb@FACE_DB ~]20-08-14 12:57$ spark-submit \
--jars ~/spark-2.4.6-bin-hadoop2.7/jars/tispark-assembly-2.3.1.jar \
~/face-world-demo/spark_select_test.py
20/08/14 13:01:51 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Traceback (most recent call last):
  File "/home/tidb/face-world-demo/spark_select_test.py", line 224, in <module>
    ti = TiContext(spark)
  File "/home/tidb/.local/lib/python3.6/site-packages/pytispark/pytispark.py", line 31, in __init__
    self.ti = gw.jvm.TiExtensions.getInstance(sparkSession._jsparkSession).getOrCreateTiContext(sparkSession._jsparkSession)
  File "/home/tidb/spark-2.4.6-bin-hadoop2.7/python/lib/py4j-0.10.7-src.zip/py4j/java_gateway.py", line 1487, in __getattr__
py4j.protocol.Py4JError: org.apache.spark.sql.TiExtensions.getInstance does not exist in the JVM

It looks like the query below didn't actually go through TiSpark.

TiSpark 2.3.1 probably has some bugs; can you try this build instead?
Link: Baidu Netdisk (link no longer available). Extraction code: k868

It was indeed a problem with the 2.3.1 TiSpark jar; after replacing the jar everything works.

[tidb@FACE_DB ~]20-08-16 14:06$ spark-submit \
--jars ~/spark-2.4.6-bin-hadoop2.7/jars/tispark-assembly-2.3.2-SNAPSHOT-20200814.jar  \
~/face-world-demo/spark_select_test.py
20/08/16 14:10:48 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
20/08/16 14:10:53 INFO ReflectionUtil$: tispark class url: file:/home/tidb/spark-2.4.6-bin-hadoop2.7/jars/tispark-assembly-2.3.2-SNAPSHOT-20200814.jar
20/08/16 14:10:53 INFO ReflectionUtil$: spark wrapper class url: jar:file:/home/tidb/spark-2.4.6-bin-hadoop2.7/jars/tispark-assembly-2.3.2-SNAPSHOT-20200814.jar!/resources/spark-wrapper-spark-2.4/
+-----------------+
|     databaseName|
+-----------------+
|          default|
|face_world_office|
|             test|
|         tpch_001|
|            mysql|
+-----------------+