PySpark error on Dataproc when creating a DataFrame with schema details


I have a Dataproc cluster with Anaconda. I created a conda virtual environment, my-env, because I need to install the open-source RDKit there, and in that environment I installed PySpark again (instead of using the preinstalled one). Now, with the code below, I get the error in my-env, but not outside it.

Code:

from pyspark.sql.types import StructField, StructType, StringType, LongType
from pyspark.sql import SparkSession
from py4j.protocol import Py4JJavaError

spark = SparkSession.builder.appName("test").getOrCreate()

# Define a four-column string schema and create an empty DataFrame from it
fields = [StructField("col0", StringType(), True),
          StructField("col1", StringType(), True),
          StructField("col2", StringType(), True),
          StructField("col3", StringType(), True)]
schema = StructType(fields)
chem_info = spark.createDataFrame([], schema)
This is the error I get:

  File "/home/.conda/envs/my-env/lib/python3.6/site-packages/pyspark/sql/session.py", line 749, in createDataFrame
    jrdd = self._jvm.SerDeUtil.toJavaArray(rdd._to_java_object_rdd())
  File "/home/.conda/envs/my-env/lib/python3.6/site-packages/pyspark/rdd.py", line 2297, in _to_java_object_rdd
    rdd = self._pickled()
  File "/home/.conda/envs/my-env/lib/python3.6/site-packages/pyspark/rdd.py", line 196, in _pickled
    return self._reserialize(AutoBatchedSerializer(PickleSerializer()))
  File "/home/.conda/envs/my-env/lib/python3.6/site-packages/pyspark/rdd.py", line 594, in _reserialize
    self = self.map(lambda x: x, preservesPartitioning=True)
  File "/home/.conda/envs/my-env/lib/python3.6/site-packages/pyspark/rdd.py", line 325, in map
    return self.mapPartitionsWithIndex(func, preservesPartitioning)
  File "/home/.conda/envs/my-env/lib/python3.6/site-packages/pyspark/rdd.py", line 365, in mapPartitionsWithIndex
    return PipelinedRDD(self, f, preservesPartitioning)
  File "/home/.conda/envs/my-env/lib/python3.6/site-packages/pyspark/rdd.py", line 2514, in __init__
    self.is_barrier = prev._is_barrier() or isFromBarrier
  File "/home/.conda/envs/my-env/lib/python3.6/site-packages/pyspark/rdd.py", line 2414, in _is_barrier
    return self._jrdd.rdd().isBarrier()
  File "/home/.conda/envs/my-env/lib/python3.6/site-packages/py4j/java_gateway.py", line 1257, in __call__
    answer, self.gateway_client, self.target_id, self.name)
  File "/home/.conda/envs/my-env/lib/python3.6/site-packages/pyspark/sql/utils.py", line 63, in deco
    return f(*a, **kw)
  File "/home/.conda/envs/my-env/lib/python3.6/site-packages/py4j/protocol.py", line 332, in get_return_value
    format(target_id, ".", name, value))
py4j.protocol.Py4JError: An error occurred while calling o57.isBarrier. Trace:
py4j.Py4JException: Method isBarrier([]) does not exist
        at py4j.reflection.ReflectionEngine.getMethod(ReflectionEngine.java:318)
        at py4j.reflection.ReflectionEngine.getMethod(ReflectionEngine.java:326)
        at py4j.Gateway.invoke(Gateway.java:274)
        at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:132)
        at py4j.commands.CallCommand.execute(CallCommand.java:79)
        at py4j.GatewayConnection.run(GatewayConnection.java:238)
        at java.lang.Thread.run(Thread.java:748)
Can you help me resolve this?

As described in the question, this error is caused by an incompatibility between the Spark version installed on the Dataproc cluster and the PySpark version you manually installed in your conda environment.

To fix it, check the Spark version on the cluster and install the matching PySpark version:

$ spark-submit --version
Welcome to
      ____              __
     / __/__  ___ _____/ /__
    _\ \/ _ \/ _ `/ __/  '_/
   /___/ .__/\_,_/_/ /_/\_\   version 2.4.4
      /_/

Using Scala version 2.12.10, OpenJDK 64-Bit Server VM, 1.8.0_232

$ conda install pyspark==2.4.4
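
If you want to confirm the mismatch from inside my-env before reinstalling, a minimal sketch along these lines (the app name "version-check" is just illustrative) compares the PySpark package version installed in the conda environment with the Spark version reported by the cluster's JVM:

import pyspark
from pyspark.sql import SparkSession

# Version of the pyspark package installed in the conda environment (my-env)
print("pyspark package version:", pyspark.__version__)

# Version of the Spark runtime reported by the Dataproc cluster's JVM
spark = SparkSession.builder.appName("version-check").getOrCreate()
print("cluster Spark version:", spark.sparkContext.version)

If the two numbers differ, install the PySpark version that matches the cluster, as shown above.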