Py4JJavaError when executing PySpark code in Python
I'm trying to run my first PySpark code in the PyCharm IDE and I'm hitting the following exception.
from pyspark import SparkContext

def example():
    sc = SparkContext('local')
    words = sc.parallelize(["scala", "java", "hadoop", "spark", "akka"])
    print(sc.getConf().getAll())
    return words.count()

print(example())
py4j.protocol.Py4JJavaError: An error occurred while calling z:org.apache.spark.api.python.PythonRDD.collectAndServe.
: java.lang.IllegalArgumentException
It printed the following data:
[('spark.master', 'local'), ('spark.rdd.compress', 'True'), ('spark.serializer.objectStreamReset', '100'), ('spark.driver.port', '59627'), ('spark.executor.id', 'driver'), ('spark.submit.deployMode', 'client'), ('spark.app.id', 'local-1526547201037'), ('spark.driver.host', 'LAPTOP-DDRRK6SB'), ('spark.ui.showConsoleProgress', 'true'), ('spark.app.name', 'pyspark-shell')]
followed by this exception:
py4j.protocol.Py4JJavaError: An error occurred while calling z:org.apache.spark.api.python.PythonRDD.collectAndServe.
: java.lang.IllegalArgumentException
Sorry for my bad English. I hope someone can spot what is wrong with the code.

Update: I don't know the exact cause, but after rolling back to Java 1.8.0_171 it runs fine. Thanks to Rumoku for the suggestion.

Comments:
- Which line fails?
- Same for me: words.count() — executing that line raises the exception for me too.
- Which Java/JDK version are you using?
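A common cause of this IllegalArgumentException with Spark 2.x is launching it on Java 9/10, which Spark 2.x does not support; rolling back to Java 8 (as above) fixes it. A minimal sketch of pinning the driver to a Java 8 install before the SparkContext is created — the JDK path below is an assumed example, adjust it to your machine:

```python
import os

# Assumed example path -- point this at your own Java 8 (1.8.x) install.
java8_home = r"C:\Program Files\Java\jdk1.8.0_171"

# PySpark launches the JVM via JAVA_HOME, so this must be set *before*
# SparkContext() is called, not after.
os.environ["JAVA_HOME"] = java8_home
os.environ["PATH"] = (
    os.path.join(java8_home, "bin") + os.pathsep + os.environ.get("PATH", "")
)

print(os.environ["JAVA_HOME"])
```

With JAVA_HOME set this way, `SparkContext('local')` starts against Java 8 instead of whatever newer JDK happens to be first on the PATH.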