Python PySpark-Jupyter error

I am running the following simple code in a Jupyter notebook and getting the error below:

from pyspark import SparkContext
sc = SparkContext("local", "My App")
one_through_9 = range(1,10)
parallel = sc.parallelize(one_through_9, 3)
parallel.count()

--------------------------------------------------------------------------
Py4JJavaError                             Traceback (most recent call last)
<ipython-input-4-c394a330f44c> in <module>()
      1 one_through_9 = range(1,10)
      2 parallel = sc.parallelize(one_through_9, 3)
----> 3 parallel.count()

Py4JJavaError: An error occurred while calling z:org.apache.spark.api.python.PythonRDD.collectAndServe.
: org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 0.0 failed 1 times, most recent failure: Lost task 0.0 in stage 0.0 (TID 0, localhost, executor driver): java.io.IOException: Cannot run program "/Users/myuser/anaconda3": error=13, Permission denied
I have granted full permissions on the anaconda3 folder, but the problem still persists.
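
Note that the traceback says Spark cannot run the program "/Users/myuser/anaconda3", which is a directory, not an executable; executing a directory produces error=13 (Permission denied) regardless of folder permissions. A minimal sketch of one likely fix follows, assuming PYSPARK_PYTHON (or PYSPARK_DRIVER_PYTHON) was set to the Anaconda install directory instead of the interpreter; the bin/python path is an assumption based on a standard Anaconda layout:

import os

# Assumption: point Spark at the actual Python interpreter inside the
# Anaconda install, not at the anaconda3 directory itself. These must be
# set before the SparkContext is created.
os.environ["PYSPARK_PYTHON"] = "/Users/myuser/anaconda3/bin/python"
os.environ["PYSPARK_DRIVER_PYTHON"] = "/Users/myuser/anaconda3/bin/python"

from pyspark import SparkContext

sc = SparkContext("local", "My App")
parallel = sc.parallelize(range(1, 10), 3)
print(parallel.count())  # expected output: 9

If these variables are instead set in the shell profile or in kernel.json for the Jupyter kernel, the same correction applies there: the value should be the interpreter binary, not the anaconda3 folder.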