Google Cloud Platform: Python version error in Jupyter on Google Dataproc

Tags: google-cloud-platform, pyspark, jupyter-notebook, google-cloud-dataproc

I created a Dataproc cluster with the Jupyter initialization action, using image version 1.4. I SSH'd into both the master node and a worker node and ran
python --version
and both report
Python 3.6.5 :: Anaconda, Inc.

However, when I try to run the example from Google for Jupyter (PySpark kernel), it fails with the following error:

Py4JJavaError                             Traceback (most recent call last)
<ipython-input-13-1cf15cbebfd5> in <module>
     55 
     56 # Display 10 results.
---> 57 pprint.pprint(word_counts.take(10))
     58 
     59 

/usr/lib/spark/python/pyspark/rdd.py in take(self, num)
   1358 
   1359             p = range(partsScanned, min(partsScanned + numPartsToTry, totalParts))
-> 1360             res = self.context.runJob(self, takeUpToNumLeft, p)
   1361 
   1362             items += res

/usr/lib/spark/python/pyspark/context.py in runJob(self, rdd, partitionFunc, partitions, allowLocal)
   1049         # SparkContext#runJob.
   1050         mappedRDD = rdd.mapPartitions(partitionFunc)
-> 1051         sock_info = self._jvm.PythonRDD.runJob(self._jsc.sc(), mappedRDD._jrdd, partitions)
   1052         return list(_load_from_socket(sock_info, mappedRDD._jrdd_deserializer))
   1053 

/usr/lib/spark/python/lib/py4j-0.10.7-src.zip/py4j/java_gateway.py in __call__(self, *args)
   1255         answer = self.gateway_client.send_command(command)
   1256         return_value = get_return_value(
-> 1257             answer, self.gateway_client, self.target_id, self.name)
   1258 
   1259         for temp_arg in temp_args:

/usr/lib/spark/python/pyspark/sql/utils.py in deco(*a, **kw)
     61     def deco(*a, **kw):
     62         try:
---> 63             return f(*a, **kw)
     64         except py4j.protocol.Py4JJavaError as e:
     65             s = e.java_exception.toString()

/usr/lib/spark/python/lib/py4j-0.10.7-src.zip/py4j/protocol.py in get_return_value(answer, gateway_client, target_id, name)
    326                 raise Py4JJavaError(
    327                     "An error occurred while calling {0}{1}{2}.\n".
--> 328                     format(target_id, ".", name), value)
    329             else:
    330                 raise Py4JError(

Py4JJavaError: An error occurred while calling z:org.apache.spark.api.python.PythonRDD.runJob.
: org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 24.0 failed 4 times, most recent failure: Lost task 0.3 in stage 24.0 (TID 563, test-1-w-0.c.abc.internal, executor 3): org.apache.spark.api.python.PythonException: Traceback (most recent call last):
  File "/usr/lib/spark/python/lib/pyspark.zip/pyspark/worker.py", line 262, in main
    ("%d.%d" % sys.version_info[:2], version))
Exception: Python in worker has different version 2.7 than that in driver 3.6, PySpark cannot run with different minor versions.Please check environment variables PYSPARK_PYTHON and PYSPARK_DRIVER_PYTHON are correctly set.

    at org.apache.spark.api.python.BasePythonRunner$ReaderIterator.handlePythonException(PythonRunner.scala:452)
    at org.apache.spark.api.python.PythonRunner$$anon$1.read(PythonRunner.scala:588)
    at org.apache.spark.api.python.PythonRunner$$anon$1.read(PythonRunner.scala:571)
    at org.apache.spark.api.python.BasePythonRunner$ReaderIterator.hasNext(PythonRunner.scala:406)
    at org.apache.spark.InterruptibleIterator.hasNext(InterruptibleIterator.scala:37)
    at scala.collection.Iterator$GroupedIterator.fill(Iterator.scala:1124)
    at scala.collection.Iterator$GroupedIterator.hasNext(Iterator.scala:1130)
    at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:409)
    at org.apache.spark.shuffle.sort.BypassMergeSortShuffleWriter.write(BypassMergeSortShuffleWriter.java:125)
    at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:99)
    at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:55)
    at org.apache.spark.scheduler.Task.run(Task.scala:121)
    at org.apache.spark.executor.Executor$TaskRunner$$anonfun$10.apply(Executor.scala:402)
    at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1360)
    at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:408)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    at java.lang.Thread.run(Thread.java:748)

Driver stacktrace:
    at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$failJobAndIndependentStages(DAGScheduler.scala:1888)
    at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1876)
    at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1875)
    at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
    at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:48)
    at org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:1875)
    at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:926)
    at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:926)
    at scala.Option.foreach(Option.scala:257)
    at org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(DAGScheduler.scala:926)
    at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.doOnReceive(DAGScheduler.scala:2109)
    at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:2058)
    at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:2047)
    at org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:49)
    at org.apache.spark.scheduler.DAGScheduler.runJob(DAGScheduler.scala:737)
    at org.apache.spark.SparkContext.runJob(SparkContext.scala:2061)
    at org.apache.spark.SparkContext.runJob(SparkContext.scala:2082)
    at org.apache.spark.SparkContext.runJob(SparkContext.scala:2101)
    at org.apache.spark.api.python.PythonRDD$.runJob(PythonRDD.scala:153)
    at org.apache.spark.api.python.PythonRDD.runJob(PythonRDD.scala)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:244)
    at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:357)
    at py4j.Gateway.invoke(Gateway.java:282)
    at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:132)
    at py4j.commands.CallCommand.execute(CallCommand.java:79)
    at py4j.GatewayConnection.run(GatewayConnection.java:238)
    at java.lang.Thread.run(Thread.java:748)
Caused by: org.apache.spark.api.python.PythonException: Traceback (most recent call last):
  File "/usr/lib/spark/python/lib/pyspark.zip/pyspark/worker.py", line 262, in main
    ("%d.%d" % sys.version_info[:2], version))
Exception: Python in worker has different version 2.7 than that in driver 3.6, PySpark cannot run with different minor versions.Please check environment variables PYSPARK_PYTHON and PYSPARK_DRIVER_PYTHON are correctly set.

    at org.apache.spark.api.python.BasePythonRunner$ReaderIterator.handlePythonException(PythonRunner.scala:452)
    at org.apache.spark.api.python.PythonRunner$$anon$1.read(PythonRunner.scala:588)
    at org.apache.spark.api.python.PythonRunner$$anon$1.read(PythonRunner.scala:571)
    at org.apache.spark.api.python.BasePythonRunner$ReaderIterator.hasNext(PythonRunner.scala:406)
    at org.apache.spark.InterruptibleIterator.hasNext(InterruptibleIterator.scala:37)
    at scala.collection.Iterator$GroupedIterator.fill(Iterator.scala:1124)
    at scala.collection.Iterator$GroupedIterator.hasNext(Iterator.scala:1130)
    at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:409)
    at org.apache.spark.shuffle.sort.BypassMergeSortShuffleWriter.write(BypassMergeSortShuffleWriter.java:125)
    at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:99)
    at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:55)
    at org.apache.spark.scheduler.Task.run(Task.scala:121)
    at org.apache.spark.executor.Executor$TaskRunner$$anonfun$10.apply(Executor.scala:402)
    at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1360)
    at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:408)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    ... 1 more
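
The key line is the exception near the end of the trace: the executors' Python workers are on 2.7 while the driver (the notebook kernel) is on 3.6, and it asks to check PYSPARK_PYTHON and PYSPARK_DRIVER_PYTHON. Running python --version over SSH only shows what the login shell's PATH resolves to; the executor launches its Python worker with whatever PYSPARK_PYTHON resolves to inside the YARN container environment, which can fall back to the system Python 2.7 even though Anaconda 3.6 is installed. A minimal sketch of what can be inspected from inside the notebook (sc.pythonExec is simply the driver-side PYSPARK_PYTHON, defaulting to plain "python", i.e. the interpreter the driver will ask the workers to launch):

import os
import sys
from pyspark import SparkContext

sc = SparkContext.getOrCreate()

# Interpreter the notebook kernel (driver) itself runs on.
print("driver Python        :", "%d.%d" % sys.version_info[:2])
# Environment variables the exception message refers to.
print("PYSPARK_PYTHON       :", os.environ.get("PYSPARK_PYTHON"))
print("PYSPARK_DRIVER_PYTHON:", os.environ.get("PYSPARK_DRIVER_PYTHON"))
# What the driver will tell executors to launch as their Python worker.
print("worker exec (driver's view):", sc.pythonExec)
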
A. Create the cluster with the Jupyter optional component instead of the Jupyter initialization action, so the driver and the executors use the same Python environment:
gcloud dataproc clusters create cluster-name \
  --optional-components=JUPYTER \
  --image-version=1.4 \
  ... other flags
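
With the Jupyter optional component, the driver and the executors should end up on the same interpreter. A minimal check from a notebook on the new cluster (this runs a small job, so on a mismatched cluster it would fail with the same Py4JJavaError as above):

import sys
from pyspark import SparkContext

sc = SparkContext.getOrCreate()

driver_ver = "%d.%d" % sys.version_info[:2]
# Run a tiny job so each executor reports the version of its Python worker.
worker_vers = (sc.parallelize(list(range(4)), 4)
                 .map(lambda _: "%d.%d" % __import__("sys").version_info[:2])
                 .distinct()
                 .collect())
print("driver :", driver_ver)
print("workers:", worker_vers)

If recreating the cluster is not an option, the exception's own hint is the other route: pin PYSPARK_PYTHON and PYSPARK_DRIVER_PYTHON (or the Spark properties spark.pyspark.python / spark.pyspark.driver.python) to the same interpreter path on every node before the SparkContext is created.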