Apache Spark: can't get Spark working with the IPython Notebook on Windows
Tags: apache-spark, ipython-notebook, pyspark


I have installed Spark on Windows 10 and it works fine from the PySpark console. Recently I tried to configure the IPython Notebook to use that Spark installation. I did the following imports:

import os
import sys

# Point the notebook at the local Spark 1.6 installation and its Python bindings.
os.environ['SPARK_HOME'] = "E:/Spark/spark-1.6.0-bin-hadoop2.6"
sys.path.append("E:/Spark/spark-1.6.0-bin-hadoop2.6/bin")
sys.path.append("E:/Spark/spark-1.6.0-bin-hadoop2.6/python")
sys.path.append("E:/Spark/spark-1.6.0-bin-hadoop2.6/python/pyspark")
sys.path.append("E:/Spark/spark-1.6.0-bin-hadoop2.6/python/lib")
sys.path.append("E:/Spark/spark-1.6.0-bin-hadoop2.6/python/lib/pyspark.zip")
sys.path.append("E:/Spark/spark-1.6.0-bin-hadoop2.6/python/lib/py4j-0.9-src.zip")
sys.path.append("C:/Program Files/Java/jdk1.8.0_51/bin")
This worked for creating a SparkContext and for simple calls such as

sc.parallelize([1, 2, 3])
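
An equivalent, less path-sensitive way to do this wiring is the third-party findspark package; a minimal sketch, assuming findspark is installed in the notebook's Python environment:

# Alternative setup sketch using findspark (assumes `pip install findspark` was run).
import findspark
findspark.init("E:/Spark/spark-1.6.0-bin-hadoop2.6")  # adds pyspark and py4j to sys.path

from pyspark import SparkContext
sc = SparkContext(appName="notebook-test")
print(sc.parallelize([1, 2, 3]).collect())  # expected: [1, 2, 3]

findspark.init() performs essentially the same sys.path additions as the manual appends above, driven by the Spark home directory it is given.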
But when I run the following

file = sc.textFile("E:/scripts.sql")
words = file.count()
I get the following error:

Py4JJavaError Traceback (most recent call last)
<ipython-input-22-3c172daac960> in <module>()
 1 file = sc.textFile("E:/scripts.sql")
 ----> 2 file.count()

 E:/Spark/spark-1.6.0-bin-hadoop2.6/python\pyspark\rdd.py in count(self)
 1002         3
 1003         """
 -> 1004         return self.mapPartitions(lambda i: [sum(1 for _ in i)]).sum()
 1005 
 1006     def stats(self):

 E:/Spark/spark-1.6.0-bin-hadoop2.6/python\pyspark\rdd.py in sum(self)
 993         6.0
 994         """
 --> 995         return self.mapPartitions(lambda x: [sum(x)]).fold(0, operator.add)
 996 
 997     def count(self):

 E:/Spark/spark-1.6.0-bin-hadoop2.6/python\pyspark\rdd.py in fold(self, zeroValue, op)
 867         # zeroValue provided to each partition is unique from the one provided
 868         # to the final reduce call
 --> 869         vals = self.mapPartitions(func).collect()
 870         return reduce(op, vals, zeroValue)
 871 

 E:/Spark/spark-1.6.0-bin-hadoop2.6/python\pyspark\rdd.py in collect(self)
 769         """
 770         with SCCallSiteSync(self.context) as css:
 --> 771             port = self.ctx._jvm.PythonRDD.collectAndServe(self._jrdd.rdd())
 772         return list(_load_from_socket(port, self._jrdd_deserializer))
 773 

 E:\Spark\spark-1.6.0-bin-hadoop2.6\python\lib\py4j-0.9-src.zip\py4j\java_gateway.py in __call__(self, *args)
 811         answer = self.gateway_client.send_command(command)
 812         return_value = get_return_value(
 --> 813             answer, self.gateway_client, self.target_id, self.name)
 814 
 815         for temp_arg in temp_args:

 E:\Spark\spark-1.6.0-bin-hadoop2.6\python\lib\py4j-0.9-src.zip\py4j\protocol.py in get_return_value(answer, gateway_client, target_id, name)
 306                 raise Py4JJavaError(
 307                     "An error occurred while calling {0}{1}{2}.\n".
 --> 308                     format(target_id, ".", name), value)
309             else:
 310                 raise Py4JError(

Py4JJavaError: An error occurred while calling z:org.apache.spark.api.python.PythonRDD.collectAndServe.
: org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 8.0 failed 1 times, most recent failure: Lost task 0.0 in stage 8.0 (TID 8, localhost): org.apache.spark.SparkException: Python worker did not connect back in time
at org.apache.spark.api.python.PythonWorkerFactory.createSimpleWorker(PythonWorkerFactory.scala:136)
at org.apache.spark.api.python.PythonWorkerFactory.create(PythonWorkerFactory.scala:65)
at org.apache.spark.SparkEnv.createPythonWorker(SparkEnv.scala:134)
at org.apache.spark.api.python.PythonRunner.compute(PythonRDD.scala:101)
at org.apache.spark.api.python.PythonRDD.compute(PythonRDD.scala:70)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:306)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:270)
at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:66)
at org.apache.spark.scheduler.Task.run(Task.scala:89)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:213)
at java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
at java.lang.Thread.run(Unknown Source)
Caused by: java.net.SocketTimeoutException: Accept timed out
at java.net.DualStackPlainSocketImpl.waitForNewConnection(Native Method)
at java.net.DualStackPlainSocketImpl.socketAccept(Unknown Source)
at java.net.AbstractPlainSocketImpl.accept(Unknown Source)
at java.net.PlainSocketImpl.accept(Unknown Source)
at java.net.ServerSocket.implAccept(Unknown Source)
at java.net.ServerSocket.accept(Unknown Source)
at org.apache.spark.api.python.PythonWorkerFactory.createSimpleWorker(PythonWorkerFactory.scala:131)
... 12 more

Driver stacktrace:
at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$failJobAndIndependentStages(DAGScheduler.scala:1431)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1419)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1418)
at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:47)
at org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:1418)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:799)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:799)
at scala.Option.foreach(Option.scala:236)
at org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(DAGScheduler.scala:799)
at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.doOnReceive(DAGScheduler.scala:1640)
at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1599)
at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1588)
at org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:48)
at org.apache.spark.scheduler.DAGScheduler.runJob(DAGScheduler.scala:620)
at org.apache.spark.SparkContext.runJob(SparkContext.scala:1832)
at org.apache.spark.SparkContext.runJob(SparkContext.scala:1845)
at org.apache.spark.SparkContext.runJob(SparkContext.scala:1858)
at org.apache.spark.SparkContext.runJob(SparkContext.scala:1929)
at org.apache.spark.rdd.RDD$$anonfun$collect$1.apply(RDD.scala:927)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:150)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:111)
at org.apache.spark.rdd.RDD.withScope(RDD.scala:316)
at org.apache.spark.rdd.RDD.collect(RDD.scala:926)
at org.apache.spark.api.python.PythonRDD$.collectAndServe(PythonRDD.scala:405)
at org.apache.spark.api.python.PythonRDD.collectAndServe(PythonRDD.scala)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(Unknown Source)
at java.lang.reflect.Method.invoke(Unknown Source)
at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:231)
at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:381)
at py4j.Gateway.invoke(Gateway.java:259)
at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:133)
at py4j.commands.CallCommand.execute(CallCommand.java:79)
at py4j.GatewayConnection.run(GatewayConnection.java:209)
at java.lang.Thread.run(Unknown Source)
Caused by: org.apache.spark.SparkException: Python worker did not connect back in time
at org.apache.spark.api.python.PythonWorkerFactory.createSimpleWorker(PythonWorkerFactory.scala:136)
at org.apache.spark.api.python.PythonWorkerFactory.create(PythonWorkerFactory.scala:65)
at org.apache.spark.SparkEnv.createPythonWorker(SparkEnv.scala:134)
at org.apache.spark.api.python.PythonRunner.compute(PythonRDD.scala:101)
at org.apache.spark.api.python.PythonRDD.compute(PythonRDD.scala:70)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:306)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:270)
at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:66)
at org.apache.spark.scheduler.Task.run(Task.scala:89)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:213)
at java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
... 1 more
Caused by: java.net.SocketTimeoutException: Accept timed out
at java.net.DualStackPlainSocketImpl.waitForNewConnection(Native Method)
at java.net.DualStackPlainSocketImpl.socketAccept(Unknown Source)
at java.net.AbstractPlainSocketImpl.accept(Unknown Source)
at java.net.PlainSocketImpl.accept(Unknown Source)
at java.net.ServerSocket.implAccept(Unknown Source)
at java.net.ServerSocket.accept(Unknown Source)
at org.apache.spark.api.python.PythonWorkerFactory.createSimpleWorker(PythonWorkerFactory.scala:131)
... 12 more
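
The two lines that matter in this trace are "Python worker did not connect back in time" and "java.net.SocketTimeoutException: Accept timed out": the Spark JVM opens a server socket on localhost, launches a python.exe worker process, and waits a few seconds for it to connect back, and here that connection never arrives. On Windows this is commonly caused by a firewall or antivirus rule blocking python.exe, or by Spark launching a different Python interpreter than the notebook's. A rough loopback check along those lines (the 10-second timeout and the choice to pin PYSPARK_PYTHON to the notebook's interpreter are illustrative assumptions):

# Rough diagnostic sketch: can a freshly spawned local Python process connect
# back to a listening socket on this machine within 10 seconds?
import os
import socket
import subprocess
import sys

# Assumption: pin the worker interpreter to the notebook's own interpreter.
os.environ.setdefault("PYSPARK_PYTHON", sys.executable)

# Listen on an ephemeral localhost port, similar to what Spark does before
# spawning a Python worker.
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))
server.listen(1)
server.settimeout(10)
port = server.getsockname()[1]

# Spawn a child python.exe that connects back to that port.
child = subprocess.Popen([
    sys.executable, "-c",
    "import socket, sys; socket.create_connection(('127.0.0.1', int(sys.argv[1])), 10)",
    str(port),
])

try:
    conn, _ = server.accept()  # raises socket.timeout if the connect-back is blocked
    print("loopback connect-back OK")
    conn.close()
except socket.timeout:
    print("accept timed out - check firewall/antivirus rules for python.exe")
finally:
    child.wait()
    server.close()

If this check times out, the firewall or antivirus configuration for python.exe is the first thing to look at; if it passes, the next suspect is which interpreter Spark actually launches for its workers (PYSPARK_PYTHON).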
file = sc.textFile("E:\\scripts.sql")
words = sc.count()
file = sc.textFile("E:/scripts.sql")
words = file.count()