Python: Why does collect() work fine, but count() and take() give me errors in Spark?

rdd = sc.textFile("test_file.txt").cache()
rdd.collect()

The output of the above is:

['my number is 0', 'my number is 1', 'my number is 2']

Then rdd.count() gives me this error:

---------------------------------------------------------------------------
Py4JJavaError                             Traceback (most recent call last)
<ipython-input-27-377a7789e04b> in <module>
----> 1 rdd.count()

~\Anaconda3\envs\bigdata-lab\lib\site-packages\pyspark\rdd.py in count(self)
   1053         3
   1054         """
-> 1055         return self.mapPartitions(lambda i: [sum(1 for _ in i)]).sum()
   1056 
   1057     def stats(self):

~\Anaconda3\envs\bigdata-lab\lib\site-packages\pyspark\rdd.py in sum(self)
   1044         6.0
   1045         """
-> 1046         return self.mapPartitions(lambda x: [sum(x)]).fold(0, operator.add)
   1047 
   1048     def count(self):

~\Anaconda3\envs\bigdata-lab\lib\site-packages\pyspark\rdd.py in fold(self, zeroValue, op)
    915         # zeroValue provided to each partition is unique from the one provided
    916         # to the final reduce call
--> 917         vals = self.mapPartitions(func).collect()
    918         return reduce(op, vals, zeroValue)
    919 

~\Anaconda3\envs\bigdata-lab\lib\site-packages\pyspark\rdd.py in collect(self)
    814         """
    815         with SCCallSiteSync(self.context) as css:
--> 816             sock_info = self.ctx._jvm.PythonRDD.collectAndServe(self._jrdd.rdd())
    817         return list(_load_from_socket(sock_info, self._jrdd_deserializer))
    818 

~\Anaconda3\envs\bigdata-lab\lib\site-packages\py4j\java_gateway.py in __call__(self, *args)
   1255         answer = self.gateway_client.send_command(command)
   1256         return_value = get_return_value(
-> 1257             answer, self.gateway_client, self.target_id, self.name)
   1258 
   1259         for temp_arg in temp_args:

~\Anaconda3\envs\bigdata-lab\lib\site-packages\py4j\protocol.py in get_return_value(answer, gateway_client, target_id, name)
    326                 raise Py4JJavaError(
    327                     "An error occurred while calling {0}{1}{2}.\n".
--> 328                     format(target_id, ".", name), value)
    329             else:
    330                 raise Py4JError(

Py4JJavaError: An error occurred while calling z:org.apache.spark.api.python.PythonRDD.collectAndServe.
: org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 17.0 failed 1 times, most recent failure: Lost task 0.0 in stage 17.0 (TID 38, localhost, executor driver): org.apache.spark.SparkException: Python worker failed to connect back.
    at org.apache.spark.api.python.PythonWorkerFactory.createSimpleWorker(PythonWorkerFactory.scala:170)
    at org.apache.spark.api.python.PythonWorkerFactory.create(PythonWorkerFactory.scala:97)
    at org.apache.spark.SparkEnv.createPythonWorker(SparkEnv.scala:117)
    at org.apache.spark.api.python.BasePythonRunner.compute(PythonRunner.scala:108)
    at org.apache.spark.api.python.PythonRDD.compute(PythonRDD.scala:65)
    at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:324)
    at org.apache.spark.rdd.RDD.iterator(RDD.scala:288)
    at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:90)
    at org.apache.spark.scheduler.Task.run(Task.scala:121)
    at org.apache.spark.executor.Executor$TaskRunner$$anonfun$10.apply(Executor.scala:402)
    at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1360)
    at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:408)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
    at java.lang.Thread.run(Unknown Source)
Caused by: java.net.SocketTimeoutException: Accept timed out
    at java.net.DualStackPlainSocketImpl.waitForNewConnection(Native Method)
    at java.net.DualStackPlainSocketImpl.socketAccept(Unknown Source)
    at java.net.AbstractPlainSocketImpl.accept(Unknown Source)
    at java.net.PlainSocketImpl.accept(Unknown Source)
    at java.net.ServerSocket.implAccept(Unknown Source)
    at java.net.ServerSocket.accept(Unknown Source)
    at org.apache.spark.api.python.PythonWorkerFactory.createSimpleWorker(PythonWorkerFactory.scala:164)
    ... 14 more

Driver stacktrace:
    at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$failJobAndIndependentStages(DAGScheduler.scala:1887)
    at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1875)
    at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1874)
    at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
    at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:48)
    at org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:1874)
    at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:926)
    at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:926)
    at scala.Option.foreach(Option.scala:257)
    at org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(DAGScheduler.scala:926)
    at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.doOnReceive(DAGScheduler.scala:2108)
    at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:2057)
    at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:2046)
    at org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:49)
    at org.apache.spark.scheduler.DAGScheduler.runJob(DAGScheduler.scala:737)
    at org.apache.spark.SparkContext.runJob(SparkContext.scala:2061)
    at org.apache.spark.SparkContext.runJob(SparkContext.scala:2082)
    at org.apache.spark.SparkContext.runJob(SparkContext.scala:2101)
    at org.apache.spark.SparkContext.runJob(SparkContext.scala:2126)
    at org.apache.spark.rdd.RDD$$anonfun$collect$1.apply(RDD.scala:945)
    at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
    at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:112)
    at org.apache.spark.rdd.RDD.withScope(RDD.scala:363)
    at org.apache.spark.rdd.RDD.collect(RDD.scala:944)
    at org.apache.spark.api.python.PythonRDD$.collectAndServe(PythonRDD.scala:166)
    at org.apache.spark.api.python.PythonRDD.collectAndServe(PythonRDD.scala)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(Unknown Source)
    at java.lang.reflect.Method.invoke(Unknown Source)
    at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:244)
    at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:357)
    at py4j.Gateway.invoke(Gateway.java:282)
    at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:132)
    at py4j.commands.CallCommand.execute(CallCommand.java:79)
    at py4j.GatewayConnection.run(GatewayConnection.java:238)
    at java.lang.Thread.run(Unknown Source)
Caused by: org.apache.spark.SparkException: Python worker failed to connect back.
    at org.apache.spark.api.python.PythonWorkerFactory.createSimpleWorker(PythonWorkerFactory.scala:170)
    at org.apache.spark.api.python.PythonWorkerFactory.create(PythonWorkerFactory.scala:97)
    at org.apache.spark.SparkEnv.createPythonWorker(SparkEnv.scala:117)
    at org.apache.spark.api.python.BasePythonRunner.compute(PythonRunner.scala:108)
    at org.apache.spark.api.python.PythonRDD.compute(PythonRDD.scala:65)
    at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:324)
    at org.apache.spark.rdd.RDD.iterator(RDD.scala:288)
    at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:90)
    at org.apache.spark.scheduler.Task.run(Task.scala:121)
    at org.apache.spark.executor.Executor$TaskRunner$$anonfun$10.apply(Executor.scala:402)
    at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1360)
    at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:408)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
    ... 1 more
Caused by: java.net.SocketTimeoutException: Accept timed out
    at java.net.DualStackPlainSocketImpl.waitForNewConnection(Native Method)
    at java.net.DualStackPlainSocketImpl.socketAccept(Unknown Source)
    at java.net.AbstractPlainSocketImpl.accept(Unknown Source)
    at java.net.PlainSocketImpl.accept(Unknown Source)
    at java.net.ServerSocket.implAccept(Unknown Source)
    at java.net.ServerSocket.accept(Unknown Source)
    at org.apache.spark.api.python.PythonWorkerFactory.createSimpleWorker(PythonWorkerFactory.scala:164)
    ... 14 more
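
What the trace actually shows: count() is implemented in pyspark's rdd.py as mapPartitions(lambda i: [sum(1 for _ in i)]).sum(), so the executor has to spawn a Python worker process to run that lambda, and here the spawn fails ("Python worker failed to connect back" / "Accept timed out"). A plain collect() on a textFile RDD carries no Python function, so it is served by the JVM alone and never needs a worker, which is why it succeeds. A minimal sketch of that split, assuming the same local SparkContext sc as above (the map call at the end is only an illustration, not from the post):

rdd = sc.textFile("test_file.txt").cache()

# Served entirely by the JVM: no Python worker process is spawned,
# so this succeeds even though workers cannot connect back.
rdd.collect()

# count() (and take()) run a Python function over the partitions, so the
# executor must spawn a pyspark worker; on this machine that spawn times
# out and the job is aborted.
rdd.count()

# Any user-supplied Python function takes the same code path and would
# fail the same way (hypothetical example):
rdd.map(lambda line: line.upper()).collect()
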
For reference, the file contains:

my number is 0
my number is 1
my number is 2

rdd = sc.textFile("/FileStore/tables/test-4.txt")
rdd.count()  # this gives me output 3
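
A commonly suggested workaround for "Python worker failed to connect back" on a local Windows setup is to point Spark explicitly at the interpreter the driver runs in, before the SparkContext is created, so the executors launch the same Python. A sketch, assuming the failure is an interpreter-resolution problem (the master and app name below are placeholders):

import os
import sys

# Assumption: the worker cannot start because Spark does not resolve the
# right python executable. Point both driver and workers at the current
# interpreter (e.g. ...\Anaconda3\envs\bigdata-lab\python.exe) BEFORE any
# SparkContext exists.
os.environ["PYSPARK_PYTHON"] = sys.executable
os.environ["PYSPARK_DRIVER_PYTHON"] = sys.executable

from pyspark import SparkConf, SparkContext

sc = SparkContext(conf=SparkConf().setMaster("local[*]").setAppName("count-test"))
rdd = sc.textFile("test_file.txt").cache()
print(rdd.count())  # should print 3 once the worker can connect back

If that does not help, the other usual suspects for this message are a driver/worker Python version mismatch or a local firewall blocking the loopback socket the worker uses to call back into the JVM.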