Python 2.7 RDD not working after reading in a file

Tags: python-2.7, apache-spark, rdd

I have been searching for a solution but have not found anything even remotely related to this.

I am trying to read a CSV file with sc.textFile() in Spark (version 1.5.1). Reading the file in does not raise an error, but any action other than .count() fails, for example .take(2). If I instead read the file with pandas and convert it to a Spark DataFrame, everything works fine. Below is my code and the error I get when I run rdd.take(2).

    import pandas as pd
    from pyspark.sql import SQLContext
    sqlContext = SQLContext(sc)

    data_pd = pd.read_csv('Data_Cortex_Nuclear.csv')
    df = pd.DataFrame(data_pd)
    df = sqlContext.createDataFrame(df)

    rdd = sc.textFile('Data_Cortex_Nuclear.csv')
    rdd.take(2)  # raises the error below
    df.take(2)   # works fine



    ---------------------------------------------------------------------------
Py4JJavaError                             Traceback (most recent call last)
<ipython-input-168-8a14f6c71652> in <module>()
      8 
      9 rdd = sc.textFile('Data_Cortex_Nuclear.csv')
---> 10 rdd.take(2)
     11 df.take(2)

C:\Users\Anna\Downloads\spark-1.5.1-bin-hadoop2.6\spark-1.5.1-bin-hadoop2.6\python\pyspark\rdd.pyc in take(self, num)
   1297 
   1298             p = range(partsScanned, min(partsScanned + numPartsToTry, totalParts))
-> 1299             res = self.context.runJob(self, takeUpToNumLeft, p)
   1300 
   1301             items += res

C:\Users\Anna\Downloads\spark-1.5.1-bin-hadoop2.6\spark-1.5.1-bin-hadoop2.6\python\pyspark\context.pyc in runJob(self, rdd, partitionFunc, partitions, allowLocal)
    914         # SparkContext#runJob.
    915         mappedRDD = rdd.mapPartitions(partitionFunc)
--> 916         port = self._jvm.PythonRDD.runJob(self._jsc.sc(), mappedRDD._jrdd, partitions)
    917         return list(_load_from_socket(port, mappedRDD._jrdd_deserializer))
    918 

C:\Users\Anna\Downloads\spark-1.5.1-bin-hadoop2.6\spark-1.5.1-bin-hadoop2.6\python\lib\py4j-0.8.2.1-src.zip\py4j\java_gateway.py in __call__(self, *args)
    536         answer = self.gateway_client.send_command(command)
    537         return_value = get_return_value(answer, self.gateway_client,
--> 538                 self.target_id, self.name)
    539 
    540         for temp_arg in temp_args:

C:\Users\Anna\Downloads\spark-1.5.1-bin-hadoop2.6\spark-1.5.1-bin-hadoop2.6\python\pyspark\sql\utils.pyc in deco(*a, **kw)
     34     def deco(*a, **kw):
     35         try:
---> 36             return f(*a, **kw)
     37         except py4j.protocol.Py4JJavaError as e:
     38             s = e.java_exception.toString()

C:\Users\Anna\Downloads\spark-1.5.1-bin-hadoop2.6\spark-1.5.1-bin-hadoop2.6\python\lib\py4j-0.8.2.1-src.zip\py4j\protocol.py in get_return_value(answer, gateway_client, target_id, name)
    298                 raise Py4JJavaError(
    299                     'An error occurred while calling {0}{1}{2}.\n'.
--> 300                     format(target_id, '.', name), value)
    301             else:
    302                 raise Py4JError(

Py4JJavaError: An error occurred while calling z:org.apache.spark.api.python.PythonRDD.runJob.
: org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 78.0 failed 1 times, most recent failure: Lost task 0.0 in stage 78.0 (TID 78, localhost): java.net.SocketException: Connection reset by peer: socket write error
    at java.net.SocketOutputStream.socketWrite0(Native Method)
    at java.net.SocketOutputStream.socketWrite(Unknown Source)
    at java.net.SocketOutputStream.write(Unknown Source)
    at java.io.BufferedOutputStream.flushBuffer(Unknown Source)
    at java.io.BufferedOutputStream.write(Unknown Source)
    at java.io.DataOutputStream.write(Unknown Source)
    at java.io.FilterOutputStream.write(Unknown Source)
    at org.apache.spark.api.python.PythonRDD$.writeUTF(PythonRDD.scala:622)
    at org.apache.spark.api.python.PythonRDD$.org$apache$spark$api$python$PythonRDD$$write$1(PythonRDD.scala:442)
    at org.apache.spark.api.python.PythonRDD$$anonfun$writeIteratorToStream$1.apply(PythonRDD.scala:452)
    at org.apache.spark.api.python.PythonRDD$$anonfun$writeIteratorToStream$1.apply(PythonRDD.scala:452)
    at scala.collection.Iterator$class.foreach(Iterator.scala:727)
    at scala.collection.AbstractIterator.foreach(Iterator.scala:1157)
    at org.apache.spark.api.python.PythonRDD$.writeIteratorToStream(PythonRDD.scala:452)
    at org.apache.spark.api.python.PythonRunner$WriterThread$$anonfun$run$3.apply(PythonRDD.scala:280)
    at org.apache.spark.util.Utils$.logUncaughtExceptions(Utils.scala:1699)
    at org.apache.spark.api.python.PythonRunner$WriterThread.run(PythonRDD.scala:239)

Driver stacktrace:
    at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$failJobAndIndependentStages(DAGScheduler.scala:1283)
    at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1271)
    at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1270)
    at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
    at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:47)
    at org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:1270)
    at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:697)
    at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:697)
    at scala.Option.foreach(Option.scala:236)
    at org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(DAGScheduler.scala:697)
    at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.doOnReceive(DAGScheduler.scala:1496)
    at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1458)
    at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1447)
    at org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:48)
    at org.apache.spark.scheduler.DAGScheduler.runJob(DAGScheduler.scala:567)
    at org.apache.spark.SparkContext.runJob(SparkContext.scala:1822)
    at org.apache.spark.SparkContext.runJob(SparkContext.scala:1835)
    at org.apache.spark.SparkContext.runJob(SparkContext.scala:1848)
    at org.apache.spark.api.python.PythonRDD$.runJob(PythonRDD.scala:393)
    at org.apache.spark.api.python.PythonRDD.runJob(PythonRDD.scala)
    at sun.reflect.GeneratedMethodAccessor74.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(Unknown Source)
    at java.lang.reflect.Method.invoke(Unknown Source)
    at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:231)
    at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:379)
    at py4j.Gateway.invoke(Gateway.java:259)
    at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:133)
    at py4j.commands.CallCommand.execute(CallCommand.java:79)
    at py4j.GatewayConnection.run(GatewayConnection.java:207)
    at java.lang.Thread.run(Unknown Source)
Caused by: java.net.SocketException: Connection reset by peer: socket write error
    at java.net.SocketOutputStream.socketWrite0(Native Method)
    at java.net.SocketOutputStream.socketWrite(Unknown Source)
    at java.net.SocketOutputStream.write(Unknown Source)
    at java.io.BufferedOutputStream.flushBuffer(Unknown Source)
    at java.io.BufferedOutputStream.write(Unknown Source)
    at java.io.DataOutputStream.write(Unknown Source)
    at java.io.FilterOutputStream.write(Unknown Source)
    at org.apache.spark.api.python.PythonRDD$.writeUTF(PythonRDD.scala:622)
    at org.apache.spark.api.python.PythonRDD$.org$apache$spark$api$python$PythonRDD$$write$1(PythonRDD.scala:442)
    at org.apache.spark.api.python.PythonRDD$$anonfun$writeIteratorToStream$1.apply(PythonRDD.scala:452)
    at org.apache.spark.api.python.PythonRDD$$anonfun$writeIteratorToStream$1.apply(PythonRDD.scala:452)
    at scala.collection.Iterator$class.foreach(Iterator.scala:727)
    at scala.collection.AbstractIterator.foreach(Iterator.scala:1157)
    at org.apache.spark.api.python.PythonRDD$.writeIteratorToStream(PythonRDD.scala:452)
    at org.apache.spark.api.python.PythonRunner$WriterThread$$anonfun$run$3.apply(PythonRDD.scala:280)
    at org.apache.spark.util.Utils$.logUncaughtExceptions(Utils.scala:1699)
    at org.apache.spark.api.python.PythonRunner$WriterThread.run(PythonRDD.scala:239)
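
In case it is useful, here is an untested sketch of the workaround I am considering: building the RDD from the pandas frame with sc.parallelize() instead of sc.textFile(), assuming the CSV fits in driver memory (the to_dict conversion is just one way to get plain Python rows). I would still like to understand why sc.textFile() fails here.

    import pandas as pd

    # Untested sketch: skip sc.textFile() entirely and build the RDD
    # from the pandas frame, assuming the CSV fits in driver memory.
    data_pd = pd.read_csv('Data_Cortex_Nuclear.csv')
    rows = data_pd.to_dict(orient='records')   # list of plain Python dicts, one per CSV row
    rdd = sc.parallelize(rows)
    rdd.take(2)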