Can't write a parquet file in pyspark


I am trying to write a 'pyspark.sql.dataframe.DataFrame' to a parquet file.

My code is -

from pyspark import sql
import json
from pyspark.context import SparkContext
from pyspark.sql.session import SparkSession
sc = SparkContext('local')
spark = SparkSession(sc)

from pyspark.sql import functions as F
daf = spark.read.json('C:/Users//rr3628911523729/Downloads/JSONS/people.json', multiLine=True)
print type(daf)
daf.write.parquet("E:/hi", mode='overwrite')
But I am getting the error below, and I cannot figure out what is causing it. What could be the reason for this error? The folder in question has write permission.

    <class 'pyspark.sql.dataframe.DataFrame'>
---------------------------------------------------------------------------
Py4JJavaError                             Traceback (most recent call last)
<ipython-input-14-77b4dcca60c9> in <module>()
      2 daf=spark.read.json('C:/Users//bh388709/Downloads/JSONS/people.json', multiLine=True)
      3 print type(daf)
----> 4 daf.write.parquet("E:/hi",mode='overwrite')

c:\python27\lib\site-packages\pyspark\sql\readwriter.pyc in parquet(self, path, mode, partitionBy, compression)
    800             self.partitionBy(partitionBy)
    801         self._set_opts(compression=compression)
--> 802         self._jwrite.parquet(path)
    803 
    804     @since(1.6)

c:\python27\lib\site-packages\py4j\java_gateway.pyc in __call__(self, *args)
   1158         answer = self.gateway_client.send_command(command)
   1159         return_value = get_return_value(
-> 1160             answer, self.gateway_client, self.target_id, self.name)
   1161 
   1162         for temp_arg in temp_args:

c:\python27\lib\site-packages\pyspark\sql\utils.pyc in deco(*a, **kw)
     61     def deco(*a, **kw):
     62         try:
---> 63             return f(*a, **kw)
     64         except py4j.protocol.Py4JJavaError as e:
     65             s = e.java_exception.toString()

c:\python27\lib\site-packages\py4j\protocol.pyc in get_return_value(answer, gateway_client, target_id, name)
    318                 raise Py4JJavaError(
    319                     "An error occurred while calling {0}{1}{2}.\n".
--> 320                     format(target_id, ".", name), value)
    321             else:
    322                 raise Py4JError(

Py4JJavaError: An error occurred while calling o407.parquet.
: org.apache.spark.SparkException: Job aborted.
    at org.apache.spark.sql.execution.datasources.FileFormatWriter$.write(FileFormatWriter.scala:224)
    at org.apache.spark.sql.execution.datasources.InsertIntoHadoopFsRelationCommand.run(InsertIntoHadoopFsRelationCommand.scala:154)
    at org.apache.spark.sql.execution.command.DataWritingCommandExec.sideEffectResult$lzycompute(commands.scala:104)
    at org.apache.spark.sql.execution.command.DataWritingCommandExec.sideEffectResult(commands.scala:102)
    at org.apache.spark.sql.execution.command.DataWritingCommandExec.doExecute(commands.scala:122)
    at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:131)
    at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:127)
    at org.apache.spark.sql.execution.SparkPlan$$anonfun$executeQuery$1.apply(SparkPlan.scala:155)
    at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
    at org.apache.spark.sql.execution.SparkPlan.executeQuery(SparkPlan.scala:152)
    at org.apache.spark.sql.execution.SparkPlan.execute(SparkPlan.scala:127)
    at org.apache.spark.sql.execution.QueryExecution.toRdd$lzycompute(QueryExecution.scala:80)
    at org.apache.spark.sql.execution.QueryExecution.toRdd(QueryExecution.scala:80)
    at org.apache.spark.sql.DataFrameWriter$$anonfun$runCommand$1.apply(DataFrameWriter.scala:654)
    at org.apache.spark.sql.DataFrameWriter$$anonfun$runCommand$1.apply(DataFrameWriter.scala:654)
    at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:77)
    at org.apache.spark.sql.DataFrameWriter.runCommand(DataFrameWriter.scala:654)
    at org.apache.spark.sql.DataFrameWriter.saveToV1Source(DataFrameWriter.scala:273)
    at org.apache.spark.sql.DataFrameWriter.save(DataFrameWriter.scala:267)
    at org.apache.spark.sql.DataFrameWriter.save(DataFrameWriter.scala:225)
    at org.apache.spark.sql.DataFrameWriter.parquet(DataFrameWriter.scala:547)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)    
    at sun.reflect.NativeMethodAccessorImpl.invoke(Unknown Source)    
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(Unknown Source)
    at java.lang.reflect.Method.invoke(Unknown Source)    
    at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:244)    
    at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:357)    
    at py4j.Gateway.invoke(Gateway.java:282)    
    at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:132)
    at py4j.commands.CallCommand.execute(CallCommand.java:79)
    at py4j.GatewayConnection.run(GatewayConnection.java:214)
    at java.lang.Thread.run(Unknown Source)
Caused by: org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 13.0 failed 1 times, most recent failure: Lost task 0.0 in stage 13.0 (TID 13, localhost, executor driver): org.apache.spark.SparkException: Task failed while writing rows.    
    at org.apache.spark.sql.execution.datasources.FileFormatWriter$.org$apache$spark$sql$execution$datasources$FileFormatWriter$$executeTask(FileFormatWriter.scala:285)
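For comparison, below is a minimal sketch of the same read/write using the SparkSession builder API and an explicit file:/// URI for the output path. The input path and the E:/hi output folder are taken from the code above; the builder options are assumptions, not part of the original question. On Windows, a local Spark installation generally also needs HADOOP_HOME pointing at a directory containing winutils.exe before it can write out files, so a failure at the write stage is not necessarily a folder-permission problem.

from pyspark.sql import SparkSession

# Minimal sketch, assuming Spark 2.x running locally on Windows.
spark = SparkSession.builder \
    .master('local[*]') \
    .appName('write-parquet-test') \
    .getOrCreate()

# multiLine=True lets a single JSON record span several lines.
daf = spark.read.json('C:/Users//rr3628911523729/Downloads/JSONS/people.json', multiLine=True)

# An explicit file:/// URI makes it unambiguous that the local filesystem is the target.
daf.write.parquet('file:///E:/hi', mode='overwrite')

If this builder-based variant fails in the same way, the winutils/HADOOP_HOME setup is a more likely culprit than the permissions on E:/hi.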
