How to efficiently export association rules generated with pyspark to a .CSV or .XLSX file in Python


Following up on this problem: I am trying to export the association rules generated by FPGrowth in pyspark to a .csv file in Python. After running for almost 8-10 hours, an error occurs. My machine has enough disk space and memory.

    Association Rule output is like this:

    Antecedent           Consequent      Lift
    ['A','B']              ['C']           1
The code is in the link; just one more line was added:

    ar = ar.coalesce(24)
    ar.write.csv('/output', header=True)
Configuration used:

    from pyspark import SparkConf, SparkContext
    from pyspark.sql import SparkSession

    conf = SparkConf().setAppName("App")
    conf = (conf.setMaster('local[*]')
            .set('spark.executor.memory', '200G')
            .set('spark.driver.memory', '700G')
            .set('spark.driver.maxResultSize', '400G'))  #8,45,10
    sc = SparkContext.getOrCreate(conf=conf)
    spark = SparkSession(sc)
This keeps running and eats up 1000 GB of the C:/ drive.

Is there any efficient way to save the output in .CSV or .XLSX format?

The error is:

    Py4JJavaError: An error occurred while calling o207.csv.
    org.apache.spark.SparkException: Job aborted.
    at org.apache.spark.sql.execution.datasources.FileFormatWriter$.write(FileFormatWriter.scala:198)
    at org.apache.spark.sql.execution.datasources.InsertIntoHadoopFsRelationCommand.run(InsertIntoHadoopFsRelationCommand.scala:159)
    at org.apache.spark.sql.execution.command.DataWritingCommandExec.sideEffectResult$lzycompute(commands.scala:104)
    at org.apache.spark.sql.execution.command.DataWritingCommandExec.sideEffectResult(commands.scala:102)
    at org.apache.spark.sql.execution.command.DataWritingCommandExec.doExecute(commands.scala:122)
    at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:131)
    at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:127)
    at org.apache.spark.sql.execution.SparkPlan$$anonfun$executeQuery$1.apply(SparkPlan.scala:155)
    at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
    at org.apache.spark.sql.execution.SparkPlan.executeQuery(SparkPlan.scala:152)
    at org.apache.spark.sql.execution.SparkPlan.execute(SparkPlan.scala:127)
    at org.apache.spark.sql.execution.QueryExecution.toRdd$lzycompute(QueryExecution.scala:80)
    at org.apache.spark.sql.execution.QueryExecution.toRdd(QueryExecution.scala:80)
    at org.apache.spark.sql.DataFrameWriter$$anonfun$runCommand$1.apply(DataFrameWriter.scala:676)
    at org.apache.spark.sql.DataFrameWriter$$anonfun$runCommand$1.apply(DataFrameWriter.scala:676)
    at org.apache.spark.sql.execution.SQLExecution$$anonfun$withNewExecutionId$1.apply(SQLExecution.scala:78)
    at org.apache.spark.sql.execution.SQLExecution$.withSQLConfPropagated(SQLExecution.scala:125)
    at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:73)
    at org.apache.spark.sql.DataFrameWriter.runCommand(DataFrameWriter.scala:676)
    at org.apache.spark.sql.DataFrameWriter.saveToV1Source(DataFrameWriter.scala:285)
    at org.apache.spark.sql.DataFrameWriter.save(DataFrameWriter.scala:271)
    at org.apache.spark.sql.DataFrameWriter.save(DataFrameWriter.scala:229)
    at org.apache.spark.sql.DataFrameWriter.csv(DataFrameWriter.scala:664)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(Unknown Source)
    at java.lang.reflect.Method.invoke(Unknown Source)
    at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:244)
    at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:357)
    at py4j.Gateway.invoke(Gateway.java:282)
    at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:132)
    at py4j.commands.CallCommand.execute(CallCommand.java:79)
    at py4j.GatewayConnection.run(GatewayConnection.java:238)
    at java.lang.Thread.run(Unknown Source)
    Caused by: org.apache.spark.SparkException: Job aborted due to stage failure: Task 10 in stage 9.0 failed 1 times, most recent failure: Lost task 10.0 in stage 9.0 (TID 226, localhost, executor driver): java.io.IOException: There is not enough space on the disk
    at java.io.FileOutputStream.writeBytes(Native Method)

As already stated in the comments, you should try to avoid toPandas(), because that function loads all of the data onto the driver. You can write the data out with pyspark instead, but before writing it to csv you have to convert the array columns (antecedent and consequent) to a different format, since arrays are not supported. One way to cast the columns to a supported type (such as string) is:

    import pyspark.sql.functions as F
    from pyspark.ml.fpm import FPGrowth

    df = spark.createDataFrame([
        (0, [1, 2, 5]),
        (1, [1, 2, 3, 5]),
        (2, [1, 2])
    ], ["id", "items"])

    fpGrowth = FPGrowth(itemsCol="items", minSupport=0.5, minConfidence=0.6)
    model = fpGrowth.fit(df)

    # Cast the array columns to array<string> and join their elements with '-'
    ar = model.associationRules.withColumn('antecedent', F.concat_ws('-', F.col("antecedent").cast("array<string>"))) \
        .withColumn('consequent', F.concat_ws('-', F.col("consequent").cast("array<string>")))
    ar.show()
Output:

    +----------+----------+------------------+----+
    |antecedent|consequent|        confidence|lift|
    +----------+----------+------------------+----+
|         5|         1|               1.0| 1.0| 
|         5|         2|               1.0| 1.0| 
|       1-2|         5|0.6666666666666666| 1.0| 
|       5-2|         1|               1.0| 1.0| 
|       5-1|         2|               1.0| 1.0| 
|         2|         1|               1.0| 1.0| 
|         2|         5|0.6666666666666666| 1.0| 
|         1|         2|               1.0| 1.0| 
|         1|         5|0.6666666666666666| 1.0| 
+----------+----------+------------------+----+
You can now write the data to csv:

    ar.write.csv('/bla', header=True)
This creates one csv file per partition. You can change the number of partitions with:

    ar = ar.coalesce(1)

If spark cannot write the csv files due to memory issues, try a different number of partitions (before you call ar.write) and, if necessary, concatenate the resulting files with another tool afterwards.
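For illustration, one way such a post-processing step could look is sketched below. This is not part of the original answer: the paths ('/bla', 'rules.csv', 'rules.xlsx') are placeholder assumptions, and it assumes pandas (plus openpyxl for .xlsx output) is available:

    # Sketch: combine the part files Spark wrote into a single .csv (and optionally .xlsx).
    # Paths are placeholders; adjust them to the actual output directory.
    import glob
    import pandas as pd

    # Read every part file from the Spark output directory into one DataFrame.
    parts = sorted(glob.glob('/bla/part-*.csv'))
    combined = pd.concat((pd.read_csv(p) for p in parts), ignore_index=True)

    combined.to_csv('rules.csv', index=False)     # single csv file
    combined.to_excel('rules.xlsx', index=False)  # requires openpyxl; .xlsx is limited to ~1,048,576 rows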

You should not use pandas to create the csv file. Just use the pyspark DataFrameWriter, e.g. ar.write.csv('mycsv.csv'). This will create a lot of csv files; you can control their number with coalesce().

@cronoik Can you say more? What should go inside ar.coalesce()?

@cronoik It does not work, because the output comes as lists. Still getting the error as .... Py4JJavaError: An error occurred while calling o96.csv: org.apache.spark.SparkException: Job aborted. org.apache.spark.sql.execution.datasources.FileFormatWriter$.write(FileFormatWriter.scala:198) org.apache.spark.sql.execution.datasources.InsertIntoHadoopFsRelationCommand.run(InsertIntoHadoopFsRelationCommand.scala:159) My data has 1,002,653 rows and 160 columns, and I am running FP-growth on them.

Could you add the complete error message to your question? Have you tried different coalesce values?

It fails at the line ar.write.csv('/bla', header=True), so different coalesce values cannot be applied.

The disk space that gets used up is the shuffle partitions, because spark cannot keep all of the computed data in memory. It does not crash while writing the csv; it crashes during the computation. How much RAM do you have?
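As an editorial aside (not from the original thread): if, as the last comment suggests, the drive is being filled by shuffle/spill files during the computation, one thing worth checking is where Spark writes its temporary files. A minimal sketch, assuming a second drive with more free space exists; the path and memory value are placeholders, not taken from the question:

    # Sketch only: redirect Spark's temporary/shuffle files away from C:/.
    # 'D:/spark-tmp' and '64G' are placeholder values.
    from pyspark import SparkConf, SparkContext
    from pyspark.sql import SparkSession

    conf = (SparkConf()
            .setAppName("App")
            .setMaster('local[*]')
            .set('spark.driver.memory', '64G')        # in local mode the driver JVM does the work
            .set('spark.local.dir', 'D:/spark-tmp'))  # scratch space for shuffle/spill files
    sc = SparkContext.getOrCreate(conf=conf)
    spark = SparkSession(sc)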



     The progress:
     19/07/15 14:12:32 WARN TaskSetManager: Stage 1 contains a task of very large size (26033 KB). The maximum recommended task size is 100 KB.
     19/07/15 14:12:33 WARN TaskSetManager: Stage 2 contains a task of very large size (26033 KB). The maximum recommended task size is 100 KB.
     19/07/15 14:12:38 WARN TaskSetManager: Stage 4 contains a task of very large size (26033 KB). The maximum recommended task size is 100 KB.
     [Stage 5:>                (0 + 24) / 24][Stage 6:>                 (0 + 0) / 24][I 14:14:02.723 NotebookApp] Saving file at /app1.ipynb
     [Stage 5:==>              (4 + 20) / 24][Stage 6:===>              (4 + 4) / 24]