
'spark_write_csv' no longer works (with sparklyr)


The spark_write_csv function no longer works, probably because I upgraded my Spark version. Can anyone help?

Here is a code example, followed by the error message:

    library(sparklyr)
    library(dplyr)
    # Connect to a local Spark instance
    spark_conn <- spark_connect(master = "local")
    # Copy the iris data frame into Spark, then write it back out as CSV
    iris <- copy_to(spark_conn, iris, overwrite = TRUE)
    spark_write_csv(iris, path = "iris.csv")

Error: org.apache.spark.SparkException: Job aborted.
    at org.apache.spark.sql.execution.datasources.FileFormatWriter$.write(FileFormatWriter.scala:231)
    at org.apache.spark.sql.execution.datasources.InsertIntoHadoopFsRelationCommand.run(InsertIntoHadoopFsRelationCommand.scala:188)
    at org.apache.spark.sql.execution.command.DataWritingCommandExec.sideEffectResult$lzycompute(commands.scala:108)
    at org.apache.spark.sql.execution.command.DataWritingCommandExec.sideEffectResult(commands.scala:106)
    at org.apache.spark.sql.execution.command.DataWritingCommandExec.doExecute(commands.scala:131)
    at org.apache.spark.sql.execution.SparkPlan.$anonfun$execute$1(SparkPlan.scala:180)
    at org.apache.spark.sql.execution.SparkPlan.$anonfun$executeQuery$1(SparkPlan.scala:218)
    at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
    at org.apache.spark.sql.execution.SparkPlan.executeQuery(SparkPlan.scala:215)
    at org.apache.spark.sql.execution.SparkPlan.execute(SparkPlan.scala:176)
    at org.apache.spark.sql.execution.QueryExecution.toRdd$lzycompute(QueryExecution.scala:132)
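
The "org.apache.spark.SparkException: Job aborted" raised in FileFormatWriter is a generic wrapper; the real cause (for example a Java/Spark version mismatch after an upgrade, or a missing Hadoop winutils on Windows) is usually buried further down in the Spark log. A minimal diagnostic sketch using sparklyr's built-in helpers and the spark_conn from the snippet above (the n = 200 tail length is an arbitrary choice):

    # Check which Spark version the connection is actually running,
    # then scan the driver log for the underlying "Caused by:" entry.
    spark_version(spark_conn)
    spark_log(spark_conn, n = 200)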
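Separately, one cause worth ruling out: spark_write_csv writes a directory of part files at the given path, and rerunning the snippet against a path that already exists makes Spark abort the write by default. A hedged sketch using the function's mode argument:

    # If "iris.csv" is left over from a previous run, overwrite it
    # instead of aborting; mode is a standard spark_write_csv argument.
    spark_write_csv(iris, path = "iris.csv", mode = "overwrite")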