spark_write_csv no longer working (with sparklyr)
Tags: r, apache-spark, sparklyr

The spark_write_csv function no longer works, possibly because I upgraded my Spark version. Can anyone help? Below is a code sample and the error message:
library(sparklyr)
library(dplyr)
spark_conn <- spark_connect(master = "local")
iris <- copy_to(spark_conn, iris, overwrite = TRUE)
spark_write_csv(iris, path = "iris.csv")
Error: org.apache.spark.SparkException: Job aborted.
at org.apache.spark.sql.execution.datasources.FileFormatWriter$.write(FileFormatWriter.scala:231)
at org.apache.spark.sql.execution.datasources.InsertIntoHadoopFsRelationCommand.run(InsertIntoHadoopFsRelationCommand.scala:188)
at org.apache.spark.sql.execution.command.DataWritingCommandExec.sideEffectResult$lzycompute(commands.scala:108)
at org.apache.spark.sql.execution.command.DataWritingCommandExec.sideEffectResult(commands.scala:106)
at org.apache.spark.sql.execution.command.DataWritingCommandExec.doExecute(commands.scala:131)
at org.apache.spark.sql.execution.SparkPlan.$anonfun$execute$1(SparkPlan.scala:180)
at org.apache.spark.sql.execution.SparkPlan.$anonfun$executeQuery$1(SparkPlan.scala:218)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
at org.apache.spark.sql.execution.SparkPlan.executeQuery(SparkPlan.scala:215)
at org.apache.spark.sql.execution.SparkPlan.execute(SparkPlan.scala:176)
at org.apache.spark.sql.execution.QueryExecution.toRdd$lzycompute(QueryExecution.scala:132)
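For reference, `Job aborted` from `FileFormatWriter` after a Spark upgrade commonly points to an environment issue (for example, Spark 3.x requires Java 8 or 11, and a newer JDK can abort write jobs) or to the destination path already existing. The sketch below is a hedged diagnostic, not a confirmed fix; it shows how to check the versions in use, write with an explicit save mode, and pull the driver log for the underlying cause.

```r
library(sparklyr)
library(dplyr)

spark_conn <- spark_connect(master = "local")

# Check which Spark version is actually running; also verify that the
# JDK on the machine is one this Spark release supports.
spark_version(spark_conn)

iris_tbl <- copy_to(spark_conn, iris, overwrite = TRUE)

# spark_write_csv writes a *directory* of part files, not a single file;
# mode = "overwrite" avoids a failure when the path already exists.
spark_write_csv(iris_tbl, path = "iris_csv", mode = "overwrite")

# The R-side error truncates the Java stack trace; the driver log
# usually contains the root "Caused by:" line.
spark_log(spark_conn, n = 50)
```

If the write still aborts, the `Caused by:` entry in `spark_log()` output identifies the real failure, which the truncated R error above does not show.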