Spark 2.2.0: writing Avro fails
I'm fairly new to Spark, accessing it from SparkR, and trying to write an Avro file to disk, but I keep getting an error saying the task failed while writing rows. I'm running SparkR 2.2.0-SNAPSHOT with Scala version 2.11.8, and I start my SparkR session like this:
sparkR.session(master = "spark://[some ip here]:7077",
               appName = "nateSparkRAVROTest",
               sparkHome = "/home/ubuntu/spark",
               enableHiveSupport = FALSE,
               sparkConfig = list(spark.executor.memory = "28g"),
               sparkPackages = c("org.apache.hadoop:hadoop-aws:2.7.3",
                                 "com.amazonaws:aws-java-sdk-pom:1.10.34",
                                 "com.databricks:spark-avro_2.11:3.2.0"))
I'm wondering whether I need to set up or install anything special. I include the com.databricks:spark-avro_2.11:3.2.0 package in the session startup command, and I can see it being downloaded when the session starts. I'm trying to write the Avro file with this command:
SparkR::write.df(myFormalClassSparkDataFrameObject, path = "/home/nathan/SparkRAVROTest/", source = "com.databricks.spark.avro", mode="overwrite")
I'm hoping someone with more SparkR experience has run into this error and can offer some insight. Thanks for your time.

Kind regards,
Nate

I was able to get it working by using com.databricks:spark-avro_2.11:4.0.0 in the Spark config. An example SparkR configuration is as follows:
SparkR::sparkR.session(master = "local[*]",
                       sparkConfig = list(
                         spark.driver.memory = "14g",
                         spark.hadoop.mapreduce.fileoutputcommitter.algorithm.version = "2",
                         spark.hadoop.mapreduce.fileoutputcommitter.marksuccessfuljobs = "FALSE",
                         spark.kryoserializer.buffer.max = "1024m",
                         spark.speculation = "FALSE",
                         spark.referenceTracking = "FALSE"
                       ),
                       sparkPackages = c("org.apache.hadoop:hadoop-aws:2.7.3",
                                         "com.amazonaws:aws-java-sdk:1.7.4",
                                         "com.amazonaws:aws-java-sdk-pom:1.11.221",
                                         "com.databricks:spark-avro_2.11:4.0.0",
                                         "org.apache.httpcomponents:httpclient:4.5.2"))
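With a session like that in place, a round trip (write Avro, then read it back) can be sketched as follows. This is a minimal illustration, not a tested recipe: the `faithful` data set is just a stand-in for your own SparkDataFrame, and the output path is a placeholder.

```r
library(SparkR)

# Build a SparkDataFrame from a local R data.frame (placeholder data)
df <- createDataFrame(faithful)

# Write it out as Avro via the Databricks spark-avro package
# (the source string matches the package loaded in sparkPackages above)
write.df(df, path = "/tmp/sparkr-avro-test",
         source = "com.databricks.spark.avro", mode = "overwrite")

# Read it back to confirm the files are valid Avro
df2 <- read.df("/tmp/sparkr-avro-test", source = "com.databricks.spark.avro")
head(df2)
```

Note that this assumes the spark-avro package version matches your Spark/Scala build; as the answer above suggests, spark-avro_2.11:4.0.0 pairs with Spark 2.x on Scala 2.11, while 3.2.0 targets earlier 2.x releases.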