Apache Spark: Simple ETL job in AWS Glue says "File already exists"

Tags: apache-spark, aws-glue

We are evaluating AWS Glue for a big data project that involves some ETL. We added a crawler, which correctly picks up a CSV file from S3. Initially, we simply want to transform that CSV to JSON and drop the file in another S3 location (same bucket, different path).

We used the script provided by AWS (no custom script here) and just mapped all the columns.
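For context, the AWS-generated job script looks roughly like this. This is a sketch reconstructed from the traceback below: the sink call is copied from it, while the database and table names and the column mapping are hypothetical placeholders.

import sys
from awsglue.transforms import ApplyMapping
from awsglue.utils import getResolvedOptions
from pyspark.context import SparkContext
from awsglue.context import GlueContext
from awsglue.job import Job

args = getResolvedOptions(sys.argv, ['JOB_NAME'])
glueContext = GlueContext(SparkContext())
job = Job(glueContext)
job.init(args['JOB_NAME'], args)

# Source table created by the crawler (database/table names are placeholders)
datasource0 = glueContext.create_dynamic_frame.from_catalog(database="mydb", table_name="tcw_entries", transformation_ctx="datasource0")

# Map all columns 1:1 (the real generated script lists every column here)
applymapping1 = ApplyMapping.apply(frame=datasource0, mappings=[("col0", "string", "col0", "string")], transformation_ctx="applymapping1")

# The sink call from line 30 of the traceback -- this is where the job fails
datasink2 = glueContext.write_dynamic_frame.from_options(frame=applymapping1, connection_type="s3", connection_options={"path": "s3://primero-viz/output/tcw_entries"}, format="json", transformation_ctx="datasink2")
job.commit()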

The destination folder is empty (the job was just created), but the job fails with "File already exists". The S3 location where we intend to drop the output was empty before the job started. However, after the error we do see two files there, but they appear to be partial files.

Any ideas what could be going on?

Here is the full stack trace:

Container: container_1513099821372_0007_01_000001 on ip-172-31-49-38.ec2.internal_8041
LogType:stdout
Log Upload Time:Tue Dec 12 19:12:04 +0000 2017
LogLength:8462
Log Contents:
Traceback (most recent call last):
  File "script_2017-12-12-19-11-08.py", line 30, in <module>
    datasink2 = glueContext.write_dynamic_frame.from_options(frame = applymapping1, connection_type = "s3", connection_options = {"path": "s3://primero-viz/output/tcw_entries"}, format = "json", transformation_ctx = "datasink2")
  File "/mnt/yarn/usercache/root/appcache/application_1513099821372_0007/container_1513099821372_0007_01_000001/PyGlue.zip/awsglue/dynamicframe.py", line 523, in from_options
  File "/mnt/yarn/usercache/root/appcache/application_1513099821372_0007/container_1513099821372_0007_01_000001/PyGlue.zip/awsglue/context.py", line 175, in write_dynamic_frame_from_options
  File "/mnt/yarn/usercache/root/appcache/application_1513099821372_0007/container_1513099821372_0007_01_000001/PyGlue.zip/awsglue/context.py", line 198, in write_from_options
  File "/mnt/yarn/usercache/root/appcache/application_1513099821372_0007/container_1513099821372_0007_01_000001/PyGlue.zip/awsglue/data_sink.py", line 32, in write
  File "/mnt/yarn/usercache/root/appcache/application_1513099821372_0007/container_1513099821372_0007_01_000001/PyGlue.zip/awsglue/data_sink.py", line 28, in writeFrame
  File "/mnt/yarn/usercache/root/appcache/application_1513099821372_0007/container_1513099821372_0007_01_000001/py4j-0.10.4-src.zip/py4j/java_gateway.py", line 1133, in __call__
  File "/mnt/yarn/usercache/root/appcache/application_1513099821372_0007/container_1513099821372_0007_01_000001/pyspark.zip/pyspark/sql/utils.py", line 63, in deco
  File "/mnt/yarn/usercache/root/appcache/application_1513099821372_0007/container_1513099821372_0007_01_000001/py4j-0.10.4-src.zip/py4j/protocol.py", line 319, in get_return_value
py4j.protocol.Py4JJavaError: An error occurred while calling o86.pyWriteDynamicFrame.
: org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 0.0 failed 4 times, most recent failure: Lost task 0.3 in stage 0.0 (TID 3, ip-172-31-63-141.ec2.internal, executor 1): java.io.IOException: File already exists:s3://primero-viz/output/tcw_entries/run-1513105898742-part-r-00000
    at com.amazon.ws.emr.hadoop.fs.s3n.S3NativeFileSystem.create(S3NativeFileSystem.java:604)
    at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:915)
    at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:896)
    at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:793)
    at com.amazon.ws.emr.hadoop.fs.EmrFileSystem.create(EmrFileSystem.java:176)
    at com.amazonaws.services.glue.hadoop.TapeOutputFormat.getRecordWriter(TapeOutputFormat.scala:65)
    at org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsNewAPIHadoopDataset$1$$anonfun$12.apply(PairRDDFunctions.scala:1119)
    at org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsNewAPIHadoopDataset$1$$anonfun$12.apply(PairRDDFunctions.scala:1102)
    at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:87)
    at org.apache.spark.scheduler.Task.run(Task.scala:99)
    at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:282)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    at java.lang.Thread.run(Thread.java:748)
Driver stacktrace:
    at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$failJobAndIndependentStages(DAGScheduler.scala:1435)
    at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1423)
    at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1422)
    at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
    at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:48)
    at org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:1422)
    at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:802)
    at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:802)
    at scala.Option.foreach(Option.scala:257)
    at org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(DAGScheduler.scala:802)
    at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.doOnReceive(DAGScheduler.scala:1650)
    at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1605)
    at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1594)
    at org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:48)
    at org.apache.spark.scheduler.DAGScheduler.runJob(DAGScheduler.scala:628)
    at org.apache.spark.SparkContext.runJob(SparkContext.scala:1918)
    at org.apache.spark.SparkContext.runJob(SparkContext.scala:1931)
    at org.apache.spark.SparkContext.runJob(SparkContext.scala:1951)
    at org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsNewAPIHadoopDataset$1.apply$mcV$sp(PairRDDFunctions.scala:1158)
    at org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsNewAPIHadoopDataset$1.apply(PairRDDFunctions.scala:1085)
    at org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsNewAPIHadoopDataset$1.apply(PairRDDFunctions.scala:1085)
    at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
    at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:112)
    at org.apache.spark.rdd.RDD.withScope(RDD.scala:362)
    at org.apache.spark.rdd.PairRDDFunctions.saveAsNewAPIHadoopDataset(PairRDDFunctions.scala:1085)
    at org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsNewAPIHadoopFile$2.apply$mcV$sp(PairRDDFunctions.scala:1005)
    at org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsNewAPIHadoopFile$2.apply(PairRDDFunctions.scala:996)
    at org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsNewAPIHadoopFile$2.apply(PairRDDFunctions.scala:996)
    at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
    at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:112)
    at org.apache.spark.rdd.RDD.withScope(RDD.scala:362)
    at org.apache.spark.rdd.PairRDDFunctions.saveAsNewAPIHadoopFile(PairRDDFunctions.scala:996)
    at com.amazonaws.services.glue.HadoopDataSink$$anonfun$2.apply$mcV$sp(DataSink.scala:192)
    at com.amazonaws.services.glue.HadoopDataSink.writeDynamicFrame(DataSink.scala:202)
    at com.amazonaws.services.glue.DataSink.pyWriteDynamicFrame(DataSink.scala:48)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:244)
    at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:357)
    at py4j.Gateway.invoke(Gateway.java:280)
    at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:132)
    at py4j.commands.CallCommand.execute(CallCommand.java:79)
    at py4j.GatewayConnection.run(GatewayConnection.java:214)
    at java.lang.Thread.run(Thread.java:748)
Caused by: java.io.IOException: File already exists:s3://primero-viz/output/tcw_entries/run-1513105898742-part-r-00000
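Judging from the trace, the write fails inside S3NativeFileSystem.create, which refuses to overwrite an existing key: when the first attempt of task 0 dies mid-write, its partial part file stays behind, and every retry then aborts with "File already exists" (which matches the partial files observed in the output path). One workaround is to bypass the Glue sink and write the DynamicFrame out through Spark's own DataFrameWriter in append mode: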
events.toDF().write.mode("append").partitionBy("partition_0", "partition_1").json(events_dir)
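Here events and events_dir are the answerer's own names. Adapted to this job, the failing datasink2 line would become something along these lines (a sketch; partition_0/partition_1 are omitted because they only exist if the crawler added such partition columns):

applymapping1.toDF().write.mode("append").json("s3://primero-viz/output/tcw_entries")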