
Python AWS Glue, output a single file with partitions


I have a Glue ETL script that takes a partitioned Athena table and outputs it to CSV. The table is partitioned on two criteria, unit and site. When the Glue job runs, it creates a different CSV file for each combination of the unit and site partitions. Instead, I would like a single output file containing all of the partitions, similar to the structure of the Athena table.

I have played around with datasource0.toDF().repartition(1), but I am not sure how it interfaces with the script AWS provides. I have done this before with Parquet tables, but this script is structured differently.

Note: I have removed most of the mappings from the script below.

import sys
from awsglue.transforms import *
from awsglue.utils import getResolvedOptions
from pyspark.context import SparkContext
from awsglue.context import GlueContext
from awsglue.job import Job

## @params: [JOB_NAME]
args = getResolvedOptions(sys.argv, ['JOB_NAME'])

sc = SparkContext()
glueContext = GlueContext(sc)
spark = glueContext.spark_session
job = Job(glueContext)
job.init(args['JOB_NAME'], args)
## @type: DataSource
## @args: [database = "testdata-2018-2019", table_name = "testdata", transformation_ctx = "datasource0"]
## @return: datasource0
## @inputs: []
datasource0 = glueContext.create_dynamic_frame.from_catalog(database = "formatted-test-2018-2019", table_name = "testdata", transformation_ctx = "datasource0")
datasource0.toDF().repartition(1)
## @type: ApplyMapping
## @args: [mapping = [("time", "string", "time", "string"), ("unit", "string", "unit", "string")], transformation_ctx = "applymapping1"]
## @return: applymapping1
## @inputs: [frame = datasource0]
applymapping1 = ApplyMapping.apply(frame = datasource0, mappings = [("time", "string", "time", "string"), ("`data.pv`", "double", "data.pv", "double"), ("site", "string", "site", "string"), ("unit", "string", "unit", "string")], transformation_ctx = "applymapping1")
## @type: DataSink
## @args: [connection_type = "s3", connection_options = {"path": "s3://testbucket/ParsedCSV-Data"}, format = "csv", transformation_ctx = "datasink2"]
## @return: datasink2
## @inputs: [frame = applymapping1]
datasink2 = glueContext.write_dynamic_frame.from_options(frame = applymapping1, connection_type = "s3", connection_options = {"path": "s3://buckettest/ParsedCSV-Data"}, format = "csv", transformation_ctx = "datasink2").repartition(1)
job.commit()

I would like to modify the above script so that it outputs a single CSV file that includes the partition columns. How can I do this?

You need to repartition the dynamic frame before writing it:

repartitioned1 = applymapping1.repartition(1)
datasink2 = glueContext.write_dynamic_frame.from_options(frame = repartitioned1, connection_type = "s3", connection_options = {"path": "s3://20182019testdata/ParsedCSV-Data"}, format = "csv", transformation_ctx = "datasink2")
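Note that repartition returns a new frame rather than modifying the frame in place, which is why the bare datasource0.toDF().repartition(1) in the question has no effect. If you go through a Spark DataFrame instead, the result has to be captured and converted back; a minimal sketch (the single_df name is just for illustration):

from awsglue.dynamicframe import DynamicFrame

# repartition() returns a new DataFrame; the result must be assigned or the call is a no-op
single_df = applymapping1.toDF().repartition(1)
repartitioned1 = DynamicFrame.fromDF(single_df, glueContext, "repartitioned1")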
As for including the partition columns in the output file itself, I don't think that is possible, since partition keys are encoded into the S3 path rather than written into the rows. As a workaround, you can copy the column into a new column with a different name:

from awsglue.dynamicframe import DynamicFrame

df = applymapping1.toDF()
# duplicate the partition column so its values survive in the rows, then collapse to one partition
repartitioned_with_new_column_df = df.withColumn("_column1", df["column1"]).repartition(1)
dyf = DynamicFrame.fromDF(repartitioned_with_new_column_df, glueContext, "enriched")
datasink2 = glueContext.write_dynamic_frame.from_options(frame = dyf, connection_type = "s3", connection_options = {"path": "s3://20182019testdata/ParsedCSV-Data", "partitionKeys": ["_column1"]}, format = "csv", transformation_ctx = "datasink2")
As in the first answer, you can use .coalesce(1). Something like this:

# coalesce(1) merges the existing partitions without a shuffle, so the write produces a single file
dynamic_frame = applymapping1.coalesce(1)
datasink2 = glueContext.write_dynamic_frame.from_options(frame = dynamic_frame, connection_type = "s3", connection_options = {"path": "s3://buckettest/ParsedCSV-Data"}, format = "csv", transformation_ctx = "datasink2")

It works for my case.

Update: .coalesce(1) does not work well with big files. My job has been running for an hour (and is still running).

Amazing solution!
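A plausible explanation for the slowdown, as context (my reading of Spark's behavior, not from the original answer): coalesce(1) avoids a shuffle, so Spark can collapse the entire upstream computation into a single task, whereas repartition(1) forces a shuffle that keeps the upstream stages parallel and only funnels the final write through one task. A minimal sketch of that alternative, reusing the sink options from the question:

# repartition(1) inserts a shuffle: upstream stages stay parallel and only the
# final write runs as a single task, which usually scales better than coalesce(1)
repartitioned = applymapping1.repartition(1)
datasink2 = glueContext.write_dynamic_frame.from_options(frame = repartitioned, connection_type = "s3", connection_options = {"path": "s3://buckettest/ParsedCSV-Data"}, format = "csv", transformation_ctx = "datasink2")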