Why does AWS Glue generate multiple JSON files?

I'm busy with a POC using AWS Glue to extract data from an RDS PostgreSQL table, and I want to produce a single JSON file.

I'm using the script below, but it keeps generating multiple files with 5 rows in each. How can I make it produce just 1 file?

import sys
from awsglue.transforms import *
from awsglue.utils import getResolvedOptions
from pyspark.context import SparkContext
from awsglue.context import GlueContext
from awsglue.job import Job

## @params: [JOB_NAME]
args = getResolvedOptions(sys.argv, ['JOB_NAME'])

sc = SparkContext()
glueContext = GlueContext(sc)
spark = glueContext.spark_session
job = Job(glueContext)
job.init(args['JOB_NAME'], args)

## @type: DataSource
## @args: [database = "temp-crawlerdb-xxxxx", table_name = "taxservice__3fa3bf8633994e1a827498190adbe56a_contingencyrunningtotal", transformation_ctx = "datasource0"]
## @return: datasource0
## @inputs: []
datasource0 = glueContext.create_dynamic_frame.from_catalog(database = "temp-crawlerdb-xxxxx", table_name = "taxservice__3fa3bf8633994e1a827498190adbe56a_contingencyrunningtotal", transformation_ctx = "datasource0")

## @type: ApplyMapping
## @args: [mapping = [("stake", "decimal(18,6)", "stake", "decimal(18,6)"), ("branchid", "long", "branchid", "long"), ("winningstake", "decimal(18,6)", "winningstake", "decimal(18,6)"), ("grossrevenue", "decimal(18,6)", "grossrevenue", "decimal(18,6)"), ("vatrate", "decimal(18,6)", "vatrate", "decimal(18,6)"), ("tmstamp", "timestamp", "tmstamp", "timestamp"), ("usrid", "string", "usrid", "string"), ("contingencyexternalreference", "string", "contingencyexternalreference", "string"), ("winnings", "decimal(18,6)", "winnings", "decimal(18,6)"), ("ggrtaxrate", "decimal(18,6)", "ggrtaxrate", "decimal(18,6)"), ("taxpayable", "decimal(18,6)", "taxpayable", "decimal(18,6)"), ("vatpayable", "decimal(18,6)", "vatpayable", "decimal(18,6)")], transformation_ctx = "applymapping1"]
## @return: applymapping1
## @inputs: [frame = datasource0]

applymapping1 = ApplyMapping.apply(frame = datasource0, mappings = [("stake", "decimal(18,6)", "stake", "decimal(18,6)"), ("branchid", "long", "branchid", "long"), ("winningstake", "decimal(18,6)", "winningstake", "decimal(18,6)"), ("grossrevenue", "decimal(18,6)", "grossrevenue", "decimal(18,6)"), ("vatrate", "decimal(18,6)", "vatrate", "decimal(18,6)"), ("tmstamp", "timestamp", "tmstamp", "timestamp"), ("usrid", "string", "usrid", "string"), ("contingencyexternalreference", "string", "contingencyexternalreference", "string"), ("winnings", "decimal(18,6)", "winnings", "decimal(18,6)"), ("ggrtaxrate", "decimal(18,6)", "ggrtaxrate", "decimal(18,6)"), ("taxpayable", "decimal(18,6)", "taxpayable", "decimal(18,6)"), ("vatpayable", "decimal(18,6)", "vatpayable", "decimal(18,6)")], transformation_ctx = "applymapping1")

## @type: DataSink
## @args: [connection_type = "s3", connection_options = {"path": "s3://tax-service-xxxxx"}, format = "json", transformation_ctx = "datasink2"]
## @return: datasink2
## @inputs: [frame = applymapping1]
datasink2 = glueContext.write_dynamic_frame.from_options(frame = applymapping1, connection_type = "s3", connection_options = {"path": "s3://tax-service-xxxxx"}, format = "json", transformation_ctx = "datasink2")

job.commit()

Before applying the mapping, do the following:

from awsglue.dynamicframe import DynamicFrame

# Convert to a dataframe and partition based on "partition_col"
partitioned_dataframe = datasource0.toDF().repartition(1)

# Convert back to a DynamicFrame for further processing.
partitioned_dynamicframe = DynamicFrame.fromDF(partitioned_dataframe, glueContext, "partitioned_df")
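
A minimal sketch of how the repartitioned frame would slot into the rest of the asker's script; here mappings is a hypothetical name standing in for the same mapping list used above, and the bucket path is the asker's placeholder:

# Apply the mapping to the single-partition frame instead of datasource0.
# "mappings" stands in for the same mapping list as in the original script.
applymapping1 = ApplyMapping.apply(frame = partitioned_dynamicframe, mappings = mappings, transformation_ctx = "applymapping1")

# Write as JSON; with one partition, a single output object lands in the bucket.
datasink2 = glueContext.write_dynamic_frame.from_options(frame = applymapping1, connection_type = "s3", connection_options = {"path": "s3://tax-service-xxxxx"}, format = "json", transformation_ctx = "datasink2")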

In case anyone else runs into this: the approach above does work, but the answer doesn't really explain why. This post goes into it in more detail:

"Why is Glue producing more small files?
If you process small chunks of files in Glue, it will read those files and convert them into a dynamic frame. Glue runs on Spark, so the dynamic frame is split into partitions across the EMR cluster. Glue partitions the data evenly across all the nodes for better performance. Once the processing is done, every partition is pushed to your target, and each partition produces one file. That is why we end up with more files."
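
As an aside (not from the quoted post): if the goal is only a single output object, coalesce(1) on the DataFrame is a common alternative to repartition(1), since it merges the existing partitions without triggering a full shuffle. Either way all rows pass through a single task, so this only makes sense for small result sets:

# coalesce(1) merges partitions without the full shuffle that repartition(1) causes.
single_partition_df = datasource0.toDF().coalesce(1)
single_partition_dyf = DynamicFrame.fromDF(single_partition_df, glueContext, "single_partition_dyf")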

Don't forget the import statement: from awsglue.dynamicframe import DynamicFrame

The author should add the import to the answer.

The asker was asking for a way to generate one file... you answered the "why this happens". To make this a good answer, could you also add the "how to fix it"?