Amazon S3: How to overwrite S3 data with an AWS Glue job

Tags: amazon-s3, amazon-dynamodb, aws-glue, aws-glue-spark

I have a DynamoDB table and I am using a Glue job to send the DynamoDB data to S3. Every time the Glue job runs to write new data to S3, it appends to the old data instead of replacing it. It should overwrite the old data. The job script is below:

import sys
from awsglue.transforms import *
from awsglue.utils import getResolvedOptions
from pyspark.context import SparkContext
from awsglue.context import GlueContext
from awsglue.job import Job

## @params: [JOB_NAME]
args = getResolvedOptions(sys.argv, ['JOB_NAME'])

sc = SparkContext()
glueContext = GlueContext(sc)
spark = glueContext.spark_session
job = Job(glueContext)
job.init(args['JOB_NAME'], args)
## @type: DataSource
## @args: [database = "abc", table_name = "xyz", transformation_ctx = "datasource0"]
## @return: datasource0
## @inputs: []
datasource0 = glueContext.create_dynamic_frame.from_catalog(database = "abc", table_name = "xyz", transformation_ctx = "datasource0")
## @type: ApplyMapping
## @args: [mapping = [("address", "string", "address", "string"), ("name", "string", "name", "string"), ("company", "string", "company", "string"), ("id", "string", "id", "string")], transformation_ctx = "applymapping1"]
## @return: applymapping1
## @inputs: [frame = datasource0]
applymapping1 = ApplyMapping.apply(frame = datasource0, mappings = [("address", "string", "address", "string"), ("name", "string", "name", "string"), ("company", "string", "company", "string"), ("id", "string", "id", "string")], transformation_ctx = "applymapping1")
## @type: ResolveChoice
## @args: [choice = "make_struct", transformation_ctx = "resolvechoice2"]
## @return: resolvechoice2
## @inputs: [frame = applymapping1]
resolvechoice2 = ResolveChoice.apply(frame = applymapping1, choice = "make_struct", transformation_ctx = "resolvechoice2")
## @type: DropNullFields
## @args: [transformation_ctx = "dropnullfields3"]
## @return: dropnullfields3
## @inputs: [frame = resolvechoice2]
dropnullfields3 = DropNullFields.apply(frame = resolvechoice2, transformation_ctx = "dropnullfields3")
## @type: DataSink
## @args: [connection_type = "s3", connection_options = {"path": "s3://xyztable"}, format = "parquet", transformation_ctx = "datasink4"]
## @return: datasink4
## @inputs: [frame = dropnullfields3]
datasink4 = glueContext.write_dynamic_frame.from_options(frame = dropnullfields3, connection_type = "s3", connection_options = {"path": "s3://xyztable"}, format = "parquet", transformation_ctx = "datasink4")
job.commit()

If you are trying to overwrite data in S3: a DynamicFrame does not currently let you change its save mode, but you can convert it to a Spark DataFrame with toDF() and replace the last write in the script with the snippet below.

# Convert the DynamicFrame to a Spark DataFrame so the save mode can be set
df = dropnullfields3.toDF()

# Overwrite the existing Parquet output at the target S3 path
df.write.mode('overwrite').parquet('s3://xyzPath')

Every time the job runs, it replaces the contents of the folder. Since the Glue DynamicFrame writer does not currently support a save mode, we fall back to the PySpark DataFrame API here.
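
For clarity, here is a minimal sketch of how the tail of the job script above could look after this change (assuming the same frame names; s3://xyzPath is just the placeholder path from the answer): the write_dynamic_frame sink (datasink4) is replaced by the DataFrame write, and job.commit() remains last.

# ... datasource0, applymapping1, resolvechoice2, dropnullfields3 as in the original script ...

# Convert the DynamicFrame to a Spark DataFrame so a save mode can be applied
df = dropnullfields3.toDF()

# Overwrite whatever already exists under the target prefix instead of appending
df.write.mode('overwrite').parquet('s3://xyzPath')

job.commit()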

Comments:

Can you share the code? That would help give better suggestions.

I have uploaded the script I am running. Parsing the logs, I get this error: IllegalArgumentException: 'Can not create a Path from an empty string', Traceback (most recent call last).

What path are you passing? The S3 bucket path: df.write.mode('overwrite').parquet('s3://xyztable')
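
A hedged note on that error: 'Can not create a Path from an empty string' is what Spark/Hadoop raises when the path string handed to parquet() is empty, for example when it comes from a blank variable or an unset job parameter. A small illustration (the variable names here are made up for the example):

# Hypothetical illustration of the error discussed in the comments above.
bad_path = ''                    # e.g. a blank variable or missing job parameter
good_path = 's3://xyzPath'       # a full s3:// URI, as in the answer's snippet

# df.write.mode('overwrite').parquet(bad_path)   # raises IllegalArgumentException:
#                                                # Can not create a Path from an empty string
df.write.mode('overwrite').parquet(good_path)    # succeeds and overwrites the prefix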