Amazon web services: Using AWS Glue to create partitioned data and save it to S3

Tags: amazon-web-services, apache-spark, amazon-s3, aws-glue

I have the script above, and I don't understand why it isn't working, or whether it is even the right approach.

Could someone review it and let me know what I am doing wrong?

The goal is to run this job daily, partition the table as described above, and save it to S3 as JSON or Parquet.

You are referencing the wrong DataFrame when manipulating the columns:

applymapping1.select("*") should actually be df.select("*").
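The underlying reason is that ApplyMapping returns a Glue DynamicFrame, which has no select() or withColumn() methods; those belong to the Spark DataFrame API. So the DynamicFrame has to be converted first and the new columns built on the resulting DataFrame. A minimal sketch of the fix, using the same names as the full script below:

from pyspark.sql.functions import col, year

# applymapping1 is a DynamicFrame; convert it before using DataFrame methods.
df = applymapping1.toDF()
enriched = df.select("*").withColumn("year", year(col("dateregistered")))

The full corrected script: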

import sys
from awsglue.transforms import *
from awsglue.utils import getResolvedOptions
from awsglue.dynamicframe import DynamicFrame
from pyspark.context import SparkContext
from awsglue.context import GlueContext
from awsglue.job import Job
from pyspark.sql.functions import col, year, month, dayofmonth, to_date, from_unixtime

## @params: [JOB_NAME]
args = getResolvedOptions(sys.argv, ['JOB_NAME'])

sc = SparkContext()
glueContext = GlueContext(sc)
spark = glueContext.spark_session
job = Job(glueContext)
job.init(args['JOB_NAME'], args)

# Read the source table from the Glue Data Catalog.
datasource0 = glueContext.create_dynamic_frame.from_catalog(database = "db_name", table_name = "table_name", transformation_ctx = "datasource0")

applymapping1 = ApplyMapping.apply(frame = datasource0, mappings = [("dateregistered", "timestamp", "dateregistered", "timestamp"), ("id", "int", "id", "int")], transformation_ctx = "applymapping1")

# Convert the Glue DynamicFrame to a Spark DataFrame so that DataFrame
# column operations (withColumn, drop, ...) can be used.
df = applymapping1.toDF()

# Derive the partition columns from the registration timestamp.
repartitioned_with_new_columns_df = (
    df.select("*")
    .withColumn("date_col", to_date(from_unixtime(col("dateregistered"))))
    .withColumn("year", year(col("date_col")))
    .withColumn("month", month(col("date_col")))
    .withColumn("day", dayofmonth(col("date_col")))
    .drop("date_col")
    # .repartition(1)
)

# Convert back to a DynamicFrame for the Glue sink.
dyf = DynamicFrame.fromDF(repartitioned_with_new_columns_df, glueContext, "enriched")

# Write the result to S3 as JSON, partitioned by year/month/day.
datasink = glueContext.write_dynamic_frame.from_options(
    frame = dyf, 
    connection_type = "s3", 
    connection_options = {
        "path": "bucket-path", 
        "partitionKeys": ["year", "month", "day"]
    }, 
    format = "json", 
    transformation_ctx = "datasink")

job.commit()
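Since the goal mentions JSON or Parquet: with write_dynamic_frame.from_options only the format argument needs to change. A sketch of the Parquet variant, keeping the "bucket-path" placeholder from the script above:

# Same sink as above, but writing Parquet instead of JSON.
datasink = glueContext.write_dynamic_frame.from_options(
    frame = dyf,
    connection_type = "s3",
    connection_options = {
        "path": "bucket-path",
        "partitionKeys": ["year", "month", "day"]
    },
    format = "parquet",
    transformation_ctx = "datasink")

Either way, partitionKeys produces Hive-style prefixes such as year=2020/month=6/day=1/ under the target path, so a scheduled daily run keeps adding new day partitions as it goes.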