Amazon web services: Moving large on-prem tables to Redshift with AWS Glue

I am using the script below to move all columns from tables of varying sizes (90 million to 250 million records) from an on-premises Oracle database to AWS Redshift. The script also appends the following audit columns:

add_metadata1 = custom_spark_df.withColumn('line_number', F.row_number().over(Window.orderBy(lit(1))))
add_metadata2 = add_metadata1.withColumn('source_system', lit(source_system))
add_metadata3 = add_metadata2.withColumn('input_filename', lit(input_filename))
add_metadata4 = add_metadata3.withColumn('received_timestamp', lit(received_timestamp))
add_metadata5 = add_metadata4.withColumn('received_timestamp_unix', lit(received_timestamp_unix))
add_metadata6 = add_metadata5.withColumn('eff_data_date', lit(eff_data_date))
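
A note on the line_number column: because the window has no partitionBy clause, Spark has to pull every row into a single partition to compute row_number, which can be very slow on tables of this size. If the line numbers only need to be unique rather than consecutive, a minimal sketch of an alternative using monotonically_increasing_id is shown below (custom_spark_df is assumed to be defined as in the full script further down):

from pyspark.sql import functions as F

# Assigns unique (but not gap-free) 64-bit IDs without shuffling all rows
# into a single partition, unlike row_number() over Window.orderBy(lit(1)).
add_metadata1 = custom_spark_df.withColumn('line_number', F.monotonically_increasing_id())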
Currently, the long run time of the job causes the connection to time out after 3-5 hours, so it never completes:

import sys
from awsglue.transforms import *
from awsglue.utils import getResolvedOptions
from pyspark.context import SparkContext
from awsglue.context import GlueContext
from awsglue.job import Job

## Start - Custom block of imports ##
from awsglue.dynamicframe import DynamicFrame
from pyspark.sql import functions as F
from pyspark.sql.window import Window
import datetime 
from pyspark.sql.functions import lit
## End - Custom block of imports ##

## @params: [TempDir, JOB_NAME]
args = getResolvedOptions(sys.argv, ['TempDir','JOB_NAME'])

sc = SparkContext()
glueContext = GlueContext(sc)
spark = glueContext.spark_session
job = Job(glueContext)
job.init(args['JOB_NAME'], args)

datasource0 = glueContext.create_dynamic_frame.from_catalog(database = "metadatastore", table_name = "TableName", transformation_ctx = "datasource0")

applymapping1 = ApplyMapping.apply(frame = datasource0, mappings = [("...MAPPINGS OUTLINED...")], transformation_ctx = "applymapping1")

resolvechoice2 = ResolveChoice.apply(frame = applymapping1, choice = "make_cols", transformation_ctx = "resolvechoice2")

dropnullfields3 = DropNullFields.apply(frame = resolvechoice2, transformation_ctx = "dropnullfields3")

## Start - Custom block for creation of metadata columns ##
now = datetime.datetime.now()

##line_number = '1'
## Remember to update source_system (if needed) and input_filename
source_system = 'EDW'
input_filename = 'TableName' 
received_timestamp = datetime.datetime.strptime(now.strftime("%Y-%m-%d %H:%M:%S"), "%Y-%m-%d %H:%M:%S")

received_timestamp_unix = int((now - datetime.datetime(1970,1,1)).total_seconds())

eff_data_date = datetime.datetime.strptime(now.strftime("%Y-%m-%d"), "%Y-%m-%d").date()

## Update to the last dataframe used
## Do not forget to update write_dynamic_frame to use custom_dynamic_frame for the frame name and add schema to the dbtable name
custom_spark_df = dropnullfields3.toDF()

add_metadata1 = custom_spark_df.withColumn('line_number', F.row_number().over(Window.orderBy(lit(1))))
add_metadata2 = add_metadata1.withColumn('source_system', lit(source_system))
add_metadata3 = add_metadata2.withColumn('input_filename', lit(input_filename))
add_metadata4 = add_metadata3.withColumn('received_timestamp', lit(received_timestamp))
add_metadata5 = add_metadata4.withColumn('received_timestamp_unix', lit(received_timestamp_unix))
add_metadata6 = add_metadata5.withColumn('eff_data_date', lit(eff_data_date))

custom_dynamic_frame = DynamicFrame.fromDF(add_metadata6, glueContext, "add_metadata6")
## End - Custom block for creation of metadata columns ##

datasink4 = glueContext.write_dynamic_frame.from_jdbc_conf(frame = custom_dynamic_frame, catalog_connection = "Redshift", connection_options = {"dbtable": "schema_name.TableName", "database": "dev"}, redshift_tmp_dir = args["TempDir"], transformation_ctx = "datasink4")
job.commit()

How can I improve this script to reduce the run time and allow it to run to completion?

I agree with Soren. I think you are better off creating a CSV dump, gzipping it, and putting it into S3. Once the file is in S3, you can also use Glue to convert it to Parquet format. For a one-time dump, this approach will be faster.
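
As a rough illustration of that conversion step, the sketch below uses Glue to read a gzipped CSV dump from S3 and write it back out as Parquet. The bucket paths are placeholders, and a GlueContext is assumed to already exist, as in the original script:

# Read the gzipped CSV dump that was uploaded to S3 (placeholder path).
csv_frame = glueContext.create_dynamic_frame.from_options(
    connection_type = "s3",
    connection_options = {"paths": ["s3://my-bucket/csv-dump/"], "compressionType": "gzip"},
    format = "csv",
    format_options = {"withHeader": True},
    transformation_ctx = "csv_frame")

# Write the same data back to S3 in Parquet format (placeholder path).
glueContext.write_dynamic_frame.from_options(
    frame = csv_frame,
    connection_type = "s3",
    connection_options = {"path": "s3://my-bucket/parquet/"},
    format = "parquet",
    transformation_ctx = "parquet_sink")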

For the AWS Glue code that loads from the source to S3, you only need to change the second-to-last line of your code. Use something like the following:

datasink4 = glueContext.write_dynamic_frame.from_options(frame = custom_dynamic_frame, connection_type = "s3", connection_options = {"path": s3_output}, format = "parquet", transformation_ctx = "datasink4")

To find out what is slow, separate the on-prem read from the Redshift write. Also, if you extract a file from Oracle to S3, you can use the Redshift bulk loader to speed up the Redshift load.

Is there example code for loading directly from the source to S3 using AWS Glue?

Why do you want to do this with a Glue job only? To me, your case looks like a database migration from an on-prem Oracle database to AWS Redshift. If that is really the case, I would go with the AWS DMS service.
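
For reference, the "Redshift bulk loader" mentioned above is the COPY command. A minimal sketch of issuing it from Python with psycopg2 is shown below; the cluster endpoint, credentials, S3 path, and IAM role ARN are all placeholders, and the target table is assumed to already exist with a matching schema:

import psycopg2

# Placeholder connection details for the Redshift cluster.
conn = psycopg2.connect(host='my-cluster.abc123.us-east-1.redshift.amazonaws.com',
                        port=5439, dbname='dev', user='admin', password='...')
conn.autocommit = True

# COPY loads the Parquet files from S3 into Redshift in parallel.
copy_sql = """
COPY schema_name.TableName
FROM 's3://my-bucket/parquet/'
IAM_ROLE 'arn:aws:iam::123456789012:role/MyRedshiftRole'
FORMAT AS PARQUET;
"""

with conn.cursor() as cur:
    cur.execute(copy_sql)

conn.close()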