Python AWS Glue 2.0 connection timeout

Tags: python, pyspark, amazon-redshift, aws-glue, aws-glue-data-catalog

I am trying to transfer data from EMR Hive to Redshift using AWS Glue. When creating the EMR cluster, I specified that the Glue Catalog should be used for the metastore tables. I have also created and tested a Glue connection to Redshift and a crawler.

To test my script, I created a dev endpoint and opened a Jupyter notebook. Below is the code:

from pyspark.sql.functions import *
from pyspark.context import SparkContext
from pyspark.sql.window import *
from datetime import date, timedelta, datetime
from dateutil.relativedelta import relativedelta
from pyspark.sql.types import DateType, TimestampType
from awsglue.utils import getResolvedOptions
from awsglue.context import GlueContext
from awsglue.job import Job
import sys
from awsglue.dynamicframe import DynamicFrame

glueContext = GlueContext(SparkContext.getOrCreate())
spark = glueContext.sparkSession

# Read the Hive table (backed by the Glue Catalog) and filter by insert date
customer = spark.table('demo.customers').where(col("insert_timestamp") >= '2020-04-01')

# Convert the DataFrame to a DynamicFrame and write it to Redshift,
# staging the data in S3 via redshift_tmp_dir
cust_dyn = DynamicFrame.fromDF(customer, glueContext, 'cust_dyn')
glueContext.write_dynamic_frame.from_catalog(frame = cust_dyn, database = "classic_models",
                                             catalog_connection = 'Redshift',
                                             table_name = "dev_classic_models_customers",
                                             redshift_tmp_dir = 's3://temp/')
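
For reference, the same write can also be expressed directly against the JDBC connection with from_jdbc_conf instead of a catalog table. A minimal sketch, assuming the same 'Redshift' connection as above; the target table "public.customers" and database "dev" are hypothetical:

glueContext.write_dynamic_frame.from_jdbc_conf(
    frame=cust_dyn,
    catalog_connection='Redshift',  # same Glue connection as in the question
    connection_options={
        "dbtable": "public.customers",  # hypothetical target table
        "database": "dev",              # hypothetical Redshift database name
    },
    redshift_tmp_dir='s3://temp/',
)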
Now, when I run the script above as a Glue 2.0 job with the Redshift connection attached, I get the timeout error below:

Traceback (most recent call last):
  File "/tmp/Demo", line 16, in <module>
    customer = spark.table('demo.customers').where(col("insert_timestamp") >= '2020-04-01')
  File "/opt/amazon/spark/python/lib/pyspark.zip/pyspark/sql/session.py", line 780, in table
    return DataFrame(self._jsparkSession.table(tableName), self._wrapped)
  File "/opt/amazon/spark/python/lib/py4j-0.10.7-src.zip/py4j/java_gateway.py", line 1257, in __call__
    answer, self.gateway_client, self.target_id, self.name)
  File "/opt/amazon/spark/python/lib/pyspark.zip/pyspark/sql/utils.py", line 69, in deco
    raise AnalysisException(s.split(': ', 1)[1], stackTrace)
pyspark.sql.utils.AnalysisException: 'java.lang.RuntimeException: com.amazonaws.SdkClientException: Unable to execute HTTP request: Connect to glue.ap-south-1.amazonaws.com:443 [glue.ap-south-1.amazonaws.com/3.7.225.56, glue.ap-south-1.amazonaws.com/13.234.179.141, glue.ap-south-1.amazonaws.com/13.127.192.167, glue.ap-south-1.amazonaws.com/13.235.123.170] failed: connect timed out;'
One thing I have noticed: when I do not attach the Redshift connection and only run the select statement, it works fine. But without the Redshift connection I cannot transfer the data to the Redshift database. Is there a reason why this fails, and what can be done about it?

Thanks
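
A likely cause, though not confirmed in this thread: when a connection is attached to a Glue job, the job runs inside that connection's VPC subnet, so every AWS API call, including the Glue Catalog lookup behind spark.table, must be routed out of that subnet. If the subnet has no NAT gateway and no interface VPC endpoint for Glue, the call to glue.ap-south-1.amazonaws.com:443 times out exactly as in the traceback. Below is a minimal boto3 sketch for creating such an endpoint; the vpc-, subnet-, and sg- IDs are hypothetical placeholders:

import boto3

ec2 = boto3.client("ec2", region_name="ap-south-1")

# Interface endpoint so the private subnet can reach the Glue API
# without a NAT gateway; all resource IDs below are hypothetical.
resp = ec2.create_vpc_endpoint(
    VpcEndpointType="Interface",
    VpcId="vpc-0abc0abc0abc0abc0",
    ServiceName="com.amazonaws.ap-south-1.glue",
    SubnetIds=["subnet-0abc0abc0abc0abc0"],
    SecurityGroupIds=["sg-0abc0abc0abc0abc0"],
    PrivateDnsEnabled=True,  # lets glue.ap-south-1.amazonaws.com resolve to the endpoint
)
print(resp["VpcEndpoint"]["VpcEndpointId"])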

Can you confirm that the subnet used for the dev endpoint is private and not public?

@Prabhakarredy I did not provide any subnet for the dev endpoint; the subnet field is blank.

Are you in the ap-south-1 region? Which region is your redshift_tmp_dir located in?
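
Following up on the private-vs-public subnet question above, one way to check is to inspect the subnet's route table: a default route to an igw-* internet gateway means the subnet is public, while a route through a NAT gateway (or no default route at all) means it is private. A sketch with boto3, using a hypothetical subnet ID:

import boto3

ec2 = boto3.client("ec2", region_name="ap-south-1")

# Find the route table associated with the subnet (hypothetical ID).
# Note: subnets with no explicit association fall back to the VPC's
# main route table and will not match this filter.
tables = ec2.describe_route_tables(
    Filters=[{"Name": "association.subnet-id", "Values": ["subnet-0abc0abc0abc0abc0"]}]
)

for table in tables["RouteTables"]:
    for route in table["Routes"]:
        if route.get("DestinationCidrBlock") == "0.0.0.0/0":
            # "igw-..." => public subnet; a NatGatewayId => private subnet with egress
            print(route.get("GatewayId") or route.get("NatGatewayId"))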