Pyspark Glue AWS: An error occurred while calling o60.getDynamicFrame

Tags: pyspark, spark-dataframe, amazon-redshift, etl, aws-glue

I've written a basic script that creates a DataFrame from one of my Redshift tables. When I run the job, I keep hitting an error message I can't explain, and I've been struggling with it for a while.

The error output in the logs is:

"/mnt/yarn/usercache/root/appcache/application_1525803778049_0004/container_1525803778049_0004_01_000001/py4j-0.10.4-src.zip/py4j/protocol.py", line 319, in get_return_value
py4j.protocol.Py4JJavaError: An error occurred while calling o60.getDynamicFrame.
: java.lang.UnsupportedOperationException: empty.reduceLeft
    at scala.collection.
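For what it's worth, the `empty.reduceLeft` in that stack trace is Scala signalling that a reduce was attempted on an empty collection. Plain Python has the same failure mode, which may make the message easier to read (the analogy, not the Glue internals, is the point of this sketch):

```python
from functools import reduce

# Scala's empty.reduceLeft throws UnsupportedOperationException;
# Python's reduce raises TypeError on an empty iterable with no initializer.
try:
    reduce(lambda a, b: a + b, [])
except TypeError as exc:
    print(exc)
```
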

import sys
from awsglue.transforms import *
from awsglue.utils import getResolvedOptions
from pyspark.context import SparkContext
from awsglue.context import GlueContext
from awsglue.dynamicframe import DynamicFrame, DynamicFrameReader, DynamicFrameWriter, DynamicFrameCollection
from pyspark.sql.functions import lit
from awsglue.job import Job

sc = SparkContext()
glueContext = GlueContext(sc)
spark = glueContext.spark_session
job = Job(glueContext)

table = glueContext.create_dynamic_frame.from_options(connection_type="redshift", connection_options = 
    {"url": "jdbc:redshift://xxxxx.yyyyy.us-east-1.redshift.amazonaws.com:5439/db",
    "user": "yyyy",
    "password": "yyyyy",
    "dbtable": "schema.table_name",
    "redshiftTmpDir": "s3://aws-glue-temporary-accountnumber-us-east-1/"},
    format="orc", 
    transformation_ctx="table" )

table.show()

dfred = table.toDF().createOrReplaceTempView("table_df")

job.commit()

Thanks for any help you can give me. Much appreciated.

Well, after keeping at this problem and spending a lot of time going through the official code samples, I got it working. I added an ApplyMapping transformation to map the fields coming back from the Redshift table, and in the extract call I dropped the
transformation_ctx
parameter, which was what was failing in the o60 error.

My final version of the code is:

import sys
from awsglue.transforms import *
from awsglue.utils import getResolvedOptions
from pyspark.context import SparkContext
from awsglue.context import GlueContext
from awsglue.dynamicframe import DynamicFrame, DynamicFrameReader, DynamicFrameWriter, DynamicFrameCollection
from pyspark.sql.functions import lit
from awsglue.job import Job

sc = SparkContext()
glueContext = GlueContext(sc)
spark = glueContext.spark_session
job = Job(glueContext)

table = glueContext.create_dynamic_frame.from_options(connection_type="redshift", connection_options = 
    {"url": "jdbc:redshift://xxxxx.yyyyy.us-east-1.redshift.amazonaws.com:5439/db",
    "user": "yyyy",
    "password": "yyyyy",
    "dbtable": "schema.table_name",
    "redshiftTmpDir": "s3://aws-glue-temporary-accountnumber-us-east-1/"}
     )

applyformat = ApplyMapping.apply(frame = table, mappings =
    [("field1", "string", "field1", "string"),
     ("field2", "string", "field2", "string")], transformation_ctx = "applyformat")


dfred = table.toDF().createOrReplaceTempView("table_df")

sqlDF = spark.sql(
    "SELECT COUNT(*) FROM table_df"
    )


sqlDF.show()

job.commit()
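One detail worth flagging in the script above: the applyformat frame is never actually used, since the temp view is still built from the raw table frame, so the mapping has no effect on the SQL query; applyformat.toDF() would have to be used instead for the mapping to matter. For reference, each entry in the mappings list is a 4-tuple of (source column, source type, target column, target type), a shape that can be sanity-checked in plain Python before handing the list to Glue (a minimal sketch, not Glue-specific):

```python
# mappings for ApplyMapping: (source_col, source_type, target_col, target_type)
mappings = [("field1", "string", "field1", "string"),
            ("field2", "string", "field2", "string")]

# check the shape before passing the list to ApplyMapping.apply
for entry in mappings:
    assert len(entry) == 4, "each mapping must be a 4-tuple"
    src, src_type, dst, dst_type = entry
    print(src, "->", dst)
```
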