Python: How to run PySpark with the Snowflake JDBC connector driver in AWS Glue

I am trying to run the below code in AWS Glue:
import sys
from awsglue.transforms import *
from awsglue.utils import getResolvedOptions
from pyspark.context import SparkContext
from awsglue.context import GlueContext
from awsglue.job import Job
from py4j.java_gateway import java_import
SNOWFLAKE_SOURCE_NAME = "net.snowflake.spark.snowflake"

## @params: [JOB_NAME, URL, ACCOUNT, WAREHOUSE, DB, SCHEMA, USERNAME, PASSWORD]
args = getResolvedOptions(sys.argv, ['JOB_NAME', 'URL', 'ACCOUNT', 'WAREHOUSE', 'DB', 'SCHEMA', 'USERNAME', 'PASSWORD'])
sc = SparkContext()
glueContext = GlueContext(sc)
spark = glueContext.spark_session
job = Job(glueContext)
job.init(args['JOB_NAME'], args)
java_import(spark._jvm, "net.snowflake.spark.snowflake")

## uj = sc._jvm.net.snowflake.spark.snowflake
spark._jvm.net.snowflake.spark.snowflake.SnowflakeConnectorUtils.enablePushdownSession(spark._jvm.org.apache.spark.sql.SparkSession.builder().getOrCreate())

options = {
"sfURL" : args['URL'],
"sfAccount" : args['ACCOUNT'],
"sfUser" : args['USERNAME'],
"sfPassword" : args['PASSWORD'],
"sfDatabase" : args['DB'],
"sfSchema" : args['SCHEMA'],
"sfWarehouse" : args['WAREHOUSE'],
}

df = spark.read \
  .format("snowflake") \
  .options(**options) \
  .option("dbtable", "STORE") \
  .load()

display(df)

## Perform any kind of transformations on your data and save as a new Data Frame: “df1”
##df1 = [Insert any filter, transformation, etc]

## Write the Data Frame contents back to Snowflake in a new table
##df1.write.format(SNOWFLAKE_SOURCE_NAME).options(**sfOptions).option("dbtable", "[new_table_name]").mode("overwrite").save()
job.commit()
and I am getting the following error:

Traceback (most recent call last):
  File "/tmp/spark_snowflake", line 35, in <module>
    .option("dbtable", "STORE") \
  File "/opt/amazon/spark/python/lib/pyspark.zip/pyspark/sql/readwriter.py", line 172, in load
    return self._df(self._jreader.load())
  File "/opt/amazon/spark/python/lib/py4j-0.10.7-src.zip/py4j/java_gateway.py", line 1257, in __call__
    answer, self.gateway_client, self.target_id, self.name)
  File "/opt/amazon/spark/python/lib/pyspark.zip/pyspark/sql/utils.py", line 63, in deco
    return f(*a, **kw)
  File "/opt/amazon/spark/python/lib/py4j-0.10.7-src.zip/py4j/protocol.py", line 328, in get_return_value
    format(target_id, ".", name), value)
py4j.protocol.Py4JJavaError: An error occurred while calling o78.load:
java.lang.ClassNotFoundException: Failed to find data source: snowflake. Please find packages at ...
  at org.apache.spark.sql.execution.datasources.DataSource$.lookupDataSource(DataSource.scala:657)
  at org.apache.spark.sql.DataFrameReader.load(DataFrameReader.scala:194)
  at org.apache.spark.sql.DataFrameReader.load(DataFrameReader.scala:167)
  at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
  at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
  at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
  at java.lang.reflect.Method.invoke(Method.java:498)
  at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:244)
  at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:357)
  at py4j.Gateway.invoke(Gateway.java:282)
  at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:132)
  at py4j.commands.CallCommand.execute(CallCommand.java:79)
  at py4j.GatewayConnection.run(GatewayConnection.java:238)
  at java.lang.Thread.run(Thread.java:748)
Caused by: java.lang.ClassNotFoundException: snowflake.DefaultSource
  at java.net.URLClassLoader.findClass(URLClassLoader.java:382)
  at java.lang.ClassLoader.loadClass(ClassLoader.java:418)
  at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:352)
  at java.lang.ClassLoader.loadClass(ClassLoader.java:351)
  at org.apache.spark.sql.execution.datasources.DataSource$$anonfun$20$$anonfun$apply$12.apply(DataSource.scala:634)
  at ...

The error message says "java.lang.ClassNotFoundException: Failed to find data source: snowflake". Did you supply the appropriate Snowflake connector jars and pass them to Glue when you created the job? Here are a couple of examples.
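One way to do this is to upload the spark-snowflake connector jar and the Snowflake JDBC driver jar to S3 and attach them to the Glue job through the --extra-jars job parameter (the "Dependent jars path" field in the console sets the same thing). Below is a minimal sketch using boto3; the bucket, script path, role name, jar file names/versions, and credential values are placeholders you would replace with your own:

import boto3

glue = boto3.client("glue")

# Placeholder S3 locations for the two jars the connector needs:
# the spark-snowflake connector and the Snowflake JDBC driver.
extra_jars = ",".join([
    "s3://my-glue-deps/jars/spark-snowflake_2.11-2.4.14-spark_2.4.jar",
    "s3://my-glue-deps/jars/snowflake-jdbc-3.12.8.jar",
])

glue.create_job(
    Name="snowflake-read-job",
    Role="MyGlueServiceRole",  # IAM role with access to the script, the jars, and CloudWatch logs
    Command={
        "Name": "glueetl",
        "ScriptLocation": "s3://my-glue-deps/scripts/spark_snowflake.py",
        "PythonVersion": "3",
    },
    DefaultArguments={
        # Puts both jars on the Spark driver/executor classpath for this job
        "--extra-jars": extra_jars,
        # Parameters consumed by getResolvedOptions in the job script
        "--URL": "myaccount.snowflakecomputing.com",
        "--ACCOUNT": "myaccount",
        "--WAREHOUSE": "MY_WH",
        "--DB": "MY_DB",
        "--SCHEMA": "PUBLIC",
        "--USERNAME": "my_user",
        "--PASSWORD": "my_password",
    },
)

For an existing job you can set the same --extra-jars value from the console (Dependent jars path) or with the boto3 update_job call.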

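The "Caused by: java.lang.ClassNotFoundException: snowflake.DefaultSource" line shows Spark appending ".DefaultSource" to the short name "snowflake" because no registered data source matched it; the missing jars are the root cause, but once they are attached you can also read with the fully qualified source name your script already defines (SNOWFLAKE_SOURCE_NAME), which does not depend on the short alias being registered. A minimal sketch, reusing the spark session, args, and constant from the job script above:

# Reuses `spark`, `args`, and SNOWFLAKE_SOURCE_NAME ("net.snowflake.spark.snowflake")
# already defined earlier in the Glue job script.
sfOptions = {
    "sfURL": args['URL'],
    "sfAccount": args['ACCOUNT'],
    "sfUser": args['USERNAME'],
    "sfPassword": args['PASSWORD'],
    "sfDatabase": args['DB'],
    "sfSchema": args['SCHEMA'],
    "sfWarehouse": args['WAREHOUSE'],
}

# Read the STORE table through the connector's fully qualified source name.
df = spark.read \
    .format(SNOWFLAKE_SOURCE_NAME) \
    .options(**sfOptions) \
    .option("dbtable", "STORE") \
    .load()

# df.show() prints to the Glue job logs; display() is a notebook helper and is not available in a Glue job.
df.show()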