GCS connector in PySpark not reading CSV


The error I am getting is:

java.lang.RuntimeException: java.lang.NoSuchMethodException: 
com.google.cloud.hadoop.fs.gcs.GoogleHadoopFS.<init>()
This is simple code that used to work fine, but recently I started getting this error when trying to read a CSV stored in a GCS bucket. I downloaded the correct JAR from the Google Cloud website, but I cannot get it to run successfully. Please help me understand what I am doing wrong.

from pyspark.sql import SparkSession

spark = SparkSession \
  .builder \
  .master('local[*]') \
  .appName('spark-gcs-demo') \
  .getOrCreate()
bucket = "testBucket"
spark.conf.set('temporaryGcsBucket', bucket)  # temporary bucket

import os
os.environ['GOOGLE_APPLICATION_CREDENTIALS'] = r"<pathtoJSON>"
spark._jsc.hadoopConfiguration().set('fs.AbstractFileSystem.gs.impl', 'com.google.cloud.hadoop.fs.gcs.GoogleHadoopFS')
spark._jsc.hadoopConfiguration().set("fs.gs.impl", "com.google.cloud.hadoop.fs.gcs.GoogleHadoopFS")
# Required (set to true) if you are authenticating with a service account
spark._jsc.hadoopConfiguration().set('fs.gs.auth.service.account.enable', 'true')
df = spark.read.csv("gs://bucket/iris.csv")
The full error I get is:

Py4JJavaError: An error occurred while calling o38.csv.
: java.lang.RuntimeException: java.lang.NoSuchMethodException: com.google.cloud.hadoop.fs.gcs.GoogleHadoopFS.<init>()
    at org.apache.hadoop.util.ReflectionUtils.newInstance(ReflectionUtils.java:134)
    at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:2668)
    at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:94)
    at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:2703)
    at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:2685)
    at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:373)
    at org.apache.hadoop.fs.Path.getFileSystem(Path.java:295)
    at org.apache.spark.sql.execution.datasources.DataSource$$anonfun$org$apache$spark$sql$execution$datasources$DataSource$$checkAndGlobPathIfNecessary$1.apply(DataSource.scala:561)
    at org.apache.spark.sql.execution.datasources.DataSource$$anonfun$org$apache$spark$sql$execution$datasources$DataSource$$checkAndGlobPathIfNecessary$1.apply(DataSource.scala:559)
    at scala.collection.TraversableLike$$anonfun$flatMap$1.apply(TraversableLike.scala:241)
    at scala.collection.TraversableLike$$anonfun$flatMap$1.apply(TraversableLike.scala:241)
    at scala.collection.immutable.List.foreach(List.scala:392)
    at scala.collection.TraversableLike$class.flatMap(TraversableLike.scala:241)
    at scala.collection.immutable.List.flatMap(List.scala:355)
    at org.apache.spark.sql.execution.datasources.DataSource.org$apache$spark$sql$execution$datasources$DataSource$$checkAndGlobPathIfNecessary(DataSource.scala:559)
    at org.apache.spark.sql.execution.datasources.DataSource.resolveRelation(DataSource.scala:373)
    at org.apache.spark.sql.DataFrameReader.loadV1Source(DataFrameReader.scala:242)
    at org.apache.spark.sql.DataFrameReader.load(DataFrameReader.scala:230)
    at org.apache.spark.sql.DataFrameReader.csv(DataFrameReader.scala:638)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(Unknown Source)
    at java.lang.reflect.Method.invoke(Unknown Source)
    at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:244)
    at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:357)
    at py4j.Gateway.invoke(Gateway.java:282)
    at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:132)
    at py4j.commands.CallCommand.execute(CallCommand.java:79)
    at py4j.GatewayConnection.run(GatewayConnection.java:238)
    at java.lang.Thread.run(Unknown Source)
Caused by: java.lang.NoSuchMethodException: com.google.cloud.hadoop.fs.gcs.GoogleHadoopFS.<init>()
    at java.lang.Class.getConstructor0(Unknown Source)
    at java.lang.Class.getDeclaredConstructor(Unknown Source)
    at org.apache.hadoop.util.ReflectionUtils.newInstance(ReflectionUtils.java:128)
    ... 29 more

You are seeing this exception because of a misconfigured GCS connector.

You have set the `fs.gs.impl` Hadoop property to `com.google.cloud.hadoop.fs.gcs.GoogleHadoopFS`, but it should be set to `com.google.cloud.hadoop.fs.gcs.GoogleHadoopFileSystem`, or you can even omit this property altogether.
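As a sketch, the corrected properties from the answer above could be expressed in `spark-defaults.conf` form (the `spark.hadoop.` prefix is the standard way to pass Hadoop properties through Spark; the service-account flag is carried over from the question):

```
# FileSystem implementation for gs:// URIs -- this is the property the question had wrong
spark.hadoop.fs.gs.impl=com.google.cloud.hadoop.fs.gcs.GoogleHadoopFileSystem
# AbstractFileSystem implementation -- this is where GoogleHadoopFS belongs
spark.hadoop.fs.AbstractFileSystem.gs.impl=com.google.cloud.hadoop.fs.gcs.GoogleHadoopFS
spark.hadoop.fs.gs.auth.service.account.enable=true
```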

Hi Igor, I tried the solution you mentioned, but it still fails with the same problem. I would like to understand the difference between the two classes you mentioned - `*.GoogleHadoopFileSystem` vs `*.GoogleHadoopFS`?

These two classes implement Hadoop's `FileSystem` and `AbstractFileSystem` interfaces respectively, which is why you need to configure them differently. Refer to the documentation on how to configure it correctly with Spark.

Is there any documentation that specifically says which gcs-connector-hadoop2 jar is meant for which configuration? Their git repo is very confusing.

It is documented here:
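For reference, one common way to put the connector on Spark's classpath is to download the shaded Hadoop 2 jar and pass it via `--jars` (Google publishes the connector in the public `gs://hadoop-lib/gcs` bucket; the exact jar name may differ by version):

```shell
# download the Hadoop 2 shaded connector and attach it when starting PySpark
gsutil cp gs://hadoop-lib/gcs/gcs-connector-hadoop2-latest.jar .
pyspark --jars gcs-connector-hadoop2-latest.jar
```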