Scala NoClassDefFoundError: org/apache/hadoop/fs/StreamCapabilities when reading S3 data with Spark


I want to run a simple Spark job on my local development machine (through IntelliJ) that reads data from Amazon S3.

My build.sbt file:

My code snippet:

val spark = SparkSession
    .builder
    .appName("test")
    .master("local[2]")
    .getOrCreate()

  spark
    .sparkContext
    .hadoopConfiguration
    .set("fs.s3n.impl","org.apache.hadoop.fs.s3native.NativeS3FileSystem")

  val schema_p = ...

  val df = spark
    .read
    .schema(schema_p)
    .parquet("s3a:///...")
I get the following exception:

Exception in thread "main" java.lang.NoClassDefFoundError: org/apache/hadoop/fs/StreamCapabilities
    at java.lang.ClassLoader.defineClass1(Native Method)
    at java.lang.ClassLoader.defineClass(ClassLoader.java:763)
    at java.security.SecureClassLoader.defineClass(SecureClassLoader.java:142)
    at java.net.URLClassLoader.defineClass(URLClassLoader.java:467)
    at java.net.URLClassLoader.access$100(URLClassLoader.java:73)
    at java.net.URLClassLoader$1.run(URLClassLoader.java:368)
    at java.net.URLClassLoader$1.run(URLClassLoader.java:362)
    at java.security.AccessController.doPrivileged(Native Method)
    at java.net.URLClassLoader.findClass(URLClassLoader.java:361)
    at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
    at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:349)
    at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
    at java.lang.Class.forName0(Native Method)
    at java.lang.Class.forName(Class.java:348)
    at org.apache.hadoop.conf.Configuration.getClassByNameOrNull(Configuration.java:2093)
    at org.apache.hadoop.conf.Configuration.getClassByName(Configuration.java:2058)
    at org.apache.hadoop.conf.Configuration.getClass(Configuration.java:2152)
    at org.apache.hadoop.fs.FileSystem.getFileSystemClass(FileSystem.java:2580)
    at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:2593)
    at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:91)
    at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:2632)
    at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:2614)
    at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:370)
    at org.apache.hadoop.fs.Path.getFileSystem(Path.java:296)
    at org.apache.spark.sql.execution.streaming.FileStreamSink$.hasMetadata(FileStreamSink.scala:45)
    at org.apache.spark.sql.execution.datasources.DataSource.resolveRelation(DataSource.scala:354)
    at org.apache.spark.sql.DataFrameReader.loadV1Source(DataFrameReader.scala:239)
    at org.apache.spark.sql.DataFrameReader.load(DataFrameReader.scala:227)
    at org.apache.spark.sql.DataFrameReader.parquet(DataFrameReader.scala:622)
    at org.apache.spark.sql.DataFrameReader.parquet(DataFrameReader.scala:606)
    at Test$.delayedEndpoint$Test$1(Test.scala:27)
    at Test$delayedInit$body.apply(Test.scala:4)
    at scala.Function0$class.apply$mcV$sp(Function0.scala:34)
    at scala.runtime.AbstractFunction0.apply$mcV$sp(AbstractFunction0.scala:12)
    at scala.App$$anonfun$main$1.apply(App.scala:76)
    at scala.App$$anonfun$main$1.apply(App.scala:76)
    at scala.collection.immutable.List.foreach(List.scala:392)
    at scala.collection.generic.TraversableForwarder$class.foreach(TraversableForwarder.scala:35)
    at scala.App$class.main(App.scala:76)
    at Test$.main(Test.scala:4)
    at Test.main(Test.scala)
Caused by: java.lang.ClassNotFoundException: org.apache.hadoop.fs.StreamCapabilities
    at java.net.URLClassLoader.findClass(URLClassLoader.java:381)
    at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
    at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:349)
    at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
    ... 41 more
When I replace s3a:// with s3://, I get a different error instead:

No FileSystem for scheme: s3

Since I am new to AWS, I do not know whether I should use s3://, s3a:// or s3n://. I have already set up my AWS credentials with the AWS CLI.

I do not have any Spark installation on my machine.


Thanks in advance for your help.

I would start with the following:

Do not try to "drop in" an AWS SDK version newer than the one your Hadoop version was built with. Whatever problem you have, changing the AWS SDK version will not fix it; it will only change the stack trace you see.


Whatever version of the hadoop- JARs your local Spark installation ships with, you need exactly the same version of hadoop-aws, and exactly the AWS SDK version that this hadoop-aws release was built against. Take the time to work out those exact versions.
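A minimal sketch of what that looks like in build.sbt (the Hadoop version below is a placeholder, not a recommendation; replace it with whatever hadoop-* version appears in your Spark dependency tree, and let hadoop-aws pull in its own matching aws-java-sdk transitively):

  // build.sbt -- a sketch, not the original poster's file
  val sparkVersion  = "2.4.0"
  val hadoopVersion = "2.7.3"  // placeholder: use the exact version your Spark build pulls in

  libraryDependencies ++= Seq(
    "org.apache.spark"  %% "spark-core"    % sparkVersion,
    "org.apache.spark"  %% "spark-sql"     % sparkVersion,
    // hadoop-aws must match hadoop-common exactly; it also brings in
    // the aws-java-sdk version it was built and tested with.
    "org.apache.hadoop"  % "hadoop-common" % hadoopVersion,
    "org.apache.hadoop"  % "hadoop-aws"    % hadoopVersion
  )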

For me, in addition to the above, the issue was resolved by adding the following dependency to my pom.xml:

<dependency>
    <groupId>org.apache.hadoop</groupId>
    <artifactId>hadoop-common</artifactId>
    <version>3.1.1</version>
</dependency>
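Since the original question uses sbt rather than Maven, the equivalent line in build.sbt would be (a sketch):

  libraryDependencies += "org.apache.hadoop" % "hadoop-common" % "3.1.1"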


In my case, I fixed it by choosing the right dependency versions:

  "org.apache.spark" % "spark-core_2.11" % "2.4.0",
  "org.apache.spark" % "spark-sql_2.11" % "2.4.0",
  "org.apache.hadoop" % "hadoop-common" % "3.2.1",
  "org.apache.hadoop" % "hadoop-aws" % "3.2.1"

@ogen which setup worked for you in the end? I searched the Maven repo and am using com.amazonaws:aws-java-sdk:1.11.217, org.apache.hadoop:hadoop-aws:3.1.1 and org.apache.hadoop:hadoop-common:3.1.1, but it does not work.

According to the Maven repository, org.apache.hadoop:hadoop-aws:3.1.1 depends on com.amazonaws:aws-java-sdk:1.11.271, not com.amazonaws:aws-java-sdk:1.11.217, so I think you made a typo. Hope that solves your problem!

Note: the Hadoop version that Spark uses appears under the org.apache.hadoop entries in the dependency tree; set the hadoop-aws dependency to that same version. Not sure about this answer, since Spark does not use the Hadoop 3.1.1 jars.
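One way to see which Hadoop version Spark actually puts on the classpath, without digging through the dependency tree by hand, is to ask Hadoop itself at runtime (a sketch):

  import org.apache.hadoop.util.VersionInfo

  // Prints the version of the hadoop-common jar that is actually loaded;
  // hadoop-aws should be pinned to exactly this version.
  println(s"Hadoop version on the classpath: ${VersionInfo.getVersion}")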
  "org.apache.spark" % "spark-core_2.11" % "2.4.0",
  "org.apache.spark" % "spark-sql_2.11" % "2.4.0",
  "org.apache.hadoop" % "hadoop-common" % "3.2.1",
  "org.apache.hadoop" % "hadoop-aws" % "3.2.1"