How to use s3a with Apache Spark 2.2 (Hadoop 2.8) in spark-submit?

Tags: scala, apache-spark, hadoop, amazon-s3, pyspark-sql

I am trying to access S3 data from Spark, using Spark 2.2.0 built against Hadoop 2.8. On the classpath I have:

 /jars/hadoop-aws-2.8.3.jar
 /jars/aws-java-sdk-s3-1.10.6.jar
 /jars/aws-java-sdk-core-1.10.6.jar
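
For reference, one common way to put jars like these on the driver and executor classpath is the --jars option of spark-submit; the script name below is a placeholder:

 spark-submit --jars /jars/hadoop-aws-2.8.3.jar,/jars/aws-java-sdk-s3-1.10.6.jar,/jars/aws-java-sdk-core-1.10.6.jar my_job.py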

I get the following exception:

         java.lang.NoClassDefFoundError: org/apache/hadoop/fs/StorageStatistics
            at java.lang.Class.forName0(Native Method)
            at java.lang.Class.forName(Class.java:348)
            at org.apache.hadoop.conf.Configuration.getClassByNameOrNull(Configuration.java:2134)
            at org.apache.hadoop.conf.Configuration.getClassByName(Configuration.java:2099)
            at org.apache.hadoop.conf.Configuration.getClass(Configuration.java:2193)
            at org.apache.hadoop.fs.FileSystem.getFileSystemClass(FileSystem.java:2654)
            at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:2667)
            at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:94)
            at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:2703)
            at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:2685)
            at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:373)
            at org.apache.hadoop.fs.Path.getFileSystem(Path.java:295)
            at org.apache.spark.sql.execution.datasources.DataSource.hasMetadata(DataSource.scala:301)
            at org.apache.spark.sql.execution.datasources.DataSource.resolveRelation(DataSource.scala:344)
            at org.apache.spark.sql.DataFrameReader.load(DataFrameReader.scala:152)
            at org.apache.spark.sql.DataFrameReader.parquet(DataFrameReader.scala:441)
            at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
            at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
            at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
            at java.lang.reflect.Method.invoke(Method.java:498)
            at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:244)
            at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:357)
            at py4j.Gateway.invoke(Gateway.java:280)
            at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:132)
            at py4j.commands.CallCommand.execute(CallCommand.java:79)
            at py4j.GatewayConnection.run(GatewayConnection.java:214)
            at java.lang.Thread.run(Thread.java:745)
        Caused by: java.lang.ClassNotFoundException: org.apache.hadoop.fs.StorageStatistics
            at java.net.URLClassLoader.findClass(URLClassLoader.java:381)
            at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
            at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
            ... 27 more
Then I added the hadoop-common jar from the Spark installation directory, /sparkinstallation/jars/hadoop-common-2.8.3.jar, to the classpath, and now I get the following error:

        java.lang.IllegalAccessError: tried to access method org.apache.hadoop.metrics2.lib.MutableCounterLong.<init>(Lorg/apache/hadoop/metrics2/MetricsInfo;J)V from class org.apache.hadoop.fs.s3a.S3AInstrumentation
            at org.apache.hadoop.fs.s3a.S3AInstrumentation.streamCounter(S3AInstrumentation.java:194)
            at org.apache.hadoop.fs.s3a.S3AInstrumentation.streamCounter(S3AInstrumentation.java:216)
            at org.apache.hadoop.fs.s3a.S3AInstrumentation.<init>(S3AInstrumentation.java:139)
            at org.apache.hadoop.fs.s3a.S3AFileSystem.initialize(S3AFileSystem.java:174)
            at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:2669)
            at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:94)
            at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:2703)
            at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:2685)
            at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:373)
            at org.apache.hadoop.fs.Path.getFileSystem(Path.java:295)
            at org.apache.spark.sql.execution.datasources.DataSource.hasMetadata(DataSource.scala:301)
            at org.apache.spark.sql.execution.datasources.DataSource.resolveRelation(DataSource.scala:344)
            at org.apache.spark.sql.DataFrameReader.load(DataFrameReader.scala:152)
            at org.apache.spark.sql.DataFrameReader.parquet(DataFrameReader.scala:441)
            at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
            at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
            at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
            at java.lang.reflect.Method.invoke(Method.java:498)
            at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:244)
            at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:357)
            at py4j.Gateway.invoke(Gateway.java:280)
            at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:132)
            at py4j.commands.CallCommand.execute(CallCommand.java:79)
            at py4j.GatewayConnection.run(GatewayConnection.java:214)
            at java.lang.Thread.run(Thread.java:745)
Can someone help in case I am missing something?

I referred to the link -, but it did not help.

I would suggest adding the dependencies to the spark-submit command as shown below, which will download all of the required dependencies. If you just add a single jar, you may still be missing other transitive dependencies:

 spark-shell --packages "org.apache.hadoop:hadoop-aws:2.7.3"
 spark-submit --packages "org.apache.hadoop:hadoop-aws:2.7.3"
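
One caveat worth stating (the version here is an inference from the 2.8.3 jars mentioned in the question, not something verified against this cluster): the hadoop-aws version generally has to match the hadoop-common version of the Spark build exactly, so for a Hadoop 2.8.3 build the coordinate would be:

 spark-submit --packages "org.apache.hadoop:hadoop-aws:2.8.3"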

Another approach is to bundle the dependencies into the job jar file and then use plain spark-submit.
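
For completeness, once the classpath resolves, a minimal PySpark read could look like the sketch below; the bucket, key path, and credentials are placeholders, and credentials may equally come from environment variables or an instance profile:

 # Minimal sketch; bucket, path, and credentials below are placeholders.
 from pyspark.sql import SparkSession

 spark = SparkSession.builder.appName("s3a-read").getOrCreate()

 # Point the s3a connector at the credentials.
 hconf = spark.sparkContext._jsc.hadoopConfiguration()
 hconf.set("fs.s3a.access.key", "YOUR_ACCESS_KEY")
 hconf.set("fs.s3a.secret.key", "YOUR_SECRET_KEY")

 df = spark.read.parquet("s3a://your-bucket/path/to/data")
 df.show()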


Check the troubleshooting guide on version dependencies here.

Thanks @BinziCao, this was helpful. I will go with the second option of bundling the dependencies.