Scala: I must include log4j, but it causes errors in the Apache Spark shell. How can I avoid the errors?

Tags: scala, log4j, apache-spark, type-mismatch

Due to the complexity of a JAR that I have to include in my Spark code, I am asking for help in finding a way to solve this problem without removing the log4j import.

The simple code is as follows:

    :cp symjar/log4j-1.2.17.jar
    import org.apache.spark.rdd._

    val hadoopConf = sc.hadoopConfiguration
    hadoopConf.set("fs.s3n.impl", "org.apache.hadoop.fs.s3native.NativeS3FileSystem")
    hadoopConf.set("fs.s3n.awsAccessKeyId", "AKEY")
    hadoopConf.set("fs.s3n.awsSecretAccessKey", "SKEY")

    val numOfProcessors = 2
    val filePath = "s3n://SOMEFILE.csv"
    var rdd = sc.textFile(filePath, numOfProcessors)
    def doStuff(rdd: RDD[String]): RDD[String] = { rdd }
    doStuff(rdd)
First, I get these errors:

error: error while loading StorageLevel, class file '/root/spark/lib/spark-assembly-1.3.0-hadoop1.0.4.jar(org/apache/spark/storage/StorageLevel.class)' has location not matching its contents: contains class StorageLevel
error: error while loading Partitioner, class file '/root/spark/lib/spark-assembly-1.3.0-hadoop1.0.4.jar(org/apache/spark/Partitioner.class)' has location not matching its contents: contains class Partitioner
error: error while loading BoundedDouble, class file '/root/spark/lib/spark-assembly-1.3.0-hadoop1.0.4.jar(org/apache/spark/partial/BoundedDouble.class)' has location not matching its contents: contains class BoundedDouble
error: error while loading CompressionCodec, class file '/root/spark/lib/spark-assembly-1.3.0-hadoop1.0.4.jar(org/apache/hadoop/io/compress/CompressionCodec.class)' has location not matching its contents: contains class CompressionCodec
Then I run this line again and the errors disappear:

var rdd = sc.textFile(filePath, numOfProcessors)
However, the code then ends with this error:

error: type mismatch;
 found   : org.apache.spark.rdd.org.apache.spark.rdd.org.apache.spark.rdd.org.apache.spark.rdd.org.apache.spark.rdd.RDD[String]
 required: org.apache.spark.rdd.org.apache.spark.rdd.org.apache.spark.rdd.org.apache.spark.rdd.org.apache.spark.rdd.RDD[String]
              doStuff(rdd)
                      ^

How can I avoid the errors above without removing log4j from the imports? (This is important because the JARs I use rely heavily on log4j and conflict with the Spark shell.)

The answer is not just to use the :cp command, but to add everything that needs to be on the classpath under export SPARK_SUBMIT_CLASSPATH="…/the/path/to/a.jar" in .../spark/conf/spark-env.sh, as sketched below.

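A minimal sketch of that spark-env.sh entry, assuming the same symjar/log4j-1.2.17.jar from the question (the absolute path is a placeholder, not from the original answer):

    # .../spark/conf/spark-env.sh
    # Placeholder path; point it at the jar(s) you would otherwise load with :cp
    export SPARK_SUBMIT_CLASSPATH="/path/to/symjar/log4j-1.2.17.jar"

After editing spark-env.sh, restart the Spark shell so the new classpath is picked up.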
If you use an IDE such as Scala IDE for Eclipse with Maven, exclude the conflicting JAR from Maven instead. For example, I wanted to exclude commons-codec (and then include a different version as a JAR in the project), and added a change like the following to pom.xml:

...............
<dependencies>
  <dependency>
    <groupId>org.apache.spark</groupId>
    <artifactId>spark-core_2.11</artifactId>
    <version>1.3.0</version>
    <exclusions>
      <exclusion>
        <groupId>commons-codec</groupId>
        <artifactId>commons-codec</artifactId>
        <version>1.3</version>
      </exclusion>
    </exclusions>
  </dependency>
</dependencies>
...............
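The idea behind the exclusion is that Maven then stops pulling in the commons-codec version that spark-core depends on, so the project can supply its own copy without two versions ending up on the classpath.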