
Wrong Scala version with Hadoop, Spark 2 & ElasticSearch 5.4.2


I am using Spark 2.2 (built with Scala 2.11.8) to index my data into ElasticSearch 5.4.2.

ElasticSearch:

My Spark project uses the following pom.xml:

    <dependency>
        <groupId>org.elasticsearch</groupId>
        <artifactId>elasticsearch-hadoop</artifactId>
        <version>5.4.2</version>
        <exclusions>
            <exclusion>
                <groupId>org.slf4j</groupId>
                <artifactId>log4j-over-slf4j</artifactId>
            </exclusion>
        </exclusions>
    </dependency>

    <dependency>
        <groupId>org.elasticsearch</groupId>
        <artifactId>elasticsearch-spark-20_2.11</artifactId>
        <version>5.4.2</version>
    </dependency>

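For context, the job itself boils down to something like the sketch below; the case class, index name and es.nodes value are placeholders rather than the real ones:

    import org.apache.spark.{SparkConf, SparkContext}
    import org.elasticsearch.spark._ // adds saveToEs to RDDs

    // Placeholder document type; the real job uses its own case class
    case class Doc(id: Int, title: String)

    object IndexToEs {
      def main(args: Array[String]): Unit = {
        val conf = new SparkConf()
          .setAppName("index-to-es")
          .set("es.nodes", "localhost") // placeholder ES host
          .set("es.port", "9200")
        val sc = new SparkContext(conf)

        val docs = sc.makeRDD(Seq(Doc(1, "first"), Doc(2, "second")))
        docs.saveToEs("myindex/mytype") // "index/type" addressing used by ES 5.x

        sc.stop()
      }
    }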
When I run the job, the following exception occurs:

    Caused by: java.lang.NoSuchMethodError: scala.reflect.api.JavaUniverse.runtimeMirror(Ljava/lang/ClassLoader;)Lscala/reflect/api/JavaMirrors$JavaMirror;
        at org.elasticsearch.spark.serialization.ReflectionUtils$.org$elasticsearch$spark$serialization$ReflectionUtils$$checkCaseClass(ReflectionUtils.scala:42)
        at org.elasticsearch.spark.serialization.ReflectionUtils$$anonfun$checkCaseClassCache$1.apply(ReflectionUtils.scala:84)
        at org.elasticsearch.spark.serialization.ReflectionUtils$$anonfun$checkCaseClassCache$1.apply(ReflectionUtils.scala:83)

I understand that my problem is a Scala version mismatch (the version used to build vs. the one used at runtime).
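One way to confirm which Scala actually runs the job is to print the runtime version and the scala-reflect jar that ends up on the classpath, from inside the same spark-submit. A rough diagnostic sketch:

    // Diagnostic: which Scala version and which scala-reflect jar does the
    // driver/executor classpath actually resolve to?
    println("Runtime Scala: " + scala.util.Properties.versionString)
    println("scala-reflect: " + scala.reflect.runtime.universe.getClass
      .getProtectionDomain.getCodeSource.getLocation)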

Thanks for your help.

EDIT, build section of the POM:

<build>
    <plugins>
        <plugin>
            <groupId>net.alchim31.maven</groupId>
            <artifactId>scala-maven-plugin</artifactId>
            <version>3.3.1</version>
            <executions>
                <execution>
                    <goals>
                        <goal>compile</goal>
                        <goal>testCompile</goal>
                    </goals>
                    <configuration>
                        <args>
                            <argLine>-J-Xms128m</argLine>
                            <argLine>-J-Xmx512m</argLine>
                            <argLine>-J--XX:MaxPermSize=300m</argLine>
                            <argLine>-Djava.net.preferIPv4Stack=true</argLine>
                            <arg>-dependencyfile</arg>
                            <arg>${project.build.directory}/.scala_dependencies</arg>
                        </args>
                    </configuration>
                </execution>
            </executions>
        </plugin>


        <plugin>
            <groupId>org.apache.maven.plugins</groupId>
            <artifactId>maven-shade-plugin</artifactId>
            <version>2.4.3</version>
            <executions>
                <execution>
                    <phase>package</phase>
                    <goals>
                        <goal>shade</goal>
                    </goals>
                    <configuration>
                        <shadedArtifactAttached>true</shadedArtifactAttached>
                        <shadedClassifierName>UBER</shadedClassifierName>
                        <artifactSet>
                            <includes>
                                <include>com.databricks:spark-csv_${scala.compact.version}</include>
                                <include>org.apache.commons:commons-csv</include>
                                <include>org.elasticsearch:elasticsearch-hadoop</include>
                            </includes>
                        </artifactSet>
                    </configuration>
                </execution>
            </executions>
        </plugin>
    </plugins>
</build>


Are you building a fat jar?
Yes, I am. I edited the question with my build POM.
Can you open that jar with unrar/unzip and check whether the mentioned class is available?
Under org.elasticsearch.spark.serialization I do get ReflectionUtils$$anonfun$checkCaseClassCache$1.class, so the class is available. I also edited the execution options into the OP.
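The jar check described in the comments (unpack the uber jar and look for the class) can also be done from a Scala REPL; a rough sketch, with a hypothetical jar name:

    import java.util.jar.JarFile
    import scala.collection.JavaConverters._

    // Hypothetical path to the shaded artifact produced by maven-shade-plugin
    val jar = new JarFile("target/myjob-1.0-UBER.jar")

    // Same check as in the comments: are the ReflectionUtils classes packaged
    // inside the uber jar?
    jar.entries().asScala
      .map(_.getName)
      .filter(_.contains("org/elasticsearch/spark/serialization/ReflectionUtils"))
      .foreach(println)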