Spark Java version error
I am using Spark 2.1.1, Scala 2.11.8, and Java 8. My main Java Spark class:
import java.util.Arrays;
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;
import java.util.regex.Pattern;
import org.apache.spark.SparkConf;
import org.apache.spark.streaming.Durations;
import org.apache.spark.streaming.api.java.JavaDStream;
import org.apache.spark.streaming.api.java.JavaPairDStream;
import org.apache.spark.streaming.api.java.JavaPairInputDStream;
import org.apache.spark.streaming.api.java.JavaStreamingContext;
import org.apache.spark.streaming.kafka.KafkaUtils;
import kafka.serializer.StringDecoder;
import scala.Tuple2;
public class SparkSample {
    private static final Pattern SPACE = Pattern.compile(" ");

    public static void main(String[] args) throws Exception {
        SparkConf sparkConf = new SparkConf().setAppName("App");
        // Duration: the interval at which streaming data will be divided into batches
        JavaStreamingContext javaStreamingContext = new JavaStreamingContext(sparkConf, Durations.seconds(10));
        Set<String> topicsSet = new HashSet<>(Arrays.asList("MY-TOPIC".split(",")));
        Map<String, String> kafkaConfiguration = new HashMap<>();
        kafkaConfiguration.put("metadata.broker.list", "MYIP:9092");
        kafkaConfiguration.put("group.id", "Stream");
        JavaPairInputDStream<String, String> messages = KafkaUtils.createDirectStream(
                javaStreamingContext,
                String.class,
                String.class,
                StringDecoder.class,
                StringDecoder.class,
                kafkaConfiguration,
                topicsSet
        );
        messages.print();
        // BELOW PART THROWS ERRORS IF UNCOMMENTED
        //JavaDStream<String> lines = messages.map(Tuple2::_2);
        //JavaDStream<String> words = lines.flatMap(x -> Arrays.asList(SPACE.split(x)).iterator());
        //JavaPairDStream<String, Integer> wordCounts = words.mapToPair(s -> new Tuple2<>(s, 1))
        //        .reduceByKey((i1, i2) -> i1 + i2);
        //wordCounts.print();
        // Start the computation
        javaStreamingContext.start();
        javaStreamingContext.awaitTermination();
    }
}
If I leave that part commented out, it prints the messages just fine. Any idea why? I am building/compiling with Java 8.

The error says that the Java version your code was compiled for is newer than the runtime version executing it. Java 7 or earlier is probably the default on one of your machines (driver, workers, master, etc.). Things to check and/or fix:
- Check that the runtime matches the compile target. On the command line where you run the Spark application, run `java -version`. This must be done on the driver machine and on every machine in the cluster; all of them must have the correct Java version for your code.
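Beyond `java -version` on the shell, you can also print the runtime's own system properties from inside a program to confirm which JVM actually executes your code. A minimal sketch (the class name `JvmCheck` is just illustrative, not from the original post):

```java
// Hypothetical helper: prints which JVM is actually executing this code.
// Run it the same way you launch the Spark driver to see that machine's runtime.
public class JvmCheck {
    public static void main(String[] args) {
        System.out.println("java.version = " + System.getProperty("java.version"));
        System.out.println("java.home    = " + System.getProperty("java.home"));
    }
}
```

If `java.version` reports 1.7 or earlier anywhere while your classes were compiled for Java 8, you will see exactly this `major.minor version 52.0` failure.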
- If you need to compile for an earlier Java version (because that is what you will run on), change the target compile version in the maven-compiler-plugin; see the snippet below.
- Make sure the Spark user's `JAVA_HOME` resolves to the correct Java installation directory, and that the `PATH` environment variable includes `$JAVA_HOME/bin`.
For reference, the error in question was:

Unsupported major.minor version 52.0

and the maven-compiler-plugin configuration mentioned above:

<plugin>
  <artifactId>maven-compiler-plugin</artifactId>
  <version>...</version>
  <configuration>
    <source>1.7</source>
    <target>1.7</target>
  </configuration>
</plugin>
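The number in the error maps directly to a Java release: class-file major version N corresponds to Java release N - 44, so 52.0 means the classes were compiled for Java 8 while the JVM running them is older. A small sketch (the class name is illustrative, not from the original post) that reads its own class-file header to show where that number lives:

```java
import java.io.DataInputStream;
import java.io.InputStream;

public class ClassFileVersion {
    public static void main(String[] args) throws Exception {
        // A .class file starts with: u4 magic (0xCAFEBABE), u2 minor, u2 major.
        try (InputStream in = ClassFileVersion.class
                .getResourceAsStream("ClassFileVersion.class");
             DataInputStream data = new DataInputStream(in)) {
            int magic = data.readInt();
            if (magic != 0xCAFEBABE) {
                throw new IllegalStateException("not a class file");
            }
            int minor = data.readUnsignedShort();
            int major = data.readUnsignedShort();
            // Major version N corresponds to Java release N - 44
            // (51 = Java 7, 52 = Java 8, ...).
            System.out.println("major " + major + " => Java " + (major - 44));
        }
    }
}
```

A Java 7 JVM refuses any class whose major version is above 51, which is exactly why code compiled under Java 8 (major 52) fails on it.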