Java flatMap with Spark 2.1.0


I am trying to use flatMap with Spark 2.1.0 in Java 8.

The 2.2.0 documentation shows this example:

JavaDStream<String> words = lines.flatMap(x -> Arrays.asList(x.split(" ")).iterator());
When I try this with 2.1.0, I get the following:

Error:(31, 25) java: method flatMap in class org.apache.spark.rdd.RDD<T> cannot be applied to given types;
required: scala.Function1<java.lang.String,scala.collection.TraversableOnce<U>>,scala.reflect.ClassTag<U>
found: (x)->Array[...]tor()
reason: cannot infer type-variable(s) U
(actual and formal argument lists differ in length)

What is the correct way to use flatMap in these versions?
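For background (not stated in the original post): since Spark 2.0, the Java API's `FlatMapFunction<T, R>` returns a `java.util.Iterator<R>`, whereas the 1.x API expected an `Iterable<R>`. The lambda shape from the docs can be checked without a cluster; below is a minimal sketch using a stand-in interface (`FlatMapFn` is hypothetical, mirroring only the 2.x method shape) and plain Java 8:

```java
import java.util.Arrays;
import java.util.Iterator;

public class FlatMapShape {
    // Mirrors the shape of Spark 2.x's FlatMapFunction<T, R>:
    // call() returns a java.util.Iterator, not the Iterable of Spark 1.x.
    interface FlatMapFn<T, R> {
        Iterator<R> call(T t) throws Exception;
    }

    // The same lambda the 2.2.0 docs use, typed against the 2.x-style interface.
    static final FlatMapFn<String, String> WORDS =
            x -> Arrays.asList(x.split(" ")).iterator();

    public static void main(String[] args) throws Exception {
        Iterator<String> it = WORDS.call("to be or not to be");
        StringBuilder sb = new StringBuilder();
        while (it.hasNext()) {
            sb.append(it.next()).append('|');
        }
        System.out.println(sb); // to|be|or|not|to|be|
    }
}
```

If the compiler instead resolves the call to the Scala `RDD.flatMap` (as the error message above suggests, since it asks for a `scala.Function1` and a `ClassTag`), that usually points to mismatched Spark dependency versions or the wrong receiver type rather than the lambda itself.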

The following code works with Spark 2.1.0:

// SPACE is assumed to be a precompiled java.util.regex.Pattern, e.g. Pattern.compile(" ")
JavaDStream<String> lines = messages.map(tuple -> tuple._2());
JavaDStream<String> words = lines.flatMap(x -> Arrays.asList(SPACE.split(x)).iterator());
JavaPairDStream<String, Integer> wordCounts = words.mapToPair(s -> new Tuple2<>(s, 1))
    .reduceByKey((i1, i2) -> i1 + i2);

Please check your Spark dependency versions. If you want a reference example for Spark 2.1.0, the following code works with Spark 2.1.0:

JavaDStream<String> lines = messages.map(tuple -> tuple._2());
JavaDStream<String> words = lines.flatMap(x -> Arrays.asList(SPACE.split(x)).iterator());
JavaPairDStream<String, Integer> wordCounts = words.mapToPair(s -> new Tuple2<>(s, 1))
    .reduceByKey((i1, i2) -> i1 + i2);
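To see what the `mapToPair(s -> new Tuple2<>(s, 1)).reduceByKey((i1, i2) -> i1 + i2)` pipeline computes, here is a sketch of the same word count written against plain Java collections (no Spark dependency; for illustration only, assuming single-space-separated input):

```java
import java.util.HashMap;
import java.util.Map;

public class WordCountLocal {
    // Equivalent of mapToPair(s -> (s, 1)) followed by
    // reduceByKey((i1, i2) -> i1 + i2), on a single line of text.
    static Map<String, Integer> wordCounts(String line) {
        Map<String, Integer> counts = new HashMap<>();
        for (String word : line.split(" ")) {
            // Map.merge plays the role of reduceByKey's (i1, i2) -> i1 + i2.
            counts.merge(word, 1, Integer::sum);
        }
        return counts;
    }

    public static void main(String[] args) {
        System.out.println(wordCounts("to be or not to be"));
    }
}
```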