Apache Spark: reduceByKey function stops Java application

Tags: java, eclipse, apache-spark, apache-kafka, apache-zookeeper

After a long search that didn't help, I need to ask you! I want to do a simple word count of the hashtags in tweets with Apache Spark. The application gets the hashtags from Kafka, and everything works fine until the reduceByKey function comes in. (I know there is also a direct integration between Twitter and Spark.)

Without this function, the result looks like this:

-------------------------------------------
Time: 1483986210000 ms
-------------------------------------------
(Presse,1)
(Trump,1)
(TheResistanceGQ,1)
(MerylStreep,1)
(theresistance,1)
(Theranos,1)
(Russian,1)
(Trump,1)
(trump,1)
(Üstakıl,1)
...
What I need is for similar hashtags to be counted together and displayed, so I need the reduceByKey function, but then I get the following error:

17/01/09 19:28:54 INFO DAGScheduler: ShuffleMapStage 0 (mapToPair at JavaDirectKafkaWordCount.java:106) finished in 0,377 s
17/01/09 19:28:54 INFO DAGScheduler: looking for newly runnable stages
17/01/09 19:28:54 INFO DAGScheduler: running: Set()
17/01/09 19:28:54 INFO DAGScheduler: waiting: Set(ResultStage 1)
17/01/09 19:28:54 INFO DAGScheduler: failed: Set()
17/01/09 19:28:54 INFO DAGScheduler: Submitting ResultStage 1 (ShuffledRDD[4] at reduceByKey at JavaDirectKafkaWordCount.java:113), which has no missing parents
17/01/09 19:28:54 INFO MemoryStore: Block broadcast_1 stored as values in memory (estimated size 3.2 KB, free 899.7 MB)
17/01/09 19:28:54 INFO MemoryStore: Block broadcast_1_piece0 stored as bytes in memory (estimated size 1948.0 B, free 899.7 MB)
17/01/09 19:28:54 INFO BlockManagerInfo: Added broadcast_1_piece0 in memory on XXX.XXX.XXX.XXX:56435 (size: 1948.0 B, free: 899.7 MB)
17/01/09 19:28:54 INFO SparkContext: Created broadcast 1 from broadcast at DAGScheduler.scala:1012
17/01/09 19:28:54 INFO DAGScheduler: Submitting 1 missing tasks from ResultStage 1 (ShuffledRDD[4] at reduceByKey at JavaDirectKafkaWordCount.java:113)
17/01/09 19:28:54 INFO TaskSchedulerImpl: Adding task set 1.0 with 1 tasks
17/01/09 19:28:54 INFO TaskSetManager: Starting task 0.0 in stage 1.0 (TID 2, localhost, partition 0, ANY, 5800 bytes)
17/01/09 19:28:54 INFO Executor: Running task 0.0 in stage 1.0 (TID 2)
17/01/09 19:28:54 INFO ShuffleBlockFetcherIterator: Getting 2 non-empty blocks out of 2 blocks
17/01/09 19:28:54 INFO ShuffleBlockFetcherIterator: Started 0 remote fetches in 5 ms
17/01/09 19:28:54 ERROR Executor: Exception in task 0.0 in stage 1.0 (TID 2)
java.lang.NoClassDefFoundError: net/jpountz/util/SafeUtils
    at org.apache.spark.io.LZ4BlockInputStream.read(LZ4BlockInputStream.java:124)
    at java.io.ObjectInputStream$PeekInputStream.read(ObjectInputStream.java:2338)
    at java.io.ObjectInputStream$PeekInputStream.readFully(ObjectInputStream.java:2351)
    at java.io.ObjectInputStream$BlockDataInputStream.readShort(ObjectInputStream.java:2822)
    at java.io.ObjectInputStream.readStreamHeader(ObjectInputStream.java:804)
    at java.io.ObjectInputStream.<init>(ObjectInputStream.java:301)
...
Here is my code:

package org.apache.spark.examples.streaming;

import java.util.HashMap;
import java.util.HashSet;
import java.io.FileOutputStream;
import java.io.PrintStream;
import java.time.Duration;
import java.util.Arrays;
import java.util.Iterator;
import java.util.Map;
import java.util.Map.Entry;
import java.util.Set;
import java.util.regex.Pattern;

import scala.Tuple2;

import kafka.serializer.StringDecoder;

import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaPairRDD;
import org.apache.spark.api.java.function.*;
import org.apache.spark.streaming.api.java.*;
import org.apache.spark.streaming.kafka.KafkaUtils;
import org.apache.spark.streaming.Durations;
import org.apache.log4j.Logger;

/**
 * Consumes messages from one or more topics in Kafka and does wordcount.
 */

public final class JavaDirectKafkaWordCount {
    private static final Pattern SPACE = Pattern.compile(" ");

    public static void main(String[] args) throws Exception {

        String brokers = "XXX.XXX.XXX.XXX:9092";
        String topics = "topicMontag";

        // Create context with a 2 seconds batch interval
        SparkConf sparkConf = new SparkConf().setAppName("JavaDirectKafkaWordCount").setMaster("local[*]");
        JavaStreamingContext jssc = new JavaStreamingContext(sparkConf, Durations.seconds(2));

        Set<String> topicsSet = new HashSet<>(Arrays.asList(topics.split(",")));
        Map<String, String> kafkaParams = new HashMap<>();
        kafkaParams.put("metadata.broker.list", brokers);
        kafkaParams.put("group.id", "1");
        kafkaParams.put("auto.offset.reset", "smallest");

        // Create direct kafka stream with brokers and topics
        JavaPairInputDStream<String, String> messages = KafkaUtils.createDirectStream(jssc, String.class, String.class,
                StringDecoder.class, StringDecoder.class, kafkaParams, topicsSet);

        messages.foreachRDD(rdd -> {
            System.out.println(
                    "--- New RDD with " + rdd.partitions().size() + " partitions and " + rdd.count() + " records");
            // rdd.foreach(record -> System.out.println(record._2));
        });

        JavaDStream<String> lines = messages.map(new Function<Tuple2<String, String>, String>() {
            @Override
            public String call(Tuple2<String, String> tuple2) {
                return tuple2._2();
            }
        });

        JavaDStream<String> words = lines.flatMap(new FlatMapFunction<String, String>() {
            @Override
            public Iterator<String> call(String x) {
                return Arrays.asList(SPACE.split(x)).iterator();
            }
        });

        JavaPairDStream<String, Integer> wordCounts = words.mapToPair(new PairFunction<String, String, Integer>() {
            @Override
            public Tuple2<String, Integer> call(String s) {
                return new Tuple2<>(s, 1);
            }
        });

        JavaPairDStream<String, Integer> result = wordCounts.reduceByKey(new Function2<Integer, Integer, Integer>() {
            @Override
            public Integer call(Integer i1, Integer i2) {
                return new Integer(i1 + i2);
            }
        });

        //wordCounts.print();
        result.print();
        // PrintStream out = new PrintStream(new
        // FileOutputStream("output.txt"));
        // System.setOut(out);

        // Start the computation

        jssc.start();
        jssc.awaitTermination();
    }
}
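For context, everything up to mapToPair runs inside a single stage, while reduceByKey is the first transformation that shuffles data between tasks; that is why the failure only shows up here, when the executor deserializes and decompresses shuffle blocks through LZ4BlockInputStream (the frame at the top of the stack trace). The following is only a hedged sketch of the same pipeline written with Java 8 lambdas, reusing the messages stream and SPACE pattern defined above and assuming the Spark 2.x Java API:

    // Sketch only: same hashtag count with Java 8 lambdas, reusing `messages`
    // and `SPACE` from the code above. reduceByKey is the shuffle boundary.
    JavaPairDStream<String, Integer> hashtagCounts = messages
            .map(tuple -> tuple._2())
            .flatMap(line -> Arrays.asList(SPACE.split(line)).iterator())
            .mapToPair(word -> new Tuple2<String, Integer>(word, 1))
            .reduceByKey((a, b) -> a + b);   // <-- first shuffle in the job
    hashtagCounts.print();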
These are the dependencies in my pom.xml:

<dependencies>
    <dependency>
        <groupId>junit</groupId>
        <artifactId>junit</artifactId>
        <version>3.8.1</version>
        <scope>test</scope>
    </dependency>
    <dependency>
        <groupId>org.apache.kafka</groupId>
        <artifactId>kafka_2.10</artifactId>
        <version>0.8.2.2</version>
    </dependency>
    <dependency>
        <groupId>org.twitter4j</groupId>
        <artifactId>twitter4j-stream</artifactId>
        <version>4.0.4</version>
    </dependency>
    <dependency>
        <groupId>com.twitter</groupId>
        <artifactId>hbc-core</artifactId>
        <version>2.2.0</version>
    </dependency>
    <dependency>
        <groupId>org.apache.spark</groupId>
        <artifactId>spark-streaming_2.10</artifactId>
        <version>2.0.1</version>
    </dependency>
    <dependency>
        <groupId>org.scala-lang</groupId>
        <artifactId>scala-xml</artifactId>
        <version>2.11.0-M4</version>
    </dependency>
    <dependency>
        <groupId>org.apache.spark</groupId>
        <artifactId>spark-streaming-kafka_2.10</artifactId>
        <version>1.6.1</version>
    </dependency>
    <dependency>
        <groupId>org.apache.spark</groupId>
        <artifactId>spark-examples_2.10</artifactId>
        <version>1.0.0</version>
    </dependency>
</dependencies>
And here are the updated pom.xml dependencies:

<dependencies>
    <dependency>
        <groupId>junit</groupId>
        <artifactId>junit</artifactId>
        <version>3.8.1</version>
        <scope>test</scope>
    </dependency>
    <dependency>
        <groupId>org.twitter4j</groupId>
        <artifactId>twitter4j-stream</artifactId>
        <version>4.0.4</version>
    </dependency>
    <dependency>
        <groupId>com.twitter</groupId>
        <artifactId>hbc-core</artifactId>
        <version>2.2.0</version>
    </dependency>
    <dependency>
        <groupId>org.apache.spark</groupId>
        <artifactId>spark-streaming_2.10</artifactId>
        <version>2.0.1</version>
    </dependency>
    <dependency>
        <groupId>org.apache.spark</groupId>
        <artifactId>spark-streaming-kafka-0-8_2.10</artifactId>
        <version>2.1.0</version>
    </dependency>
</dependencies>
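A closing note, which is an assumption drawn from the stack trace rather than anything stated above: net/jpountz/util/SafeUtils lives in the lz4 library that Spark 2.x uses for shuffle-block compression, and the older lz4 that the Kafka 0.8.x artifacts in the first pom.xml (kafka_2.10 0.8.2.2 together with spark-streaming-kafka_2.10 1.6.1) likely drag in transitively does not contain that class. Aligning the Spark and Kafka connector versions as in the updated pom.xml may already remove the conflict; explicitly pinning the lz4 artifact is another hedge one could try:

<!-- Assumption: pin the lz4 version Spark 2.x expects, so an older lz4
     brought in transitively cannot shadow net.jpountz.util.SafeUtils -->
<dependency>
    <groupId>net.jpountz.lz4</groupId>
    <artifactId>lz4</artifactId>
    <version>1.3.0</version>
</dependency>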