Apache Spark Streaming: ERROR StreamingContext: Failed to construct kafka consumer

Tags: apache-spark, apache-kafka, spark-streaming, kafka-consumer-api

I am trying to access a Kafka topic using Spark Streaming. I don't think I am missing any dependencies or imports, but when I try to run the following code:

public static void main(String[] args) {

    String URL = "spark://localhost:7077";

    SparkConf conf = new SparkConf().setAppName("Kafka-test").setMaster(URL);
    JavaStreamingContext ssc = new JavaStreamingContext(conf, Durations.seconds(1));

    Map<String, Object> kafkaParams = new HashMap<>();
    kafkaParams.put("bootstrap.servers", "localhost:6667");
    kafkaParams.put("key.deserializer", StringDeserializer.class);
    kafkaParams.put("value.deserializer", StringDeserializer.class);
    kafkaParams.put("group.id", "ID1");
    kafkaParams.put("auto.offset.reset", "latest");
    kafkaParams.put("enable.auto.commit", false);

    Collection<String> topics = Arrays.asList("MAX_LEGO", "CanBeDeleted");

    JavaInputDStream<ConsumerRecord<String, String>> stream = KafkaUtils.createDirectStream(ssc,
            LocationStrategies.PreferConsistent(),
            ConsumerStrategies.<String, String>Subscribe(topics, kafkaParams));

    JavaPairDStream<Object, Object> max = stream.mapToPair(record -> new Tuple2<>(record.key(), record.value()));
    max.count();
    max.print();

    ssc.start();

}
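
For reference, the import section is omitted above; here is a minimal sketch of what this snippet needs, assuming Spark 2.x with the spark-streaming-kafka-0-10 artifact. Note that StringDeserializer must be the Kafka client class, which turns out to matter for the error below:

import java.util.Arrays;
import java.util.Collection;
import java.util.HashMap;
import java.util.Map;

import org.apache.kafka.clients.consumer.ConsumerRecord;
// must be the Kafka class, not Jackson's identically named StringDeserializer
import org.apache.kafka.common.serialization.StringDeserializer;

import org.apache.spark.SparkConf;
import org.apache.spark.streaming.Durations;
import org.apache.spark.streaming.api.java.JavaInputDStream;
import org.apache.spark.streaming.api.java.JavaPairDStream;
import org.apache.spark.streaming.api.java.JavaStreamingContext;
import org.apache.spark.streaming.kafka010.ConsumerStrategies;
import org.apache.spark.streaming.kafka010.KafkaUtils;
import org.apache.spark.streaming.kafka010.LocationStrategies;

import scala.Tuple2;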
I get the following error message:

18/02/10 16:57:08 ERROR streaming.StreamingContext: Error starting the context, marking it as stopped
org.apache.kafka.common.KafkaException: Failed to construct kafka consumer
    at org.apache.kafka.clients.consumer.KafkaConsumer.<init>(KafkaConsumer.java:703)
    at org.apache.kafka.clients.consumer.KafkaConsumer.<init>(KafkaConsumer.java:553)
    at org.apache.kafka.clients.consumer.KafkaConsumer.<init>(KafkaConsumer.java:536)
    at org.apache.spark.streaming.kafka010.Subscribe.onStart(ConsumerStrategy.scala:83)
    at org.apache.spark.streaming.kafka010.DirectKafkaInputDStream.consumer(DirectKafkaInputDStream.scala:75)
    at org.apache.spark.streaming.kafka010.DirectKafkaInputDStream.start(DirectKafkaInputDStream.scala:243)
    at org.apache.spark.streaming.DStreamGraph$$anonfun$start$5.apply(DStreamGraph.scala:49)
    at org.apache.spark.streaming.DStreamGraph$$anonfun$start$5.apply(DStreamGraph.scala:49)
    at scala.collection.parallel.mutable.ParArray$ParArrayIterator.foreach_quick(ParArray.scala:143)
    at scala.collection.parallel.mutable.ParArray$ParArrayIterator.foreach(ParArray.scala:136)
    at scala.collection.parallel.ParIterableLike$Foreach.leaf(ParIterableLike.scala:972)
    at scala.collection.parallel.Task$$anonfun$tryLeaf$1.apply$mcV$sp(Tasks.scala:49)
    at scala.collection.parallel.Task$$anonfun$tryLeaf$1.apply(Tasks.scala:48)
    at scala.collection.parallel.Task$$anonfun$tryLeaf$1.apply(Tasks.scala:48)
    at scala.collection.parallel.Task$class.tryLeaf(Tasks.scala:51)
    at scala.collection.parallel.ParIterableLike$Foreach.tryLeaf(ParIterableLike.scala:969)
    at scala.collection.parallel.AdaptiveWorkStealingTasks$WrappedTask$class.compute(Tasks.scala:152)
    at scala.collection.parallel.AdaptiveWorkStealingForkJoinTasks$WrappedTask.compute(Tasks.scala:443)
    at scala.concurrent.forkjoin.RecursiveAction.exec(RecursiveAction.java:160)
    at scala.concurrent.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
    at scala.concurrent.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)
    at scala.concurrent.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
    at scala.concurrent.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)
    at ... run in separate thread using org.apache.spark.util.ThreadUtils ... ()
    at org.apache.spark.streaming.StreamingContext.liftedTree1$1(StreamingContext.scala:578)
    at org.apache.spark.streaming.StreamingContext.start(StreamingContext.scala:572)
    at org.apache.spark.streaming.api.java.JavaStreamingContext.start(JavaStreamingContext.scala:556)
    at org.kafkanconnection2.main(kafkanconnection2.java:50)

Caused by: org.apache.kafka.common.KafkaException: com.fasterxml.jackson.databind.deser.std.StringDeserializer is not an instance of org.apache.kafka.common.serialization.Deserializer
    at org.apache.kafka.common.config.AbstractConfig.getConfiguredInstance(AbstractConfig.java:205)
    at org.apache.kafka.clients.consumer.KafkaConsumer.<init>(KafkaConsumer.java:624)
    at org.apache.kafka.clients.consumer.KafkaConsumer.<init>(KafkaConsumer.java:553)
    at org.apache.kafka.clients.consumer.KafkaConsumer.<init>(KafkaConsumer.java:536)
    at org.apache.spark.streaming.kafka010.Subscribe.onStart(ConsumerStrategy.scala:83)
    at org.apache.spark.streaming.kafka010.DirectKafkaInputDStream.consumer(DirectKafkaInputDStream.scala:75)
    at org.apache.spark.streaming.kafka010.DirectKafkaInputDStream.start(DirectKafkaInputDStream.scala:243)
    at org.apache.spark.streaming.DStreamGraph$$anonfun$start$5.apply(DStreamGraph.scala:49)
    at org.apache.spark.streaming.DStreamGraph$$anonfun$start$5.apply(DStreamGraph.scala:49)
    at scala.collection.parallel.mutable.ParArray$ParArrayIterator.foreach_quick(ParArray.scala:143)
    at scala.collection.parallel.mutable.ParArray$ParArrayIterator.foreach(ParArray.scala:136)
    at scala.collection.parallel.ParIterableLike$Foreach.leaf(ParIterableLike.scala:972)
    at scala.collection.parallel.Task$$anonfun$tryLeaf$1.apply$mcV$sp(Tasks.scala:49)
    at scala.collection.parallel.Task$$anonfun$tryLeaf$1.apply(Tasks.scala:48)
    at scala.collection.parallel.Task$$anonfun$tryLeaf$1.apply(Tasks.scala:48)
    at scala.collection.parallel.Task$class.tryLeaf(Tasks.scala:51)
    at scala.collection.parallel.ParIterableLike$Foreach.tryLeaf(ParIterableLike.scala:969)
    at scala.collection.parallel.AdaptiveWorkStealingTasks$WrappedTask$class.compute(Tasks.scala:152)
    at scala.collection.parallel.AdaptiveWorkStealingForkJoinTasks$WrappedTask.compute(Tasks.scala:443)
    at scala.concurrent.forkjoin.RecursiveAction.exec(RecursiveAction.java:160)
    at scala.concurre
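
The "Caused by" line identifies the likely root cause: the key.deserializer/value.deserializer entries resolved to Jackson's com.fasterxml.jackson.databind.deser.std.StringDeserializer, which does not implement org.apache.kafka.common.serialization.Deserializer. Both classes share the simple name StringDeserializer, so an IDE auto-import can silently pick the wrong one. A minimal sketch of the presumed fix, swapping only the import and leaving the rest of the configuration unchanged:

// Wrong (Jackson's JSON deserializer, not a Kafka Deserializer):
// import com.fasterxml.jackson.databind.deser.std.StringDeserializer;

// Right (implements org.apache.kafka.common.serialization.Deserializer):
import org.apache.kafka.common.serialization.StringDeserializer;

// With the correct import, these entries from the code above resolve properly:
kafkaParams.put("key.deserializer", StringDeserializer.class);
kafkaParams.put("value.deserializer", StringDeserializer.class);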