Apache Spark: why do I get "kafka.cluster.BrokerEndPoint cannot be cast to kafka.cluster.Broker"?

Tags: apache-spark, apache-kafka, kafka-consumer-api

When I run the code below, I get the error from the title. I checked another answer, but it did not work for me.

Does anyone know how to solve this? I have checked the dependencies.

import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaSparkContext;
import org.apache.spark.streaming.Duration;
import org.apache.spark.streaming.api.java.JavaPairInputDStream;
import org.apache.spark.streaming.api.java.JavaStreamingContext;
import org.apache.spark.streaming.kafka.KafkaUtils;

import java.util.*;


/**
 * Created by jonas on 10/10/16.
 */
public class SparkStream {

    public static void main(String[] args){

        SparkConf conf = new SparkConf()
                .setAppName("kafka-sandbox")
                .setMaster("local[*]");
        JavaSparkContext sc = new JavaSparkContext(conf);
        JavaStreamingContext ssc = new JavaStreamingContext(sc, new Duration(2000));

        Map<String, String> kafkaParams = new HashMap<>();
        kafkaParams.put("metadata.broker.list", "localhost:9092");
        Set<String> topics = Collections.singleton("Test");

        // Direct (receiver-less) Kafka stream: key/value types, decoders, Kafka params, topics.
        JavaPairInputDStream<String, String> directKafkaStream = KafkaUtils.createDirectStream(
                ssc, String.class, String.class,
                kafka.serializer.StringDecoder.class, kafka.serializer.StringDecoder.class,
                kafkaParams, topics);

        directKafkaStream.foreachRDD(rdd -> {
            System.out.println("--- New RDD with " + rdd.partitions().size()
                    + " partitions and " + rdd.count() + " records");
            rdd.foreach(record -> System.out.println(record._2));
        });

        // TODO: processing pipeline

        ssc.start();
        // Block the main thread so the streaming job keeps running.
        ssc.awaitTermination();
    }


}

This seems to be a library issue that I am debugging. I am running a Kafka server at version 0.10.0.0, built for Scala 2.11. My spark-core and spark-streaming versions are 2.11:2.0.1, the spark-streaming-kafka library is 0-8_2.11:2.0.1, and kafka-clients/kafka-streams are 0.10.0.1.
When I use the kafka 2.11:0.10.0.1 library I get this error, but with kafka 2.10:0.10.0.1 it works fine.
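
For comparison, a consistent Maven set for the versions described above (Spark 2.0.1 built for Scala 2.11) might look like the sketch below. The artifact ids, and the idea of letting spark-streaming-kafka-0-8 pull in its own Kafka 0.8 client instead of declaring a 0.10.x kafka jar directly, are my assumptions based on the versions listed, not a verified build:

<dependency>
  <groupId>org.apache.spark</groupId>
  <artifactId>spark-core_2.11</artifactId>
  <version>2.0.1</version>
</dependency>

<dependency>
  <groupId>org.apache.spark</groupId>
  <artifactId>spark-streaming_2.11</artifactId>
  <version>2.0.1</version>
</dependency>

<dependency>
  <groupId>org.apache.spark</groupId>
  <artifactId>spark-streaming-kafka-0-8_2.11</artifactId>
  <version>2.0.1</version>
</dependency>

One plausible cause of the cast error is that the 0-8 connector (compiled against Kafka 0.8.2, where kafka.cluster.Broker still has its old shape) ends up on the classpath next to a 0.10.x kafka jar with the same Scala suffix, so Broker and BrokerEndPoint come from different Kafka versions.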

Make sure the dependencies are compatible with each other; in particular, all Spark artifacts should share the same Scala suffix and the same Spark version. Here is a set that works together:

<dependency>
  <groupId>org.apache.spark</groupId>
  <artifactId>spark-streaming_2.10</artifactId>
  <version>1.6.2</version>
</dependency>

<dependency>
  <groupId>org.apache.spark</groupId>
  <artifactId>spark-core_2.10</artifactId>
  <version>1.6.2</version>
</dependency>

<dependency>
  <groupId>org.apache.spark</groupId>
  <artifactId>spark-streaming-kafka_2.10</artifactId>
  <version>1.6.2</version>
</dependency>

Did you try changing metadata.broker.list to bootstrap.servers? That worked for me.

Yes, I tried that.
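
For reference, the change the commenter describes amounts to swapping the key in the kafkaParams map from the question; a minimal sketch, reusing the localhost:9092 example broker from above:

        Map<String, String> kafkaParams = new HashMap<>();
        // New-style key instead of metadata.broker.list, as the commenter suggests.
        kafkaParams.put("bootstrap.servers", "localhost:9092");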