Java Spark and Kafka Direct Approach

I'm new to Apache Spark, and I'm trying to run the Spark Streaming + Kafka integration direct approach example (JavaDirectKafkaWordCount.java).

I've downloaded all the libraries, but when I try to run it I get this error:

Exception in thread "main" java.lang.NoSuchMethodError: scala.Predef$.ArrowAssoc(Ljava/lang/Object;)Ljava/lang/Object;
at kafka.api.RequestKeys$.<init>(RequestKeys.scala:48)
at kafka.api.RequestKeys$.<clinit>(RequestKeys.scala)
at kafka.api.TopicMetadataRequest.<init>(TopicMetadataRequest.scala:55)
at org.apache.spark.streaming.kafka.KafkaCluster.getPartitionMetadata(KafkaCluster.scala:122)
at org.apache.spark.streaming.kafka.KafkaCluster.getPartitions(KafkaCluster.scala:112)
at org.apache.spark.streaming.kafka.KafkaUtils$.getFromOffsets(KafkaUtils.scala:211)
at org.apache.spark.streaming.kafka.KafkaUtils$.createDirectStream(KafkaUtils.scala:484)
at org.apache.spark.streaming.kafka.KafkaUtils$.createDirectStream(KafkaUtils.scala:607)
at org.apache.spark.streaming.kafka.KafkaUtils.createDirectStream(KafkaUtils.scala)
at it.unimi.di.luca.SimpleApp.main(SimpleApp.java:53)
Any suggestions?

I can think of a few suggestions:

  • You may not have declared the dependencies in your project correctly. You need to make sure you have Kafka and Spark Streaming. If you use a build tool like Maven, you can find the lines you need to add to your build file here; a dependency sketch also follows this list.
  • You will also get an error if the topic you are trying to read from does not exist yet. You can create it from the command line with the following command:

    bin/kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 1 --partitions 1 --topic test
    
  • Make sure the Kafka server and the Kafka ZooKeeper are running; see the start commands after this list.
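For the first suggestion, here is a minimal sketch of the dependency declarations as an sbt build file (the Maven coordinates are the same); the exact versions are my assumption, chosen to match the Spark 1.6.2 / Scala 2.10 setup in the answer below. A NoSuchMethodError on scala.Predef is typically a sign of mixed Scala versions on the classpath, so keep every artifact on the same Scala binary version:

    // build.sbt -- a sketch; versions are assumptions for Spark 1.6.x on Scala 2.10
    scalaVersion := "2.10.6"

    libraryDependencies ++= Seq(
      "org.apache.spark" %% "spark-core"            % "1.6.2",
      "org.apache.spark" %% "spark-streaming"       % "1.6.2",
      "org.apache.spark" %% "spark-streaming-kafka" % "1.6.2"
    )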


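For the last suggestion, the scripts shipped with the Kafka distribution start ZooKeeper and the broker; run them from the Kafka installation directory (paths assume the default config files):

    bin/zookeeper-server-start.sh config/zookeeper.properties
    bin/kafka-server-start.sh config/server.properties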
If that doesn't help, then perhaps you should also post your main.

The code below uses Scala 2.10, Kafka 0.10, Spark 1.6.2 and Cassandra 3.5.

I'm using the receiver-less / direct Kafka approach. Hope it helps.

import org.apache.spark.SparkConf
import org.apache.spark.SparkContext
import org.apache.spark.rdd.RDD
import org.apache.spark.sql.SQLContext
import org.apache.spark.streaming.Seconds
import org.apache.spark.streaming.StreamingContext
import org.apache.spark.streaming.kafka.KafkaUtils

import com.datastax.spark.connector._

import kafka.serializer.StringDecoder

object StreamProcessor extends Serializable {
  def main(args: Array[String]): Unit = {
    val sparkConf = new SparkConf().setMaster("local[2]").setAppName("StreamProcessor")
      .set("spark.cassandra.connection.host", "127.0.0.1")

    val sc = new SparkContext(sparkConf)

    val ssc = new StreamingContext(sc, Seconds(2))

    val sqlContext = new SQLContext(sc)

    val kafkaParams = Map("metadata.broker.list" -> "localhost:9092")

    val topics = args.toSet

    val stream = KafkaUtils.createDirectStream[String, String, StringDecoder, StringDecoder](
      ssc, kafkaParams, topics)


    // Each Kafka record is a (key, message) pair: parse the JSON message,
    // keep (id, data1), and write every non-empty batch to Cassandra.
    stream
      .map {
        case (_, msg) =>
          val result = msgParseMaster(msg)
          (result.id, result.data1)
      }
      .foreachRDD { rdd =>
        if (!rdd.isEmpty)
          rdd.saveToCassandra("testKS", "testTable", SomeColumns("id", "data"))
      }

    ssc.start()
    ssc.awaitTermination()
  }

  import org.json4s._
  import org.json4s.native.JsonMethods._

  // JSON payload expected on the Kafka topic
  case class wordCount(id: Long, data1: String, data2: String) extends Serializable

  implicit val formats = DefaultFormats

  // Parses one JSON message from Kafka into a wordCount
  def msgParseMaster(msg: String): wordCount = parse(msg).extract[wordCount]

}
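To make the parsing step concrete: given the wordCount case class above, the messages on the topic are expected to be JSON like the following (a hypothetical example payload); msgParseMaster turns it into wordCount(1, "hello", "world"), and the map step keeps the pair (1, "hello") for Cassandra:

    {"id": 1, "data1": "hello", "data2": "world"}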

Which Scala version are you targeting?

@lu_Ferra, posting your sample code might help others answer your question better.