Apache Spark: failed to find data source "kafka" (Spark Structured Streaming with Kafka)

Tags: apache-spark, spark-streaming-kafka, akka-kafka

I am trying to consume a topic from a Kafka producer, but I am getting an error saying that kafka is not a valid data source.

I have imported all the required packages: Kafka, Spark SQL, Spark Streaming, and so on.

build.gradle file:
dependencies {
    compile group: 'org.apache.kafka', name: 'kafka-clients', version: '2.2.0'
    compile group: 'org.apache.kafka', name: 'kafka_2.12', version: '2.2.0'
    compile group: 'org.scala-lang', name: 'scala-library', version: '2.12.8'
    compile group: 'org.scala-lang', name: 'scala-reflect', version: '2.12.8'
    compile group: 'org.scala-lang', name: 'scala-compiler', version: '2.12.8'
    compile group: 'org.scala-lang.modules', name: 'scala-parser-combinators_2.12', version: '1.1.2'
    compile group: 'org.scala-lang.modules', name: 'scala-swing_2.12', version: '2.1.1'
    runtime group: 'org.apache.spark', name: 'spark-mllib_2.12', version: '2.4.3'
    compile group: 'org.apache.spark', name: 'spark-core_2.12', version: '2.4.3'
    compile 'org.apache.spark:spark-streaming-flume-assembly_2.11:2.1.0'
    compile group: 'org.apache.spark', name: 'spark-sql_2.12', version: '2.4.3'
    compile group: 'org.apache.spark', name: 'spark-graphx_2.12', version: '2.4.3'
    compile group: 'org.apache.spark', name: 'spark-launcher_2.12', version: '2.4.3'
    testCompile group: 'org.apache.spark', name: 'spark-catalyst_2.12', version: '2.4.3'
    provided group: 'org.apache.spark', name: 'spark-streaming_2.12', version: '2.4.3'
    provided group: 'org.apache.spark', name: 'spark-hive_2.12', version: '2.4.3'
    compile group: 'org.apache.spark', name: 'spark-avro_2.12', version: '2.4.3'
    compile group: 'com.databricks', name: 'spark-avro_2.11', version: '4.0.0'
    compile group: 'io.confluent', name: 'kafka-avro-serializer', version: '3.1.1'
    compile group: 'mysql', name: 'mysql-connector-java', version: '8.0.16'
    compile group: 'org.apache.spark', name: 'spark-streaming-kafka_2.11', version: '1.6.3'
    compile group: 'org.apache.spark', name: 'spark-streaming-kafka-0-10_2.12', version: '2.4.3'
    provided group: 'org.apache.spark', name: 'spark-sql-kafka-0-10_2.12', version: '2.4.3'

}
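The error reported below usually traces back to this file: spark-sql-kafka-0-10, the artifact that registers the kafka data source for Structured Streaming, is declared provided, so it is visible at compile time but never reaches the runtime classpath. The build also mixes _2.11 and _2.12 artifacts and pulls in the obsolete spark-streaming-kafka_2.11 1.6.3. A minimal sketch of the Kafka-related entries, assuming Spark 2.4.3 on Scala 2.12 and leaving the other dependencies as-is:

dependencies {
    compile group: 'org.apache.spark', name: 'spark-core_2.12', version: '2.4.3'
    compile group: 'org.apache.spark', name: 'spark-sql_2.12', version: '2.4.3'
    // compile (not provided), so the kafka source is on the runtime classpath
    compile group: 'org.apache.spark', name: 'spark-sql-kafka-0-10_2.12', version: '2.4.3'
    compile group: 'org.apache.spark', name: 'spark-streaming-kafka-0-10_2.12', version: '2.4.3'
}

When launching with spark-submit, the same artifact can instead be supplied at submit time with --packages org.apache.spark:spark-sql-kafka-0-10_2.12:2.4.3.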
Code:

import com.util.SparkOpener

import org.apache.spark.streaming._
import org.apache.spark.streaming.kafka.KafkaUtils
import org.apache.spark.streaming.kafka010.{ConsumerStrategies, LocationStrategies}

object SparkConsumer extends SparkOpener {

  val spark = SparkSessionLoc("SparkKafkaStream")
  spark.sparkContext.setLogLevel("ERROR")

  def main(args: Array[String]): Unit = {
    val Kafka_F1Topic = spark.readStream
      .format("kafka")
      .option("kafka.bootstrap.servers", "localhost:9092,localhost:9093,localhost:9094")
      .option("subscribe", "F1CarDetails")
      .option("key.serializer", "org.apache.kafka.common.serialization.StringSerializer")
      .option("value.serializer", "org.apache.kafka.common.serialization.StringSerializer")
      .load()

    Kafka_F1Topic.show()
  }
}

Result:

Exception in thread "main" org.apache.spark.sql.AnalysisException: Failed to find data source: kafka. Please deploy the application as per the deployment section of "Structured Streaming + Kafka Integration Guide".

I am using the same format as the Structured Streaming guide as well.

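For reference, a minimal sketch of a consumer that works once spark-sql-kafka-0-10_2.12 is on the runtime classpath. Two further issues in the code above would surface after the classpath is fixed: show() cannot be called on a streaming DataFrame (a writeStream query with the console sink is the streaming equivalent), and key.serializer/value.serializer are producer settings that the kafka source does not use. SparkOpener and SparkSessionLoc are the asker's own helpers, so this hypothetical SparkConsumerFixed builds a plain SparkSession instead:

import org.apache.spark.sql.SparkSession

object SparkConsumerFixed {

  def main(args: Array[String]): Unit = {
    // Plain SparkSession in place of the SparkOpener helper.
    val spark = SparkSession.builder()
      .appName("SparkKafkaStream")
      .master("local[*]")
      .getOrCreate()
    spark.sparkContext.setLogLevel("ERROR")

    // Requires spark-sql-kafka-0-10_2.12 on the runtime classpath.
    val f1Topic = spark.readStream
      .format("kafka")
      .option("kafka.bootstrap.servers", "localhost:9092,localhost:9093,localhost:9094")
      .option("subscribe", "F1CarDetails")
      .load()

    // Kafka delivers key/value as binary; cast to strings before printing.
    val query = f1Topic
      .selectExpr("CAST(key AS STRING)", "CAST(value AS STRING)")
      .writeStream
      .format("console")
      .start()

    query.awaitTermination()
  }
}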