Apache Spark integration with Kafka

Tags: apache-spark, apache-kafka, spark-structured-streaming, spark-kafka-integration

I am taking a course on Udemy about Kafka and Spark, and I am currently working on the Apache Spark integration with Kafka.

Below is the Apache Spark code:

SparkSession session = SparkSession.builder().appName("KafkaConsumer").master("local[*]").getOrCreate();
session.sparkContext().setLogLevel("ERROR");
Dataset<Row> df = session
    .readStream()
    .format("kafka")
    .option("kafka.bootstrap.servers", "localhost:9092")
    .option("subscribe", "second_topic").load();
df.show();
Below is the content of the pom.xml file:

<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
  <modelVersion>4.0.0</modelVersion>
  <groupId>com.example.kafka.spark</groupId>
  <artifactId>Kafka-Spark-Integration-Code</artifactId>
  <version>0.0.1-SNAPSHOT</version>
  <dependencies>
    <!-- https://mvnrepository.com/artifact/org.apache.spark/spark-core -->
    <dependency>
        <groupId>org.apache.spark</groupId>
        <artifactId>spark-core_2.12</artifactId>
        <version>3.0.0</version>
    </dependency> 
    <!-- https://mvnrepository.com/artifact/org.apache.spark/spark-sql -->
    <dependency>
        <groupId>org.apache.spark</groupId>
        <artifactId>spark-sql_2.12</artifactId>
        <version>3.0.0</version>
    </dependency>
    <!-- https://mvnrepository.com/artifact/org.apache.spark/spark-streaming -->
<!--    <dependency>
        <groupId>org.apache.spark</groupId>
        <artifactId>spark-streaming_2.12</artifactId>
        <version>3.0.0</version>
    </dependency> -->
    
    <!-- https://mvnrepository.com/artifact/org.apache.spark/spark-sql-kafka-0-10 -->
    <dependency>
        <groupId>org.apache.spark</groupId>
        <artifactId>spark-sql-kafka-0-10_2.12</artifactId>
        <version>3.0.0</version>
   </dependency>
    
 </dependencies>
</project>

However, when I run the code I get the error below, which I am unable to resolve. I am using OpenJDK 8 and Spark 3 on MX Linux. Thanks.

Exception in thread "main" java.lang.ClassFormatError: Invalid code attribute name index 24977 in class file org/apache/spark/sql/execution/columnar/InMemoryRelation
    at java.lang.ClassLoader.defineClass1(Native Method)
    at java.lang.ClassLoader.defineClass(ClassLoader.java:756)
    at java.security.SecureClassLoader.defineClass(SecureClassLoader.java:142)
    at java.net.URLClassLoader.defineClass(URLClassLoader.java:468)
    at java.net.URLClassLoader.access$100(URLClassLoader.java:74)
    at java.net.URLClassLoader$1.run(URLClassLoader.java:369)
    at java.net.URLClassLoader$1.run(URLClassLoader.java:363)
    at java.security.AccessController.doPrivileged(Native Method)
    at java.net.URLClassLoader.findClass(URLClassLoader.java:362)
    at java.lang.ClassLoader.loadClass(ClassLoader.java:418)
    at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:352)
    at java.lang.ClassLoader.loadClass(ClassLoader.java:351)
    at org.apache.spark.sql.internal.SharedState.<init>(SharedState.scala:83)
    at org.apache.spark.sql.SparkSession.$anonfun$sharedState$1(SparkSession.scala:132)
    at scala.Option.getOrElse(Option.scala:189)
    at org.apache.spark.sql.SparkSession.sharedState$lzycompute(SparkSession.scala:132)
    at org.apache.spark.sql.SparkSession.sharedState(SparkSession.scala:131)
    at org.apache.spark.sql.internal.BaseSessionStateBuilder.build(BaseSessionStateBuilder.scala:323)
    at org.apache.spark.sql.SparkSession$.org$apache$spark$sql$SparkSession$$instantiateSessionState(SparkSession.scala:1107)
    at org.apache.spark.sql.SparkSession.$anonfun$sessionState$2(SparkSession.scala:157)
    at scala.Option.getOrElse(Option.scala:189)
    at org.apache.spark.sql.SparkSession.sessionState$lzycompute(SparkSession.scala:155)
    at org.apache.spark.sql.SparkSession.sessionState(SparkSession.scala:152)
    at org.apache.spark.sql.streaming.DataStreamReader.<init>(DataStreamReader.scala:519)
    at org.apache.spark.sql.SparkSession.readStream(SparkSession.scala:657)
    at example.code.spark.kafka.KafkaSparkConsumer.main(KafkaSparkConsumer.java:19)
You can follow the example below:

SparkSession session = SparkSession.builder()
    .appName("KafkaConsumer")
    .master("local[*]")
    .getOrCreate();
Dataset<Row> df = session
    .readStream()
    .format("kafka")
    .option("kafka.bootstrap.servers", "localhost:9092")
    .option("subscribe", "second_topic")
    .load()
    .selectExpr("CAST(key AS STRING)", "CAST(value AS STRING)");
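The CAST expressions matter because the Kafka source exposes the key and value columns as binary; casting them to STRING makes the message payload human-readable.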
Instead of using df.show(), here is how you print the data to the console:

StreamingQuery query = df
    .writeStream()
    .format("console")
    .outputMode("append")
    .option("checkpointLocation", "path/to/checkpoint/dir")
    .start();
query.awaitTermination();
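For reference, here is a minimal, self-contained sketch that puts the two pieces together. The package and class name are taken from the stack trace in the question; the topic name and checkpoint path are placeholders and should be adapted to your setup:

package example.code.spark.kafka;

import java.util.concurrent.TimeoutException;

import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.SparkSession;
import org.apache.spark.sql.streaming.StreamingQuery;
import org.apache.spark.sql.streaming.StreamingQueryException;

public class KafkaSparkConsumer {
    public static void main(String[] args) throws StreamingQueryException, TimeoutException {
        // Build a local session; "local[*]" uses all available cores.
        SparkSession session = SparkSession.builder()
            .appName("KafkaConsumer")
            .master("local[*]")
            .getOrCreate();
        session.sparkContext().setLogLevel("ERROR");

        // Read from Kafka; key and value arrive as binary, so cast them to strings.
        Dataset<Row> df = session
            .readStream()
            .format("kafka")
            .option("kafka.bootstrap.servers", "localhost:9092")
            .option("subscribe", "second_topic")
            .load()
            .selectExpr("CAST(key AS STRING)", "CAST(value AS STRING)");

        // Write each micro-batch to the console and block until the query stops;
        // without awaitTermination() the main thread exits immediately.
        StreamingQuery query = df
            .writeStream()
            .format("console")
            .outputMode("append")
            .option("checkpointLocation", "path/to/checkpoint/dir")
            .start();
        query.awaitTermination();
    }
}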

Thanks. I tried the code above, but I got the same error. I tried the same example on a Windows machine and it failed with a different error (the Kafka dependency was not found), but it got past the readStream call in the code above. I wonder whether it is related to the Java version or the environment on my Linux machine? Yes, I am able to run very basic Spark code. I use Eclipse and run the code from within Eclipse. This is my first attempt at a streaming application, so I don't know what is going on, even though I have the same pom.xml in both environments (Windows and Linux), so I believe the dependencies are the same. So I uninstalled OpenJDK 8, installed Java from Oracle, and it started working. However, now my streaming job just exits; I thought it would keep running.

Are you sure you are calling awaitTermination()?

Thanks, I had missed that. Thank you so much, that solved the problem.