Apache Spark: creating a Kafka DirectStream with PySpark


My main goal is to connect to Kafka, create a DStream, save it as rows to a local variable, write it to MongoDB, and implement end-to-end streaming in PySpark.

But at the very first step, creating the DStream, I am facing an issue; the error is "java.util.ArrayList cannot be cast to java.lang.String". Can you help me identify a fix? Details below:

I am trying to connect to Kafka with PySpark as follows:

kafkaParams = {"metadata.broker.list": ['host1:port','host2:port','host3:port'],
               "security.protocol": "ssl",
               "ssl.key.password": "***",
               "ssl.keystore.location": "/path1/file.jks",
               "ssl.keystore.password": "***",
               "ssl.truststore.location": "/path1/file2.jks",
               "ssl.truststore.password": "***"}

directKafkaStream = KafkaUtils.createDirectStream(ssc,["lac.mx.digitalchannel.raw.s015-txn-qrdc"],kafkaParams)
but I am getting an error I don't know how to handle:

py4j.protocol.Py4JJavaError: An error occurred while calling o120.createDirectStreamWithoutMessageHandler.
: java.lang.ClassCastException: java.util.ArrayList cannot be cast to java.lang.String
        at org.apache.spark.streaming.kafka.KafkaCluster$SimpleConsumerConfig$.apply(KafkaCluster.scala:419)
        at org.apache.spark.streaming.kafka.KafkaCluster.config(KafkaCluster.scala:54)
        at org.apache.spark.streaming.kafka.KafkaCluster.getPartitionMetadata(KafkaCluster.scala:131)
        at org.apache.spark.streaming.kafka.KafkaCluster.getPartitions(KafkaCluster.scala:120)
        at org.apache.spark.streaming.kafka.KafkaUtils$.getFromOffsets(KafkaUtils.scala:212)
        at org.apache.spark.streaming.kafka.KafkaUtilsPythonHelper.createDirectStream(KafkaUtils.scala:721)
        at org.apache.spark.streaming.kafka.KafkaUtilsPythonHelper.createDirectStreamWithoutMessageHandler(KafkaUtils.scala:689)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:498)
        at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:244)
        at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:357)
        at py4j.Gateway.invoke(Gateway.java:282)
        at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:132)
        at py4j.commands.CallCommand.execute(CallCommand.java:79)
        at py4j.GatewayConnection.run(GatewayConnection.java:238)
        at java.lang.Thread.run(Thread.java:748)
Also, this is the CLI command I am using to launch PySpark:

pyspark2 --master local --jars /path/spark-streaming-kafka-0-10_2.11-2.4.0.cloudera2.jar,/path/kafka-clients-2.0.0-cdh6.1.0.jar,/path/spark-sql-kafka-0-10_2.11-2.4.0.cloudera2.jar  --files file.jks,file2.jks

metadata.broker.list must be a comma-separated string, not a list.
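
A minimal sketch of the corrected call, joining the same hosts into a single comma-separated string (the hosts, ports, passwords, and keystore paths are the placeholders from the question):

from pyspark.streaming.kafka import KafkaUtils

kafkaParams = {"metadata.broker.list": "host1:port,host2:port,host3:port",  # one string, not a list
               "security.protocol": "ssl",
               "ssl.key.password": "***",
               "ssl.keystore.location": "/path1/file.jks",
               "ssl.keystore.password": "***",
               "ssl.truststore.location": "/path1/file2.jks",
               "ssl.truststore.password": "***"}

directKafkaStream = KafkaUtils.createDirectStream(
    ssc, ["lac.mx.digitalchannel.raw.s015-txn-qrdc"], kafkaParams)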

As for the main goal (connect to Kafka, create a stream, save it as rows to a local variable, and write it to Mongo):

  • Mongo supports Structured Streaming writes (see the sketches after this list)

  • Mongo also has a Kafka Connect plugin, which needs only a configuration file and requires no Spark cluster or code
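
For the Structured Streaming route, here is a hedged sketch of the read side (assuming Spark 2.4 with the spark-sql-kafka-0-10 package on the classpath, as in the pyspark2 command above; options prefixed with kafka. are passed straight through to the Kafka consumer):

df = (spark.readStream
      .format("kafka")
      .option("kafka.bootstrap.servers", "host1:port,host2:port,host3:port")
      .option("subscribe", "lac.mx.digitalchannel.raw.s015-txn-qrdc")
      .option("kafka.security.protocol", "SSL")
      .option("kafka.ssl.keystore.location", "/path1/file.jks")
      .option("kafka.ssl.keystore.password", "***")
      .option("kafka.ssl.truststore.location", "/path1/file2.jks")
      .option("kafka.ssl.truststore.password", "***")
      .load())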


  • Note: as of Spark 2.4 the Spark Streaming (DStream) API is deprecated, so following the change you recommended I implemented the read through Structured Streaming. Now I am stuck on Mongo: performing the write raises the error u"'write' can not be called on streaming Dataset/DataFrame;". Do you have any suggestions?

Replace write with writeStream.
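
On that error: a streaming DataFrame must be written with writeStream, not write. A sketch for Spark 2.4 using foreachBatch, which reuses the batch Mongo writer on each micro-batch (the "mongo" format name and the URI are assumptions based on the MongoDB Spark connector; adjust them to your setup):

def write_to_mongo(batch_df, batch_id):
    # Batch-mode write of one micro-batch; "mongo" is the MongoDB Spark
    # connector's short format name (an assumption here).
    (batch_df.write
        .format("mongo")
        .mode("append")
        .option("uri", "mongodb://host:27017/db.collection")  # placeholder URI
        .save())

query = (df.selectExpr("CAST(key AS STRING)", "CAST(value AS STRING)")
           .writeStream
           .foreachBatch(write_to_mongo)
           .start())
query.awaitTermination()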