Sinking a Kafka stream to MongoDB with PySpark Structured Streaming

My Spark session:

spark = SparkSession\
    .builder\
    .appName("Demo")\
    .master("local[3]")\
    .config("spark.streaming.stopGracefullyonShutdown", "true")\
    .config('spark.jars.packages','org.mongodb.spark:mongo-spark-connector_2.12:3.0.1')\
    .getOrCreate()
Mongo URI:

input_uri_weld = 'mongodb://127.0.0.1:27017/db.coll1'
output_uri_weld = 'mongodb://127.0.0.1:27017/db.coll1'
Function used to write each streaming micro-batch to Mongo:

def save_to_mongodb_collection(current_df, epoc_id, mongodb_collection_name):
    current_df.write\
        .format("com.mongodb.spark.sql.DefaultSource")\
        .mode("append")\
        .option("spark.mongodb.output.uri", output_uri_weld)\
        .save()
Kafka stream:

kafka_df = spark.readStream\
    .format("kafka")\
    .option("kafka.bootstrap.servers", kafka_broker)\
    .option("subscribe", kafka_topic)\
    .option("startingOffsets", "earliest")\
    .load()
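
df_parsed is used below but never defined in the question. The following is a minimal sketch of how the Kafka value could be parsed, assuming a JSON payload; the field names are borrowed from the document written in the answer further down, and the types are guesses:

from pyspark.sql.functions import col, from_json
from pyspark.sql.types import StructType, StructField, StringType, DoubleType

# Hypothetical schema -- adjust field names and types to the actual Kafka payload.
value_schema = StructType([
    StructField("machine_id", StringType()),
    StructField("proc_type", StringType()),
    StructField("sensor1_id", StringType()),
    StructField("sensor2_id", StringType()),
    StructField("time", StringType()),
    StructField("sensor1_val", DoubleType()),
    StructField("sensor2_val", DoubleType()),
])

# Kafka delivers key/value as binary, so cast the value to string and parse the JSON.
df_parsed = kafka_df\
    .selectExpr("CAST(value AS STRING) AS json_str")\
    .select(from_json(col("json_str"), value_schema).alias("data"))\
    .select("data.*")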
Writing to Mongo:

mongo_writer = df_parsed.write\
    .format('com.mongodb.spark.sql.DefaultSource')\
    .mode('append')\
    .option("spark.mongodb.output.uri", output_uri_weld)\
    .save()
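
Note that calling .write on a streaming DataFrame is not allowed (Spark raises an AnalysisException asking for writeStream.start()). With Structured Streaming, the batch function above would normally be hooked up through foreachBatch. A minimal sketch reusing the function defined earlier; the collection name and checkpoint path are only illustrative:

query = df_parsed.writeStream\
    .foreachBatch(lambda batch_df, epoch_id: save_to_mongodb_collection(batch_df, epoch_id, "coll1"))\
    .option("checkpointLocation", "/tmp/mongo_checkpoint")\
    .start()
query.awaitTermination()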
&我的spark.conf文件:

spark.jars.packages                org.apache.spark:spark-sql-kafka-0-10_2.12:3.0.1,org.apache.spark:spark-avro_2.12:3.0.1,com.datastax.spark:spark-cassandra-connector_2.12:3.0.0
Error:

java.lang.ClassNotFoundException: Failed to find data source: com.mongodb.spark.sql.DefaultSource. Please find packages at http://spark.apache.org/third-party-projects.html  
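
This ClassNotFoundException means the MongoDB Spark connector classes never reached the classpath. One likely cause is that the package was not resolved at launch: spark.jars.packages generally has to be known before the JVM starts, so setting it only in the builder does not always take effect. Adding the connector to the spark.jars.packages line in spark.conf (or to spark-submit --packages) should make the data source resolvable; the sketch below uses the same version as the code above and may need adjusting:

spark.jars.packages                org.apache.spark:spark-sql-kafka-0-10_2.12:3.0.1,org.apache.spark:spark-avro_2.12:3.0.1,com.datastax.spark:spark-cassandra-connector_2.12:3.0.0,org.mongodb.spark:mongo-spark-connector_2.12:3.0.1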
I found a workaround. Since I could not find a Mongo driver that works with Structured Streaming, I looked into another solution. I now use a direct connection to MongoDB and foreach(...) instead of foreachBatch(...). My code in the testSpark.py file looks like this:

....
import pymongo
from pymongo import MongoClient

local_url = "mongodb://localhost:27017"


def write_machine_df_mongo(target_df):
    # Called by foreach() once per input row; target_df is a single Row,
    # and a new MongoClient is opened on every call.
    cluster = MongoClient(local_url)
    db = cluster["test_db"]
    collection = db.test1

    post = {
            "machine_id": target_df.machine_id,
            "proc_type": target_df.proc_type,
            "sensor1_id": target_df.sensor1_id,
            "sensor2_id": target_df.sensor2_id,
            "time": target_df.time,
            "sensor1_val": target_df.sensor1_val,
            "sensor2_val": target_df.sensor2_val,
            }

    collection.insert_one(post)

machine_df.writeStream\
    .outputMode("append")\
    .foreach(write_machine_df_mongo)\
    .start()
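
Opening a new MongoClient for every row is expensive. foreach() also accepts an object with open/process/close methods, which allows one connection per partition instead of one per row. A sketch under the same assumptions (local MongoDB, test_db database, test1 collection):

from pymongo import MongoClient

class MongoRowWriter:
    def open(self, partition_id, epoch_id):
        # One connection per partition instead of one per row.
        self.client = MongoClient("mongodb://localhost:27017")
        self.collection = self.client["test_db"].test1
        return True

    def process(self, row):
        # row is a pyspark.sql.Row; asDict() preserves the field names.
        self.collection.insert_one(row.asDict())

    def close(self, error):
        self.client.close()

machine_df.writeStream\
    .outputMode("append")\
    .foreach(MongoRowWriter())\
    .start()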
