Python: How do I combine two DStreams with pyspark?


I'm running into the following problem with pyspark:

I need to merge two DStreams into a single stream, but unfortunately nothing is ever printed.

Here is my code:

    from pyspark import SparkContext
    from pyspark.sql import SparkSession
    from pyspark.streaming import StreamingContext
    from pyspark.streaming.kafka import KafkaUtils
    import json
    import time

    sc = SparkContext(appName="Sparkstreaming")
    spark = SparkSession(sc)
    sc.setLogLevel("WARN")

    # 3-second batch interval
    ssc = StreamingContext(sc, 3)

    # Receiver-based streams: (zookeeper quorum, consumer group, {topic: partitions})
    kafka_stream = KafkaUtils.createStream(ssc, "localhost:2181", "consumer-be-borsa", {"Be_borsa": 1})
    kafka_stream1 = KafkaUtils.createStream(ssc, "localhost:2181", "consumer-be-teleborsa", {"Be_teleborsa": 1})

    dstream = kafka_stream.map(lambda k, v: json.loads(v['id']))
    dstream1 = kafka_stream1.map(lambda k, v: json.loads(v['id']))

    # Join
    streamJoined = dstream.join(dstream1)

    streamJoined.pprint()

    ssc.start()
    time.sleep(100)
    ssc.stop()
Here are the two JSON records I'm working with (one per topic):

{"id": "Be_20200330", "Date": "2020-03-30", "Name": "Be", "Hour": "15.49.24", "Last Price": "0,862", "Var%": "-1,93", "Last Value": "1.020"}

{"id": "Be_20200330", "Date": "2020-03-30", "Name": "Be", "Volatility": "2,352", "Value_at_risk": "5,471"}
The result I would like is:

{"id": "Be_20200330", "Date": "2020-03-30", "Name": "Be","Hour": "15.49.24", "Last Price": "0,862", "Var%": "-1,93", "Last Value": "1.020", "Volatility": "2,352", "Value_at_risk": "5,471"}
How can I do this with pyspark?

I also tried the solution from this link: but it doesn't work. Thanks
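
A minimal sketch of how the keying and joining could work, assuming each Kafka message arrives as a `(key, json_string)` tuple (the default for `KafkaUtils.createStream`). `map` takes a single-argument function, so the tuple has to be unpacked inside it, and `join` needs `(key, value)` pairs on both sides:

    def parse(kv):
        # kv[1] is the message body: a JSON string, not a dict
        record = json.loads(kv[1])
        # key each record by its "id" so the two streams can be joined
        return (record["id"], record)

    dstream = kafka_stream.map(parse)
    dstream1 = kafka_stream1.map(parse)

    # join() matches records sharing a key within a batch, yielding
    # (id, (record1, record2)); merging the two dicts gives the combined record
    streamJoined = dstream.join(dstream1) \
                          .map(lambda kv: {**kv[1][0], **kv[1][1]})
    streamJoined.pprint()

One caveat: a DStream `join` only pairs records that land in the same batch interval, so with a 3-second batch the two messages may never meet; applying `window()` to both streams before the join should widen the match window if that is the case.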