Spark Streaming, RabbitMQ and MQTT in Python using pika


To make things tricky, I would like to consume messages from a RabbitMQ queue. Now I know there is an MQTT plugin for Rabbit ().

However, I cannot seem to produce a working example in which Spark consumes a message that was produced with pika.

For example, I am using the simple wordcount.py program here () to see whether I can get a message producer working in the following way:

import sys
import pika
import json
import future
import pprofile

def sendJson(json):

  connection = pika.BlockingConnection(pika.ConnectionParameters(host='localhost'))
  channel = connection.channel()

  channel.queue_declare(queue='analytics', durable=True)
  channel.queue_bind(exchange='analytics_exchange',
                       queue='analytics')

  channel.basic_publish(exchange='analytics_exchange', routing_key='analytics',body=json)
  connection.close()

if __name__ == "__main__":
  with open(sys.argv[1],'r') as json_file:
    sendJson(json_file.read())
The Spark Streaming consumer looks like this:

import sys
import operator

from pyspark import SparkContext
from pyspark.streaming import StreamingContext
from pyspark.streaming.mqtt import MQTTUtils

sc = SparkContext(appName="SS")
sc.setLogLevel("ERROR")
ssc = StreamingContext(sc, 1)
ssc.checkpoint("checkpoint")
#ssc.setLogLevel("ERROR")


#RabbitMQ

"""EXCHANGE = 'analytics_exchange'
EXCHANGE_TYPE = 'direct'
QUEUE = 'analytics'
ROUTING_KEY = 'analytics'
RESPONSE_ROUTING_KEY = 'analytics-response'
"""


brokerUrl = "localhost:5672" # "tcp://iot.eclipse.org:1883"
topic = "analytics"

mqttStream = MQTTUtils.createStream(ssc, brokerUrl, topic)
#dummy functions - nothing interesting...
words = mqttStream.flatMap(lambda line: line.split(" "))
pairs = words.map(lambda word: (word, 1))
wordCounts = pairs.reduceByKey(lambda x, y: x + y)

wordCounts.pprint()
ssc.start()
ssc.awaitTermination()
However, unlike the simple wordcount example, I cannot get this to work, and I get the following error:

16/06/16 17:41:35 ERROR Executor: Exception in task 0.0 in stage 7.0 (TID 8)
java.lang.NullPointerException
    at org.eclipse.paho.client.mqttv3.MqttConnectOptions.validateURI(MqttConnectOptions.java:457)
    at org.eclipse.paho.client.mqttv3.MqttAsyncClient.<init>(MqttAsyncClient.java:273)
where the Spark Streaming part is:

brokerUrl = "tcp://127.0.0.1:5672"
topic = "#" #all messages

mqttStream = MQTTUtils.createStream(ssc, brokerUrl, topic)
records = mqttStream.flatMap(lambda line: json.loads(line))
count = records.map(lambda rec: len(rec))
total = count.reduce(lambda a, b: a + b)
total.pprint()

From the MqttAsyncClient Javadoc, the server URI must have one of the following schemes: tcp://, ssl://, or local://. You need to change your brokerUrl above to use one of these schemes.
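
For example, a minimal sketch of a corrected setup (a hedged example: port 1883 assumes RabbitMQ's MQTT plugin default, as described in the next answer; 5672 is the AMQP port and will not work here):

    from pyspark import SparkContext
    from pyspark.streaming import StreamingContext
    from pyspark.streaming.mqtt import MQTTUtils

    sc = SparkContext(appName="SS")
    ssc = StreamingContext(sc, 1)

    # A tcp:// scheme satisfies MqttAsyncClient's URI validation, which
    # the bare "localhost:5672" string above failed.
    brokerUrl = "tcp://localhost:1883"
    topic = "analytics"

    mqttStream = MQTTUtils.createStream(ssc, brokerUrl, topic)
    mqttStream.pprint()
    ssc.start()
    ssc.awaitTermination()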

For more information, here is a link to the source of MqttAsyncClient:


Looks like you are using the wrong port number. Assuming that:

  • you have a local instance of RabbitMQ running with default settings, you
    have enabled the MQTT plugin (rabbitmq-plugins enable rabbitmq_mqtt) and
    restarted the RabbitMQ server
  • you included spark-streaming-mqtt when executing spark-submit / pyspark
    (using jars / driver-class-path)

you can connect using TCP with tcp://localhost:1883. You also have to remember
that MQTT is using the amq.topic exchange.
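
As a quick sanity check (a hedged sketch, not part of the original answer; it assumes the pika 1.x API), a plain AMQP consumer bound to amq.topic through a throwaway queue will print whatever arrives via the MQTT plugin:

    import pika

    connection = pika.BlockingConnection(
        pika.ConnectionParameters(host='localhost'))
    channel = connection.channel()

    # Bind a temporary, exclusive queue to the built-in amq.topic exchange;
    # the '#' pattern matches every routing key (i.e. every MQTT topic).
    result = channel.queue_declare(queue='', exclusive=True)
    queue_name = result.method.queue
    channel.queue_bind(exchange='amq.topic', queue=queue_name,
                       routing_key='#')

    def on_message(ch, method, properties, body):
        print(method.routing_key, body)

    channel.basic_consume(queue=queue_name,
                          on_message_callback=on_message, auto_ack=True)
    channel.start_consuming()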

Quick start

  • Create a Dockerfile with the following content:

    FROM rabbitmq:3-management
    
    RUN rabbitmq-plugins enable rabbitmq_mqtt
    
    
  • Build the Docker image:

    docker build -t rabbit_mqtt .
    
  • Start the image and wait until the server is ready (15672 is the
    management UI, 5672 is AMQP, 1883 is MQTT):

    docker run -p 15672:15672 -p 5672:5672 -p 1883:1883 rabbit_mqtt 
    
  • Create producer.py with the following content:

    import pika
    import time


    connection = pika.BlockingConnection(pika.ConnectionParameters(
        host='localhost'))
    channel = connection.channel()
    # amq.topic is the built-in exchange the MQTT plugin reads from.
    # Recent pika releases call this argument exchange_type (it was
    # type in older versions).
    channel.exchange_declare(exchange='amq.topic',
                             exchange_type='topic', durable=True)

    for i in range(1000):
        channel.basic_publish(
            exchange='amq.topic',  # amq.topic as exchange
            routing_key='hello',   # Routing key used by producer
            body='Hello World {0}'.format(i)
        )
        time.sleep(3)

    connection.close()
    
    
  • Start the producer:

    python producer.py

    and visit the management console to see whether messages are received
    (or query the management HTTP API, as sketched below).
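
    A hedged alternative to clicking through the UI (it assumes the
    management plugin's HTTP API on its default port 15672 with the default
    guest/guest credentials, and it needs the requests package):

    import requests

    # %2F is the URL-encoded default vhost "/"; this endpoint lists all
    # bindings whose source exchange is amq.topic.
    resp = requests.get(
        'http://localhost:15672/api/exchanges/%2F/amq.topic/bindings/source',
        auth=('guest', 'guest'))
    resp.raise_for_status()

    for binding in resp.json():
        print(binding['destination'], binding['routing_key'])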

  • Create consumer.py with the following content:

    
    from pyspark import SparkContext
    from pyspark.streaming import StreamingContext
    from pyspark.streaming.mqtt import MQTTUtils
    
    sc = SparkContext()
    ssc = StreamingContext(sc, 10)
    
    mqttStream = MQTTUtils.createStream(
        ssc, 
        "tcp://localhost:1883",  # Note both port number and protocol
        "hello"                  # The same routing key as used by producer
    )
    mqttStream.count().pprint()
    ssc.start()
    ssc.awaitTermination()
    ssc.stop()
    
  • Download the dependencies (adjust the Scala version to the one used to
    build Spark, and the Spark version itself); for example, see the sketch
    below:
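
    A hedged example; the exact coordinates are an assumption, matching a
    Spark 1.6.1 installation built against Scala 2.10:

    mvn dependency:get \
        -Dartifact=org.apache.spark:spark-streaming-mqtt-assembly_2.10:1.6.1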

  • Make sure SPARK_HOME and PYTHONPATH point to the correct directories.

  • Submit consumer.py (adjusting versions as described before); for example:
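
    A hedged sketch; the jar path assumes the Spark 1.6.1 / Scala 2.10
    assembly fetched in the download step:

    spark-submit --jars /path/to/spark-streaming-mqtt-assembly_2.10-1.6.1.jar consumer.py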


If you follow all the steps, you should see the Hello World messages in the Spark log.

Comments:

  • I tried changing the producer to use tcp instead of http, but I am now
    running into the following connection problem: ERROR ReceiverSupervisorImpl:
    Stopped receiver with error: Connection lost (32109) -
    java.net.SocketException: Connection reset
  • Thanks, I'll take a look. Does this work with both direct and topic
    exchanges? The MQTT plugin uses a different exchange, but as far as I know
    the MQTT protocol is not much richer than that.
  • Is there a way to configure this without Docker, e.g. using a .config file?
    I have tried using the default settings, but that does not work at all.
    Without any settings my Spark listener can connect:
    =INFO REPORT==== 5-Jul-2016::11:52:08 === accepting MQTT connection
    (127.0.0.1:47868 -> 127.0.0.1:1883). But how do I map the produced messages
    to this port?
  • Docker is not essential here, but I don't really understand the question. A
    port is not a property of a message; it is a global property of the server.
    If the topic and the exchange match, there should be no problem.
  • What do you mean by "it doesn't work"? When you check the RabbitMQ UI, do
    you see the bindings from the producer? What about the consumer? Do the
    routing keys match?
  • I have tried using the standard message queue that we use. I am now trying
    topics without a queue, which seems to work out of the box, but without
    Docker.