Redis Pub/Sub with Reliability
I've been considering using Redis Pub/Sub as a replacement for RabbitMQ. As I understand it, Redis pub/sub holds a persistent connection to each subscriber, and if the connection is terminated, all future messages are lost and dropped on the floor.

One possible solution is to use a list (with blocking wait) to store all messages, with pub/sub acting only as a notification mechanism. I think this gets me most of the way there, but I still have some concerns about the failure cases:

- When a subscriber dies and comes back online, how should it work through all the pending messages?
- When a malformed message enters the system, how are those exceptions handled? A dead-letter queue?
- Is there a standard practice for implementing a retry policy?
It's hard to make a sensible recommendation without knowing more about the application's requirements. Generally, if your messages need full ACID protection, you will probably also want to use Redis transactions. If your messages are only meaningful when they are timely, transactions may not be needed. It sounds like you can't tolerate dropped messages, so your approach of using a list is good. If you need a priority queue for your messages, you can store them in a sorted set (the Z commands), using their priority as the score value, together with a polling consumer.

What I do is use a sorted set with the timestamp as the score and the key of the data as the member value. I use the score of the last item to retrieve the next few items, and then fetch the data by key. Once the work is done, I wrap both the ZREM and the DEL in a MULTI/EXEC transaction. Basically what Edward said, but with the twist of storing the keys in the sorted set, since my messages can be quite large.
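The timestamp-scored sorted-set scheme above can be sketched roughly as follows. This is only an illustration: `r` is assumed to be a redis-py 3.x client, and the key names and helper functions are made up.

```python
# Sorted-set queue: score = timestamp, member = the key the (possibly
# large) payload is stored under, so the sorted set itself stays small.

def enqueue(r, queue, msg_key, payload, ts):
    """Store the payload under its own key and index it by timestamp."""
    r.set(msg_key, payload)
    r.zadd(queue, {msg_key: ts})

def fetch_batch(r, queue, last_score, count=10):
    """Fetch the next `count` (key, payload) pairs after `last_score`."""
    # "(" makes the minimum exclusive, so items at last_score are skipped
    keys = r.zrangebyscore(queue, "({}".format(last_score), "+inf",
                           start=0, num=count)
    return [(k, r.get(k)) for k in keys]

def ack(r, queue, msg_key):
    """When work is done, remove index entry and payload together."""
    pipe = r.pipeline()  # redis-py pipelines are MULTI/EXEC by default
    pipe.zrem(queue, msg_key)
    pipe.delete(msg_key)
    pipe.execute()
```

A consumer would poll `fetch_batch` with the score of the last item it processed, do the work, then `ack` each key.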
Hope this helps.

If you want a pub/sub system where subscribers don't lose messages when they die, consider using Redis Streams instead of Redis Pub/Sub. Redis Streams have their own architecture, with their own pros and cons compared to Redis Pub/Sub. With Redis Streams, a subscriber can issue the command: "the last message I received was X; now give me the next message, and if there is no new message, wait for one to arrive."
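With redis-py 3.x that interaction looks roughly like the sketch below. The stream name and field contents are made up for illustration, and `r` is assumed to be an already-connected client.

```python
# Redis Streams: every entry gets an ID; a consumer resumes by asking
# for everything after the last ID it has seen, so nothing is dropped
# while the consumer is offline.

def publish(r, stream, fields):
    """XADD appends an entry and returns its auto-generated ID."""
    return r.xadd(stream, fields)

def consume_after(r, stream, last_id, count=10):
    """XREAD: 'the last message I got was last_id, give me what's next'.

    Passing block=<milliseconds> to xread would instead make the call
    wait for a new entry to arrive when the stream is caught up.
    """
    reply = r.xread({stream: last_id}, count=count)
    if not reply:
        return []
    _, entries = reply[0]  # reply is a list of (stream, entries) pairs
    return entries
```

A consumer that crashes simply persists the last ID it processed and calls `consume_after` with it on restart.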
The antirez article linked above is a good introduction to Redis Streams with more information. Here is a class I wrote for this:
import logging

from redis import StrictRedis

# Defaults
CONNECT_TIMEOUT_SECS = 5.0    # How long to wait while establishing a connection
REQUEST_TIMEOUT_SECS = 120.0  # Request socket timeout


class RedisBaseClient(object):
    def __init__(self, config=None, connect_timeout_secs=CONNECT_TIMEOUT_SECS,
                 request_timeout_secs=REQUEST_TIMEOUT_SECS):
        """
        Load config
        :param config: dict, config
        :param connect_timeout_secs: float, re-connect timeout seconds
        :param request_timeout_secs: float, timeout seconds
        """
        self.read_conn = None
        self.write_conn = None
        self.config = config or {}
        self.CONNECT_TIMEOUT_SECS = connect_timeout_secs
        self.REQUEST_TIMEOUT_SECS = request_timeout_secs
        self.read_connection()

    def _connect(self, host, port):
        return StrictRedis(host=host,
                           port=port,
                           socket_keepalive=False,
                           retry_on_timeout=True,
                           socket_timeout=self.REQUEST_TIMEOUT_SECS,
                           socket_connect_timeout=self.CONNECT_TIMEOUT_SECS)

    def read_connection(self):
        """
        Returns a read connection to the redis cache.
        """
        if not self.read_conn:
            try:
                self.read_conn = self._connect(self.config['read_host'], self.config['read_port'])
            except KeyError:
                logging.error("RedisCache.read_connection invalid configuration")
                raise
            except Exception as e:
                logging.exception("RedisCache.read_connection unhandled exception {}".format(e))
                raise
        return self.read_conn

    def write_connection(self):
        """
        Returns a write connection to the redis cache.
        """
        if not self.write_conn:
            try:
                self.write_conn = self._connect(self.config['write_host'], self.config['write_port'])
            except KeyError:
                logging.error("RedisCache.write_connection invalid configuration")
                raise
            except Exception as e:
                logging.exception("RedisCache.write_connection unhandled exception {}".format(e))
                raise
        return self.write_conn


class RedisQueue(RedisBaseClient):
    def get_queue_msg_count(self, q_name):
        """
        Return the size of the queue (list).
        :param q_name: str, redis key (queue name)
        :return: int, message count
        """
        try:
            msg_count = self.read_connection().llen(q_name)
        except Exception as e:  # pragma: no cover
            msg_count = 0
            logging.warning("RedisQueue.get_queue_msg_count no data for queue {}. {}".format(q_name, e))
        return msg_count

    def is_empty(self, q_name):
        """
        Return True if the queue is empty, False otherwise.
        :param q_name: str, queue name
        :return: bool, is empty
        """
        return self.get_queue_msg_count(q_name) == 0

    def publish(self, q_name, data):
        """
        Publish a msg/item to the queue.
        :param q_name: str, queue name
        :param data: str, data (message)
        :return: bool, success
        """
        try:
            self.write_connection().rpush(q_name, data)
        except Exception as e:  # pragma: no cover
            logging.warning("RedisQueue.publish for queue {}, msg {}. {}".format(q_name, data, e))
            return False
        return True

    def publish_multiple(self, q_name, data_list):
        """
        Publish multiple msgs/items to the queue.
        :param q_name: str, queue name
        :param data_list: list of str, data (messages)
        :return: bool, success
        """
        try:
            self.write_connection().rpush(q_name, *data_list)
        except Exception as e:  # pragma: no cover
            logging.warning("RedisQueue.publish_multiple for queue {}. {}".format(q_name, e))
            return False
        return True

    def flush_queue(self, q_name):
        """
        Flush a queue to clear work for the consumer.
        :param q_name: str, queue name
        :return: bool, success
        """
        try:
            self.write_connection().delete(q_name)
        except Exception as e:  # pragma: no cover
            logging.exception("RedisQueue.flush_queue {} error {}".format(q_name, e))
            return False
        return True

    def flush_queues(self, q_names):
        """
        Flush all queues.
        :param q_names: list of str, queue names
        :return: bool, success
        """
        try:
            self.write_connection().delete(*q_names)
        except Exception as e:  # pragma: no cover
            logging.exception("RedisQueue.flush_queues {} error {}".format(q_names, e))
            return False
        return True

    def get_messages(self, q_name, prefetch_count=100):
        """
        Get messages from the queue.
        :param q_name: str, queue name
        :param prefetch_count: int, number of msgs to prefetch
            for the consumer (default 100)
        :return: list of str, messages
        """
        pipe = self.write_connection().pipeline()
        pipe.lrange(q_name, 0, prefetch_count - 1)  # Get msgs (w/o pop)
        pipe.ltrim(q_name, prefetch_count, -1)      # Trim (pop) list to new value
        messages, trim_success = pipe.execute()
        return messages

    def get_message(self, q_name, timeout=None):
        """
        Pop and return a msg/item from the queue.
        If the optional arg timeout is not None (the default), block
        if necessary until an item is available.
        :param q_name: str, queue name
        :param timeout: int, timeout wait seconds (blocking get)
        :return: str, message
        """
        if timeout is not None:
            msg = self.read_connection().blpop(q_name, timeout=timeout)
            if msg:
                msg = msg[1]  # blpop returns a (key, value) tuple
        else:
            msg = self.read_connection().lpop(q_name)
        return msg

    def get_message_safe(self, q_name, timeout=0, processing_prefix='processing'):
        """
        Retrieve a message, but also push it onto a processing
        queue for later acking.
        :param q_name: str, queue name
        :param timeout: int, timeout wait seconds (blocking get)
        :param processing_prefix: str, prefix of the processing queue name
        :return: str, message
        """
        # BRPOPLPUSH would do this atomically, but it pops from the right
        # while this queue consumes from the left, so the pop and the push
        # are done here in two (non-atomic) steps.
        msg = self.get_message(q_name=q_name, timeout=timeout)
        if msg:
            self.write_connection().lpush("{}:{}".format(q_name, processing_prefix), msg)
        return msg

    def ack_message_safe(self, q_name, message, processing_prefix='processing'):
        """
        Acknowledge that a message has been processed.
        :param q_name: str, queue name
        :param message: str, message value
        :param processing_prefix: str, prefix of the processing queue name
        :return: bool, success
        """
        # LREM is a write command, so it must go to the write connection
        self.write_connection().lrem("{}:{}".format(q_name, processing_prefix), -1, message)
        return True

    def requeue_message_safe(self, q_name, processing_prefix='processing'):
        """
        Move unprocessed messages from the processing queue
        back to the original queue for re-processing.
        :param q_name: str, queue name
        :param processing_prefix: str, prefix of the processing queue name
        :return: bool, success
        """
        processing_q = "{}:{}".format(q_name, processing_prefix)
        msgs = self.write_connection().lrange(processing_q, 0, -1)  # Get all msgs
        if msgs:
            msgs = msgs[::-1]  # Reverse so the oldest message is requeued first
            pipe = self.write_connection().pipeline()
            pipe.rpush(q_name, *msgs)
            # Drop the requeued msgs from the tail of the processing queue.
            # (LTRIM 0 -1 would be a no-op that keeps every element.)
            pipe.ltrim(processing_q, 0, -(len(msgs) + 1))
            pipe.execute()
        return True
Just initialize a RedisQueue and use its functions. I think that's what you're after. You can also check the Redis documentation's patterns for reliable queues.

Comment: I have the same problem... I want to send location updates to clients... once they disconnect, I don't know how to synchronize the data between client and server... Did you solve the problem? If so, how?
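For reference, the reliable-queue pattern from the Redis docs (the one `get_message_safe` approximates) can be sketched as below. This is a FIFO built from LPUSH plus BRPOPLPUSH; the queue names and the `r` client are assumptions for illustration.

```python
# Reliable queue: BRPOPLPUSH atomically moves a message into a
# per-consumer processing list while returning it, so a consumer
# crash cannot drop the message.

def work_one(r, queue, processing, timeout=0):
    """Block up to `timeout` seconds (0 = forever) for the next message."""
    return r.brpoplpush(queue, processing, timeout=timeout)

def ack(r, processing, msg):
    """Remove the message from the processing list once it is handled."""
    r.lrem(processing, 1, msg)

def requeue_stale(r, queue, processing):
    """After a consumer dies, move its unacked messages back to the queue."""
    while r.rpoplpush(processing, queue) is not None:
        pass
```

A watchdog (or the restarted consumer itself) calls `requeue_stale` so that messages pending at the time of a crash are processed again.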