Python: How to set/get pandas DataFrames to/from Redis using pyarrow


Using:

dd = {'ID': ['H576','H577','H578','H600', 'H700'],
      'CD': ['AAAAAAA', 'BBBBB', 'CCCCCC','DDDDDD', 'EEEEEEE']}
df = pd.DataFrame(dd)
Before 0.25, the following worked:

set:  redisConn.set("key", df.to_msgpack(compress='zlib'))
get:  pd.read_msgpack(redisConn.get("key"))
Now, there are deprecation warnings:

FutureWarning: to_msgpack is deprecated and will be removed in a future version.
It is recommended to use pyarrow for on-the-wire transmission of pandas objects.

The read_msgpack is deprecated and will be removed in a future version.
It is recommended to use pyarrow for on-the-wire transmission of pandas objects.

How does pyarrow work? And how do I get pyarrow objects into and back out of Redis?

Reference:

Here's a full example using pyarrow to serialize a pandas DataFrame for storage in Redis:


apt-get install python3 python3-pip redis-server
pip3 install pandas pyarrow redis
and then in Python:

import pandas as pd
import pyarrow as pa
import redis

df = pd.DataFrame({'A': [1, 2, 3]})
r = redis.Redis(host='localhost', port=6379, db=0)

context = pa.default_serialization_context()
r.set("key", context.serialize(df).to_buffer().to_pybytes())
context.deserialize(r.get("key"))
   A
0  1
1  2
2  3

I just submitted this pyarrow example to pandas for inclusion in the documentation.

Reference docs:
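The original to_msgpack call compressed with zlib. If you want the same behaviour here, one option (a sketch, reusing the df, r, and context objects from the example above) is to wrap the serialized bytes with the standard-library zlib module:

import zlib

# Compress the serialized bytes before storing, mirroring compress='zlib'
r.set("key", zlib.compress(context.serialize(df).to_buffer().to_pybytes()))

# Decompress on the way out, then hand the bytes back to pyarrow
df_out = context.deserialize(zlib.decompress(r.get("key")))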

Here is how I do it, since default_serialization_context has been deprecated and things are a bit simpler:

import pandas as pd
import pyarrow as pa
import redis

pool = redis.ConnectionPool(host='localhost', port=6379, db=0)
r = redis.Redis(connection_pool=pool)

def storeInRedis(alias, df):
    # pa.serialize was deprecated in pyarrow 2.0.0; this works on older versions
    df_compressed = pa.serialize(df).to_buffer().to_pybytes()
    res = r.set(alias, df_compressed)
    if res:
        print(f'{alias} cached')

def loadFromRedis(alias):
    data = r.get(alias)
    try:
        return pa.deserialize(data)
    except Exception:
        print("No data")


# locdf stands in for any pandas DataFrame you want to cache, e.g.:
locdf = pd.DataFrame({'city': ['NYC', 'SF'], 'lat': [40.7, 37.8]})

storeInRedis('locations', locdf)
loadFromRedis('locations')

This is really nice. I suppose a defensive programmer should check the size of the DataFrame before pushing it to Redis, since as far as I know the 512 MB limit still exists.

@BrifordWylie: I use the bz2 package to compress the data before pushing it to Redis.

I get an error at context.deserialize(r.get("key")): UnicodeDecodeError: 'utf-8' codec can't decode byte 0xff in position 16: invalid start byte.

@sumonc: What do you get from r.get("key") on its own?

Does the answer above do any compression? In to_pybytes()?

pyarrow seems to have deprecated this in 2.0.0.
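Picking up on the bz2 and deprecation comments above: pa.serialize, pa.deserialize, and default_serialization_context were deprecated in pyarrow 2.0.0, so on newer versions you can serialize through Arrow's IPC stream format instead. A minimal sketch (the helper names df_to_bytes and bytes_to_df are my own, and the bz2 wrapping is the compression trick mentioned in the comments):

import bz2
import pandas as pd
import pyarrow as pa
import redis

r = redis.Redis(host='localhost', port=6379, db=0)

def df_to_bytes(df):
    # Serialize a DataFrame via the Arrow IPC stream format
    table = pa.Table.from_pandas(df)
    sink = pa.BufferOutputStream()
    with pa.ipc.new_stream(sink, table.schema) as writer:
        writer.write_table(table)
    return sink.getvalue().to_pybytes()

def bytes_to_df(data):
    # Read the IPC stream back into a DataFrame
    return pa.ipc.open_stream(pa.BufferReader(data)).read_all().to_pandas()

df = pd.DataFrame({'A': [1, 2, 3]})
r.set("key", bz2.compress(df_to_bytes(df)))      # compress before SET
df2 = bytes_to_df(bz2.decompress(r.get("key")))  # decompress after GET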
If you want to compress the data in Redis, you can also use pandas' built-in support for parquet and gzip:

import io
import pandas as pd
import redis

REDIS_HOST = 'localhost'
REDIS_PORT = 6379

def openRedisCon():
    pool = redis.ConnectionPool(host=REDIS_HOST, port=REDIS_PORT, db=0)
    r = redis.Redis(connection_pool=pool)
    return r

def storeDFInRedis(alias, df):
    """Store the dataframe object in Redis
    """

    buffer = io.BytesIO()
    df.to_parquet(buffer, compression='gzip')
    buffer.seek(0)  # rewind to the start after writing so the full payload is read
    r = openRedisCon()
    res = r.set(alias, buffer.read())

def loadDFFromRedis(alias, useStale: bool = False):
    """Load the named key from Redis into a DataFrame and return the DF object
    """

    r = openRedisCon()

    try:
        buffer = io.BytesIO(r.get(alias))
        buffer.seek(0)
        df = pd.read_parquet(buffer)
        return df
    except Exception:
        # Key missing or payload unreadable
        return None
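
A quick usage sketch (the DataFrame below is just sample data; the round trip should return an equal frame):

df = pd.DataFrame({'ID': ['H576', 'H577', 'H578'],
                   'CD': ['AAAAAAA', 'BBBBB', 'CCCCCC']})
storeDFInRedis('example', df)
df2 = loadDFFromRedis('example')
print(df2.equals(df))  # True if the round trip preserved the data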