Creating binned histograms in Spark (Python)


Suppose I have a DataFrame (df) (Pandas) or an RDD (Spark) containing the following two columns:

timestamp, data
12345.0    10 
12346.0    12
In Pandas I can easily create binned histograms with different bin lengths. For example, to build a histogram over 1-hour bins I would do the following:

df = df[['timestamp', 'data']].set_index('timestamp')
df.resample('1H').sum().dropna()   # older pandas: df.resample('1H', how=sum).dropna()
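
For reference, a runnable version under current pandas (where the how= argument has been removed and resample() needs a DatetimeIndex) might look like the sketch below; treating the float timestamps as epoch seconds is an assumption:

import pandas as pd

# hypothetical sample matching the columns above
df = pd.DataFrame({"timestamp": [12345.0, 12346.0], "data": [10, 12]})

# resample() needs a DatetimeIndex, so interpret the floats as epoch seconds first
df["timestamp"] = pd.to_datetime(df["timestamp"], unit="s")
hourly = df.set_index("timestamp").resample("1H").sum().dropna()
print(hourly)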
Migrating from a Spark RDD to a Pandas df is very expensive for me (given the dataset), so I would rather stay within the Spark domain as much as possible.


Is there a way to achieve the equivalent in a Spark RDD or DataFrame?

Spark >= 2.0

You can use the window function:

from pyspark.sql.functions import window

(df
    .groupBy(window("timestamp", "3 minute").alias("ts"))
    .sum()
    .orderBy("ts")
    .show())
## +--------------------+---------+
## |                  ts|sum(data)|
## +--------------------+---------+
## |{2000-01-01 00:00...|        3|
## |{2000-01-01 00:03...|       12|
## |{2000-01-01 00:06...|       21|
## +--------------------+---------+

(df
    .groupBy(window("timestamp", "1 hour").alias("ts"))
    .sum()
    .orderBy("ts")
    .show())

## +--------------------+---------+
## |                  ts|sum(data)|
## +--------------------+---------+
## |{2000-01-01 00:00...|       36|
## +--------------------+---------+
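
Note that df in the snippets above is the sample frame built in the Spark < 2.0 section below. A self-contained way to construct it with a proper timestamp column (a sketch, assuming an existing SparkSession named spark) could be:

from pyspark.sql.functions import col, window

# hypothetical construction of the sample frame used above: one row per minute
df = spark.createDataFrame(
    [("2000-01-01 00:0%d:00" % i, i) for i in range(9)],
    ["timestamp", "data"],
)

# window() expects a timestamp column, so cast the strings explicitly
df = df.withColumn("timestamp", col("timestamp").cast("timestamp"))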
Spark < 2.0

In this case, all you need is a Unix timestamp and basic arithmetic:

from pyspark.sql.functions import floor, timestamp_seconds, unix_timestamp

def resample_to_minute(c, interval=1):
    t = 60 * interval
    # For Spark < 3.1 
    # return (floor(c / t) * t).cast("timestamp")
    return timestamp_seconds(floor(c / t) * t)

def resample_to_hour(c, interval=1):
    return resample_to_minute(c, 60 * interval)

df = sc.parallelize([
    ("2000-01-01 00:00:00", 0), ("2000-01-01 00:01:00", 1),
    ("2000-01-01 00:02:00", 2), ("2000-01-01 00:03:00", 3),
    ("2000-01-01 00:04:00", 4), ("2000-01-01 00:05:00", 5),
    ("2000-01-01 00:06:00", 6), ("2000-01-01 00:07:00", 7),
    ("2000-01-01 00:08:00", 8)
]).toDF(["timestamp", "data"])

(df.groupBy(resample_to_minute(unix_timestamp("timestamp"), 3).alias("ts"))
    .sum().orderBy("ts").show(3, False))

## +---------------------+---------+
## |ts                   |sum(data)|
## +---------------------+---------+
## |2000-01-01 00:00:00.0|3        |
## |2000-01-01 00:03:00.0|12       |
## |2000-01-01 00:06:00.0|21       |
## +---------------------+---------+

(df.groupBy(resample_to_hour(unix_timestamp("timestamp")).alias("ts"))
    .sum().orderBy("ts").show(3, False))
## +---------------------+---------+
## |ts                   |sum(data)|
## +---------------------+---------+
## |2000-01-01 00:00:00.0|36       |
## +---------------------+---------+
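
The same arithmetic generalizes to any bucket size. For example, a daily variant of the helpers above (a sketch; note that the flooring is done on epoch seconds, so day boundaries follow UTC rather than the session time zone):

def resample_to_day(c, interval=1):
    # a 24 * 3600 * interval second bucket, reusing the helper above
    return resample_to_hour(c, 24 * interval)

(df.groupBy(resample_to_day(unix_timestamp("timestamp")).alias("ts"))
    .sum().orderBy("ts").show(1, False))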
Example data taken from


More generally, here is an answer that uses RDDs rather than DataFrames:

# Generating some data to test with 
import random
import datetime

startTS = 12345.0
array = [(startTS+60*k, random.randrange(10, 20)) for k in range(150)]

# Initializing a RDD
rdd = sc.parallelize(array)

# I first map the timestamps to datetime objects so I can use the datetime.replace 
# method to round the times
formattedRDD = (rdd
                .map(lambda ts_data: (datetime.datetime.fromtimestamp(int(ts_data[0])), ts_data[1]))
                .cache())

# Putting the minute and second fields to zero in datetime objects is 
# exactly like rounding per hour. You can then reduceByKey to aggregate bins.
hourlyRDD = (formattedRDD
             .map(lambda time_msg: (time_msg[0].replace(minute=0, second=0), 1))  # each record counts as 1, so bins hold per-hour counts
             .reduceByKey(lambda a, b: a + b))

hourlyHisto = hourlyRDD.collect()
print(hourlyHisto)
> [(datetime.datetime(1970, 1, 1, 4, 0), 60), (datetime.datetime(1970, 1, 1, 5, 0), 55), (datetime.datetime(1970, 1, 1, 3, 0), 35)]

For daily aggregation you can use time.date() instead of time.replace(...). Also, for hourly bins that start from a datetime that is not on a whole hour, you can add a timedelta to the original time to round it up to the nearest whole hour.
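
A sketch of both variations, reusing formattedRDD from the snippet above (the helper name ceil_to_hour is hypothetical):

from datetime import timedelta

# Daily bins: key each record by its calendar date and count records per day
dailyRDD = (formattedRDD
            .map(lambda time_msg: (time_msg[0].date(), 1))
            .reduceByKey(lambda a, b: a + b))

# Round a datetime up to the next whole hour, so hourly bins start on the hour
def ceil_to_hour(t):
    if t.minute == 0 and t.second == 0 and t.microsecond == 0:
        return t
    return (t + timedelta(hours=1)).replace(minute=0, second=0, microsecond=0)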

A Spark RDD or DataFrame has no index, and Spark does not provide low-level operations, so there is no resampling of a time series as such.
There is a recent Cloudera Spark package for time series, and it has Python documentation as well. I don't know whether it is what you are looking for, but it does claim to offer Pandas-like functionality for time series.
WoodChopper: what do you mean by "no index"? Are you referring to the set_index functionality available in Pandas?