Apache Spark: how to interpolate a column within a grouped object in PySpark?


How can I interpolate within grouped data in a PySpark DataFrame?

For example:

I have a PySpark DataFrame with the following columns:

+--------+-------------------+--------+
|webID   |timestamp          |counts  |
+--------+-------------------+--------+
|John    |2018-02-01 03:00:00|60      |
|John    |2018-02-01 03:03:00|66      |
|John    |2018-02-01 03:05:00|70      |
|John    |2018-02-01 03:08:00|76      |
|Mo      |2017-06-04 01:05:00|10      |
|Mo      |2017-06-04 01:07:00|20      |
|Mo      |2017-06-04 01:10:00|35      |
|Mo      |2017-06-04 01:11:00|40      |
+--------+-------------------+--------+
I need to interpolate the count data for John and Mo to one data point per minute, within each of their respective time ranges. I am open to any simple linear interpolation, but note that my real data is sampled every few seconds and I want to interpolate to every second.

So the result should be:

+--------+-------------------+--------+
|webID   |timestamp          |counts  |
+--------+-------------------+--------+
|John    |2018-02-01 03:00:00|60      |
|John    |2018-02-01 03:01:00|62      |
|John    |2018-02-01 03:02:00|64      |
|John    |2018-02-01 03:03:00|66      |
|John    |2018-02-01 03:04:00|68      |
|John    |2018-02-01 03:05:00|70      |
|John    |2018-02-01 03:06:00|72      |
|John    |2018-02-01 03:07:00|74      |
|John    |2018-02-01 03:08:00|76      |
|Mo      |2017-06-04 01:05:00|10      |
|Mo      |2017-06-04 01:06:00|15      |
|Mo      |2017-06-04 01:07:00|20      |
|Mo      |2017-06-04 01:08:00|25      |
|Mo      |2017-06-04 01:09:00|30      |
|Mo      |2017-06-04 01:10:00|35      |
|Mo      |2017-06-04 01:11:00|40      |
+--------+-------------------+--------+
The new rows need to be added to the original DataFrame.
I'm looking for a PySpark solution.
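
For reference, each added value in the expected output is straight linear interpolation between the two surrounding samples. A minimal sketch of that arithmetic in plain Python (the helper name interpolate_count is only illustrative):

from datetime import datetime

def interpolate_count(t, t1, c1, t2, c2):
    # Fraction of the way from t1 to t2, applied to the change in counts
    frac = (t - t1).total_seconds() / (t2 - t1).total_seconds()
    return c1 + (c2 - c1) * frac

# John at 03:01, between (03:00, 60) and (03:03, 66): 60 + 6 * 1/3 = 62.0
print(interpolate_count(datetime(2018, 2, 1, 3, 1),
                        datetime(2018, 2, 1, 3, 0), 60,
                        datetime(2018, 2, 1, 3, 3), 66))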

This is not a Python solution, but I think the Scala solution below could be implemented in Python using a similar approach. It involves using the lag window function to create a time range in each row, and a UDF that expands that time range into a list of per-minute timestamps and interpolated counts via the java.time API, which is then flattened with Spark's explode method:

import org.apache.spark.sql.functions._
import org.apache.spark.sql.expressions.Window
import spark.implicits._

val df = Seq(
  ("John", "2018-02-01 03:00:00", 60),
  ("John", "2018-02-01 03:03:00", 66),
  ("John", "2018-02-01 03:05:00", 70),
  ("Mo", "2017-06-04 01:07:00", 20),
  ("Mo", "2017-06-04 01:10:00", 35),
  ("Mo", "2017-06-04 01:11:00", 40)
).toDF("webID", "timestamp", "counts")

val winSpec = Window.partitionBy($"webID").orderBy($"timestamp")

def minuteList(timePattern: String) = udf{ (ts1: String, ts2: String, c1: Int, c2: Int) =>
  import java.time.LocalDateTime
  import java.time.format.DateTimeFormatter

  val timeFormat = DateTimeFormatter.ofPattern(timePattern)

  // Per-minute timestamps after ts1 up to and including ts2
  val perMinTS = if (ts1 == ts2) Vector(ts1) else {
    val t1 = LocalDateTime.parse(ts1, timeFormat)
    val t2 = LocalDateTime.parse(ts2, timeFormat)
    Iterator.iterate(t1.plusMinutes(1))(_.plusMinutes(1)).takeWhile(! _.isAfter(t2)).
      map(_.format(timeFormat)).
      toVector
  }

  val sz = perMinTS.size

  // Linearly interpolated counts at those per-minute timestamps
  val perMinCount = for { i <- 1 to sz } yield c1 + (c2 - c1) * i / sz

  perMinTS zip perMinCount
}

If you're using Python, the shortest way to get this done is to reuse an existing Pandas function with a GROUPED_MAP udf:

from operator import attrgetter

from pyspark.sql.types import StructType
from pyspark.sql.functions import pandas_udf, PandasUDFType

def resample(schema, freq, timestamp_col="timestamp", **kwargs):
    @pandas_udf(
        StructType(sorted(schema, key=attrgetter("name"))),
        PandasUDFType.GROUPED_MAP)
    def _(pdf):
        pdf.set_index(timestamp_col, inplace=True)
        pdf = pdf.resample(freq).interpolate()
        pdf.ffill(inplace=True)
        pdf.reset_index(drop=False, inplace=True)
        pdf.sort_index(axis=1, inplace=True)
        return pdf
    return _
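
The heavy lifting here is plain pandas. For reference, a standalone sketch of what resample("60S").interpolate() produces for a single group, using John's rows from the question:

import pandas as pd

pdf = pd.DataFrame({
    "timestamp": pd.to_datetime([
        "2018-02-01 03:00:00", "2018-02-01 03:03:00",
        "2018-02-01 03:05:00", "2018-02-01 03:08:00"]),
    "counts": [60, 66, 70, 76],
})

# Per-minute grid with counts linearly interpolated between the samples
print(pdf.set_index("timestamp").resample("60S").interpolate().reset_index())
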
Applied to your data:

from pyspark.sql.functions import to_timestamp

df = spark.createDataFrame([
    ("John", "2018-02-01 03:00:00", 60),
    ("John", "2018-02-01 03:03:00", 66),
    ("John", "2018-02-01 03:05:00", 70),
    ("John", "2018-02-01 03:08:00", 76),
    ("Mo", "2017-06-04 01:05:00", 10),
    ("Mo", "2017-06-04 01:07:00", 20),
    ("Mo", "2017-06-04 01:10:00", 35),
    ("Mo", "2017-06-04 01:11:00", 40),
], ("webID", "timestamp", "counts")).withColumn(
    "timestamp", to_timestamp("timestamp")
)

df.groupBy("webID").apply(resample(df.schema, "60S")).show()
which yields:

+------+-------------------+-----+
|counts|          timestamp|webID|
+------+-------------------+-----+
|    60|2018-02-01 03:00:00| John|
|    62|2018-02-01 03:01:00| John|
|    64|2018-02-01 03:02:00| John|
|    66|2018-02-01 03:03:00| John|
|    68|2018-02-01 03:04:00| John|
|    70|2018-02-01 03:05:00| John|
|    72|2018-02-01 03:06:00| John|
|    74|2018-02-01 03:07:00| John|
|    76|2018-02-01 03:08:00| John|
|    10|2017-06-04 01:05:00|   Mo|
|    15|2017-06-04 01:06:00|   Mo|
|    20|2017-06-04 01:07:00|   Mo|
|    25|2017-06-04 01:08:00|   Mo|
|    30|2017-06-04 01:09:00|   Mo|
|    35|2017-06-04 01:10:00|   Mo|
|    40|2017-06-04 01:11:00|   Mo|
+------+-------------------+-----+
This works under the assumption that both the input data and the interpolated data for a single webID can fit in the memory of a single node (in general, other exact and non-iterative solutions would have to make similar assumptions). If that's not the case, you can easily approximate by taking overlapping windows

from pyspark.sql.functions import window

partial = (df
    .groupBy("webID", window("timestamp", "5 minutes", "3 minutes")["start"])
    .apply(resample(df.schema, "60S")))
and aggregating the final result

from pyspark.sql.functions import mean

(partial
    .groupBy("webID", "timestamp")
    .agg(mean("counts")
    .alias("counts"))
    # Order by key and timestamp, only for consistent presentation
    .orderBy("webId", "timestamp")
    .show())
This is of course much more expensive (there are two shuffles, and some values will be computed multiple times), but it can also leave gaps if the overlap is not large enough to include the next observation:

+-----+-------------------+------+
|webID|          timestamp|counts|
+-----+-------------------+------+
| John|2018-02-01 03:00:00|  60.0|
| John|2018-02-01 03:01:00|  62.0|
| John|2018-02-01 03:02:00|  64.0|
| John|2018-02-01 03:03:00|  66.0|
| John|2018-02-01 03:04:00|  68.0|
| John|2018-02-01 03:05:00|  70.0|
| John|2018-02-01 03:08:00|  76.0|
|   Mo|2017-06-04 01:05:00|  10.0|
|   Mo|2017-06-04 01:06:00|  15.0|
|   Mo|2017-06-04 01:07:00|  20.0|
|   Mo|2017-06-04 01:08:00|  25.0|
|   Mo|2017-06-04 01:09:00|  30.0|
|   Mo|2017-06-04 01:10:00|  35.0|
|   Mo|2017-06-04 01:11:00|  40.0|
+-----+-------------------+------+
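
As a side note, on Spark 3.x the GROUPED_MAP pandas_udf form is deprecated in favour of groupBy(...).applyInPandas. A minimal sketch of the same per-group resampling in that style (the helper name resample_group and the DDL schema string are illustrative assumptions, not part of the original answer):

def resample_group(pdf):
    # Same pandas logic as resample() above, applied to one webID group
    pdf = pdf.set_index("timestamp").resample("60S").interpolate()
    pdf.ffill(inplace=True)
    return pdf.reset_index(drop=False)

(df
    .groupBy("webID")
    .applyInPandas(resample_group, "webID string, timestamp timestamp, counts double")
    .show())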

A native PySpark implementation (without a UDF) that solves this is:

import pyspark.sql.functions as F
resample_interval = 1  # Resample interval size in seconds

df_interpolated = (
  df_data
  # Get timestamp and Counts of previous measurement via window function
  .selectExpr(
    "webID",
    "LAG(Timestamp) OVER (PARTITION BY webID ORDER BY Timestamp ASC) as PreviousTimestamp",
    "Timestamp as NextTimestamp",
    "LAG(Counts) OVER (PARTITION BY webID ORDER BY Timestamp ASC) as PreviousCounts",
    "Counts as NextCounts",
  )
  # To determine resample interval round up start and round down end timeinterval to nearest interval boundary
  .withColumn("PreviousTimestampRoundUp", F.expr(f"to_timestamp(ceil(unix_timestamp(PreviousTimestamp)/{resample_interval})*{resample_interval})"))
  .withColumn("NextTimestampRoundDown", F.expr(f"to_timestamp(floor(unix_timestamp(NextTimestamp)/{resample_interval})*{resample_interval})"))
  # Make sure we don't get any negative intervals (whole interval is within resample interval)
  .filter("PreviousTimestampRoundUp<=NextTimestampRoundDown")
  # Create resampled time axis by creating all "interval" timestamps between previous and next timestamp
  .withColumn("Timestamp", F.expr(f"explode(sequence(PreviousTimestampRoundUp, NextTimestampRoundDown, interval {resample_interval} second)) as Timestamp"))
  # Sequence has inclusive boundaries for both start and stop. Filter out duplicate Counts if original timestamp is exactly a boundary.
  .filter("Timestamp<NextTimestamp")
  # Interpolate Counts between previous and next
  .selectExpr(
    "webID",
    "Timestamp", 
    """(unix_timestamp(Timestamp)-unix_timestamp(PreviousTimestamp))
        /(unix_timestamp(NextTimestamp)-unix_timestamp(PreviousTimestamp))
        *(NextCounts-PreviousCounts) 
        +PreviousCounts
        as Counts"""
  )
)
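
The snippet above assumes an input DataFrame named df_data and an active SparkSession named spark. A hedged usage sketch, building df_data from the question's sample rows (with resample_interval = 60 this gives the per-minute interpolation from the question; note that, as written, the Timestamp<NextTimestamp filter appears to drop the last original sample of each webID):

# Define df_data before building df_interpolated above
df_data = spark.createDataFrame([
    ("John", "2018-02-01 03:00:00", 60),
    ("John", "2018-02-01 03:03:00", 66),
    ("John", "2018-02-01 03:05:00", 70),
    ("John", "2018-02-01 03:08:00", 76),
    ("Mo", "2017-06-04 01:05:00", 10),
    ("Mo", "2017-06-04 01:07:00", 20),
    ("Mo", "2017-06-04 01:10:00", 35),
    ("Mo", "2017-06-04 01:11:00", 40),
], ("webID", "Timestamp", "Counts")).withColumn("Timestamp", F.to_timestamp("Timestamp"))

# After running the df_interpolated block above:
df_interpolated.orderBy("webID", "Timestamp").show(truncate=False)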