How to ensure data.table uses GForce


I'm running the following code with data.table, and I'd like to better understand the conditions that trigger GForce:

DT = data.table(date = rep(seq(Sys.Date(), by = "-1 day", length.out = 1000), 10),
                x    = runif(10000),
                id   = rep(1:10, each = 1000))
For the following call I can see that it works:

DT[, .(max(x), min(x), mean(x)), by = id, verbose = T]

Detected that j uses these columns: x 
Finding groups using forderv ... 0 sec
Finding group sizes from the positions (can be avoided to save RAM) ... 0 sec
lapply optimization is on, j unchanged as 'list(max(x), min(x), mean(x))'
GForce optimized j to 'list(gmax(x), gmin(x), gmean(x))'
Making each group and running j (GForce TRUE) ... 0 secs
But it does not kick in for my use case:

window1 <- Sys.Date() - 50
window2 <- Sys.Date() - 150
window3 <- Sys.Date() - 550

DT[, .(max(x[date > Sys.Date() - 50]), max(x[date > Sys.Date() - 150]), 
       max(x[date > Sys.Date() - 550])), by = id, verbose = T]

Detected that j uses these columns: x,date 
Finding groups using forderv ... 0 sec
Finding group sizes from the positions (can be avoided to save RAM) ... 0 sec
lapply optimization is on, j unchanged as 'list(max(x[date > Sys.Date() - 50]), max(x[date > Sys.Date() - 150]), max(x[date > Sys.Date() - 550]))'
GForce is on, left j unchanged
Old mean optimization is on, left j unchanged.
Making each group and running j (GForce FALSE) ...
  memcpy contiguous groups took 0.000s for 10 groups
  eval(j) took 0.005s for 10 calls
0.005 secs

The only thing that comes to mind is that each vector passed to max has a different length.
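One workaround that keeps GForce (a sketch; it assumes running one filtered query per window is acceptable) is to move the date filter into i, so that j stays a plain max(x):

```r
library(data.table)

DT = data.table(date = rep(seq(Sys.Date(), by = "-1 day", length.out = 1000), 10),
                x    = runif(10000),
                id   = rep(1:10, each = 1000))

# With the subset in i, j is a simple call on a plain column,
# so the verbose output reports GForce TRUE for this query
res50 <- DT[date > Sys.Date() - 50, .(m50 = max(x)), by = id, verbose = TRUE]
```

This is essentially the idea used by mtd3 in one of the answers below.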

I would do a non-equi join:

# convert to IDate for speed
DT[, date := as.IDate(date)]

mDT = CJ(id = unique(DT$id), days_ago = c(50L, 150L, 550L))
mDT[, date_dn := as.IDate(Sys.Date()) - days_ago]

res = DT[mDT, on=.(id, date > date_dn), .(
  days_ago = first(days_ago), 
  m = mean(x)
), by=.EACHI, verbose=TRUE]
This prints:

Non-equi join operators detected ... 
  forder took ... 0 secs
  Generating group lengths ... done in 0 secs
  Generating non-equi group ids ... done in 0.01 secs
  Found 1 non-equi group(s) ...
Starting bmerge ...done in 0 secs
Detected that j uses these columns: days_ago,x 
lapply optimization is on, j unchanged as 'list(first(days_ago), mean(x))'
Old mean optimization changed j from 'list(first(days_ago), mean(x))' to 'list(first(days_ago), .External(Cfastmean, x, FALSE))'
Making each group and running j (GForce FALSE) ... 
  collecting discontiguous groups took 0.000s for 30 groups
  eval(j) took 0.000s for 30 calls
0 secs
So for some reason this uses a different form of optimization (the old mean optimization via Cfastmean) rather than GForce.
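The optimization level can also be controlled explicitly via the datatable.optimize option (level 1 enables the lapply/mean optimizations, level 2 and above enables GForce; the default is Inf). A sketch for inspecting the difference:

```r
library(data.table)

DT <- data.table(id = rep(1:10, each = 1000), x = runif(10000))

options(datatable.optimize = 1L)   # old mean optimization only, no GForce
r1 <- DT[, mean(x), by = id, verbose = TRUE]

options(datatable.optimize = Inf)  # default: all optimizations, including GForce
r2 <- DT[, mean(x), by = id, verbose = TRUE]

all.equal(r1, r2)  # same result either way; only the evaluation path differs
```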

The result looks like:

    id       date days_ago         m
 1:  1 2017-12-19       50 0.4435722
 2:  1 2017-09-10      150 0.4842963
 3:  1 2016-08-06      550 0.4775890
 4:  2 2017-12-19       50 0.4838715
 5:  2 2017-09-10      150 0.5150688
 6:  2 2016-08-06      550 0.5141174
 7:  3 2017-12-19       50 0.4804182
 8:  3 2017-09-10      150 0.4910027
 9:  3 2016-08-06      550 0.4901343
10:  4 2017-12-19       50 0.4644922
11:  4 2017-09-10      150 0.4902132
12:  4 2016-08-06      550 0.4810129
13:  5 2017-12-19       50 0.4666715
14:  5 2017-09-10      150 0.5193629
15:  5 2016-08-06      550 0.4850173
16:  6 2017-12-19       50 0.5318109
17:  6 2017-09-10      150 0.5481641
18:  6 2016-08-06      550 0.5216787
19:  7 2017-12-19       50 0.4500243
20:  7 2017-09-10      150 0.4915983
21:  7 2016-08-06      550 0.5055563
22:  8 2017-12-19       50 0.4958809
23:  8 2017-09-10      150 0.4915432
24:  8 2016-08-06      550 0.4981277
25:  9 2017-12-19       50 0.5833083
26:  9 2017-09-10      150 0.5160464
27:  9 2016-08-06      550 0.5091702
28: 10 2017-12-19       50 0.4946466
29: 10 2017-09-10      150 0.4798743
30: 10 2016-08-06      550 0.5030687
    id       date days_ago         m

As far as I can tell, this optimization only kicks in when the argument of the function (mean here) is a simple column such as x, and not an expression such as x[date > Sys.Date() - 50].

I ran the solution suggested by @Frank and got the following results:

DT[, date := as.IDate(date)]

mDT = CJ(id = unique(DT$id), days_ago = c(50L, 150L, 550L))
mDT[, date_dn := as.IDate(Sys.Date()) - days_ago]

cDT <- copy(DT) # To make sure we run different methods on different datasets

window1 <- Sys.Date() - 50
window2 <- Sys.Date() - 150
window3 <- Sys.Date() - 550

microbenchmark(
    cDT[mDT, on=.(id, date > date_dn), .(days_ago = first(days_ago), m = mean(x)), by=.EACHI],
    DT[, .(mean(x[date > window1]), mean(x[date > window2]), mean(x[date > window3])), by = id]
)

Unit: microseconds
                                                                                          expr      min       lq     mean   median       uq      max neval cld
 cDT[mDT, on = .(id, date > date_dn), .(days_ago = first(days_ago), m = mean(x)), by = .EACHI]  822.451 1462.756 1708.083 2481.601 2875.785 4459.506   100   b
   DT[, .(mean(x[date > window1]), mean(x[date > window2]), mean(x[date > window3])), by = id] 1948.851 2313.842 2626.432 1565.562 1710.693 8717.868   100  a
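The rule above (GForce needs a plain column as the function's argument) can be checked directly on a small example; the quoted log lines below are the ones shown in the verbose outputs earlier in this post:

```r
library(data.table)

DT = data.table(date = rep(seq(Sys.Date(), by = "-1 day", length.out = 1000), 10),
                x    = runif(10000),
                id   = rep(1:10, each = 1000))

# Plain column as the argument: verbose reports
#   "GForce optimized j to 'list(gmean(x))'"
DT[, .(mean(x)), by = id, verbose = TRUE]

# Expression as the argument: verbose reports
#   "GForce is on, left j unchanged" ... (GForce FALSE)
DT[, .(mean(x[date > Sys.Date() - 50])), by = id, verbose = TRUE]
```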

I also wouldn't be surprised if the join costs more than the mean itself.

Came across this post while searching for how to force GForce on.

mtd3 below contains a way to turn GForce on for this particular OP, but it is still not faster than the OP's approach:

mtd1 <- function() {
    mDT = CJ(id = unique(DT1$id), days_ago = c(50L, 150L, 550L))
    mDT[, date_dn := as.IDate(Sys.Date()) - days_ago]

    res = DT1[mDT, on=.(id, date > date_dn), .(
        days_ago = first(days_ago), 
        m = mean(x)
    ), by=.EACHI]   
}

mtd2 <- function() {
    DT2[, .(
            max(x[date > window1]), 
            max(x[date > window2]), 
            max(x[date > window3])
        ), by = id]
}

mtd3 <- function() {
    #Reduce(function(x, y) x[y, on="id"], 
       lapply(c(window1, window2, window3),
           function(d) DT3[date > d, .(max(x)), by = id, verbose=T])
    #)
}
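The commented-out Reduce in mtd3 would merge the three per-window results into one wide table. A hypothetical sketch (the max_* column names are made up here) that names each window's column first, to avoid name clashes in the successive joins; it assumes DT3 as created in the Data section below:

```r
wins <- c(50L, 150L, 550L)
res_list <- lapply(wins, function(d)
    DT3[date > Sys.Date() - d,
        setNames(.(max(x)), paste0("max_", d)),   # one named column per window
        by = id])
# successive joins on id: result has columns id, max_50, max_150, max_550
wide <- Reduce(function(a, b) a[b, on = "id"], res_list)
```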

library(microbenchmark)
microbenchmark(mtd1(), mtd2(), mtd3(), times=1L)
Timings:

Unit: milliseconds
   expr      min       lq     mean   median       uq      max neval
 mtd1() 323.3229 323.3229 323.3229 323.3229 323.3229 323.3229     1
 mtd2() 249.8188 249.8188 249.8188 249.8188 249.8188 249.8188     1
 mtd3() 479.5279 479.5279 479.5279 479.5279 479.5279 479.5279     1
As far as I can tell, the function's argument must be a simple column. A non-equi join can achieve this in such cases, but I think it would be easier to illustrate if you made a reproducible example.

I added a test DT. I think it does not use GForce because you also call first here.

@kismsu I tried that too but did not see the same thing. FYI, first is also optimized by GForce; extend your example: DT[, .(max(x), min(x), mean(x), first(x)), by = id, verbose = T]

Creating mDT probably has a cost as well. By the way, another approach would be to take cumulative means and then filter down to the dates of interest. I doubt that would be fast, since a lot of the computation ends up unused.

Data:
library(data.table)
n <- 1e7
m <- 10
DT = data.table(
    id=sample(1:m, n/m, replace=TRUE),
    date=sample(seq(Sys.Date(), by="-1 day", length.out=1000), n, replace=TRUE),
    x=runif(n))
window1 <- Sys.Date() - 50
window2 <- Sys.Date() - 150
window3 <- Sys.Date() - 550
DT[, date := as.IDate(date)]
setorder(DT, id, date)
DT1 <- copy(DT)
DT2 <- copy(DT)
DT3 <- copy(DT)
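The cumulative-mean idea from the comment above could be sketched as follows (hypothetical; it relies on DT being sorted by id and date, which setorder ensures, and uses a single cutoff for illustration):

```r
cDT2 <- copy(DT)                       # leave DT untouched
cDT2[, csum := cumsum(x), by = id]     # one running sum per id serves every window

cutoff <- Sys.Date() - 50
res <- cDT2[, {
    k <- sum(date <= cutoff)           # rows at or before the cutoff
    # rows after the cutoff are the last .N - k rows, since dates are sorted
    .(m = (csum[.N] - if (k > 0L) csum[k] else 0) / (.N - k))  # NaN if window empty
}, by = id]
```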