
R: Why does data.table's running time grow with the square of the number of rows for this particular dataset?


My goal is to unnest a list column in a data.table. The original data.table has more than 800k rows; below I work with a 5k-row sample of it.
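
To make the pattern concrete, here is a minimal sketch of the unnest idiom used throughout this question, on toy data (the column names item_id and lance mirror the real table, but the values are made up):

library(data.table)
# each element of the list column holds a nested data.table;
# returning it from j expands each parent row into one row per nested row
dt <- data.table(item_id = c("a", "b"),
                 lance   = list(data.table(val = 1:2), data.table(val = 3:4)))
dt[, lance[[1]], by = item_id]
#    item_id val
# 1:       a   1
# 2:       a   2
# 3:       b   3
# 4:       b   4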

However, I noticed that the time needed to unnest this dataset grows with the square of the number of rows, rather than roughly linearly as I expected:

# Subset for 500 rows    
> item_res <- item[1:500] 
> microbenchmark(item_int <- item_res[, lance[[1]], by = item_id], times = 5L)
Unit: milliseconds
                                                  expr      min       lq     mean   median       uq     max neval
 item_int <- item_res[, lance[[1]], by = item_id] 281.3878 282.2426 286.9925 284.4111 286.1291 300.792     5

# Subset for 5000 rows
> item_res <- item[1:5000] 
> microbenchmark(item_int <- item_res[, lance[[1]], by = item_id], times = 5L)
Unit: seconds
                                                  expr      min       lq     mean   median      uq     max neval
 item_int <- item_res[, lance[[1]], by = item_id] 44.35222 47.21508 47.40021 47.38034 47.9733 50.0801     5
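
A quick way to quantify the scaling is a log-log fit of runtime against row count: the slope approximates the growth exponent (a sketch using the two median timings above):

# log(t) = a + b*log(n); a slope b near 2 indicates quadratic growth
n <- c(500, 5000)
t <- c(0.2844, 47.38)         # median times in seconds from the benchmarks above
coef(lm(log(t) ~ log(n)))[2]  # ~2.2, i.e. roughly quadratic
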
Here is a reproducible example with simulated data:

set.seed(1234)
n <- 5E4
n_nested <- 40

v1 <- data.table(val = as.numeric( 1:n_nested)        , ll = letters[sample(1:20, size = n_nested, replace = T)])
v2 <- data.table(val = as.numeric(1:n_nested *2)     , ll = letters[sample(1:20, size = n_nested, replace = T)])
v3 <- data.table(val = as.numeric(1:n_nested *2+1)   , ll = letters[sample(1:20, size = n_nested, replace = T)])
char_1 <- as.character(1:n)
char_2 <- as.character(sample(1:n,n))
out <- data.table(char_1 = char_1,char_2 = char_2, value = list(v1,v2,v3))

microbenchmark(out[, value[[1]], by = .(char_1, char_2)], times = 5L)
For n = 5E5 (re-running the example above with n set accordingly):

Unit: seconds
                                      expr      min       lq     mean   median       uq      max neval
 out[, value[[1]], by = .(char_1, char_2)] 2.137035 2.152496 2.359902 2.178358 2.324148 3.007475     5
For n = 5E6:

Unit: seconds
                                      expr      min       lq     mean   median       uq      max neval
 out[, value[[1]], by = .(char_1, char_2)] 38.49398 40.88233 47.28661 41.20114 44.95406 70.90152     5
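
For comparison, on this simulated data multiplying the rows by 10 multiplies the median time by roughly 19 (41.20 s vs 2.18 s), a growth exponent of about log(19)/log(10) ≈ 1.3: clearly superlinear, though milder than the quadratic growth observed on the real dataset.
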
2 - I am using an Intel i7 with 16 GB of RAM, and R, RStudio and the data.table package are all up to date (RStudio 1.3.1056, R 4.0.2, data.table 1.13.0). The machine never paged memory to disk during these runs.

3 - I also tried other unnesting implementations (the chosen one discussed above was the fastest):

item_res[, lance[[1]], by = unnest_names]                        # Chosen one
item_res[, unlist(lance, recursive = FALSE), by = unnest_names]  # A little bit slower than above
item_res[, rbindlist(lance), by = unnest_names]                  # much slower than above
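
Since the original item table is not shared here, the variants can at least be sanity-checked for agreement on the simulated out table from the example above (a sketch; all three should return identical rows):

r1 <- out[, value[[1]], by = .(char_1, char_2)]
r2 <- out[, unlist(value, recursive = FALSE), by = .(char_1, char_2)]
r3 <- out[, rbindlist(value), by = .(char_1, char_2)]
all.equal(r1, r2, check.attributes = FALSE)  # expected TRUE
all.equal(r1, r3, check.attributes = FALSE)  # expected TRUE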

4 - As requested by Chirico, the verbose output and sessionInfo() for each version:

V 1.12.8

> item_int <- item[, unlist(lance, recursive = F ), by = unnest_names, verbose = TRUE ] 
Detected that j uses these columns: lance 
Finding groups using forderv ... forder.c received 872581 rows and 11 columns
0.150s elapsed (0.170s cpu) 
Finding group sizes from the positions (can be avoided to save RAM) ... 0.000s elapsed (0.000s cpu) 
lapply optimization is on, j unchanged as 'unlist(lance, recursive = F)'
GForce is on, left j unchanged
Old mean optimization is on, left j unchanged.
Making each group and running j (GForce FALSE) ... dogroups: growing from 872581 to 18513352 rows
Wrote less rows (16070070) than allocated (18513352).

  memcpy contiguous groups took 0.048s for 872581 groups
  eval(j) took 1.560s for 872581 calls
14.3s elapsed (11.1s cpu) 
> sessionInfo()
R version 4.0.2 (2020-06-22)
Platform: x86_64-w64-mingw32/x64 (64-bit)
Running under: Windows 10 x64 (build 18363)

Matrix products: default

locale:
[1] LC_COLLATE=Portuguese_Brazil.1252  LC_CTYPE=Portuguese_Brazil.1252    LC_MONETARY=Portuguese_Brazil.1252
[4] LC_NUMERIC=C                       LC_TIME=Portuguese_Brazil.1252    

attached base packages:
[1] stats     graphics  grDevices utils     datasets  methods   base     

other attached packages:
[1] microbenchmark_1.4-7 data.table_1.12.8    lubridate_1.7.9      stringi_1.4.6        runner_0.3.7         e1071_1.7-3         
[7] ggplot2_3.3.2        stringr_1.4.0        magrittr_1.5        

loaded via a namespace (and not attached):
 [1] Rcpp_1.0.5       pillar_1.4.6     compiler_4.0.2   class_7.3-17     tools_4.0.2      digest_0.6.25    packrat_0.5.0    evaluate_0.14   
 [9] lifecycle_0.2.0  tibble_3.0.3     gtable_0.3.0     pkgconfig_2.0.3  rlang_0.4.7      rstudioapi_0.11  yaml_2.2.1       xfun_0.16       
[17] withr_2.2.0      dplyr_1.0.0      knitr_1.29       generics_0.0.2   vctrs_0.3.2      grid_4.0.2       tidyselect_1.1.0 glue_1.4.1      
[25] R6_2.4.1         rmarkdown_2.3    purrr_0.3.4      scales_1.1.1     ellipsis_0.3.1   htmltools_0.5.0  colorspace_1.4-1 tinytex_0.25    
[33] munsell_0.5.0    crayon_1.3.4  

V 1.13.0

> item_int <- item[, unlist(lance, recursive = F ), by = unnest_names, verbose = TRUE ] 
Detected that j uses these columns: lance 
Finding groups using forderv ... forder.c received 872581 rows and 11 columns
0.160s elapsed (0.250s cpu) 
Finding group sizes from the positions (can be avoided to save RAM) ... 0.020s elapsed (0.010s cpu) 
lapply optimization is on, j unchanged as 'unlist(lance, recursive = F)'
GForce is on, left j unchanged
Old mean optimization is on, left j unchanged.
Making each group and running j (GForce FALSE) ... The result of j is a named list. It's very inefficient to create the same names over and over again for each group. When j=list(...), any names are detected, removed and put back after grouping has completed, for efficiency. Using j=transform(), for example, prevents that speedup (consider changing to :=). This message may be upgraded to warning in future.

> sessionInfo()
R version 4.0.2 (2020-06-22)
Platform: x86_64-w64-mingw32/x64 (64-bit)
Running under: Windows 10 x64 (build 18363)

Matrix products: default

locale:
[1] LC_COLLATE=Portuguese_Brazil.1252  LC_CTYPE=Portuguese_Brazil.1252    LC_MONETARY=Portuguese_Brazil.1252
[4] LC_NUMERIC=C                       LC_TIME=Portuguese_Brazil.1252    

attached base packages:
[1] stats     graphics  grDevices utils     datasets  methods   base     

other attached packages:
[1] lubridate_1.7.9   stringi_1.4.6     runner_0.3.7      e1071_1.7-3       ggplot2_3.3.2     stringr_1.4.0     magrittr_1.5     
[8] data.table_1.13.0

loaded via a namespace (and not attached):
 [1] Rcpp_1.0.5       pillar_1.4.6     compiler_4.0.2   class_7.3-17     tools_4.0.2      digest_0.6.25    packrat_0.5.0    evaluate_0.14   
 [9] lifecycle_0.2.0  tibble_3.0.3     gtable_0.3.0     pkgconfig_2.0.3  rlang_0.4.7      rstudioapi_0.11  yaml_2.2.1       xfun_0.16       
[17] withr_2.2.0      dplyr_1.0.0      knitr_1.29       generics_0.0.2   vctrs_0.3.2      grid_4.0.2       tidyselect_1.1.0 glue_1.4.1      
[25] R6_2.4.1         rmarkdown_2.3    purrr_0.3.4      scales_1.1.1     ellipsis_0.3.1   htmltools_0.5.0  colorspace_1.4-1 tinytex_0.25    
[33] munsell_0.5.0    crayon_1.3.4    

As requested in the comments, the distribution of nested lengths:

> table(lengths(item$lance))

     0      8 
 75171 797410

Solved!! I changed the data.table version from 1.13.0 to 1.12.8, and processing the whole 800k-row dataset now takes only 4 seconds.
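
For anyone who needs to pin the older version, one option is to install the archived release from CRAN (a sketch; the URL follows the standard CRAN archive layout, and building from source on Windows requires Rtools):

# install the archived data.table 1.12.8 directly from the CRAN archive
install.packages(
  "https://cran.r-project.org/src/contrib/Archive/data.table/data.table_1.12.8.tar.gz",
  repos = NULL, type = "source"
)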

Comments:

Could you run this with verbose=TRUE inside the [ ] on 1.13.0? And the same on 1.12.8?

Hi Michael, I edited the question and added the information you asked for. It looks to me as if the new version 1.13.0 added a new step that pre-processes the data. Regards.

Thanks... without a reproducible example with representative simulated data, we'll have to keep searching around the edges for now... Could you share table(lengths(item$lance))? And perhaps str(item$lance, 1L) as well?

Dear Michael, I added the result of the table command to the question. Note that 1) I provided a link to the data structure for the 5k rows, 2) running this dataset takes 43 seconds with v1.13.0 versus 8 milliseconds with v1.12.8, and 3) see the last line of the v1.13.0 verbose output I attached; it says "...It's very inefficient to create the same names over and over again for each group... prevents that speedup (consider changing to :=). This message may be upgraded to warning in future."

Many thanks for the RDS file and the report. Will follow up on the GitHub issue you filed, thanks.