R/data.table: optimizing recursive grouping clauses

I am working with a large data.table of genomic data (1e6-10e6 rows, 10 columns). I want to reduce the data by collapsing each group to a single row. The reduction depends on several columns, but in consecutive steps. Example data looks like this:

library(data.table)

dt.tmp <- data.table(str1 = paste0("A", sample(1:100, 2000, replace = TRUE)),
                     str2 = paste0("B", sample(1:5, 2000, replace = TRUE)),
                     c1   = sample(1:3, 2000, replace = TRUE),
                     c2   = sample(1:3, 2000, replace = TRUE),
                     d1   = sample(1:2, 2000, replace = TRUE),
                     d2   = sample(1:2, 2000, replace = TRUE))
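
Since the example is built with random sampling, each run produces a different table; adding a seed call before the block above makes it reproducible (the seed value below is my own choice, not part of the original post):

set.seed(42)   # run before the data.table() call above for a reproducible example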

My latest attempt, minimizing the use of .SD:

dt.tmp[, ':='(c = c1 + c2, d = d1 + d2, rnd = sample.int(.N))        # helper sums and a random tie-breaker
     ][, ':='(n = .N, cmaxidx = (c == max(c))), by = .(str1, str2)   # group size and max-c flag per (str1, str2)
     ][, ':='(nmaxidx = (n == max(n))), by = str1                    # flag rows in the largest str2 group(s) per str1
     ][, ':='(dmaxidx = (d == max(d))), by = .(str1, str2, c)        # max-d flag per (str1, str2, c)
     ][, .SD[dmaxidx & cmaxidx & nmaxidx
     ][rnd == max(rnd)], by = str1                                   # keep flagged rows, then one random row per str1
     ][, ':='(c = NULL, d = NULL, nmaxidx = NULL, cmaxidx = NULL, dmaxidx = NULL, n = NULL, rnd = NULL)][, .SD]
(The last operations are only for cleanup and printing.)

I am not at all happy with this data.table code. Are there obvious optimizations to the problem/code above that would reduce the execution time? Currently it takes about 200-300 CPU hours, which is roughly 14 wall-clock hours using up to 24 cores on our server.
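
Before optimizing, it may help to time the reduction on a small random subset of groups and extrapolate; a minimal sketch on the toy columns, where the subset size of 50 str1 groups and the use of system.time() are my own choices, not from the original post:

# time one candidate reduction step on a subset of str1 groups
sub <- dt.tmp[str1 %in% sample(unique(str1), 50)]
system.time(
  sub[, .SD[c1 + c2 == max(c1 + c2)], by = str1]
)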
The actual data looks like this:

Classes 'data.table' and 'data.frame':  50259993 obs. of  26 variables:
 $ BC         : chr  "AAAAAAAAAAAACAAGGTCG" "AAAAAAAAAAAACTACCGTG" "AAAAAAAAAAAAGCACTGAG" "AAAAAAAAAAAAGCACTGAG" ...
 $ chrom      : chr  "chr2L" "chr2R" "chr2R" "chr2R" ...
 $ start      : int  22371281 12477441 8323580 8323580 17304870 31837917 24897443 22469324 22469324 18294732 ...
 $ end        : int  22371463 12477734 8323924 8323924 17305040 31838183 24897665 22469723 22469723 18295044 ...
 $ strand     : chr  "+" "+" "-" "-" ...
 $ MAPQ1      : int  1 40 42 42 42 42 24 1 1 42 ...
 $ MAPQ2      : int  1 40 42 42 42 42 24 1 1 42 ...
 $ AS1        : int  -3 -33 0 -3 -12 -6 -39 0 0 0 ...
 $ AS2        : int  -12 -3 -18 -15 0 0 -3 -5 -20 -6 ...
 $ XS1        : num  -3 NA NA NA NA NA NA 0 0 NA ...
 $ XS2        : num  -12 NA NA NA NA NA NA 0 -15 NA ...
 $ SNP_ABS_POS: chr  "22371329,22371329,22371356,22371356,22371437" "12477460,12477500,12477524,12477707,12477719" "8323582,8323583,8323588,8323750,8323759,8323791,8323868,8323878" "8323582,8323583,8323588,8323750,8323759,8323791,8323868,8323878" ...
 $ SNP_REL_POS: chr  "48,48,75,75,156" "19,59,83,266,278" "2,3,8,170,179,211,288,298" "2,3,8,170,179,211,288,298" ...
 $ SNP_ID     : chr  ".,.,.,.,." ".,.,.,.,." ".,.,.,.,.,.,.,." ".,.,.,.,.,.,.,." ...
 $ SNP_SEQ    : chr  "CCCTTCATCGCACGAATGTGTGCGT,CCCTTCATCGCACGAATGTGAGCGT,A,A,T" "T,G,ACCGGCATCCATCCATCCAT,T,C" "T,T,ACG,A,G,G,C,T" "T,T,ACG,A,G,G,C,T" ...
 $ SNP_VAR    : chr  "-3,-3,0,0,0" "0,-1,-2,-1,0" "1,1,-3,-2,-2,-2,-1,-1" "1,1,-3,-2,-2,-2,-1,-1" ...
 $ SNP_PARENT : chr  "unexpected,unexpected,expected,expected,expected" "expected,non_parental_allele,unread,non_parental_allele,expected" "expected,expected,unexpected,unread,unread,unread,non_parental_allele,non_parental_allele" "expected,expected,unexpected,unread,unread,unread,non_parental_allele,non_parental_allele" ...
 $ SNP_TYPE   : chr  "indel,indel,snp,snp,snp" "snp,snp,indel,snp,snp" "snp,indel,indel,snp,snp,snp,snp,snp" "snp,indel,indel,snp,snp,snp,snp,snp" ...
 $ SNP_SUBTYPE: chr  "del,del,ts,ts,tv" "tv,tv,del,tv,ts" "tv,del,ins,tv,tv,tv,ts,tv" "tv,del,ins,tv,tv,tv,ts,tv" ...
 - attr(*, ".internal.selfref")=<externalptr> 
 - attr(*, "sorted")= chr  "BC" "chrom" "start" "end"
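
Not from the original post, but worth noting: the "sorted" attribute above shows the real table is keyed on BC, chrom, start and end, which are exactly the grouping columns used below. If that key were ever lost, it could be restored with setkey() so grouping can reuse the existing ordering; a sketch:

setkey(dt.tmp, BC, chrom, start, end)   # grouping by the key columns lets data.table reuse the existing sort order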

Splitting it into a couple of steps:

# Within group defined by str1 create groups based on str2 and select the largest group(s)
combinations2keep <- dt.tmp[, .N, by = .(str1, str2)
                            ][, .SD[N == max(N)], by = str1
                              ][, !"N"]
dt.tmp <- dt.tmp[combinations2keep, on = .(str1, str2)]

# In resulting group(s) select group(s) with max (c1+c2)
dt.tmp <- dt.tmp[, .SD[c1+c2 == max(c1+c2)], by = str1]

# In resulting group(s) select group(s) with max (d1+d2)
dt.tmp <- dt.tmp[, .SD[d1+d2 == max(d1+d2)], by = str1]

# In resulting group(s) select a random row
dt.tmp <- dt.tmp[, .SD[sample(.N, size = 1)], by = str1]
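
Not part of the original answer, but a common data.table idiom that sometimes speeds up this kind of per-group filter is to compute the row numbers with .I and subset once, instead of materializing .SD for every group; a sketch for the c1+c2 step under that assumption:

# same selection as dt.tmp[, .SD[c1 + c2 == max(c1 + c2)], by = str1],
# but done via a single vector of row indices
idx    <- dt.tmp[, .I[c1 + c2 == max(c1 + c2)], by = str1]$V1
dt.tmp <- dt.tmp[idx]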

@sindri_baldur: I made a further optimization on top of your answer. In about half of the cases, the first grouping yields groups of only one row. By splitting the data after the first grouping into the single-row groups and the rest, half of the data needs no further grouping. This saves an additional 10-20% of compute time:

dt.tmp.N <- dt.tmp[, .N, by = .(BC, chrom, start, end)
                   ][, .SD[N == max(N)], by = BC]
dt.tmp.1 <- dt.tmp[dt.tmp.N[N == 1], on = .(BC, chrom, start, end)
                   ][, .SD[sample(.N, 1)], by = BC][, !"N"]
dt.tmp.Ng1 <- dt.tmp[dt.tmp.N[N > 1], on = .(BC, chrom, start, end)
                     ][, .SD[MAPQ1 + MAPQ2 == max(MAPQ1 + MAPQ2)], by = BC
                       ][, .SD[AS1 + AS2 == max(AS1 + AS2)], by = BC
                         ][, .SD[sample(.N, 1)], by = BC
                           ][, !"N"]
rbindlist(list(dt.tmp.1, dt.tmp.Ng1))
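
A quick sanity check (not in the original post) that the split-and-recombine version still reduces every barcode to exactly one row and drops no barcodes:

res <- rbindlist(list(dt.tmp.1, dt.tmp.Ng1))
stopifnot(res[, .N, by = BC][, all(N == 1)])             # one row per BC
stopifnot(res[, uniqueN(BC)] == dt.tmp[, uniqueN(BC)])   # no BC lost by the split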

Thanks! I have been comparing your solution with some of my own on the real data (10,000 rows); your code takes 66% of the time of my fastest version. I also tried your version with the extension ifelse(N == 1, 1, sample(.N, 1)), but that seemed to make it worse (to my surprise). Finally, I now understand the dt[dt[, foo(), by = …], on = …] pattern better. Is that the main change that makes your version faster? Thanks again.

@Ludo One of the keys is that at each step we are reducing the number of rows, so each step should be much faster than the previous one.

Everything combined into a single chain:

dt.tmp[dt.tmp[, .N, by = .(str1, str2)][, .SD[N == max(N)], by = str1],
       on = .(str1, str2)
       ][, .SD[c1+c2 == max(c1+c2)], by = str1
         ][, .SD[d1+d2 == max(d1+d2)], by = str1
           ][, .SD[sample(.N, size = 1)], by = str1
             ][, !"N"]