R: split a data table into chunks and apply a function to each chunk

I am trying to read a large csv file as a data table, split it into 64 chunks based on the field "sample_name", and apply a function "myfunction" to each chunk in parallel.

library(data.table)
library(plyr)
library(doMC)

registerDoMC(5) #assign 5 cores

#read large csv file with 6485845 rows, 13 columns
dt = fread('~/data/samples.csv')

#example subset of dt (I am showing only 3 columns)
#sample_name    snpprobeset_id  snp_strand
#C00060 exm1002141  +
#C00060 exm1002260  -
#C00060 exm1002276  +
#C00075 exm1002434  -
#C00075 exm1002585  -
#C00150 exm1002721  -
#C00150 exm1004566  -
#C00154 exm100481   +
#C00154 exm1004821  -

#split into 64 chunks based on column 'sample_name'.
#each chunk is passed as an argument to a function 'myfunction'
ddply(dt,.(sample_name),myfunction,.parallel=TRUE)

#function definition
myfunction <- function(arg1)
{
    #arg1 <- data.table(arg1)   
    #write columns 9, 11 and 12 to a tab-delimited bed file named '<sample_name>.bed',
    #e.g. C00060.bed, C00075.bed and so on; 64 bed files for 64 chunks would be written out
    bedfile <- paste("~/Desktop/",unique(arg1$sample_name),".bed",sep="")
    write.table(arg1[,c(9,11,12)],bedfile,row.names=F,quote=F,sep="\t",col.names=F)
    #execute a system command for bam-readcount (bioinformatics program)
    #build command; point -l at the same bed file that was just written
    p1 <- bedfile
    p2 <- paste("bam-readcount -b 20 -f hg19.fa -l",p1,sep=" ")
    p3 <- paste(unique(arg1$sample_name),".bam",sep="")
    p4 <- paste(p2,p3,sep=" ")
    p5 <- paste(unique(arg1$sample_name),"_output.txt",sep="")
    p6 <- paste(p4,p5,sep=" > ")
    system(p6) #execute system command
    #executes something like this, for sample_name=C00060
    #bam-readcount -b 20 -f hg19.fa -l ~/Desktop/C00060.bed C00060.bam > C00060_output.txt
    #read back in C00060_output.txt file
    #manipulate the file..multiple steps
    #write output to another file
}

Almost certainly yes, but without knowing exactly what processing you need to do in myfunction, it is impossible to say exactly what that would look like.

OK, I will add some details and update my question. I used dt[,myfunction,by=sample_name], but it gives an error:

Error in `[.data.table`(dt, myfunction, by = sample_name) :
  invalid type/length (closure/64) in vector allocation
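The closure/64 error comes from passing the function object itself as j; data.table needs an actual call there. As a minimal sketch (assuming the dt and myfunction from the question, and a Unix-like system since forking is used), two ways to run myfunction once per sample without plyr:

library(data.table)
library(parallel)

dt <- fread('~/data/samples.csv')

#Option 1: grouped call inside data.table. myfunction(.SD) is a call, whereas
#passing 'myfunction' by name triggers the closure/64 error. Note that .SD
#excludes the grouping column, so the sample name is handed over via .BY --
#this assumes myfunction is adapted to take the name as a second argument.
#dt[, myfunction(.SD, .BY$sample_name), by = sample_name]

#Option 2: split once on sample_name, then fork over the ~64 pieces.
#Each piece keeps its sample_name column, so myfunction can stay as written.
chunks <- split(dt, dt$sample_name)
invisible(mclapply(chunks, myfunction, mc.cores = 5))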
plyr has been obsolete since 2014; use dplyr instead. plyr chokes on high-cardinality splits because it tries to pre-allocate all of the sub-data-frames up front, regardless of whether that exhausts memory.
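And a sketch of the dplyr route suggested above (again assuming the dt and myfunction from the question): group_split keeps the sample_name column in each piece, and purrr::walk calls myfunction purely for its side effects. This version runs sequentially; parallelism would need a separate backend.

library(dplyr)
library(purrr)

dt %>%
  group_split(sample_name) %>%   #one tibble per sample, sample_name column retained
  walk(myfunction)               #invoke myfunction on each piece for its side effects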