
Collecting SparkR output into a data frame


I am loading some data into SparkR (Spark version 1.4.0, running on Fedora 21) and running an algorithm on it that produces three different numbers. My algorithm takes a set of parameters, and I want to run it on the same data with different parameter settings. The output format should be a data frame (or a CSV list) whose columns are the algorithm parameters and the three numbers my algorithm computes, i.e.

  mypar1  mypar2  mypar3  myres1  myres2  myres3
  1       1.5     1.2     5.6     8.212   5.9
  2       1.8     1.7     5.1     7.78    8.34
would be the output for two different parameter settings. I wrote the script below, which runs the different parameter settings in parallel: it takes an input file with the parameter values as its argument, which for the example above would look like this:

 1,1.5,1.2
 2,1.8,1.7
i.e. one parameter combination per line.

My problem: instead of one row per parameter setting, all the numbers end up merged into one long list. The function cv_spark returns a data.frame (essentially one row). How can I tell Spark to combine the outputs of cv_spark into a single data frame (i.e. do something like rbind) or into a list of lists?
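For illustration, the rbind-style combination described above looks like this in plain R; the two one-row data frames here are made up, standing in for two cv_spark results:

row1 <- data.frame(mypar1=1, mypar2=1.5, mypar3=1.2, myres1=5.6, myres2=8.212, myres3=5.9)
row2 <- data.frame(mypar1=2, mypar2=1.8, mypar3=1.7, myres1=5.1, myres2=7.78,  myres3=8.34)
do.call(rbind, list(row1, row2))  ## one data frame, one row per parameter setting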

#!/home/myname/Spark/spark-1.4.0/bin/sparkR

library(SparkR)

sparkcontext <- sparkR.init("local[3]","cvspark",sparkEnvir=list(spark.executor.memory="1g"))

cv_spark <- function(indata) {
   cv_params <- strsplit(indata, split=",")[[1]]
   param.par1 = as.integer(cv_params[1])
   param.par2 = as.numeric(cv_params[2])
   param.par3 = as.numeric(cv_params[3])
   predictions <- rep(NA, 1)
   ## here I run some calculation on some data that I load to my SparkR session, 
   ## but for illustration purpose I'm just filling up with some random numbers
   mypred = base:::sample(seq(5,10,by=0.01),3)
   predictions <- cbind(param.par1, param.par2, param.par3,mypred[1],mypred[2],mypred[3])
   return(as.data.frame(predictions))
}

args <- commandArgs(trailingOnly=TRUE)
print(paste("args ", args))
cvpar = readLines(args[[1]])

rdd <- SparkR:::parallelize(sparkcontext, coll=cvpar, numSlices=4)  ## one element per input line
myerr <- SparkR:::flatMap(rdd, cv_spark)                            ## flattens each returned data.frame
output <- SparkR:::collect(myerr)
print("final output")
print(output)

outfile = "spark_output.csv"
write.csv(output,outfile,quote=FALSE,row.names=FALSE)
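A note on where the flattening comes from: flatMap expects the supplied function to return a list and concatenates the elements, and since a data.frame is internally a list of columns, each one-row result gets spliced apart. A minimal alternative sketch, assuming the private SparkR:::map RDD API in Spark 1.4 applies the function without flattening, would keep each data.frame intact and bind the rows on the driver:

rdd <- SparkR:::parallelize(sparkcontext, coll=cvpar, numSlices=4)
per_setting <- SparkR:::map(rdd, cv_spark)               ## one data.frame per element, no flattening
output <- do.call(rbind, SparkR:::collect(per_setting))  ## rbind the collected rows on the driver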
I got what I wanted by using flatMapValues instead of flatMap, and by creating (key, value) pairs out of the various parameter settings (essentially the key is the line number in the parameter input file, and the value is the parameters on that line). I then call reduceByKey, which in effect keeps one row per key. The modified script looks like this:

#!/home/myname/Spark/spark-1.4.0/bin/sparkR

library(SparkR)

sparkcontext <- sparkR.init("local[4]","cvspark",sparkEnvir=list(spark.executor.memory="1g"))

cv_spark <- function(indata) {
   cv_params <- unlist(strsplit(indata[[1]], split=","))
   param.par1 = as.integer(cv_params[1])
   param.par2 = as.numeric(cv_params[2])
   param.par3 = as.numeric(cv_params[3])
   predictions <- rep(NA, 1)
   ## here I run some calculation on some data that I load to my SparkR session, 
   ## but for illustration purpose I'm just filling up with some random numbers
   mypred = base:::sample(seq(5,10,by=0.01),3)
   predictions <- cbind(param.par1, param.par2, param.par3,mypred[1],mypred[2],mypred[3])
   return(as.data.frame(predictions))
}

args <- commandArgs(trailingOnly=TRUE)
print(paste("args ", args))
cvpar = readLines(args[[1]])
## Creates (key, value) pairs
cvpar <- Map(list,seq(1,length(cvpar)),cvpar)

rdd <- SparkR:::parallelize(sparkcontext, coll=cvpar, numSlices=1)
myerr <- SparkR:::flatMapValues(rdd, cv_spark)  ## apply cv_spark to the value of each (key, value) pair
myerr <- SparkR:::reduceByKey(myerr, "c", 2L)   ## concatenate the per-key results with c()
output <- SparkR:::collect(myerr)

myres <- sapply(output, `[`, 2)    ## keep only the value part of each (key, value) pair
df_res <- do.call("rbind", myres)  ## stack the per-key vectors into rows
colnames(df_res) <- c("Element","sigdf","sigq","err","err.sse","err.mse")

outfile = "spark_output.csv"
write.csv(df_res,outfile,quote=FALSE,row.names=FALSE)
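For clarity, a sketch of the structure the post-processing above assumes (the numbers are made up): collect on the keyed RDD returns a list of (key, value) pairs, where each value is the vector that reduceByKey built with c():

## output[[1]]  =>  list(1, c(1, 1.5, 1.2, 5.61, 8.21, 5.90))
## output[[2]]  =>  list(2, c(2, 1.8, 1.7, 5.12, 7.78, 8.34))
## sapply(output, `[`, 2) keeps the vectors; do.call("rbind", myres)
## stacks them into a matrix with one row per parameter setting.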
@Vijay_Shinde: run it as ./myexample.R myparameterfile.txt, where myexample.R is the script above. Make sure you fix the shebang in the script. myparameterfile.txt contains three comma-separated numbers per line.