Amazon EC2: Spark code doesn't work on EC2's Spark

Tags: amazon-ec2, apache-spark, rdd

I am learning how to use Spark, and I wrote the following code:

import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.rdd.RDD

object TestSpark {

  def main(args: Array[String]) {
    val conf = new SparkConf().setAppName("Test").setMaster("local")
    val sc = new SparkContext(conf)
    val matrix = rddTypeChange(sc.textFile(args(0)))

    // Sum the number of fields over all rows.
    val order = matrix.map(s => s.length).reduce(_ + _)
    println(order)
  }

  // Convert an array of string fields to doubles.
  def typeChange(str: Array[String]): Array[Double] = {
    val array: Array[Double] = new Array(str.length)
    for (i <- 0 until str.length)
      array(i) = str(i).toDouble
    array
  }

  // Split each line on tabs and convert the fields to doubles.
  def rddTypeChange(rdd: RDD[String]): RDD[Array[Double]] = {
    rdd.map(data => typeChange(data.split("\t")))
  }
}
I have a file named matrix.txt that looks like this:

1 2

3 4
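For reference, what the job computes on these two sample lines can be sketched in plain Scala without Spark; the literal strings below stand in for the tab-separated file and are an assumption for illustration:

```scala
// Plain-Scala sketch (no Spark) of the computation above: split each
// line on tabs, convert the fields to Double, and sum the field counts.
// The two literal lines stand in for matrix.txt.
object OrderSketch {
  def typeChange(str: Array[String]): Array[Double] =
    str.map(_.toDouble)

  def main(args: Array[String]): Unit = {
    val lines  = Seq("1\t2", "3\t4")
    val matrix = lines.map(line => typeChange(line.split("\t")))
    val order  = matrix.map(_.length).sum
    println(order) // prints 4: two fields on each of the two lines
  }
}
```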

args(0) is matrix.txt.

The printed result is 4, and it works when I run it in my IDE (IntelliJ IDEA). But if I export a jar and run the jar on EC2, it doesn't work. On EC2, matrix.txt is on HDFS.
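For context, a jar like this is typically launched on a cluster with spark-submit; a sketch of such a command follows, where the jar name, master URL, and HDFS path are placeholders and not taken from the question:

```shell
# Hypothetical submit command -- jar name, master URL, and HDFS path
# are placeholders, not details from the original post.
spark-submit \
  --class TestSpark \
  --master spark://ec2-master:7077 \
  testspark.jar \
  hdfs:///user/hadoop/matrix.txt
```

Note that a master set directly on the SparkConf in code, as with setMaster("local") above, takes precedence over the --master flag passed to spark-submit, which is a common cause of jobs behaving differently in an IDE than on a cluster.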


Why?

Please paste the stack trace into your question. - How do I get the stack trace?