Apache Spark: trying to set up Spark to run a program, but it doesn't seem to work

I am trying to set up Spark to run a program, but it doesn't seem to work. This is what happens when I try to run one of the Spark example programs:

    hduser_@ankit-sve14137cnb:/usr/local/spark$ ./bin/run-example SparkPi 10
    Failed to find Spark examples assembly in /usr/local/spark/lib or /usr/local/spark/examples/target
    You need to build Spark before running this program
    hduser_@ankit-sve14137cnb:/usr/local/spark$ sudo build/mvn -e -Pyarn -Phadoop-2.6 -Dhadoop.version=2.6.0 -DskipTests clean package
    Using `mvn` from path: /usr/bin/mvn
    Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=512M; support was removed in 8.0
    [INFO] Error stacktraces are turned on.
    [INFO] Scanning for projects...
    [INFO] 
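The first error ("Failed to find Spark examples assembly") means the launcher cannot find the examples assembly jar. With a pre-built 1.x distribution that jar ships under lib/, so one quick way to check whether the distribution actually needs building at all (the exact jar name varies by release) is:

    ls /usr/local/spark/lib/spark-examples-*.jar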

These are the paths set in my bashrc file:

export JAVA_HOME=/home/ankit/Downloads/jdk1.8.0_77

export HADOOP_HOME=/home/ankit/Downloads/hadoop
export HADOOP_MAPRED_HOME=$HADOOP_HOME 
export HADOOP_COMMON_HOME=$HADOOP_HOME 
export HADOOP_HDFS_HOME=$HADOOP_HOME 
export YARN_HOME=$HADOOP_HOME 
export HADOOP_COMMON_LIB_NATIVE_DIR=$HADOOP_HOME/lib/native 
#export PATH=$PATH:$HADOOP_HOME/sbin:$HADOOP_HOME/bin 
export HADOOP_INSTALL=$HADOOP_HOME 
export HADOOP_OPTS="$HADOOP_OPTS -Djava.library.path=$HADOOP_HOME/lib/native"
export CASSANDRA_HOME =$CASSANDRA_HOME:/home/hduser_/cassandra
#export PATH = $PATH:$CASSANDRA_HOME/bin
export SCALA_HOME = $SCALA_HOME:/usr/local/scala
export PATH = $SCALA_HOME/bin:$PATH
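As an aside, the last few export lines above are not valid bash: an assignment may not have spaces around "=", and CASSANDRA_HOME and SCALA_HOME should each name a single directory rather than a PATH-style list. A corrected sketch of those lines, assuming the same install paths:

    # no spaces around "=", and each *_HOME points at one directory
    export CASSANDRA_HOME=/home/hduser_/cassandra
    export SCALA_HOME=/usr/local/scala
    export PATH=$SCALA_HOME/bin:$PATH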

I am new to SOF; could someone tell me what is going wrong?

In Maven 3, if a download has just failed and you have since fixed the cause (for example, by uploading the jar to the repository), Maven will still have cached the failure. To force a refresh, add -U to the command line. Try the refresh and let me know how it goes.
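For instance, a minimal sketch of that refresh, combined with clearing the cached artifact named in the comments below (the org/scala-lang path is Maven's standard repository layout for that jar; the profile flags are taken from the asker's own build command):

    # locate the possibly-corrupt cached jar in the local Maven repository
    find ~/.m2/repository -name "scala-compiler-2.10.5*"
    # remove it so Maven has to fetch it again
    rm -rf ~/.m2/repository/org/scala-lang/scala-compiler/2.10.5
    # rebuild; -U forces Maven to re-check the remote repositories
    build/mvn -U -Pyarn -Phadoop-2.6 -Dhadoop.version=2.6.0 -DskipTests clean package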

If you have already had a failed build and need to force a refresh with Maven 3, the command should be (note the -U option):

    mvn -U -DskipTests clean package
It says "Required file not found: scala-compiler-2.10.5.jar". Are you sure mvn was able to download it? Can you check?

@fathersson How do I check that?

You can look in your local Maven repository. It usually sits under .m2 in your home directory; just use the find utility to locate the jar.

Why are you compiling your Spark distribution at all? You can easily download one of the many pre-built binaries here.

@PinoSan I did use a pre-built version: spark-1.6.1-bin-hadoop2.6.tgz. I even tried "$sbt/sbt package", but I still cannot start the spark-shell. This is the Maven installation I am building with:
    hduser_@ankit-sve14137cnb:/usr/local$ mvn -version
    Apache Maven 3.3.3
    Maven home: /usr/share/maven
    Java version: 1.8.0_77, vendor: Oracle Corporation
    Java home: /home/ankit/Downloads/jdk1.8.0_77/jre
    Default locale: en_IN, platform encoding: UTF-8
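Since the comments point to the pre-built route, here is a minimal sketch of installing and smoke-testing the exact binary named above (the archive.apache.org URL is an assumption based on Apache's release-archive layout):

    # download and unpack the pre-built distribution mentioned in the comments
    wget https://archive.apache.org/dist/spark/spark-1.6.1/spark-1.6.1-bin-hadoop2.6.tgz
    tar -xzf spark-1.6.1-bin-hadoop2.6.tgz
    cd spark-1.6.1-bin-hadoop2.6
    # no build step should be needed; the examples assembly ships in lib/
    ./bin/run-example SparkPi 10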