Running Scala Spark on Windows


I have Spark 1.6.1 on a Windows 8 machine.

When I run the command
c:\spark-1.6.1-bin-hadoop2.6\spark-1.6.1-bin-hadoop2.6>bin\spark-shell
there is a lot of screen output, and then it gives me the messages below. How can I get rid of this error?

Picked up _JAVA_OPTIONS: -Djava.net.preferIPv4Stack=true
Picked up _JAVA_OPTIONS: -Djava.net.preferIPv4Stack=true
16/03/15 08:48:27 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
16/03/15 08:48:27 ERROR Shell: Failed to locate the winutils binary in the hadoop binary path
java.io.IOException: Could not locate executable null\bin\winutils.exe in the Hadoop binaries.

    at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
Caused by: java.lang.NullPointerException
        at java.lang.ProcessBuilder.start(Unknown Source)
        at org.apache.hadoop.util.Shell.runCommand(Shell.java:482)
        at org.apache.hadoop.util.Shell.run(Shell.java:455)
        at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:715)
        at org.apache.hadoop.util.Shell.execCommand(Shell.java:808)
        at org.apache.hadoop.util.Shell.execCommand(Shell.java:791)
        at org.apache.hadoop.fs.FileUtil.execCommand(FileUtil.java:1097)
        at org.apache.hadoop.fs.RawLocalFileSystem$DeprecatedRawLocalFileStatus.loadPermissionInfo(RawLocalFileSystem.java:582)
        at org.apache.hadoop.fs.RawLocalFileSystem$DeprecatedRawLocalFileStatus.getPermission(RawLocalFileSystem.java:557)
        at org.apache.hadoop.hive.ql.session.SessionState.createRootHDFSDir(SessionState.java:599)
        at org.apache.hadoop.hive.ql.session.SessionState.createSessionDirs(SessionState.java:554)
        at org.apache.hadoop.hive.ql.session.SessionState.start(SessionState.java:508)
        ... 62 more
<console>:16: error: not found: value sqlContext
         import sqlContext.implicits._
                ^
<console>:16: error: not found: value sqlContext
         import sqlContext.sql
                ^

The link provided at the bottom of the page and the Stack Overflow link solved the problem.

Possible duplicate question, but I don't have a Hadoop cluster; I have a single Windows machine. — That doesn't matter: it is failing to resolve the path to winutils.exe, so the solution should be the same. You need to set an environment variable called HADOOP_HOME whose value points to the directory containing winutils. — I downloaded the file hadoop-common-2.2.0-bin-master.zip. Do I have to unzip it? I have also set the HADOOP_HOME environment variable.
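To make the fix above concrete, here is a minimal sketch of the Windows setup. The directory C:\hadoop is only an example; use whatever folder you unzipped the winutils download into, as long as winutils.exe ends up under its bin subdirectory:

```
:: 1. Unzip hadoop-common-2.2.0-bin-master.zip, e.g. to C:\hadoop,
::    so that C:\hadoop\bin\winutils.exe exists.

:: 2. Set HADOOP_HOME to the folder that CONTAINS bin (not bin itself):
setx HADOOP_HOME C:\hadoop

:: 3. Open a NEW console (setx does not affect the current session),
::    then launch the shell again:
cd c:\spark-1.6.1-bin-hadoop2.6\spark-1.6.1-bin-hadoop2.6
bin\spark-shell
```

If the Hive-related part of the stack trace (SessionState.createRootHDFSDir) still appears after this, a commonly reported follow-up step is to grant permissions on the \tmp\hive directory with winutils, e.g. `%HADOOP_HOME%\bin\winutils.exe chmod 777 \tmp\hive`.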