Problem getting Apache Spark to work on Hadoop


I am very new to Big Data, especially Apache Spark / Hadoop-YARN.

To try things out, I installed a single-node Hadoop in a virtual machine and added Spark on top of it.

I think the environment is set up correctly, because I can access both web UIs (a quick check from the shell is sketched right after this list):

  • the Hadoop overview page
  • the Spark overview page
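If the pages load, the daemons are up. A minimal sanity check from the shell, assuming the default ports (50070 for the Hadoop 2.x NameNode UI, 9870 on Hadoop 3.x, 8080 for the Spark standalone master; your ports may differ):

# Ports below are default-config assumptions; adjust to your setup
curl -s -o /dev/null -w "%{http_code}\n" http://localhost:50070   # Hadoop NameNode UI
curl -s -o /dev/null -w "%{http_code}\n" http://localhost:8080    # Spark master UI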
Then I created a small Python script to count words:

from pyspark import SparkConf, SparkContext
from operator import add
import sys

## Constants
APP_NAME = "HelloWorld of Big Data"


def main(sc, filename):
    # Read the input file, split each line into words,
    # and pair every word with a count of 1
    textRDD = sc.textFile(filename)
    words = textRDD.flatMap(lambda x: x.split(' ')).map(lambda x: (x, 1))
    # Sum the counts per word and bring the result back to the driver
    wordcount = words.reduceByKey(add).collect()
    for wc in wordcount:
        print("{} {}".format(wc[0], wc[1]))


if __name__ == "__main__":
    # Configure Spark to run locally with one worker thread per core
    conf = SparkConf().setAppName(APP_NAME)
    conf = conf.setMaster("local[*]")
    sc = SparkContext(conf=conf)
    filename = sys.argv[1]
    # Execute main functionality
    main(sc, filename)
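
Since the script hard-codes local[*], it can be smoke-tested against a plain local file first, bypassing HDFS entirely; a sketch, where the file:// scheme forces the local filesystem and the paths are the ones used later in this question:

$SPARK_HOME/bin/spark-submit /home/hduser/count.py file:///home/hduser/data.txt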
I have a text file named data.txt. I uploaded it to HDFS with:

hadoop fs -put data.txt hdfs://localhost:9000
The file now lives under:
hdfs://localhost:9000/user/hduser
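
To double-check where the file actually landed, one can list that directory (the second form relies on fs.defaultFS from core-site.xml resolving to the same namenode):

hadoop fs -ls hdfs://localhost:9000/user/hduser
# or, letting fs.defaultFS supply the scheme and authority:
hadoop fs -ls /user/hduser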

So I wanted to run my Python script on this file through Spark/Hadoop.

I ran:

./bin/spark-submit /home/hduser/count.py /home/hduser/data.txt

But I got:

Traceback (most recent call last):
  File "/home/hduser/count.py", line 25, in <module>
    main(sc, filename)
  File "/home/hduser/count.py", line 13, in main
    wordcount = words.reduceByKey(add).collect()
  File "/usr/local/spark/python/lib/pyspark.zip/pyspark/rdd.py", line 1623, in reduceByKey
  File "/usr/local/spark/python/lib/pyspark.zip/pyspark/rdd.py", line 1849, in combineByKey
  File "/usr/local/spark/python/lib/pyspark.zip/pyspark/rdd.py", line 2259, in _defaultReducePartitions
  File "/usr/local/spark/python/lib/pyspark.zip/pyspark/rdd.py", line 2455, in getNumPartitions
  File "/usr/local/spark/python/lib/py4j-0.10.6-src.zip/py4j/java_gateway.py", line 1160, in __call__
  File "/usr/local/spark/python/lib/py4j-0.10.6-src.zip/py4j/protocol.py", line 320, in get_return_value
py4j.protocol.Py4JJavaError: An error occurred while calling o21.partitions.
: org.apache.hadoop.mapred.InvalidInputException: Input path does not exist: hdfs://localhost:9000/home/hduser/data.txt
    at org.apache.hadoop.mapred.FileInputFormat.singleThreadedListStatus(FileInputFormat.java:285)
    at org.apache.hadoop.mapred.FileInputFormat.listStatus(FileInputFormat.java:228)
    at org.apache.hadoop.mapred.FileInputFormat.getSplits(FileInputFormat.java:313)
    at org.apache.spark.rdd.HadoopRDD.getPartitions(HadoopRDD.scala:200)
    at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:253)
    at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:251)
    at scala.Option.getOrElse(Option.scala:121)
    at org.apache.spark.rdd.RDD.partitions(RDD.scala:251)
    at org.apache.spark.rdd.MapPartitionsRDD.getPartitions(MapPartitionsRDD.scala:35)
    at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:253)
    at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:251)
    at scala.Option.getOrElse(Option.scala:121)
    at org.apache.spark.rdd.RDD.partitions(RDD.scala:251)
    at org.apache.spark.api.java.JavaRDDLike$class.partitions(JavaRDDLike.scala:61)
    at org.apache.spark.api.java.AbstractJavaRDDLike.partitions(JavaRDDLike.scala:45)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:244)
    at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:357)
    at py4j.Gateway.invoke(Gateway.java:282)
    at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:132)
    at py4j.commands.CallCommand.execute(CallCommand.java:79)
    at py4j.GatewayConnection.run(GatewayConnection.java:214)
    at java.lang.Thread.run(Thread.java:748)
This is strange, because my data.txt really is an HDFS file, yet I get:

Input path does not exist: hdfs://localhost:9000/home/hduser/data.txt


Any ideas?

Your URL is invalid: HDFS has no home folder, so /home/hduser/data.txt does not exist there. Try the following instead:

./bin/spark-submit /home/hduser/count.py /user/hduser/data.txt
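
Why this works: the path you passed has no scheme, so Hadoop resolves it against fs.defaultFS (hdfs://localhost:9000) and looks for /home/hduser/data.txt inside HDFS, while the file actually sits at /user/hduser/data.txt. Passing the fully qualified URI is equivalent and makes the target explicit:

./bin/spark-submit /home/hduser/count.py hdfs://localhost:9000/user/hduser/data.txt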

Make sure the HADOOP_HOME and SPARK_HOME path variables are set in spark-env.sh. With those in place you can do I/O against HDFS and submit jobs to the cluster.
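
A minimal sketch of what conf/spark-env.sh might contain; the Hadoop install path is an assumption and must match your own layout (the Spark path matches the /usr/local/spark seen in your traceback):

# Paths below are assumptions; replace them with your actual install locations
export HADOOP_HOME=/usr/local/hadoop
export SPARK_HOME=/usr/local/spark
# Pointing Spark at the Hadoop config dir is what makes fs.defaultFS visible to it
export HADOOP_CONF_DIR=$HADOOP_HOME/etc/hadoop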