Apache Spark PySpark error: "Input path does not exist"

Tags: apache-spark, pyspark

I am new to Spark and I write my code in Python.

Following the "Learning Spark" book, I read that "you don't need to have Hadoop installed to run Spark."

However, when I try to count the lines of a file with PySpark, I get the error below. What am I missing?

>>> lines = sc.textFile("README.md")
15/02/01 13:27:12 INFO MemoryStore: ensureFreeSpace(32728) called with curMem=0,
 maxMem=278019440
15/02/01 13:27:12 INFO MemoryStore: Block broadcast_0 stored as values in memory
 (estimated size 32.0 KB, free 265.1 MB)
>>> lines.count()
15/02/01 13:27:18 WARN NativeCodeLoader: Unable to load native-hadoop library fo
r your platform... using builtin-java classes where applicable
15/02/01 13:27:18 WARN LoadSnappy: Snappy native library not loaded
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "C:\Spark\spark-1.1.0-bin-hadoop1\python\pyspark\rdd.py", line 847, in co
unt
    return self.mapPartitions(lambda i: [sum(1 for _ in i)]).sum()
  File "C:\Spark\spark-1.1.0-bin-hadoop1\python\pyspark\rdd.py", line 838, in su
m
    return self.mapPartitions(lambda x: [sum(x)]).reduce(operator.add)
  File "C:\Spark\spark-1.1.0-bin-hadoop1\python\pyspark\rdd.py", line 759, in re
duce
    vals = self.mapPartitions(func).collect()
  File "C:\Spark\spark-1.1.0-bin-hadoop1\python\pyspark\rdd.py", line 723, in co
llect
    bytesInJava = self._jrdd.collect().iterator()
  File "C:\Spark\spark-1.1.0-bin-hadoop1\python\lib\py4j-0.8.2.1-src.zip\py4j\ja
va_gateway.py", line 538, in __call__
  File "C:\Spark\spark-1.1.0-bin-hadoop1\python\lib\py4j-0.8.2.1-src.zip\py4j\pr
otocol.py", line 300, in get_return_value
py4j.protocol.Py4JJavaError: An error occurred while calling o26.collect.
: org.apache.hadoop.mapred.InvalidInputException: Input path does not exist: fil
e:/C:/Spark/spark-1.1.0-bin-hadoop1/bin/README.md
        at org.apache.hadoop.mapred.FileInputFormat.listStatus(FileInputFormat.j
ava:197)
        at org.apache.hadoop.mapred.FileInputFormat.getSplits(FileInputFormat.ja
va:208)
        at org.apache.spark.rdd.HadoopRDD.getPartitions(HadoopRDD.scala:179)
        at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:204)
        at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:202)
        at scala.Option.getOrElse(Option.scala:120)
        at org.apache.spark.rdd.RDD.partitions(RDD.scala:202)
        at org.apache.spark.rdd.MappedRDD.getPartitions(MappedRDD.scala:28)
        at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:204)
        at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:202)
        at scala.Option.getOrElse(Option.scala:120)
        at org.apache.spark.rdd.RDD.partitions(RDD.scala:202)
        at org.apache.spark.api.python.PythonRDD.getPartitions(PythonRDD.scala:5
6)
        at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:204)
        at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:202)
        at scala.Option.getOrElse(Option.scala:120)
        at org.apache.spark.rdd.RDD.partitions(RDD.scala:202)
        at org.apache.spark.SparkContext.runJob(SparkContext.scala:1135)
        at org.apache.spark.rdd.RDD.collect(RDD.scala:774)
        at org.apache.spark.api.java.JavaRDDLike$class.collect(JavaRDDLike.scala
:305)
        at org.apache.spark.api.java.JavaRDD.collect(JavaRDD.scala:32)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(Unknown Source)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(Unknown Source)
        at java.lang.reflect.Method.invoke(Unknown Source)
        at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:231)
        at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:379)
        at py4j.Gateway.invoke(Gateway.java:259)
        at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:133)
        at py4j.commands.CallCommand.execute(CallCommand.java:79)
        at py4j.GatewayConnection.run(GatewayConnection.java:207)
        at java.lang.Thread.run(Unknown Source)

>>> lines.first()
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "C:\Spark\spark-1.1.0-bin-hadoop1\python\pyspark\rdd.py", line 1167, in f
irst
    return self.take(1)[0]
  File "C:\Spark\spark-1.1.0-bin-hadoop1\python\pyspark\rdd.py", line 1126, in t
ake
    totalParts = self._jrdd.partitions().size()
  File "C:\Spark\spark-1.1.0-bin-hadoop1\python\lib\py4j-0.8.2.1-src.zip\py4j\ja
va_gateway.py", line 538, in __call__
  File "C:\Spark\spark-1.1.0-bin-hadoop1\python\lib\py4j-0.8.2.1-src.zip\py4j\pr
otocol.py", line 300, in get_return_value
py4j.protocol.Py4JJavaError: An error occurred while calling o20.partitions.
: org.apache.hadoop.mapred.InvalidInputException: Input path does not exist: fil
e:/C:/Spark/spark-1.1.0-bin-hadoop1/bin/README.md
        at org.apache.hadoop.mapred.FileInputFormat.listStatus(FileInputFormat.j
ava:197)
        at org.apache.hadoop.mapred.FileInputFormat.getSplits(FileInputFormat.ja
va:208)
        at org.apache.spark.rdd.HadoopRDD.getPartitions(HadoopRDD.scala:179)
        at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:204)
        at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:202)
        at scala.Option.getOrElse(Option.scala:120)
        at org.apache.spark.rdd.RDD.partitions(RDD.scala:202)
        at org.apache.spark.rdd.MappedRDD.getPartitions(MappedRDD.scala:28)
        at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:204)
        at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:202)
        at scala.Option.getOrElse(Option.scala:120)
        at org.apache.spark.rdd.RDD.partitions(RDD.scala:202)
        at org.apache.spark.api.java.JavaRDDLike$class.partitions(JavaRDDLike.sc
ala:50)
        at org.apache.spark.api.java.JavaRDD.partitions(JavaRDD.scala:32)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(Unknown Source)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(Unknown Source)
        at java.lang.reflect.Method.invoke(Unknown Source)
        at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:231)
        at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:379)
        at py4j.Gateway.invoke(Gateway.java:259)
        at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:133)
        at py4j.commands.CallCommand.execute(CallCommand.java:79)
        at py4j.GatewayConnection.run(GatewayConnection.java:207)
        at java.lang.Thread.run(Unknown Source)

>>>
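The paths in both stack traces show what is going wrong: the relative name "README.md" is resolved against the directory the shell was started from, so Spark looks for file:/C:/Spark/spark-1.1.0-bin-hadoop1/bin/README.md, and no such file exists there. A minimal sketch of the usual fix, assuming README.md sits in the root of the Spark installation (as it does in a standard Spark download):

lines = sc.textFile("C:/Spark/spark-1.1.0-bin-hadoop1/README.md")   # absolute path; forward slashes are fine on Windows
# or, with an explicit local-filesystem URI:
lines = sc.textFile("file:///C:/Spark/spark-1.1.0-bin-hadoop1/README.md")
lines.count()   # should now return the number of lines in the file
lines.first()   # and this should return the first line

Alternatively, starting pyspark from C:\Spark\spark-1.1.0-bin-hadoop1 instead of its bin subdirectory lets the original relative path resolve to the right file.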
If the input lives on a cluster rather than the local disk, it can first be copied into HDFS with hadoop fs -put, or read directly from Azure blob storage with a wasb:// URI:

~/ephemeral-hdfs/bin/hadoop fs -put /dir/filename.txt filename.txt
data = sc.textFile("wasb:///HdiSamples/SensorSampleData/hvac/HVAC.csv")