
Apache Spark PYSPARK: Error in reading RDD


I am trying to read from my RDD, but I get the error below. Please advise. The file exists in HDFS; I moved it there using the hadoop filesystem command.

Code:

baby_names = sc.textFile("/user/rahul/baby_names.csv")

rows = baby_names.map(lambda line:line.split(","))

for row in rows.take(rows.count()):print(row[1])
Error:

Py4JJavaError                             Traceback (most recent call last)
<ipython-input-7-b9dcd91a9f1c> in <module>()
----> 1 for row in rows.take(rows.count()):print(row[1])

/home/rahul/Hadoop/spark/python/pyspark/rdd.pyc in count(self)
   1039         3
   1040         """
-> 1041         return self.mapPartitions(lambda i: [sum(1 for _ in i)]).sum()
   1042 
   1043     def stats(self):

/home/rahul/Hadoop/spark/python/pyspark/rdd.pyc in sum(self)
   1030         6.0
   1031         """
-> 1032         return self.mapPartitions(lambda x: [sum(x)]).fold(0, operator.add)
   1033 
   1034     def count(self):

/home/rahul/Hadoop/spark/python/pyspark/rdd.pyc in fold(self, zeroValue, op)
    904         # zeroValue provided to each partition is unique from the one provided
    905         # to the final reduce call
--> 906         vals = self.mapPartitions(func).collect()
    907         return reduce(op, vals, zeroValue)
    908 

/home/rahul/Hadoop/spark/python/pyspark/rdd.pyc in collect(self)
    807         """
    808         with SCCallSiteSync(self.context) as css:
--> 809             port = self.ctx._jvm.PythonRDD.collectAndServe(self._jrdd.rdd())
    810         return list(_load_from_socket(port, self._jrdd_deserializer))
    811 

/home/rahul/Hadoop/spark/python/lib/py4j-0.10.4-src.zip/py4j/java_gateway.py in __call__(self, *args)
   1131         answer = self.gateway_client.send_command(command)
   1132         return_value = get_return_value(
-> 1133             answer, self.gateway_client, self.target_id, self.name)
   1134 
   1135         for temp_arg in temp_args:

/home/rahul/Hadoop/spark/python/pyspark/sql/utils.pyc in deco(*a, **kw)
     61     def deco(*a, **kw):
     62         try:
---> 63             return f(*a, **kw)
     64         except py4j.protocol.Py4JJavaError as e:
     65             s = e.java_exception.toString()

/home/rahul/Hadoop/spark/python/lib/py4j-0.10.4-src.zip/py4j/protocol.py in get_return_value(answer, gateway_client, target_id, name)
    317                 raise Py4JJavaError(
    318                     "An error occurred while calling {0}{1}{2}.\n".
--> 319                     format(target_id, ".", name), value)
    320             else:
    321                 raise Py4JError(

Py4JJavaError: An error occurred while calling z:org.apache.spark.api.python.PythonRDD.collectAndServe.
: org.apache.hadoop.mapred.InvalidInputException: Input path does not exist: file:/user/rahul/baby_names.csv
    at org.apache.hadoop.mapred.FileInputFormat.singleThreadedListStatus(FileInputFormat.java:287)
    at org.apache.hadoop.mapred.FileInputFormat.listStatus(FileInputFormat.java:229)
    at org.apache.hadoop.mapred.FileInputFormat.getSplits(FileInputFormat.java:315)
    at org.apache.spark.rdd.HadoopRDD.getPartitions(HadoopRDD.scala:202)
    at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:252)
    at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:250)
    at scala.Option.getOrElse(Option.scala:121)
    at org.apache.spark.rdd.RDD.partitions(RDD.scala:250)
    at org.apache.spark.rdd.MapPartitionsRDD.getPartitions(MapPartitionsRDD.scala:35)
    at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:252)
    at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:250)
    at scala.Option.getOrElse(Option.scala:121)
    at org.apache.spark.rdd.RDD.partitions(RDD.scala:250)
    at org.apache.spark.api.python.PythonRDD.getPartitions(PythonRDD.scala:53)
    at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:252)
    at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:250)
    at scala.Option.getOrElse(Option.scala:121)
    at org.apache.spark.rdd.RDD.partitions(RDD.scala:250)
    at org.apache.spark.SparkContext.runJob(SparkContext.scala:1958)
    at org.apache.spark.rdd.RDD$$anonfun$collect$1.apply(RDD.scala:935)
    at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
    at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:112)
    at org.apache.spark.rdd.RDD.withScope(RDD.scala:362)
    at org.apache.spark.rdd.RDD.collect(RDD.scala:934)
    at org.apache.spark.api.python.PythonRDD$.collectAndServe(PythonRDD.scala:453)
    at org.apache.spark.api.python.PythonRDD.collectAndServe(PythonRDD.scala)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:244)
    at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:357)
    at py4j.Gateway.invoke(Gateway.java:280)
    at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:132)
    at py4j.commands.CallCommand.execute(CallCommand.java:79)
    at py4j.GatewayConnection.run(GatewayConnection.java:214)
    at java.lang.Thread.run(Thread.java:745)
If there are any links about Spark configuration changes that would help, please share them.

If you want to read all the rows, why not use collect()? Calling rows.take(rows.count()) runs an extra full pass over the data just to compute the count before fetching the rows.

baby_names = sc.textFile("/user/rahul/baby_names.csv")

rows = baby_names.map(lambda line:line.split(",")) \
                 .filter(lambda line: len(line)>1) \
                 .map(lambda line: (line[0],line[1]))

for row in rows.collect():print(row)


collect() - returns all the elements of the dataset as an array to the driver program. This is usually useful after a filter or other operation that returns a sufficiently small subset of the data.

count() - returns the number of elements in the dataset.

take(n) - returns an array with the first n elements of the dataset.
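As a quick illustration of the difference between these three actions, here is a minimal sketch run against a small in-memory RDD (the sample rows are made up purely for illustration):

# Small sample RDD built in memory just to show the three actions.
data = sc.parallelize([("2013", "DAVID"), ("2013", "JAYDEN"), ("2014", "EMMA")])

print(data.count())    # 3 -- number of elements in the dataset
print(data.take(2))    # first 2 elements: [('2013', 'DAVID'), ('2013', 'JAYDEN')]
print(data.collect())  # every element brought back to the driver as a list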


org.apache.hadoop.mapred.InvalidInputException: Input path does not exist: file:/user/rahul/baby_names.csv looks like you provided a local path - where is the file actually located?
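Note that the exception resolves the path with the file: scheme (the local filesystem) rather than HDFS. One way to rule that out is to pass a fully qualified HDFS URI to textFile - a minimal sketch, assuming a NameNode at localhost:9000 (substitute the fs.defaultFS value from your cluster's core-site.xml):

# localhost:9000 is a placeholder NameNode address - use your cluster's fs.defaultFS.
baby_names = sc.textFile("hdfs://localhost:9000/user/rahul/baby_names.csv")

rows = baby_names.map(lambda line: line.split(","))
print(rows.take(5))  # should now read from HDFS instead of file:/user/rahul/...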