Apache Spark: Setting up the Google Cloud Storage connector on Spark (apache-spark, google-hadoop)

I am trying to set up Google Cloud Storage on Spark on Mac OS in order to test my Spark application locally. I have read the following documentation (). I added "gcs-connector-latest-hadoop2.jar" to the spark/lib folder, and I also added the core-data.xml file to the spark/conf directory.
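For reference, that configuration file is where the gs:// scheme is normally mapped to the connector class; a minimal sketch, using the property names documented for the gcs-connector and a placeholder project ID, might look like this:

    <configuration>
      <!-- map the gs:// scheme to the connector's FileSystem implementation -->
      <property>
        <name>fs.gs.impl</name>
        <value>com.google.cloud.hadoop.fs.gcs.GoogleHadoopFileSystem</value>
      </property>
      <!-- placeholder: the Google Cloud project that owns the buckets -->
      <property>
        <name>fs.gs.project.id</name>
        <value>your-project-id</value>
      </property>
    </configuration>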

When I run the pyspark shell, I get an error:

>>> sc.textFile("gs://mybucket/test.csv").count()
    Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/Users/poiuytrez/Documents/DataBerries/programs/spark/python/pyspark/rdd.py", line 847, in count
    return self.mapPartitions(lambda i: [sum(1 for _ in i)]).sum()
  File "/Users/poiuytrez/Documents/DataBerries/programs/spark/python/pyspark/rdd.py", line 838, in sum
    return self.mapPartitions(lambda x: [sum(x)]).reduce(operator.add)
  File "/Users/poiuytrez/Documents/DataBerries/programs/spark/python/pyspark/rdd.py", line 759, in reduce
    vals = self.mapPartitions(func).collect()
  File "/Users/poiuytrez/Documents/DataBerries/programs/spark/python/pyspark/rdd.py", line 723, in collect
    bytesInJava = self._jrdd.collect().iterator()
  File "/Users/poiuytrez/Documents/DataBerries/programs/spark/python/lib/py4j-0.8.2.1-src.zip/py4j/java_gateway.py", line 538, in __call__
  File "/Users/poiuytrez/Documents/DataBerries/programs/spark/python/lib/py4j-0.8.2.1-src.zip/py4j/protocol.py", line 300, in get_return_value
py4j.protocol.Py4JJavaError: An error occurred while calling o26.collect.
: java.lang.RuntimeException: java.lang.ClassNotFoundException: Class com.google.cloud.hadoop.fs.gcs.GoogleHadoopFileSystem not found
    at org.apache.hadoop.conf.Configuration.getClass(Configuration.java:1895)
    at org.apache.hadoop.fs.FileSystem.getFileSystemClass(FileSystem.java:2379)
    at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:2392)
    at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:89)
    at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:2431)
    at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:2413)
    at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:368)
    at org.apache.hadoop.fs.Path.getFileSystem(Path.java:296)
    at org.apache.hadoop.mapred.FileInputFormat.singleThreadedListStatus(FileInputFormat.java:256)
    at org.apache.hadoop.mapred.FileInputFormat.listStatus(FileInputFormat.java:228)
    at org.apache.hadoop.mapred.FileInputFormat.getSplits(FileInputFormat.java:304)
    at org.apache.spark.rdd.HadoopRDD.getPartitions(HadoopRDD.scala:179)
    at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:204)
    at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:202)
    at scala.Option.getOrElse(Option.scala:120)
    at org.apache.spark.rdd.RDD.partitions(RDD.scala:202)
    at org.apache.spark.rdd.MappedRDD.getPartitions(MappedRDD.scala:28)
    at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:204)
    at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:202)
    at scala.Option.getOrElse(Option.scala:120)
    at org.apache.spark.rdd.RDD.partitions(RDD.scala:202)
    at org.apache.spark.api.python.PythonRDD.getPartitions(PythonRDD.scala:56)
    at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:204)
    at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:202)
    at scala.Option.getOrElse(Option.scala:120)
    at org.apache.spark.rdd.RDD.partitions(RDD.scala:202)
    at org.apache.spark.SparkContext.runJob(SparkContext.scala:1135)
    at org.apache.spark.rdd.RDD.collect(RDD.scala:774)
    at org.apache.spark.api.java.JavaRDDLike$class.collect(JavaRDDLike.scala:305)
    at org.apache.spark.api.java.JavaRDD.collect(JavaRDD.scala:32)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:606)
    at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:231)
    at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:379)
    at py4j.Gateway.invoke(Gateway.java:259)
    at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:133)
    at py4j.commands.CallCommand.execute(CallCommand.java:79)
    at py4j.GatewayConnection.run(GatewayConnection.java:207)
    at java.lang.Thread.run(Thread.java:744)
Caused by: java.lang.ClassNotFoundException: Class com.google.cloud.hadoop.fs.gcs.GoogleHadoopFileSystem not found
    at org.apache.hadoop.conf.Configuration.getClassByName(Configuration.java:1801)
    at org.apache.hadoop.conf.Configuration.getClass(Configuration.java:1893)
    ... 40 more
I am not sure where to go next.

It may vary across releases of Spark, but if you look inside bdutil-0.35.2/extensions/Spark/install_Spark.sh you will see how the normal "Spark + Hadoop on GCE" setup using bdutil works; it includes the items you mention, adding the connector into the spark/lib folder and adding the core-site.xml file into the spark/conf directory, but it additionally has one more line added to spark/conf/spark-env.sh:

export SPARK_CLASSPATH=\$SPARK_CLASSPATH:${LOCAL_GCS_JAR}

where ${LOCAL_GCS_JAR} is the absolute path of the JAR file you added to spark/lib. Try adding that to your spark/conf/spark-env.sh and the ClassNotFoundException should go away.
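For a manual install like the one described in the question, that line would look something like the following in spark/conf/spark-env.sh (the path below is a placeholder for wherever the jar was actually copied):

    # placeholder absolute path to the connector jar added to spark/lib
    export SPARK_CLASSPATH=$SPARK_CLASSPATH:/usr/local/spark/lib/gcs-connector-latest-hadoop2.jar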

I am getting: "This is deprecated in Spark 1.0+. Please instead use: ./spark-submit with --driver-class-path to augment the driver classpath, and spark.executor.extraClassPath to augment the executor classpath." However, I get another error when trying to access the storage, so I will create a new SO question. I was also hitting a metadata server error, which I solved using your answer to that question: adding $HADOOP_CLASSPATH to $SPARK_CLASSPATH in spark-env.sh resolves it. (at lea)
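Since SPARK_CLASSPATH is deprecated in Spark 1.0+, the non-deprecated equivalent is to pass the connector jar on the command line instead; a sketch, with the jar path as a placeholder for wherever the connector was copied:

    # placeholder path; point both the driver and executor classpath at the connector jar
    ./bin/pyspark \
      --driver-class-path /path/to/gcs-connector-latest-hadoop2.jar \
      --conf spark.executor.extraClassPath=/path/to/gcs-connector-latest-hadoop2.jar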