Apache spark - How to fix "No FileSystem for scheme: gs" in pyspark?

Tags: apache-spark, google-cloud-platform, pyspark, google-cloud-storage

I'm trying to read a json file from a Google bucket into a pyspark dataframe on my local Spark machine. Here is the code:

import pandas as pd
import numpy as np

from pyspark import SparkContext, SparkConf
from pyspark.sql import SparkSession, SQLContext

conf = SparkConf().setAll([('spark.executor.memory', '16g'),
                        ('spark.executor.cores','4'),
                         ('spark.cores.max','4')]).setMaster('local[*]')


spark = (SparkSession.
              builder.
              config(conf=conf).
              getOrCreate())


sc = spark.sparkContext

import glob
import bz2
import json
import pickle


bucket_path = "gs://<SOME_PATH>/"
client = storage.Client(project='<SOME_PROJECT>')
bucket = client.get_bucket ('<SOME_PATH>')
blobs = bucket.list_blobs()

theframes = []

for blob in blobs:
    print(blob.name)        
    testspark = spark.read.json(bucket_path + blob.name).cache()
    theframes.append(testspark) 
It reads the files from the bucket fine (I can see the printout of blob.name), but then it crashes like this:

Traceback (most recent call last):
  File "test_code.py", line 66, in <module>
    testspark = spark.read.json(bucket_path + blob.name).cache()
  File "/home/anaconda3/envs/py37base/lib/python3.6/site-packages/pyspark/sql/readwriter.py", line 274, in json
    return self._df(self._jreader.json(self._spark._sc._jvm.PythonUtils.toSeq(path)))
  File "/home/anaconda3/envs/py37base/lib/python3.6/site-packages/py4j/java_gateway.py", line 1257, in __call__
    answer, self.gateway_client, self.target_id, self.name)
  File "/home/anaconda3/envs/py37base/lib/python3.6/site-packages/pyspark/sql/utils.py", line 63, in deco
    return f(*a, **kw)
  File "/home/anaconda3/envs/py37base/lib/python3.6/site-packages/py4j/protocol.py", line 328, in get_return_value
    format(target_id, ".", name), value)
py4j.protocol.Py4JJavaError: An error occurred while calling o51.json.
: java.io.IOException: No FileSystem for scheme: gs
I've looked at this kind of error on Stack Overflow, but most of the solutions seem to be in Scala, while I'm using pyspark, and/or involve messing with core-site.xml, which I've done to no effect.

I am using Spark 2.4.1 and Python 3.6.7.


Any help is much appreciated.

Some configuration parameters are required for "gs" to be recognized as a distributed filesystem.

Use this setting for the Google Cloud Storage connector, gcs-connector-hadoop2-latest.jar:

spark = SparkSession \
        .builder \
        .config("spark.jars", "/path/to/gcs-connector-hadoop2-latest.jar") \
        .getOrCreate()
Other configurations that can be set from pyspark:

spark._jsc.hadoopConfiguration().set('fs.gs.impl', 'com.google.cloud.hadoop.fs.gcs.GoogleHadoopFileSystem')
# Required if you are using a service account (set to true)
spark._jsc.hadoopConfiguration().set('fs.gs.auth.service.account.enable', 'true')
spark._jsc.hadoopConfiguration().set('google.cloud.auth.service.account.json.keyfile', "/path/to/keyfile")
# Following are required if you are using oAuth
spark._jsc.hadoopConfiguration().set('fs.gs.auth.client.id', 'YOUR_OAUTH_CLIENT_ID')
spark._jsc.hadoopConfiguration().set('fs.gs.auth.client.secret', 'OAUTH_SECRET')
Alternatively, you can set these configurations in core-site.xml or spark-defaults.conf.
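
Once the connector is on the classpath and the credentials are in place, gs:// paths can be read directly. A minimal usage sketch (the bucket and file names are placeholders):

# Placeholder path; assumes the connector jar and credentials above are configured.
df = spark.read.json("gs://<SOME_PATH>/some_file.json")
df.printSchema()
df.show(5)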

Hadoop configuration on the command line: you can also use spark.hadoop-prefixed configuration properties to set this up when launching pyspark (or spark-submit in general), for example:

--conf spark.hadoop.fs.gs.impl=com.google.cloud.hadoop.fs.gcs.GoogleHadoopFileSystem
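
The same spark.hadoop-prefixed properties can equally be set from Python when building the session. A sketch, assuming a service-account key file (all paths are placeholders):

from pyspark.sql import SparkSession

# spark.hadoop.* properties are copied into the underlying Hadoop
# Configuration, so this mirrors the --conf example above.
spark = (SparkSession.builder
         .config("spark.jars", "/path/to/gcs-connector-hadoop2-latest.jar")
         .config("spark.hadoop.fs.gs.impl",
                 "com.google.cloud.hadoop.fs.gcs.GoogleHadoopFileSystem")
         .config("spark.hadoop.fs.gs.auth.service.account.enable", "true")
         .config("spark.hadoop.google.cloud.auth.service.account.json.keyfile",
                 "/path/to/keyfile")
         .getOrCreate())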