
Python: How to access MinIO buckets from a Jupyter PySpark notebook?

Tags: python, docker, apache-spark, pyspark, jupyter-notebook

I have instances of MinIO and a Jupyter PySpark notebook running locally in separate Docker containers. I can list buckets and objects in MinIO using the minio Python package; however, when I try to load a parquet file from a bucket with PySpark, I get the following:

Code:
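(The code block from the original post was not preserved in this copy. Below is a minimal sketch of the kind of call that produces the error that follows; the bucket and object names are hypothetical.)

from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("minio-test").getOrCreate()

# Hypothetical bucket and key; any s3a:// URL fails the same way when the
# S3A filesystem classes are missing from the classpath.
df = spark.read.load("s3a://my-bucket/data.parquet", format="parquet")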

Error:

Py4JJavaError: An error occurred while calling o22.load.
: java.lang.RuntimeException: java.lang.ClassNotFoundException: Class org.apache.hadoop.fs.s3a.S3AFileSystem not found
    at org.apache.hadoop.conf.Configuration.getClass(Configuration.java:2195)
    at org.apache.hadoop.fs.FileSystem.getFileSystemClass(FileSystem.java:2654)
    at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:2667)
    at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:94)
    at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:2703)
    at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:2685)
    at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:373)
    at org.apache.hadoop.fs.Path.getFileSystem(Path.java:295)
    at org.apache.spark.sql.execution.datasources.DataSource$$anonfun$org$apache$spark$sql$execution$datasources$DataSource$$checkAndGlobPathIfNecessary$1.apply(DataSource.scala:547)
    at org.apache.spark.sql.execution.datasources.DataSource$$anonfun$org$apache$spark$sql$execution$datasources$DataSource$$checkAndGlobPathIfNecessary$1.apply(DataSource.scala:545)
    at scala.collection.TraversableLike$$anonfun$flatMap$1.apply(TraversableLike.scala:241)
    at scala.collection.TraversableLike$$anonfun$flatMap$1.apply(TraversableLike.scala:241)
    at scala.collection.immutable.List.foreach(List.scala:392)
    at scala.collection.TraversableLike$class.flatMap(TraversableLike.scala:241)
    at scala.collection.immutable.List.flatMap(List.scala:355)
    at org.apache.spark.sql.execution.datasources.DataSource.org$apache$spark$sql$execution$datasources$DataSource$$checkAndGlobPathIfNecessary(DataSource.scala:545)
    at org.apache.spark.sql.execution.datasources.DataSource.resolveRelation(DataSource.scala:359)
    at org.apache.spark.sql.DataFrameReader.loadV1Source(DataFrameReader.scala:223)
    at org.apache.spark.sql.DataFrameReader.load(DataFrameReader.scala:211)
    at org.apache.spark.sql.DataFrameReader.load(DataFrameReader.scala:178)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:244)
    at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:357)
    at py4j.Gateway.invoke(Gateway.java:282)
    at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:132)
    at py4j.commands.CallCommand.execute(CallCommand.java:79)
    at py4j.GatewayConnection.run(GatewayConnection.java:238)
    at java.lang.Thread.run(Thread.java:748)
Caused by: java.lang.ClassNotFoundException: Class org.apache.hadoop.fs.s3a.S3AFileSystem not found
    at org.apache.hadoop.conf.Configuration.getClassByName(Configuration.java:2101)
    at org.apache.hadoop.conf.Configuration.getClass(Configuration.java:2193)
    ... 30 more

I am trying to write a script that spins these containers up, runs some tests, and then tears them down. Is there some configuration I need to include somewhere?

Can you show how you installed Spark and how you initialize it? It looks like you have to download the Java library containing org.apache.hadoop.fs.s3a.S3AFileSystem. Can you make sure the hadoop-aws and aws-java-sdk JARs are installed? I use both.
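A quick way to confirm from inside the notebook whether the S3A class is actually on the driver's classpath (a hedged check, assuming a running SparkContext named sc):

# Raises a Py4J error if the class is not on the JVM classpath.
sc._jvm.java.lang.Class.forName("org.apache.hadoop.fs.s3a.S3AFileSystem")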

See the comments here:

Specifically:

import os
# Must be set before the SparkContext is created so spark-submit pulls the
# hadoop-aws and aws-java-sdk packages (plus spark-csv) at startup.
os.environ['PYSPARK_SUBMIT_ARGS'] = '--packages com.amazonaws:aws-java-sdk:1.10.34,org.apache.hadoop:hadoop-aws:2.6.0,com.databricks:spark-csv_2.11:1.4.0 pyspark-shell'

This helped get rid of the ClassNotFoundException.
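The same packages can also be requested through the SparkSession builder instead of the environment variable (a sketch; this only takes effect if no SparkContext exists yet in the kernel):

from pyspark.sql import SparkSession

spark = (SparkSession.builder
         .appName("minio-test")
         # Same artifacts as above, minus spark-csv, which is unrelated to S3A.
         .config("spark.jars.packages",
                 "com.amazonaws:aws-java-sdk:1.10.34,"
                 "org.apache.hadoop:hadoop-aws:2.6.0")
         .getOrCreate())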

Please follow these steps:

1. Download the relevant AWS JARs. Make sure the following JARs are installed: aws-java-sdk-1.7.4.jar and hadoop-aws-2.7.3.jar.

2. You can then run your pyspark application as follows (or from the Docker CMD):

pyspark --jars "aws-java-sdk-1.7.4.jar,hadoop-aws-2.7.3.jar"
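If the JARs are already present in the image, the same thing can be done from Python before the context starts (a sketch; the /opt/jars paths are assumptions):

import os
# Hypothetical paths; point these at wherever the JARs live in your container.
os.environ['PYSPARK_SUBMIT_ARGS'] = (
    '--jars /opt/jars/aws-java-sdk-1.7.4.jar,/opt/jars/hadoop-aws-2.7.3.jar '
    'pyspark-shell'
)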

3. In the notebook, configure the Hadoop configuration as shown in the code below (see the hadoop-aws documentation for the full list of fs.s3a parameters).

4. You should now be able to query data from minio, for example:

sc.textFile("s3a://<bucket>/<path>")

import os
# Must be set before the SparkContext is created.
os.environ['PYSPARK_SUBMIT_ARGS'] = '--packages com.amazonaws:aws-java-sdk:1.10.34,org.apache.hadoop:hadoop-aws:2.6.0,com.databricks:spark-csv_2.11:1.4.0 pyspark-shell'

# Replace with your MinIO credentials.
sc._jsc.hadoopConfiguration().set("fs.s3a.access.key", "access_key")
sc._jsc.hadoopConfiguration().set("fs.s3a.secret.key", "secret_key")
# "minio" is the MinIO container's hostname on the Docker network; it listens on 9000.
sc._jsc.hadoopConfiguration().set("fs.s3a.endpoint", "http://minio:9000")
# Proxy settings used as a workaround in the original answer; usually unnecessary
# once the endpoint carries the scheme and port.
sc._jsc.hadoopConfiguration().set("fs.s3a.proxy.host", "minio")
sc._jsc.hadoopConfiguration().set("fs.s3a.proxy.port", "9000")
# MinIO serves buckets at path-style URLs rather than virtual-host style.
sc._jsc.hadoopConfiguration().set("fs.s3a.path.style.access", "true")
sc._jsc.hadoopConfiguration().set("fs.s3a.connection.ssl.enabled", "false")
sc._jsc.hadoopConfiguration().set("fs.s3a.impl", "org.apache.hadoop.fs.s3a.S3AFileSystem")
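
With that configuration in place, the parquet load from the question should work, along these lines (bucket and path are hypothetical):

# Assumes a SparkSession named spark in the same kernel.
df = spark.read.parquet("s3a://my-bucket/data.parquet")
df.show()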