Python: unable to read a CSV file with spark.read in Azure Databricks

My data lives in Azure Cosmos DB, and I have mounted the dataset in Azure Databricks.

I can read the CSV file with pandas and load it into a Spark DataFrame:

import pandas as pd

df = pd.read_csv('/dbfs/mnt/ajviswan/forest_efficiency/2020-04-26_2020-05-26.csv')
sdf = spark.createDataFrame(df)
sdf.head()
This works, producing the console output below, and I can use the resulting DataFrame for further processing:

(1) Spark Jobs
sdf:pyspark.sql.dataframe.DataFrame = [Forest: string, LoadBalanceMoveReason: string ... 4 more fields]
Out[34]: Row(Forest='AUSP282', LoadBalanceMoveReason='DefaultEncryption', CompletionDate='5/26/2020 12:00:00 AM', efficiencyRopCount=None, efficiencySize=0.9966470723725392, efficiencyIOPS=None)
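
Note that efficiencyRopCount came back as None in the row above: when a CSV column is entirely empty, pandas infers an object dtype and the Spark conversion can type it loosely. Passing explicit dtypes to read_csv sidesteps the guessing; a minimal sketch with an inline stand-in for the mounted file (column names taken from the output above, values invented for illustration):

```python
import io
import pandas as pd

# Inline stand-in for the mounted CSV (values invented for illustration)
csv_text = "Forest,efficiencySize,efficiencyIOPS\nAUSP282,0.9966,\n"

df = pd.read_csv(
    io.StringIO(csv_text),
    dtype={"Forest": str, "efficiencySize": "float64", "efficiencyIOPS": "float64"},
)

# The all-empty efficiencyIOPS column parses as float64 (NaN) rather than object
print(df.dtypes)
```

The same dtype mapping can be passed to the real read_csv call before handing the frame to spark.createDataFrame.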
However, when I try to read the file directly into a Spark DataFrame, it fails with a read error:

df = spark.read.csv('/dbfs/mnt/ajviswan/forest_efficiency/2020-04-26_2020-05-26.csv')
df
This returns:

Py4JJavaError                             Traceback (most recent call last)
<command-4117735793908621> in <module>
----> 1 df = spark.read.csv('/dbfs/mnt/ajviswan/forest_efficiency/2020-04-26_2020-05-26.csv')
      2 df

/databricks/spark/python/pyspark/sql/readwriter.py in csv(self, path, schema, sep, encoding, quote, escape, comment, header, inferSchema, ignoreLeadingWhiteSpace, ignoreTrailingWhiteSpace, nullValue, nanValue, positiveInf, negativeInf, dateFormat, timestampFormat, maxColumns, maxCharsPerColumn, maxMalformedLogPerPartition, mode, columnNameOfCorruptRecord, multiLine, charToEscapeQuoteEscaping, samplingRatio, enforceSchema, emptyValue, locale, lineSep, pathGlobFilter, recursiveFileLookup)
    533             path = [path]
    534         if type(path) == list:
--> 535             return self._df(self._jreader.csv(self._spark._sc._jvm.PythonUtils.toSeq(path)))
    536         elif isinstance(path, RDD):
    537             def func(iterator):

/databricks/spark/python/lib/py4j-0.10.9-src.zip/py4j/java_gateway.py in __call__(self, *args)
   1303         answer = self.gateway_client.send_command(command)
   1304         return_value = get_return_value(
-> 1305             answer, self.gateway_client, self.target_id, self.name)
   1306 
   1307         for temp_arg in temp_args:

/databricks/spark/python/pyspark/sql/utils.py in deco(*a, **kw)
     96     def deco(*a, **kw):
     97         try:
---> 98             return f(*a, **kw)
     99         except py4j.protocol.Py4JJavaError as e:
    100             converted = convert_exception(e.java_exception)

/databricks/spark/python/lib/py4j-0.10.9-src.zip/py4j/protocol.py in get_return_value(answer, gateway_client, target_id, name)
    326                 raise Py4JJavaError(
    327                     "An error occurred while calling {0}{1}{2}.\n".
--> 328                     format(target_id, ".", name), value)
    329             else:
    330                 raise Py4JError(

Py4JJavaError: An error occurred while calling o3781.csv.
: java.lang.NoClassDefFoundError: org/apache/spark/sql/sources/v2/ReadSupport
    at java.lang.ClassLoader.defineClass1(Native Method)
    at java.lang.ClassLoader.defineClass(ClassLoader.java:756)
    at java.security.SecureClassLoader.defineClass(SecureClassLoader.java:142)
    at java.net.URLClassLoader.defineClass(URLClassLoader.java:468)
    at java.net.URLClassLoader.access$100(URLClassLoader.java:74)
    at java.net.URLClassLoader$1.run(URLClassLoader.java:369)
    at java.net.URLClassLoader$1.run(URLClassLoader.java:363)
    at java.security.AccessController.doPrivileged(Native Method)
    at java.net.URLClassLoader.findClass(URLClassLoader.java:362)
    at java.lang.ClassLoader.loadClass(ClassLoader.java:418)
    at com.databricks.backend.daemon.driver.ClassLoaders$LibraryClassLoader.loadClass(ClassLoaders.scala:151)
    at java.lang.ClassLoader.loadClass(ClassLoader.java:351)
    at java.lang.ClassLoader.defineClass1(Native Method)
    at java.lang.ClassLoader.defineClass(ClassLoader.java:756)
    at java.security.SecureClassLoader.defineClass(SecureClassLoader.java:142)
    at java.net.URLClassLoader.defineClass(URLClassLoader.java:468)
    at java.net.URLClassLoader.access$100(URLClassLoader.java:74)
    at java.net.URLClassLoader$1.run(URLClassLoader.java:369)
    at java.net.URLClassLoader$1.run(URLClassLoader.java:363)
    at java.security.AccessController.doPrivileged(Native Method)
    at java.net.URLClassLoader.findClass(URLClassLoader.java:362)
    at java.lang.ClassLoader.loadClass(ClassLoader.java:418)
    at com.databricks.backend.daemon.driver.ClassLoaders$LibraryClassLoader.loadClass(ClassLoaders.scala:151)
    at java.lang.ClassLoader.loadClass(ClassLoader.java:351)
    at com.databricks.backend.daemon.driver.ClassLoaders$ReplWrappingClassLoader.loadClass(ClassLoaders.scala:65)
    at java.lang.ClassLoader.loadClass(ClassLoader.java:405)
    at java.lang.ClassLoader.loadClass(ClassLoader.java:351)
    at java.lang.Class.forName0(Native Method)
    at java.lang.Class.forName(Class.java:348)
    at java.util.ServiceLoader$LazyIterator.nextService(ServiceLoader.java:370)
    at java.util.ServiceLoader$LazyIterator.access$700(ServiceLoader.java:323)
    at java.util.ServiceLoader$LazyIterator$2.run(ServiceLoader.java:407)
    at java.security.AccessController.doPrivileged(Native Method)
    at java.util.ServiceLoader$LazyIterator.next(ServiceLoader.java:409)
    at java.util.ServiceLoader$1.next(ServiceLoader.java:480)
    at scala.collection.convert.Wrappers$JIteratorWrapper.next(Wrappers.scala:44)
    at scala.collection.Iterator.foreach(Iterator.scala:941)
    at scala.collection.Iterator.foreach$(Iterator.scala:941)
    at scala.collection.AbstractIterator.foreach(Iterator.scala:1429)
    at scala.collection.IterableLike.foreach(IterableLike.scala:74)
    at scala.collection.IterableLike.foreach$(IterableLike.scala:73)
    at scala.collection.AbstractIterable.foreach(Iterable.scala:56)
    at scala.collection.TraversableLike.filterImpl(TraversableLike.scala:255)
    at scala.collection.TraversableLike.filterImpl$(TraversableLike.scala:249)
    at scala.collection.AbstractTraversable.filterImpl(Traversable.scala:108)
    at scala.collection.TraversableLike.filter(TraversableLike.scala:347)
    at scala.collection.TraversableLike.filter$(TraversableLike.scala:347)
    at scala.collection.AbstractTraversable.filter(Traversable.scala:108)
    at org.apache.spark.sql.execution.datasources.DataSource$.lookupDataSource(DataSource.scala:696)
    at org.apache.spark.sql.execution.datasources.DataSource$.lookupDataSourceV2(DataSource.scala:780)
    at org.apache.spark.sql.DataFrameReader.load(DataFrameReader.scala:317)
    at org.apache.spark.sql.DataFrameReader.csv(DataFrameReader.scala:807)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:244)
    at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:380)
    at py4j.Gateway.invoke(Gateway.java:295)
    at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:132)
    at py4j.commands.CallCommand.execute(CallCommand.java:79)
    at py4j.GatewayConnection.run(GatewayConnection.java:251)
    at java.lang.Thread.run(Thread.java:748)
Caused by: java.lang.ClassNotFoundException: org.apache.spark.sql.sources.v2.ReadSupport
    at java.net.URLClassLoader.findClass(URLClassLoader.java:382)
    at java.lang.ClassLoader.loadClass(ClassLoader.java:418)
    at com.databricks.backend.daemon.driver.ClassLoaders$LibraryClassLoader.loadClass(ClassLoaders.scala:151)
    at java.lang.ClassLoader.loadClass(ClassLoader.java:351)

Spark readers resolve DBFS mounts directly, so drop the /dbfs prefix when calling spark.read:

df = spark.read.csv('/mnt/ajviswan/forest_efficiency/2020-04-26_2020-05-26.csv')

You can also name the CSV source and its options explicitly:

df = (spark.read.format("csv")
      .option("header", "true")
      .option("inferSchema", "true")
      .option("mode", "DROPMALFORMED")
      .load("/mnt/ajviswan/forest_efficiency/2020-04-26_2020-05-26.csv"))
df.show()
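
The /dbfs/ prefix is worth calling out: Databricks exposes DBFS mounts at /dbfs/... for local file APIs such as pandas, while Spark readers address the same files as dbfs:/... (or simply /mnt/...). A small helper, written here as a sketch, translates one form into the other:

```python
def to_spark_path(local_path: str) -> str:
    """Map a /dbfs/... local-file path (as used by pandas) to the
    dbfs:/... URI that Spark readers expect; other paths pass through."""
    prefix = "/dbfs/"
    if local_path.startswith(prefix):
        return "dbfs:/" + local_path[len(prefix):]
    return local_path

print(to_spark_path("/dbfs/mnt/ajviswan/forest_efficiency/2020-04-26_2020-05-26.csv"))
# -> dbfs:/mnt/ajviswan/forest_efficiency/2020-04-26_2020-05-26.csv
```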
That said, the NoClassDefFoundError points at a library conflict rather than the path: org.apache.spark.sql.sources.v2.ReadSupport belongs to the Spark 2.x DataSource V2 API and was removed in Spark 3.0, so an outdated connector jar attached to the cluster (for example an azure-cosmosdb-spark build targeting Spark 2.4) breaks Spark's data-source lookup for every format, CSV included. Removing or upgrading that library should fix spark.read.csv. And since the data originates in Azure Cosmos DB anyway, a connector version matching the cluster's Spark version lets you skip the mounted CSV and read straight from Cosmos DB:

readConfig = {
  "Endpoint" : "https://<cosmos_end_point_name>.documents.azure.com:443/",
  "Masterkey" : "<master_key_value>",
  "Database" : "<database_name>",
  "preferredRegions" : "East US",
  "Collection": "<collection_name>",
  "schema_samplesize" : "1000",
  "query_pagesize" : "200000",
  "query_custom" : "SELECT * FROM c"
}
df = spark.read.format("com.microsoft.azure.cosmosdb.spark").options(**readConfig).load()
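
Because org.apache.spark.sql.sources.v2.ReadSupport exists only in the Spark 2.x DataSource V2 API, a quick sanity check is whether every attached connector jar targets the same Spark major version as the cluster (spark.version in a notebook). A toy sketch of that check, with the helper name and version strings invented for illustration:

```python
def same_spark_major(cluster_version: str, connector_target: str) -> bool:
    """True when a connector built against `connector_target` can be
    expected to load on a cluster running `cluster_version`: the
    DataSource V2 API changed incompatibly between Spark 2.x and 3.x."""
    return cluster_version.split(".")[0] == connector_target.split(".")[0]

# e.g. a connector built for Spark 2.4 on a Spark 3.0 cluster:
print(same_spark_major("3.0.1", "2.4.5"))
# -> False
```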