I'm getting an error when loading a CSV in PySpark


I imported mmlspark in order to use LightGBM; if I don't, everything works fine.

import pyspark

spark = pyspark.sql.SparkSession.builder.appName("MyApp") \
        .config("spark.jars.packages", "com.microsoft.ml.spark:mmlspark_2.11:1.0.0-rc3") \
        .config("spark.jars.repositories", "https://mmlspark.azureedge.net/maven") \
        .getOrCreate()
train_df = spark.read.csv('/content/drive/My Drive/BDCproj/train.csv', header=True, inferSchema=True)
test_df = spark.read.csv('/content/drive/My Drive/BDCproj/test.csv', header=True, inferSchema=True)
Then my error is:

Py4JJavaError                             Traceback (most recent call last)
<ipython-input-55-ba0da364400e> in <module>()
----> 1 train_df = spark.read.csv('/content/drive/My Drive/BDCproj/train.csv', header=True, inferSchema=True)
      2 test_df = spark.read.csv('/content/drive/My Drive/BDCproj/test.csv', header=True, inferSchema=True)

3 frames
/usr/local/lib/python3.6/dist-packages/py4j/protocol.py in get_return_value(answer, gateway_client, target_id, name)
    326                 raise Py4JJavaError(
    327                     "An error occurred while calling {0}{1}{2}.\n".
--> 328                     format(target_id, ".", name), value)
    329             else:
    330                 raise Py4JError(

Py4JJavaError: An error occurred while calling o214.csv.
: java.util.ServiceConfigurationError: org.apache.spark.sql.sources.DataSourceRegister: Provider org.apache.spark.sql.avro.AvroFileFormat could not be instantiated
    at java.base/java.util.ServiceLoader.fail(ServiceLoader.java:581)
    at java.base/java.util.ServiceLoader$ProviderImpl.newInstance(ServiceLoader.java:803)
    at java.base/java.util.ServiceLoader$ProviderImpl.get(ServiceLoader.java:721)
    at java.base/java.util.ServiceLoader$3.next(ServiceLoader.java:1394)
    at scala.collection.convert.Wrappers$JIteratorWrapper.next(Wrappers.scala:44)
    at scala.collection.Iterator.foreach(Iterator.scala:941)
    at scala.collection.Iterator.foreach$(Iterator.scala:941)
    at scala.collection.AbstractIterator.foreach(Iterator.scala:1429)
    at scala.collection.IterableLike.foreach(IterableLike.scala:74)
    at scala.collection.IterableLike.foreach$(IterableLike.scala:73)
    at scala.collection.AbstractIterable.foreach(Iterable.scala:56)
    at scala.collection.TraversableLike.filterImpl(TraversableLike.scala:255)
    at scala.collection.TraversableLike.filterImpl$(TraversableLike.scala:249)
    at scala.collection.AbstractTraversable.filterImpl(Traversable.scala:108)
    at scala.collection.TraversableLike.filter(TraversableLike.scala:347)
    at scala.collection.TraversableLike.filter$(TraversableLike.scala:347)
    at scala.collection.AbstractTraversable.filter(Traversable.scala:108)
    at org.apache.spark.sql.execution.datasources.DataSource$.lookupDataSource(DataSource.scala:649)
    at org.apache.spark.sql.execution.datasources.DataSource$.lookupDataSourceV2(DataSource.scala:733)
    at org.apache.spark.sql.DataFrameReader.load(DataFrameReader.scala:248)
    at org.apache.spark.sql.DataFrameReader.csv(DataFrameReader.scala:723)
    at jdk.internal.reflect.GeneratedMethodAccessor16.invoke(Unknown Source)
    at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.base/java.lang.reflect.Method.invoke(Method.java:566)
    at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:244)
    at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:357)
    at py4j.Gateway.invoke(Gateway.java:282)
    at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:132)
    at py4j.commands.CallCommand.execute(CallCommand.java:79)
    at py4j.GatewayConnection.run(GatewayConnection.java:238)
    at java.base/java.lang.Thread.run(Thread.java:834)
Caused by: java.lang.NoClassDefFoundError: org/apache/spark/sql/execution/datasources/FileFormat$class
    at org.apache.spark.sql.avro.AvroFileFormat.<init>(AvroFileFormat.scala:44)
    at java.base/jdk.internal.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
    at java.base/jdk.internal.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
    at java.base/jdk.internal.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
    at java.base/java.lang.reflect.Constructor.newInstance(Constructor.java:490)
    at java.base/java.util.ServiceLoader$ProviderImpl.newInstance(ServiceLoader.java:779)
    ... 29 more

My Spark version is 3.0.1.
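
That version number is likely the root of the problem: the `_2.11` suffix in the package coordinate marks an artifact built for Scala 2.11, which is what Spark 2.x ships with, while Spark 3.0.1 is built against Scala 2.12. The missing FileFormat$class in the traceback is a Scala 2.11 trait-encoding artifact that no longer exists under 2.12, so the Avro data source (pulled onto the classpath by the mmlspark package) fails to instantiate when the ServiceLoader scans all registered data sources, even for a plain CSV read. A minimal sketch to confirm the mismatch; note that _jvm is a py4j internal handle, so treat the second print as a debugging trick rather than public API:

import pyspark

# Version of PySpark on the Python side
print(pyspark.__version__)                                            # e.g. 3.0.1
# Scala version of the JVM backing the session, fetched over the py4j gateway
print(spark.sparkContext._jvm.scala.util.Properties.versionString())  # e.g. version 2.12.10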

Try this syntax once. If it helps, please mark it with the green check:

from pyspark.sql import SparkSession
from pyspark.sql.functions import *

spark = SparkSession.builder \
    .appName("MyApp") \
    .config("spark.jars.packages", "com.microsoft.ml.spark:mmlspark_2.11:1.0.0-rc3") \
    .config("spark.jars.repositories", "https://mmlspark.azureedge.net/maven") \
    .getOrCreate()

train = spark.read.option("header",True).csv("/complete/path/to/train.csv")
test = spark.read.option("header",True).csv("/complete/path/to/test.csv")

Hope this works!
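
If the same error still appears with this syntax, it is worth isolating whether the jars pulled in by the mmlspark package are the culprit, since the ServiceLoader scans every registered data source even for a plain CSV read. A quick sketch, assuming a fresh Python process (getOrCreate() would otherwise hand back the already-configured session); the app and variable names here are illustrative:

from pyspark.sql import SparkSession

# Plain session with no extra packages; if this read succeeds, the failure
# comes from the mmlspark dependencies, not from the CSV files themselves.
spark_plain = SparkSession.builder.appName("PlainApp").getOrCreate()
df = spark_plain.read.option("header", True).csv("/complete/path/to/train.csv")
df.show(5)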

Thanks, but it doesn't work. I just found that only Spark 2 works.
@Yangeng the code I sent was written for Spark 3.0.
Sorry, I should have mentioned the version.
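
To make that finding concrete: the fix is to make the Spark and Scala versions agree with the artifact suffix. A minimal sketch of the downgrade route, assuming a pip-managed environment such as Colab; the exact 2.4.x patch version below is illustrative, the point is that pip's Spark 2.4 builds ship with Scala 2.11 and therefore match mmlspark_2.11:

from pyspark.sql import SparkSession

# Requires a Spark 2.4.x runtime (Scala 2.11) so that the _2.11 artifact loads:
#   pip install pyspark==2.4.7
spark = (SparkSession.builder
         .appName("MyApp")
         .config("spark.jars.packages", "com.microsoft.ml.spark:mmlspark_2.11:1.0.0-rc3")
         .config("spark.jars.repositories", "https://mmlspark.azureedge.net/maven")
         .getOrCreate())

train_df = spark.read.csv('/content/drive/My Drive/BDCproj/train.csv',
                          header=True, inferSchema=True)
test_df = spark.read.csv('/content/drive/My Drive/BDCproj/test.csv',
                         header=True, inferSchema=True)

If you would rather stay on Spark 3, look for a Scala 2.12 build of the library instead; the project was later republished under the SynapseML name (com.microsoft.azure:synapseml_2.12), though I have not verified which of its releases matches Spark 3.0.1 exactly.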