Apache Spark 2.0 + Kryo serializer + Avro -> NullPointerException?

I have a simple pyspark program:

from pyspark import SQLContext
from pyspark import SparkConf
from pyspark import SparkContext

if __name__ == "__main__":
    spark_settings = {
        "spark.serializer": 'org.apache.spark.serializer.KryoSerializer'
    }

    conf = SparkConf()
    conf.setAll(spark_settings.items())
    spark_context = SparkContext(appName="test app", conf=conf)
    spark_sql_context = SQLContext(spark_context)

    source_path = "s3n://my_bucket/data.avro"
    data_frame = spark_sql_context.read.load(source_path, format="com.databricks.spark.avro")
    # The schema comes back correctly.
    data_frame.printSchema()
    # This count() call fails. A call to head() triggers the same error.
    data_frame.count()
I run it with:

$SPARK_HOME/bin/spark-submit --master yarn \
  --packages com.databricks:spark-avro_2.11:3.0.0 \
    bug_isolation.py
It fails with the exception and stack trace listed below.

If I switch to --master local, it works. If I disable the KryoSerializer option, it works. And if I use a Parquet source instead of an Avro source, it also works.
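For reference, this is roughly what the Parquet fallback looks like in the same script; the .parquet path below is hypothetical, assuming a copy of the same data exists in that format:

# Hypothetical Parquet copy of the same data, for illustration only.
parquet_path = "s3n://my_bucket/data.parquet"
parquet_frame = spark_sql_context.read.load(parquet_path, format="parquet")
# This count() succeeds even on --master yarn with the KryoSerializer enabled.
parquet_frame.count()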

The combination of --master yarn, the KryoSerializer, and the Avro source triggers the exception and stack trace listed below.

I suspect I may need to manually register some Avro plugin classes for the KryoSerializer to work correctly. Which classes would I need to register?
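To make the question concrete, this is the kind of change I have in mind, using the spark.kryo.classesToRegister setting; the Avro class names below are my guesses, not a confirmed fix:

spark_settings = {
    "spark.serializer": "org.apache.spark.serializer.KryoSerializer",
    # Guessed list of Avro classes to register with Kryo; unverified.
    "spark.kryo.classesToRegister": (
        "org.apache.avro.generic.GenericData$Record,"
        "org.apache.avro.Schema"
    ),
}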

  File "/usr/lib/spark/python/lib/py4j-0.10.1-src.zip/py4j/protocol.py", line 312, in get_return_value
py4j.protocol.Py4JJavaError: An error occurred while calling o58.count.
: org.apache.spark.SparkException: Job aborted due to stage failure: Task 3 in stage 0.0 failed 4 times, most recent failure: Lost task 3.3 in stage 0.0 (TID 9, ip-172-31-97-24.us-west-2.compute.internal): java.lang.NullPointerException
    at com.databricks.spark.avro.DefaultSource$$anonfun$buildReader$1.apply(DefaultSource.scala:151)
    at com.databricks.spark.avro.DefaultSource$$anonfun$buildReader$1.apply(DefaultSource.scala:143)
    at org.apache.spark.sql.execution.datasources.FileFormat$$anon$1.apply(fileSourceInterfaces.scala:279)
    at org.apache.spark.sql.execution.datasources.FileFormat$$anon$1.apply(fileSourceInterfaces.scala:263)
    at org.apache.spark.sql.execution.datasources.FileScanRDD$$anon$1.nextIterator(FileScanRDD.scala:116)
    at org.apache.spark.sql.execution.datasources.FileScanRDD$$anon$1.hasNext(FileScanRDD.scala:91)
    at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIterator.agg_doAggregateWithoutKey$(Unknown Source)
    at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIterator.processNext(Unknown Source)
    at org.apache.spark.sql.execution.BufferedRowIterator.hasNext(BufferedRowIterator.java:43)
    at org.apache.spark.sql.execution.WholeStageCodegenExec$$anonfun$8$$anon$1.hasNext(WholeStageCodegenExec.scala:370)
    at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:408)
    at org.apache.spark.shuffle.sort.BypassMergeSortShuffleWriter.write(BypassMergeSortShuffleWriter.java:125)
    at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:79)
    at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:47)
    at org.apache.spark.scheduler.Task.run(Task.scala:85)
    at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:274)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)

The same error occurs if I call data_frame.head(). Clearly the data is not null, since everything works fine without Kryo serialization, or when running in local mode.

May I ask why you are using Kryo? Also, if this is really Spark 2.0 you should use SparkSession (rather than SparkContext and SQLContext), although that does not make much difference for this example.

We use Kryo for speed; the official Spark documentation recommends it. I am aware of SparkSession in 2.0, but my team is currently expected to write 1.6-compatible code, even though I run on Spark 2.0 at runtime.
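For later readers, a minimal sketch of the SparkSession-based equivalent suggested in the comments (Spark 2.0+ only; it keeps the same Kryo and Avro settings, so it may well hit the same error):

from pyspark.sql import SparkSession

spark = (
    SparkSession.builder
    .appName("test app")
    .config("spark.serializer", "org.apache.spark.serializer.KryoSerializer")
    .getOrCreate()
)

# Same Avro read as in the original script, now via the unified entry point.
data_frame = spark.read.load("s3n://my_bucket/data.avro",
                             format="com.databricks.spark.avro")
data_frame.count()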