Scala: some RDD actions fail with a Java IllegalArgumentException


For some reason, certain (but not all) RDD actions on an RDD of any type throw a Java IllegalArgumentException: Unsupported class file major version x. Strangely, this only affects some actions (e.g. collect, take, first) but not others (e.g. sample). Any ideas?

The installed Spark version is 2.4.3, and I upgraded the JDK/JRE from 11 to 12 in case that was the problem:

Welcome to
      ____              __
     / __/__  ___ _____/ /__
    _\ \/ _ \/ _ `/ __/  '_/
   /___/ .__/\_,_/_/ /_/\_\   version 2.4.3
      /_/

Using Scala version 2.11.12 (OpenJDK 64-Bit Server VM, Java 12.0.1)
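
As a sanity check, the JVM the shell is actually running on can also be queried from inside the REPL; the standard java.class.version property reports the class file major version that JVM corresponds to (nothing Spark-specific here):

// Quick check from inside the REPL: which JVM is this shell on,
// and which class file major version does it correspond to?
println(System.getProperty("java.version"))        // e.g. 12.0.1
println(System.getProperty("java.class.version"))  // e.g. 56.0 on Java 12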
The previous setup threw "Unsupported class file major version 55"; after the upgrade the error is the same except the version is now 56 (so the upgrade clearly took effect, but it did not fix the problem). For reference, class file major version 55 corresponds to Java 11 and 56 to Java 12.

Here is the output of a very simple RDD creation, showing that the RDD works for some operations:

val seqNum = sc.parallelize(0 to 1000)
seqNum: org.apache.spark.rdd.RDD[Int] = ParallelCollectionRDD[37] at 
parallelize at <console>:24

seqNum.count
res30: Long = 1001

seqNum.sample(false, 0.01).foreach(println)
355
385
392
402
505
569
585
So the RDD is created and works as it should. Here is what happens with exactly the same RDD when the take action is used:

seqNum.take(10).foreach(println)
java.lang.IllegalArgumentException: Unsupported class file major version 56
  at org.apache.xbean.asm6.ClassReader.<init>(ClassReader.java:166)
  at org.apache.xbean.asm6.ClassReader.<init>(ClassReader.java:148)
  at org.apache.xbean.asm6.ClassReader.<init>(ClassReader.java:136)
  at org.apache.xbean.asm6.ClassReader.<init>(ClassReader.java:237)
  at org.apache.spark.util.ClosureCleaner$.getClassReader(ClosureCleaner.scala:49)
  at org.apache.spark.util.FieldAccessFinder$$anon$3$$anonfun$visitMethodInsn$2.apply(ClosureCleaner.scala:517)
  at org.apache.spark.util.FieldAccessFinder$$anon$3$$anonfun$visitMethodInsn$2.apply(ClosureCleaner.scala:500)
  at scala.collection.TraversableLike$WithFilter$$anonfun$foreach$1.apply(TraversableLike.scala:733)
  at scala.collection.mutable.HashMap$$anon$1$$anonfun$foreach$2.apply(HashMap.scala:134)
  at scala.collection.mutable.HashMap$$anon$1$$anonfun$foreach$2.apply(HashMap.scala:134)
  at scala.collection.mutable.HashTable$class.foreachEntry(HashTable.scala:236)
  at scala.collection.mutable.HashMap.foreachEntry(HashMap.scala:40)
  at scala.collection.mutable.HashMap$$anon$1.foreach(HashMap.scala:134)
  at scala.collection.TraversableLike$WithFilter.foreach(TraversableLike.scala:732)
  at org.apache.spark.util.FieldAccessFinder$$anon$3.visitMethodInsn(ClosureCleaner.scala:500)
  at org.apache.xbean.asm6.ClassReader.readCode(ClassReader.java:2175)
  at org.apache.xbean.asm6.ClassReader.readMethod(ClassReader.java:1238)
  at org.apache.xbean.asm6.ClassReader.accept(ClassReader.java:631)
  at org.apache.xbean.asm6.ClassReader.accept(ClassReader.java:355)
  at org.apache.spark.util.ClosureCleaner$$anonfun$org$apache$spark$util$ClosureCleaner$$clean$14.apply(ClosureCleaner.scala:307)
  at org.apache.spark.util.ClosureCleaner$$anonfun$org$apache$spark$util$ClosureCleaner$$clean$14.apply(ClosureCleaner.scala:306)
  at scala.collection.immutable.List.foreach(List.scala:392)
  at org.apache.spark.util.ClosureCleaner$.org$apache$spark$util$ClosureCleaner$$clean(ClosureCleaner.scala:306)
  at org.apache.spark.util.ClosureCleaner$.clean(ClosureCleaner.scala:162)
  at org.apache.spark.SparkContext.clean(SparkContext.scala:2326)
  at org.apache.spark.SparkContext.runJob(SparkContext.scala:2100)
  at org.apache.spark.rdd.RDD$$anonfun$take$1.apply(RDD.scala:1364)
  at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
  at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:112)
  at org.apache.spark.rdd.RDD.withScope(RDD.scala:363)
  at org.apache.spark.rdd.RDD.take(RDD.scala:1337)
  ... 49 elided
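
The trace shows where it breaks: take runs a job with a closure, so SparkContext.clean invokes the ClosureCleaner, which feeds the closure's bytecode to the ASM 6 ClassReader bundled with Spark 2.4 (the shaded org.apache.xbean.asm6 package above), and that reader rejects class files newer than it knows about. Its version check amounts to reading the class file's major version, roughly like this minimal sketch (the file path is just a placeholder):

import java.nio.file.{Files, Paths}

// A .class file starts with the magic number 0xCAFEBABE (4 bytes),
// followed by the minor version (2 bytes) and the major version (2 bytes).
// 52 = Java 8, 55 = Java 11, 56 = Java 12.
val bytes = Files.readAllBytes(Paths.get("SomeClass.class"))  // placeholder path
val major = ((bytes(6) & 0xff) << 8) | (bytes(7) & 0xff)
println(s"class file major version: $major")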

Since the RDD is created correctly and some actions work fine, I would expect all of them to work. Any idea what the problem is?

It looks like Spark 2.4 does not currently support Java 10/11. Check the Jira ticket for this same error. To make sure your jobs run correctly, you may want to use JDK 8.

Looking at that ticket, it does indeed indicate that the error is caused by Spark not supporting newer JREs, so I guess that is the answer. Which is strange, because I had (apparently wrongly) read the error as saying that a Spark class was not supported by the JRE, rather than the other way around... Thanks for your help!

Just for the record: I installed JDK 8, updated the configuration, restarted Spark, and everything works now. Thanks again for the help!
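
For completeness, "updated the configuration" here simply means pointing JAVA_HOME at a JDK 8 installation before launching Spark (for example in the shell environment or in conf/spark-env.sh). Once the shell is restarted on JDK 8, a quick sanity check along these lines should show a 1.8 runtime and the previously failing action completing (the outputs in the comments are what one would expect, not copied from this run):

// Restarted spark-shell on JDK 8: confirm the runtime first
println(System.getProperty("java.version"))        // expected: 1.8.0_xxx
println(System.getProperty("java.class.version"))  // expected: 52.0

// Recreate the RDD and retry the action that previously threw
val seqNum = sc.parallelize(0 to 1000)
seqNum.take(10).foreach(println)                   // prints 0 through 9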