Scala cannot find class: org.apache.spark.h2o.package$StringHolder


I am trying the simple droplet program from https://github.com/h2oai/sparkling-water, but I cannot get it to run successfully with spark-submit.

I used Sparkling Water 1.6.4, the same version as the example code:

 spark-submit --jars sparkling-water-assembly-1.6.4-all.jar swtest_2.10-1.0.jar
I did not use the Gradle build provided with the sample code; I just used a very simple sbt build:

name := "SWTest"

version := "1.0"

scalaVersion := "2.10.4"

libraryDependencies += "ai.h2o" % "sparkling-water-core_2.10" % "1.6.4"
libraryDependencies += "ai.h2o" % "sparkling-water-examples_2.10" % "1.6.4"
The program ran fine until it reached:

// Convert the actual classes and the predictions back to RDD[StringHolder]
val trainRDD = h2oContext.asRDD[StringHolder](irisData('class))
val predictRDD = h2oContext.asRDD[StringHolder](predict)

// Collect the rows where the predicted class differs from the actual one
val numMispredictions = trainRDD.zip(predictRDD).filter( i => {
      val act = i._1
      val pred = i._2
      act.result != pred.result
    }).collect()
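
For context, the surrounding setup in my code looks roughly like this (a sketch based on the standard 1.6.x droplet; the object name swtest matches my stack trace below, but the file path is a placeholder):

    import java.io.File
    import org.apache.spark.{SparkConf, SparkContext}
    import org.apache.spark.h2o._   // brings in H2OContext and the StringHolder type
    import water.fvec.H2OFrame

    object swtest {
      def main(args: Array[String]): Unit = {
        val conf = new SparkConf().setAppName("SWTest")
        val sc = new SparkContext(conf)
        val h2oContext = H2OContext.getOrCreate(sc) // starts H2O on the Spark cluster
        // Load the iris data into an H2OFrame (placeholder path)
        val irisData = new H2OFrame(new File("examples/smalldata/iris_wheader.csv"))
        // ... train the GBM model and score it, yielding `predict` ...
      }
    }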

It looks like asRDD needs a type parameter, which here is StringHolder.
However, it fails with "Unable to find class: org.apache.spark.h2o.package$StringHolder" (the full output and stack trace are appended at the end of this post).

I thought that including sparkling-water-assembly-1.6.4-all.jar would be enough, since the assembly should contain everything.

Does anyone have any ideas?

Thanks for the report.

You have indeed found a bug in Sparkling Water. The fix is already in and will ship in the next release.

In the meantime, the easy workaround is to set conf.set("spark.ext.h2o.repl.enabled", "false") on the SparkConf before creating the SparkContext, as Mateusz pointed out (in case you are not running Scala code from the Flow UI).
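
In code, the workaround looks like this (a minimal sketch; only the conf.set line is the actual fix, the rest mirrors the droplet setup):

    val conf = new SparkConf().setAppName("SWTest")
    // Disable the H2O REPL support before the SparkContext is created; this
    // avoids the InterpreterClassLoader that fails to resolve StringHolder
    conf.set("spark.ext.h2o.repl.enabled", "false")
    val sc = new SparkContext(conf)
    val h2oContext = H2OContext.getOrCreate(sc)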

Where did you get the assembly jar? Which Spark version are you using? Are you setting the MASTER env variable? What does your Spark code look like (are you using the unmodified droplet code)? I ran the droplet code and, after adding conf.set("spark.ext.h2o.repl.enabled", "false") before new SparkContext(conf), it ran without any problems (I used --jars, just like you).
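
By the way, since spark-submit forwards any spark.* property given with --conf into the driver's SparkConf, the same workaround should also work from the command line without code changes (assuming your code does not override the property):

    spark-submit --conf spark.ext.h2o.repl.enabled=false --jars sparkling-water-assembly-1.6.4-all.jar swtest_2.10-1.0.jar
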
12-06 15:03:53.442 127.0.0.1:54321       489    FJ-1-3    INFO:  Number of Trees Model Size in Bytes Min. Depth Max. Depth Mean Depth Min. Leaves Max. Leaves Mean Leaves
12-06 15:03:53.442 127.0.0.1:54321       489    FJ-1-3    INFO:               15                2176          1          5    4.20000           2           9     7.20000
12-06 15:03:53.442 127.0.0.1:54321       489    FJ-1-3    INFO: Scoring History:
12-06 15:03:53.442 127.0.0.1:54321       489    FJ-1-3    INFO:            Timestamp   Duration Number of Trees Training MSE Training LogLoss Training Classification Error
12-06 15:03:53.442 127.0.0.1:54321       489    FJ-1-3    INFO:  2016-12-06 15:03:50  0.261 sec               0      0.44444          1.09861                       0.64000
12-06 15:03:53.442 127.0.0.1:54321       489    FJ-1-3    INFO:  2016-12-06 15:03:51  1.607 sec               1      0.36474          0.92664                       0.04000
12-06 15:03:53.442 127.0.0.1:54321       489    FJ-1-3    INFO:  2016-12-06 15:03:52  1.987 sec               2      0.29854          0.79143                       0.04667
12-06 15:03:53.442 127.0.0.1:54321       489    FJ-1-3    INFO:  2016-12-06 15:03:52  2.364 sec               3      0.24482          0.68353                       0.04667
12-06 15:03:53.442 127.0.0.1:54321       489    FJ-1-3    INFO:  2016-12-06 15:03:53  2.668 sec               4      0.20083          0.59453                       0.04667
12-06 15:03:53.442 127.0.0.1:54321       489    FJ-1-3    INFO:  2016-12-06 15:03:53  3.007 sec               5      0.16523          0.52069                       0.04667
gbm prediction
12-06 15:03:53.846 127.0.0.1:54321       489    main      INFO: Confusion Matrix (vertical: actual; across: predicted):
12-06 15:03:53.846 127.0.0.1:54321       489    main      INFO:                 Iris-setosa Iris-versicolor Iris-virginica  Error      Rate
12-06 15:03:53.846 127.0.0.1:54321       489    main      INFO:     Iris-setosa          50               0              0 0.0000 =  0 / 50
12-06 15:03:53.846 127.0.0.1:54321       489    main      INFO: Iris-versicolor           0              48              2 0.0400 =  2 / 50
12-06 15:03:53.846 127.0.0.1:54321       489    main      INFO:  Iris-virginica           0               5             45 0.1000 =  5 / 50
12-06 15:03:53.846 127.0.0.1:54321       489    main      INFO:          Totals          50              53             47 0.0467 = 7 / 150
12-06 15:03:53.847 127.0.0.1:54321       489    main      INFO: Top-3 Hit Ratios:
12-06 15:03:53.847 127.0.0.1:54321       489    main      INFO: K  Hit Ratio
12-06 15:03:53.847 127.0.0.1:54321       489    main      INFO: 1   0.953333
12-06 15:03:53.847 127.0.0.1:54321       489    main      INFO: 2   1.000000
12-06 15:03:53.847 127.0.0.1:54321       489    main      INFO: 3   1.000000
computer number of mispredictions
16/12/06 15:03:55 ERROR TaskResultGetter: Exception while getting task result
com.esotericsoftware.kryo.KryoException: Unable to find class: org.apache.spark.h2o.package$StringHolder
    at com.esotericsoftware.kryo.util.DefaultClassResolver.readName(DefaultClassResolver.java:138)
    at com.esotericsoftware.kryo.util.DefaultClassResolver.readClass(DefaultClassResolver.java:115)
    at com.esotericsoftware.kryo.Kryo.readClass(Kryo.java:610)
    at com.esotericsoftware.kryo.Kryo.readClassAndObject(Kryo.java:721)
    at com.twitter.chill.Tuple2Serializer.read(TupleSerializers.scala:41)
    at com.twitter.chill.Tuple2Serializer.read(TupleSerializers.scala:33)
    at com.esotericsoftware.kryo.Kryo.readClassAndObject(Kryo.java:729)
    at com.esotericsoftware.kryo.serializers.DefaultArraySerializers$ObjectArraySerializer.read(DefaultArraySerializers.java:338)
    at com.esotericsoftware.kryo.serializers.DefaultArraySerializers$ObjectArraySerializer.read(DefaultArraySerializers.java:293)
    at com.esotericsoftware.kryo.Kryo.readClassAndObject(Kryo.java:729)
    at org.apache.spark.serializer.KryoSerializerInstance.deserialize(KryoSerializer.scala:311)
    at org.apache.spark.scheduler.DirectTaskResult.value(TaskResult.scala:97)
    at org.apache.spark.scheduler.TaskResultGetter$$anon$2$$anonfun$run$1.apply$mcV$sp(TaskResultGetter.scala:60)
    at org.apache.spark.scheduler.TaskResultGetter$$anon$2$$anonfun$run$1.apply(TaskResultGetter.scala:51)
    at org.apache.spark.scheduler.TaskResultGetter$$anon$2$$anonfun$run$1.apply(TaskResultGetter.scala:51)
    at org.apache.spark.util.Utils$.logUncaughtExceptions(Utils.scala:1765)
    at org.apache.spark.scheduler.TaskResultGetter$$anon$2.run(TaskResultGetter.scala:50)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.ClassNotFoundException: org.apache.spark.h2o.package$StringHolder
    at java.lang.ClassLoader.findClass(ClassLoader.java:531)
    at java.lang.ClassLoader.loadClass(ClassLoader.java:425)
    at java.lang.ClassLoader.loadClass(ClassLoader.java:358)
    at org.apache.spark.repl.h2o.InterpreterClassLoader.loadClass(InterpreterClassLoader.scala:37)
    at java.lang.Class.forName0(Native Method)
    at java.lang.Class.forName(Class.java:274)
    at com.esotericsoftware.kryo.util.DefaultClassResolver.readName(DefaultClassResolver.java:136)
    ... 19 more
Exception in thread "main" org.apache.spark.SparkException: Job aborted due to stage failure: Exception while getting task result: com.esotericsoftware.kryo.KryoException: Unable to find class: org.apache.spark.h2o.package$StringHolder
    at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$failJobAndIndependentStages(DAGScheduler.scala:1431)
    at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1419)
    at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1418)
    at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
    at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:47)
    at org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:1418)
    at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:799)
    at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:799)
    at scala.Option.foreach(Option.scala:236)
    at org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(DAGScheduler.scala:799)
    at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.doOnReceive(DAGScheduler.scala:1640)
    at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1599)
    at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1588)
    at org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:48)
    at org.apache.spark.scheduler.DAGScheduler.runJob(DAGScheduler.scala:620)
    at org.apache.spark.SparkContext.runJob(SparkContext.scala:1832)
    at org.apache.spark.SparkContext.runJob(SparkContext.scala:1845)
    at org.apache.spark.SparkContext.runJob(SparkContext.scala:1858)
    at org.apache.spark.SparkContext.runJob(SparkContext.scala:1929)
    at org.apache.spark.rdd.RDD$$anonfun$collect$1.apply(RDD.scala:927)
    at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:150)
    at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:111)
    at org.apache.spark.rdd.RDD.withScope(RDD.scala:316)
    at org.apache.spark.rdd.RDD.collect(RDD.scala:926)
    at swtest$.main(swtest.scala:68)
    at swtest.main(swtest.scala)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:606)
    at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:735)
    at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:181)
    at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:206)
    at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:121)
    at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)