Scala: Testing Spark on a local machine

Tags: scala, unit-testing, apache-spark

I am running unit tests on Spark 1.3.1 with sbt test and, aside from the unit tests being incredibly slow, I keep running into java.lang.ClassNotFoundException: org.apache.spark.storage.RDDBlockId. Usually this means a dependency problem, but I don't know where to start. I tried installing everything on a new machine, including a fresh hadoop and a fresh ivy2, but I still run into the same problem.

Any help is greatly appreciated.

The exception:

Exception in thread "Driver Heartbeater" java.lang.ClassNotFoundException: 
    org.apache.spark.storage.RDDBlockId
    at java.net.URLClassLoader$1.run(URLClassLoader.java:366)
    at java.net.URLClassLoader$1.run(URLClassLoader.java:355)
    at java.security.AccessController.doPrivileged(Native Method)
    at java.net.URLClassLoader.findClass(URLClassLoader.java:354)
    at java.lang.ClassLoader.loadClass(ClassLoader.java:425)
    at java.lang.ClassLoader.loadClass(ClassLoader.java:358)
    at java.lang.Class.forName0(Native Method)
    at java.lang.Class.forName(Class.java:270)
My build.sbt:

libraryDependencies ++=  Seq( 
  "org.scalaz"              %% "scalaz-core" % "7.1.2" excludeAll ExclusionRule(organization = "org.slf4j"), 
  "com.typesafe.play"       %% "play-json" % "2.3.4" excludeAll ExclusionRule(organization = "org.slf4j"), 
  "org.apache.spark"        %% "spark-core" % "1.3.1" % "provided"  withSources() excludeAll (ExclusionRule(organization = "org.slf4j"), ExclusionRule("org.spark-project.akka", "akka-actor_2.10")), 
  "org.apache.spark"        %% "spark-graphx" % "1.3.1" % "provided" withSources() excludeAll (ExclusionRule(organization = "org.slf4j"), ExclusionRule("org.spark-project.akka", "akka-actor_2.10")), 
  "org.apache.cassandra"    % "cassandra-all" % "2.1.6", 
  "org.apache.cassandra"    % "cassandra-thrift" % "2.1.6", 
  "com.typesafe.akka" %% "akka-actor" % "2.3.11", 
  "com.datastax.cassandra"  % "cassandra-driver-core" % "2.1.6" withSources() withJavadoc() excludeAll (ExclusionRule(organization = "org.slf4j"),ExclusionRule(organization = "org.apache.spark"),ExclusionRule(organization = "com.twitter",name = "parquet-hadoop-bundle")), 
  "com.github.nscala-time"  %% "nscala-time" % "1.2.0" excludeAll ExclusionRule(organization = "org.slf4j") withSources(), 
  "com.datastax.spark"      %% "spark-cassandra-connector-embedded" % "1.3.0-M2" excludeAll (ExclusionRule(organization = "org.slf4j"),ExclusionRule(organization = "org.apache.spark"),ExclusionRule(organization = "com.twitter",name = "parquet-hadoop-bundle")), 
  "com.datastax.spark"      %% "spark-cassandra-connector" % "1.3.0-M2" excludeAll (ExclusionRule(organization = "org.slf4j"),ExclusionRule(organization = "org.apache.spark"),ExclusionRule(organization = "com.twitter",name = "parquet-hadoop-bundle")), 
  "org.slf4j"               % "slf4j-api"            % "1.6.1", 
   "com.twitter"            % "jsr166e" % "1.1.0", 
  "org.slf4j"               % "slf4j-nop" % "1.6.1" % "test", 
  "org.scalatest"           %% "scalatest" % "2.2.1" % "test" excludeAll ExclusionRule(organization = "org.slf4j") 
) 
And my Spark test settings (all of which I have disabled while testing this):

(spark.kryo.registator,com.my.spark.myregistator)
(spark.eventLog.dir)
(spark.driver.memory,16G)
(spark.kryoserializer.buffer.mb,512)
(spark.akka.frameSize,5)
(spark.shuffle.spill,false)
(spark.default.parallelism,8)
(spark.shuffle.consolidateFiles,false)
(spark.serializer,org.apache.spark.serializer.KryoSerializer)
(spark.shuffle.spill.compress,false)
(spark.driver.host,10.10.68.66)
(spark.akka.timeout,300)
(spark.driver.port,55328)
(spark.eventLog.enabled,false)
(spark.cassandra.connection.host,127.0.0.1)
(spark.cassandra.connection.ssl.enabled,false)
(spark.master,local[8])
(spark.cassandra.connection.ssl.trustStore.password,password)
(spark.fileserver.uri,http://10.10.68.66:55329) 
(spark.cassandra.auth.username,username)
(spark.local.dir,/tmp/spark)
(spark.app.id,local-1436229075894)
(spark.storage.blockManagerHeartBeatMs,300000)
(spark.executor.id,)
(spark.storage.memoryFraction,0.5)
(spark.app.name,Count all entries 217885402)
(spark.shuffle.compress,false)
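
For reference, a minimal sketch (not from the original post) of how the dump above maps onto a programmatically built SparkConf for a local run. The values are taken from the dump, the registrator class is the placeholder shown there, and only a subset of the properties is reproduced:

import org.apache.spark.SparkConf

// Sketch only: a subset of the properties listed above, applied to a SparkConf.
val testConf = new SparkConf()
  .setMaster("local[8]")
  .setAppName("Count all entries 217885402")
  .set("spark.serializer", "org.apache.spark.serializer.KryoSerializer")
  .set("spark.kryo.registrator", "com.my.spark.myregistrator")   // placeholder class from the dump
  .set("spark.kryoserializer.buffer.mb", "512")
  .set("spark.akka.frameSize", "5")
  .set("spark.shuffle.spill", "false")
  .set("spark.shuffle.compress", "false")
  .set("spark.default.parallelism", "8")
  .set("spark.eventLog.enabled", "false")
  .set("spark.cassandra.connection.host", "127.0.0.1")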

An assembled or packaged jar sent to standalone or mesos works fine! Any suggestions?

The cause was the broadcast variable being large. Not sure why (it fits in memory), but removing it from the test case made it work.
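
To illustrate the pattern this answer describes, here is a hypothetical sketch (names and sizes invented, not the author's actual test) of broadcasting a largish value inside a local-mode job; per the answer, dropping such a broadcast from the test case avoided the error:

import org.apache.spark.{SparkConf, SparkContext}

val conf = new SparkConf().setMaster("local[8]").setAppName("broadcast-repro")
val sc   = new SparkContext(conf)
try {
  // A largish in-memory lookup table (sized up much further in the real case).
  val bigLookup = (1 to 100000).map(i => i -> s"value-$i").toMap
  val lookupBc  = sc.broadcast(bigLookup)            // the broadcast variable in question
  val hits = sc.parallelize(1 to 1000)
    .filter(i => lookupBc.value.contains(i))         // closure references the broadcast
    .count()
  assert(hits == 1000L)
} finally {
  sc.stop()
}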

We ran into the same problem on Spark 1.6.0 (a related bug report already exists). We fixed it by switching to the Kryo serializer (which you should be using anyway), so it appears to be a bug in the default JavaSerializer.

Simply do the following to get rid of it:

new SparkConf().setAppName("Simple Application").set("spark.serializer", "org.apache.spark.serializer.KryoSerializer")
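
For context, a minimal ScalaTest sketch of where that setting might live in a local test harness; the suite name and the trivial test are illustrative, and only the spark.serializer line comes from the answer:

import org.apache.spark.{SparkConf, SparkContext}
import org.scalatest.{BeforeAndAfterAll, FunSuite}

class KryoSerializerSpec extends FunSuite with BeforeAndAfterAll {
  private var sc: SparkContext = _

  override def beforeAll(): Unit = {
    val conf = new SparkConf()
      .setMaster("local[2]")
      .setAppName("Simple Application")
      .set("spark.serializer", "org.apache.spark.serializer.KryoSerializer") // the fix from the answer
    sc = new SparkContext(conf)
  }

  override def afterAll(): Unit = {
    if (sc != null) sc.stop()   // release the context so later suites can create their own
  }

  test("runs a trivial job with the Kryo serializer enabled") {
    assert(sc.parallelize(1 to 10).reduce(_ + _) == 55)
  }
}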

I have seen this with no explicit broadcast variable, but with cache() called on a DataFrame.
Same problem on my side, but with neither broadcast variables nor cache().