Scala Task not serializable error: Spark
I have an RDD of the form (String, (Int, Iterable[String])). For each entry in the RDD, the integer value (which I call distance) is initially set to 10. Every element of the Iterable[String] also has its own entry in this RDD, where it acts as the key (so we have a distance for each element of the Iterable[String] in a separate RDD entry). My intention is to do the following:
1. If the list (Iterable[String]) contains an element "Bethan", I assign it a distance of 1.
2. After that, I create a list of all keys with distance 1 by filtering.
3. After that, I transform the RDD into a new RDD that updates an entry's distance value to 2 if any element in its own list has distance 1.
I have the following code:
val disOneRdd = disRdd.map(x=> {if(x._2._2.toList.contains("Bethan")) (x._1,(1,x._2._2)) else x})
var lst = disRdd.filter(x=> x._2._1 == 1).keys.collect
val disTwoRdd = disRdd.map(x => {
  var b: Boolean = false
  loop.breakable {
    for (str <- x._2._2)
      if (lst.contains(str)) // checks if it contains an element with distance 1
        b = true
    loop.break
  }
  if (b)
    (x._1, (2, x._2._2))
  else
    (x._1, (10, x._2._2))
})
Every element whose list contains Beethan should have distance 1. Every element that has an element with distance 1 in its list (but not Beethan itself) should have distance 2. The output should be of the form:
("abc",(2,List("efg","hij","klm")))
("efg",(1,List("jhg","Beethan","abc","ert")))
("Beethan",(0,List("efg","vcx","zse")))
("vcx",(1,List("czx","Beethan","abc")))
("zse",(1,List("efg","Beethan","nbh")))
("gvf",(10,List("vcsd","fdgd")))
...
Error message:
[error] (run-main-0) org.apache.spark.SparkException: Task not serializable
org.apache.spark.SparkException: Task not serializable
at org.apache.spark.util.ClosureCleaner$.ensureSerializable(ClosureCleaner.scala:298)
at org.apache.spark.util.ClosureCleaner$.org$apache$spark$util$ClosureCleaner$$clean(ClosureCleaner.scala:288)
at org.apache.spark.util.ClosureCleaner$.clean(ClosureCleaner.scala:108)
at org.apache.spark.SparkContext.clean(SparkContext.scala:2037)
at org.apache.spark.rdd.RDD$$anonfun$map$1.apply(RDD.scala:366)
at org.apache.spark.rdd.RDD$$anonfun$map$1.apply(RDD.scala:365)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:112)
at org.apache.spark.rdd.RDD.withScope(RDD.scala:358)
at org.apache.spark.rdd.RDD.map(RDD.scala:365)
at Bacon$.main(Bacon.scala:86)
at Bacon.main(Bacon.scala)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
Caused by: java.io.NotSerializableException: scala.util.control.Breaks
Serialization stack:
- object not serializable (class: scala.util.control.Breaks, value: scala.util.control.Breaks@78426203)
- field (class: Bacon$$anonfun$15, name: loop$1, type: class scala.util.control.Breaks)
- object (class Bacon$$anonfun$15, <function1>)
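The Caused by line above already names the culprit: the anonymous function holds a field of type scala.util.control.Breaks, which is not Serializable. You can reproduce the check Spark performs, without a cluster, by Java-serializing a closure yourself. The following is a minimal sketch (the helper and the sample closures are illustrative, not from the original code), assuming Scala 2.12+, where lambdas are serializable unless a captured value is not:

```scala
import java.io.{ByteArrayOutputStream, NotSerializableException, ObjectOutputStream}
import scala.util.control.Breaks

object SerializabilityCheck {
  // Java-serializes an object into a throwaway buffer, as Spark does before shipping a task
  def isSerializable(obj: AnyRef): Boolean =
    try {
      new ObjectOutputStream(new ByteArrayOutputStream()).writeObject(obj)
      true
    } catch {
      case _: NotSerializableException => false
    }

  def demo(): (Boolean, Boolean) = {
    val loop = new Breaks // local instance: any closure using it must capture it

    // Mirrors the failing code: the closure captures `loop`, a non-serializable Breaks
    val capturing: List[String] => Boolean = xs => {
      var b = false
      loop.breakable { for (x <- xs) { b = true; loop.break() } }
      b
    }

    // Uses the imported Breaks singleton instead; nothing non-serializable is captured
    val viaImport: List[String] => Boolean = xs => {
      import scala.util.control.Breaks._
      var b = false
      breakable { for (x <- xs) { b = true; break() } }
      b
    }

    (isSerializable(capturing), isSerializable(viaImport)) // (false, true)
  }

  def main(args: Array[String]): Unit =
    println(demo())
}
```

Only the first closure drags the Breaks instance along, so only it fails the same way the Spark job does.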
The last lines of the stack trace show the cause: java.io.NotSerializableException: scala.util.control.Breaks. The closure passed to map captures the Breaks instance `loop` as a field, and that class is not serializable, so Spark cannot ship the task to the executors. There are two ways around it. Either avoid Breaks entirely and express the check with a collection operation:
val disOneRdd = disRdd.map(x => {if (x._2._2.toList.contains("Bethan")) (x._1, (1, x._2._2)) else x})
var lst = disRdd.filter(x => x._2._1 == 1).keys.collect
val disTwoRdd = disRdd.map(x => {
  // true if the entry's list contains any key with distance 1
  val b: Boolean = x._2._2.filter(y => lst.contains(y)).size > 0
  if (b)
    (x._1, (2, x._2._2))
  else
    (x._1, (10, x._2._2))
})
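A note on the check above: `filter(...).size > 0` walks the whole list, while `exists` gives the same answer but stops at the first match, which is exactly the early exit the breakable loop was after. A small stand-alone illustration (the sample data is made up to match the question's shape):

```scala
object ExistsDemo {
  // Same truth value as filter(...).size > 0, but short-circuits on the first hit
  def hasDistOneNeighbour(xs: Iterable[String], lst: Array[String]): Boolean =
    xs.exists(lst.contains)

  def main(args: Array[String]): Unit = {
    val lst = Array("efg", "vcx", "zse") // keys with distance 1, as collected above
    println(hasDistOneNeighbour(List("efg", "hij", "klm"), lst)) // true
    println(hasDistOneNeighbour(List("vcsd", "fdgd"), lst))      // false
  }
}
```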
Or, keeping the breakable loop but using the imported Breaks singleton, so the closure no longer has to capture a Breaks field:

import scala.util.control.Breaks._

val disOneRdd = disRdd.map(x => {if (x._2._2.toList.contains("Bethan")) (x._1, (1, x._2._2)) else x})
var lst = disRdd.filter(x => x._2._1 == 1).keys.collect
val disTwoRdd = disRdd.map(x => {
  var b: Boolean = false
  breakable {
    for (str <- x._2._2) {
      if (lst.contains(str)) { // checks if it contains an element with distance 1
        b = true
        break
      }
    }
  }
  if (b)
    (x._1, (2, x._2._2))
  else
    (x._1, (10, x._2._2))
})
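To see the three steps end to end without a cluster, here is the same logic on plain Scala collections, fed with the sample data from the question. Two details deliberately differ from the code above and are my assumptions about the intent: step 2 filters the result of step 1 (not the original RDD), and steps 1 and 3 leave an already smaller distance alone instead of overwriting it; without those guards, the "Beethan" entry would not keep its distance of 0:

```scala
object DistanceDemo {
  type Entry = (String, (Int, List[String]))

  def run(disRdd: Seq[Entry]): Seq[Entry] = {
    // Step 1: distance 1 for every entry whose list contains "Beethan"
    val disOne = disRdd.map { case (k, (d, xs)) =>
      if (d > 1 && xs.contains("Beethan")) (k, (1, xs)) else (k, (d, xs))
    }
    // Step 2: collect the keys that now have distance 1
    val lst = disOne.collect { case (k, (1, _)) => k }
    // Step 3: distance 2 for entries whose list contains any distance-1 key
    disOne.map { case (k, (d, xs)) =>
      if (d > 2 && xs.exists(lst.contains)) (k, (2, xs)) else (k, (d, xs))
    }
  }

  def main(args: Array[String]): Unit = {
    val sample: Seq[Entry] = Seq(
      "abc"     -> (10, List("efg", "hij", "klm")),
      "efg"     -> (10, List("jhg", "Beethan", "abc", "ert")),
      "Beethan" -> (0,  List("efg", "vcx", "zse")),
      "vcx"     -> (10, List("czx", "Beethan", "abc")),
      "zse"     -> (10, List("efg", "Beethan", "nbh")),
      "gvf"     -> (10, List("vcsd", "fdgd")))
    // abc -> 2, efg/vcx/zse -> 1, Beethan -> 0, gvf -> 10, as in the expected output
    run(sample).foreach(println)
  }
}
```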