Why does Spark throw NotSerializableException org.apache.hadoop.io.NullWritable with sequence files?


Why does Spark throw a NotSerializableException for org.apache.hadoop.io.NullWritable when working with sequence files? My code (very simple):
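A minimal sketch of the kind of job that hits this error (the SparkConf setup and the paths below are assumptions, not the exact original snippet; the key point is that a shuffle such as repartition forces Spark to serialize the NullWritable/BytesWritable records):

import org.apache.hadoop.io.{BytesWritable, NullWritable}
import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.SparkContext._

val sc  = new SparkContext(new SparkConf().setAppName("sequence-file-repro"))
val in  = "hdfs:///data/in"   // hypothetical input path
val out = "hdfs:///data/out"  // hypothetical output path

// repartition shuffles the records, which requires serializing them;
// NullWritable and BytesWritable are not java.io.Serializable, so this fails.
sc.sequenceFile[NullWritable, BytesWritable](in)
  .repartition(1000)
  .saveAsSequenceFile(out, None)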

Exception:

org.apache.spark.SparkException: Job aborted: Task 1.0:66 had a not serializable result: java.io.NotSerializableException: org.apache.hadoop.io.NullWritable
    at org.apache.spark.scheduler.DAGScheduler$$anonfun$org$apache$spark$scheduler$DAGScheduler$$abortStage$1.apply(DAGScheduler.scala:1028)
    at org.apache.spark.scheduler.DAGScheduler$$anonfun$org$apache$spark$scheduler$DAGScheduler$$abortStage$1.apply(DAGScheduler.scala:1026)
    at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
    at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:47)
    at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$abortStage(DAGScheduler.scala:1026)
    at org.apache.spark.scheduler.DAGScheduler$$anonfun$processEvent$10.apply(DAGScheduler.scala:619)
    at org.apache.spark.scheduler.DAGScheduler$$anonfun$processEvent$10.apply(DAGScheduler.scala:619)
    at scala.Option.foreach(Option.scala:236)
    at org.apache.spark.scheduler.DAGScheduler.processEvent(DAGScheduler.scala:619)
    at org.apache.spark.scheduler.DAGScheduler$$anonfun$start$1$$anon$2$$anonfun$receive$1.applyOrElse(DAGScheduler.scala:207)
    at akka.actor.ActorCell.receiveMessage(ActorCell.scala:498)
    at akka.actor.ActorCell.invoke(ActorCell.scala:456)
    at akka.dispatch.Mailbox.processMailbox(Mailbox.scala:237)
    at akka.dispatch.Mailbox.run(Mailbox.scala:219)
    at akka.dispatch.ForkJoinExecutorConfigurator$AkkaForkJoinTask.exec(AbstractDispatcher.scala:386)
    at scala.concurrent.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
    at scala.concurrent.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)
    at scala.concurrent.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
    at scala.concurrent.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)

So it is possible to read non-serializable types into an RDD, that is, to have an RDD of something that is not serializable (which seems counter-intuitive). But as soon as you want to perform an operation on that RDD that requires the objects to be serializable, such as repartition, they do need to be serializable. Moreover, it turns out that these weird Writable classes, although they were invented for the sole purpose of serialization, are not actually java.io.Serializable :(. So you have to map these things to byte arrays and back again:

// Unwrap the values to plain byte arrays (serializable) before the shuffle,
// then wrap them back into Writables when writing the output sequence file.
sc.sequenceFile[NullWritable, BytesWritable](in)
  .map(_._2.copyBytes()).repartition(1000)
  .map(a => (NullWritable.get(), new BytesWritable(a)))
  .saveAsSequenceFile(out, None)


In Spark, you get a NotSerializableException if you try to use a non-serializable third-party class inside a transformation. This is because of how Spark handles closures: any instance variable that is defined outside a transformation but accessed inside it gets serialized by Spark, together with all of that object's dependent classes.
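As a hedged illustration of that closure behaviour (the class and field names below are made up, not from this post): referencing a field of a non-serializable enclosing object inside a transformation drags the whole object into the closure.

import org.apache.spark.SparkContext

// Hypothetical non-serializable class.
class Helper(val factor: Int) {

  def broken(sc: SparkContext): Long =
    // `factor` is really `this.factor`, so the closure captures `this`;
    // Spark tries to serialize the whole Helper and fails with a
    // NotSerializableException when the task closure is serialized.
    sc.parallelize(1 to 100).map(_ * factor).count()

  def fixed(sc: SparkContext): Long = {
    // Copying the field into a local val keeps `this` out of the closure.
    val f = factor
    sc.parallelize(1 to 100).map(_ * f).count()
  }
}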

In this case I am not trying to access an instance variable from outside a transformation; in fact, I successfully run transformations on the third-party classes. I usually see this exception for the closure reason, so this time I was a bit confused. What I learned is that the classes inside an RDD do not need to be serializable in general; they need to be serializable if and only if the program requires their serialization to actually happen at some point.

If the input is sequenceFile[BytesWritable, BytesWritable] and I use map to convert [BytesWritable, BytesWritable] to [byte[], byte[]], I then want to reduceByKey, but it returns the error "Default partitioner cannot partition array keys". Do you have any solution?

@MichaelDinh Convert the key with .toList, i.e. .map(p => (p._1.toList, p._2)), and then reduceByKey.
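A sketch of that workaround, assuming sc and in are defined as in the snippets above (the reduce function here is only a placeholder): raw byte[] keys hash by object identity, which is why Spark refuses them, whereas List[Byte] keys compare and hash by content.

import org.apache.hadoop.io.BytesWritable

// Keys become List[Byte] (content-based equality); values stay as byte arrays.
val pairs = sc.sequenceFile[BytesWritable, BytesWritable](in)
  .map(p => (p._1.copyBytes().toList, p._2.copyBytes()))

// reduceByKey now works; concatenating the values is just an example.
val reduced = pairs.reduceByKey((a, b) => a ++ b)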