
Apache Spark: NotSerializableException: org.apache.hadoop.io.Text


Here is my code:

  val bg = imageBundleRDD.first()    //bg:[Text, BundleWritable]
  val res= imageBundleRDD.map(data => {
                                val desBundle = colorToGray(bg._2)        //lineA:NotSerializableException: org.apache.hadoop.io.Text
                                //val desBundle = colorToGray(data._2)    //lineB:everything is ok
                                (data._1, desBundle)
                             })
  println(res.count)
lineB works fine, but lineA throws: org.apache.spark.SparkException: Job aborted: Task not serializable: java.io.NotSerializableException: org.apache.hadoop.io.Text

I tried to use Kryo to solve my problem, but nothing seems to have changed:

import com.esotericsoftware.kryo.Kryo
import org.apache.hadoop.io.Text
import org.apache.spark.serializer.KryoRegistrator

class MyRegistrator extends KryoRegistrator {
    override def registerClasses(kryo: Kryo) {
       kryo.register(classOf[Text])
       kryo.register(classOf[BundleWritable])
  }
}

System.setProperty("spark.serializer", "org.apache.spark.serializer.KryoSerializer")
System.setProperty("spark.kryo.registrator", "hequn.spark.reconstruction.MyRegistrator")
val sc = new SparkContext(...

Thanks.

I had a similar problem when my Java code was reading sequence files containing Text keys. I found this article helpful:

In my case, I used map to convert the Text into a String:

JavaPairRDD<String, VideoRecording> mapped = videos.map(new PairFunction<Tuple2<Text,VideoRecording>,String,VideoRecording>() {
    @Override
    public Tuple2<String, VideoRecording> call(
            Tuple2<Text, VideoRecording> kv) throws Exception {
        // Necessary to copy value as Hadoop chooses to reuse objects
        VideoRecording vr = new VideoRecording(kv._2);
        return new Tuple2<String, VideoRecording>(kv._1.toString(), vr);
    }
});
Note the caveat in the API docs for the sequenceFile method of JavaSparkContext:


Note: Because Hadoop's RecordReader class reuses the same Writable object for each record, directly caching the returned RDD will create many references to the same object. If you plan to cache Hadoop Writable objects directly, you should first copy them using a map function.
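
In Scala, a minimal sketch of that copy-before-cache step might look like the following (assuming an existing SparkContext `sc`; the path and the LongWritable value type are placeholders). Converting the Text key to a plain String also sidesteps the NotSerializableException, since String is serializable:

  import org.apache.hadoop.io.{LongWritable, Text}

  // Read a sequence file of (Text, LongWritable) records (placeholder path).
  val raw = sc.sequenceFile("path/to/seqfile", classOf[Text], classOf[LongWritable])

  // Copy each record into plain Java/Scala types before caching,
  // because the RecordReader reuses the same Writable instances.
  val copied = raw.map { case (k, v) => (k.toString, v.get) }
  copied.cache()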

The reason your code has the serialization problem is that your Kryo setup is not quite right:

Change:

System.setProperty("spark.serializer", "org.apache.spark.serializer.KryoSerializer")
System.setProperty("spark.kryo.registrator", "hequn.spark.reconstruction.MyRegistrator")
val sc = new SparkContext(...
to:
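
For example, a sketch of one way to apply those two settings when constructing the context (an illustration, not necessarily the answer's exact snippet; the app name is a placeholder) is to put them on a SparkConf:

  import org.apache.spark.{SparkConf, SparkContext}

  // Sketch: carry the Kryo settings on a SparkConf instead of JVM system properties.
  val conf = new SparkConf()
    .setAppName("reconstruction")   // placeholder app name
    .set("spark.serializer", "org.apache.spark.serializer.KryoSerializer")
    .set("spark.kryo.registrator", "hequn.spark.reconstruction.MyRegistrator")
  val sc = new SparkContext(conf)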


In Apache Spark, when working with sequence files, we have to follow this technique:

-- Use the Java-equivalent data types in place of the Hadoop data types.
-- Spark automatically converts the Writables into their Java-equivalent types.

Ex: we have a sequence file "xyz" whose key type is Text and whose value type is LongWritable. When we use this file to create an RDD, we need to use their Java-equivalent data types, i.e. String and Long respectively:

  val mydata = sc.sequenceFile[String, Long]("path/to/xyz")
  mydata.collect
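
As a self-contained sketch of that idea (the /tmp/xyz path, the sample records, and the local master are made up for illustration), we can write a small (Text, LongWritable) sequence file and read it back with the Java-equivalent types:

  import org.apache.spark.{SparkConf, SparkContext}

  val sc = new SparkContext(new SparkConf().setMaster("local[*]").setAppName("seqfile-demo"))

  // Write a tiny sequence file; saveAsSequenceFile wraps String/Long into Text/LongWritable.
  sc.parallelize(Seq(("a", 1L), ("b", 2L))).saveAsSequenceFile("/tmp/xyz")

  // Read it back using the Java-equivalent types; no Writables leak into the RDD,
  // so the records are serializable and can be cached or collected safely.
  val mydata = sc.sequenceFile[String, Long]("/tmp/xyz")
  mydata.collect().foreach(println)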