Creating a JavaRDD with newAPIHadoopFile and FixedLengthInputFormat
I am trying to create a Spark JavaRDD using newAPIHadoopFile and FixedLengthInputFormat. Here is my code snippet:
Configuration config = new Configuration();
// Every record in the input files is exactly JPEG_INDEX_SIZE bytes long
config.setInt(FixedLengthInputFormat.FIXED_RECORD_LENGTH, JPEG_INDEX_SIZE);
config.set("fs.hdfs.impl", DistributedFileSystem.class.getName());
String fileFilter = config.get("fs.defaultFS") + "/A/B/C/*.idx";
JavaPairRDD<LongWritable, BytesWritable> inputRDD =
        sparkContext.newAPIHadoopFile(fileFilter, FixedLengthInputFormat.class,
                LongWritable.class, BytesWritable.class, config);
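For context, here is a minimal plain-Java sketch (no Spark or Hadoop on the classpath) of what FixedLengthInputFormat does conceptually: it carves the input into records of exactly FIXED_RECORD_LENGTH bytes, keyed by each record's byte offset, which is what the LongWritable/BytesWritable pair in the RDD above holds. RECORD_LENGTH is an arbitrary stand-in for JPEG_INDEX_SIZE:

```java
import java.util.Arrays;

public class FixedLengthSketch {
    // Stand-in for JPEG_INDEX_SIZE; any fixed record size works the same way.
    static final int RECORD_LENGTH = 4;

    public static void main(String[] args) {
        byte[] data = "AAAABBBBCCCC".getBytes();
        // Walk the input in fixed-size steps, emitting (offset, record) pairs,
        // analogous to the (LongWritable, BytesWritable) pairs in the RDD.
        for (int off = 0; off + RECORD_LENGTH <= data.length; off += RECORD_LENGTH) {
            byte[] record = Arrays.copyOfRange(data, off, off + RECORD_LENGTH);
            System.out.println(off + " -> " + new String(record));
        }
    }
}
```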
Any idea what I am doing wrong? I am new to this. David
The job fails with:

Error executing mapreduce job: com.fasterxml.jackson.databind.JsonMappingException: Infinite recursion (StackOverflowError)
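Not necessarily the cause of the exception above, but a well-known pitfall with newAPIHadoopFile worth checking: Hadoop's RecordReader reuses a single mutable Writable instance for every record, so caching or collecting the raw BytesWritable objects leaves many references to the same (last) value; copy the bytes out first (e.g. with BytesWritable.copyBytes()). A minimal plain-Java sketch of the aliasing problem, where the ReusableValue class is hypothetical and stands in for a reused Writable:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

// Hypothetical stand-in for a Hadoop Writable: one mutable buffer that a
// RecordReader would overwrite in place for each record it delivers.
class ReusableValue {
    byte[] buf = new byte[4];
}

public class ReusePitfall {
    public static void main(String[] args) {
        ReusableValue value = new ReusableValue(); // reused for every record
        List<byte[]> aliased = new ArrayList<>();
        List<byte[]> copied = new ArrayList<>();
        for (byte b = 1; b <= 3; b++) {
            Arrays.fill(value.buf, b);       // "reader" overwrites the buffer
            aliased.add(value.buf);          // stores the shared reference
            copied.add(value.buf.clone());   // stores a defensive copy
        }
        // Every aliased entry now reflects the last record read (3);
        // the copies preserve each record as it was delivered.
        System.out.println(aliased.get(0)[0]);
        System.out.println(copied.get(0)[0]);
    }
}
```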