Not serializable result: org.apache.hadoop.hbase.client.Result (apache, hadoop)

I am scanning an HBase table and fetching it inside mapToPair (the method is shown at the bottom of the post), and I am getting a not serializable exception for org.apache.hadoop.hbase.client.Result.

I ran into the same problem, a not serializable exception for Result, and I solved it.

Please try this:

conf.set("spark.serializer", "org.apache.spark.serializer.KryoSerializer")
conf.registerKryoClasses(Array(classOf[org.apache.hadoop.hbase.client.Result]))
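The snippet above is Scala; since the code in the question is Java, the equivalent configuration on the Java side would look roughly like this (a minimal sketch, assuming the SparkConf is built before the JavaSparkContext is created; the app name is made up):

import org.apache.hadoop.hbase.client.Result;
import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaSparkContext;

// Switch to the Kryo serializer and register the HBase Result class with it.
SparkConf conf = new SparkConf()
        .setAppName("company-data")  // hypothetical app name
        .set("spark.serializer", "org.apache.spark.serializer.KryoSerializer");
conf.registerKryoClasses(new Class<?>[] { Result.class });
JavaSparkContext sc = new JavaSparkContext(conf);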

Also try persisting the RDD with the memory-and-disk storage level.
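In Java that could look like the sketch below (companyData is a stand-in name for the RDD returned by the method in the question):

import org.apache.spark.storage.StorageLevel;

// Keep the pair RDD in memory and spill to disk when it does not fit.
companyData.persist(StorageLevel.MEMORY_AND_DISK());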


Please let me know if this works for you.

Here is the error: java.io.NotSerializableException: org.apache.hadoop.hbase.client.Result
Please provide the error stack trace and a more complete description of the problem; the current one is missing that.
import java.io.IOException;

import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
import org.apache.hadoop.hbase.mapreduce.TableInputFormat;
import org.apache.hadoop.hbase.util.Bytes;
import org.apache.spark.api.java.JavaPairRDD;
import org.apache.spark.api.java.JavaSparkContext;
import org.apache.spark.api.java.function.PairFunction;

import scala.Tuple2;

private static JavaPairRDD<Integer, Result> getCompanyDataRDD(JavaSparkContext sc) throws IOException {
    // Read the HBase table as (row key, Result) pairs through the Hadoop input format.
    return sc.newAPIHadoopRDD(companyDAO.getCompnayDataConfiguration(), TableInputFormat.class, ImmutableBytesWritable.class,
        Result.class).mapToPair(new PairFunction<Tuple2<ImmutableBytesWritable, Result>, Integer, Result>() {

        public Tuple2<Integer, Result> call(Tuple2<ImmutableBytesWritable, Result> t) throws Exception {
            System.out.println("In getCompanyDataRDD" + t._2);

            // The row key holds a numeric id as a string; re-key the pair by that integer.
            String cknid = Bytes.toString(t._1.get());
            System.out.println("processing cknids is:" + cknid);
            Integer cknidInt = Integer.parseInt(cknid);
            Tuple2<Integer, Result> returnTuple = new Tuple2<Integer, Result>(cknidInt, t._2);
            return returnTuple;
        }
    });
}