Apache Spark: error in the Spark cluster driver program

Tags: apache-spark, streaming, hbase, cluster-computing

The driver in my Spark cluster fails with the following error:

    cannot assign instance of scala.collection.immutable.List$SerializationProxy to field org.apache.spark.rdd.RDD.org$apache$spark$rdd$RDD$$dependencies_ of type scala.collection.Seq in instance of org.apache.spark.rdd.MapPartitionsRDD

The streaming job is set up as follows:

     JavaPairInputDStream<String, byte[]> messages = KafkaUtils.createDirectStream(
             jssc,
             String.class,
             byte[].class,
             StringDecoder.class,
             DefaultDecoder.class,
             kafkaParams,
             topicsSet);

     JavaDStream<CustomerActivityRequestModel> customerActivityStream = messages.map(new Function<Tuple2<String, byte[]>, CustomerActivityRequestModel>() {
            /**
         * 
         */
        private static final long serialVersionUID = -75093981513752762L;

            @Override
            public CustomerActivityRequestModel call(Tuple2<String, byte[]> tuple2) throws IOException, ClassNotFoundException {

                 CustomerActivityRequestModel  x = NearbuySessionWorkerHelper.unmarshal(CustomerActivityRequestModel.class, tuple2._2);
                 LOGGER.info(x.getActionLink());
                 LOGGER.info(x.getAppVersion());
                 return x;
            }
        });




     customerActivityStream.foreachRDD(new VoidFunction<JavaRDD<CustomerActivityRequestModel>>() {



        /**
         * 
         */
        private static final long serialVersionUID = -9045343297759771559L;

        @Override
        public void call(JavaRDD<CustomerActivityRequestModel> customerRDD) throws Exception {
            Configuration hconf = HBaseConfiguration.create();
            hconf.set("hbase.zookeeper.quorum", "localhost");
            hconf.set("hbase.zookeeper.property.clientPort", "2181");
            //hconf.set(TableOutputFormat.OUTPUT_TABLE, hbaseTableName);
            hconf.set(TableInputFormat.INPUT_TABLE, hbaseTableName);
            Job newAPIJobConfiguration1 = Job.getInstance(hconf);
            newAPIJobConfiguration1.getConfiguration().set(TableOutputFormat.OUTPUT_TABLE, hbaseTableName);
            newAPIJobConfiguration1.setOutputFormatClass(org.apache.hadoop.hbase.mapreduce.TableOutputFormat.class);

            JavaPairRDD<ImmutableBytesWritable, Put> hbasePuts= customerRDD.mapToPair(new PairFunction<CustomerActivityRequestModel, ImmutableBytesWritable, Put>() {


                /**
                 * 
                 */
                private static final long serialVersionUID = -6574479136167252295L;

                @Override
                public Tuple2<ImmutableBytesWritable, Put> call(CustomerActivityRequestModel customer) throws Exception {
                    // The Put construction was truncated in the original post; rowKey and the
                    // column family below are placeholders for whatever the author actually used.
                    Put put = new Put(Bytes.toBytes(rowKey));
                    put.addColumn(Bytes.toBytes("cf"),
                            Bytes.toBytes("long"), Bytes.toBytes(customer.getLongitude()));
                    return new Tuple2<ImmutableBytesWritable, Put>(new ImmutableBytesWritable(), put);
                }
            });
             hbasePuts.saveAsNewAPIHadoopDataset(newAPIJobConfiguration1.getConfiguration());

        }
    });

The jar you are executing needs to be on the classpath of every node in the cluster; in my case this fixed the same error.
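
For reference, one way to get the application jar onto every executor is to let Spark ship it, either by passing it to spark-submit or by listing it in the SparkConf before the streaming context is created. This is only a minimal sketch; the jar path and app name are placeholders, not the poster's actual setup:

    import org.apache.spark.SparkConf;
    import org.apache.spark.streaming.Durations;
    import org.apache.spark.streaming.api.java.JavaStreamingContext;

    // Build the SparkConf so the application jar is distributed to every worker,
    // putting classes such as CustomerActivityRequestModel on the executor classpath.
    SparkConf conf = new SparkConf()
            .setAppName("customer-activity-streaming")                     // placeholder app name
            .setJars(new String[] { "/path/to/customer-activity.jar" });   // placeholder jar path

    JavaStreamingContext jssc = new JavaStreamingContext(conf, Durations.seconds(10));

When submitting with spark-submit, the application jar passed as the last argument is distributed the same way, and additional dependency jars can be listed with --jars.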