
Java: exception in flatMap function when deployed to a cluster


I have a Flink-Ignite application. I receive messages from Kafka, process the messages, and then cache them in Ignite. When I run the program in my IDE (IntelliJ) or as a standalone jar there is no problem, but when I deploy it to the cluster I get the exception below (I create the table earlier in the code). Thanks in advance. Note that the connection variable is static in my main class.

   Caused by: java.lang.NullPointerException
        at altosis.flinkcompute.compute.Main$2.flatMap(Main.java:95)
        at altosis.flinkcompute.compute.Main$2.flatMap(Main.java:85)
        at org.apache.flink.streaming.api.operators.StreamFlatMap.processElement(StreamFlatMap.java:50)
        at org.apache.flink.streaming.runtime.tasks.OperatorChain$CopyingChainingOutput.pushToOperator(OperatorChain.java:579)
        ... 22 more
When you say "note that the connection variable is static in my main class", I assume you are talking about Ignsql. If so, the code won't work, because the flatMap function cannot use that variable: the function is serialized and distributed by the JobManager before the workflow actually starts running, so the connection never exists on the task managers where flatMap executes.

You should instead create a RichFlatMapFunction subclass, set up the connection variables you need in its open() method, and close them in its close() method. If you need configuration parameters to set up the connections, pass them into the RichFlatMapFunction's constructor and save them in (non-transient) fields, then use them in open().
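A minimal sketch of that pattern, adapted to the INSERT in the question (the class name IgniteSinkFlatMap and the constructor parameter are illustrative, not from the original code; EventSalesQuantity is the asker's own type, and running this requires the Flink and Ignite JDBC dependencies on the classpath):

```java
import org.apache.flink.api.common.functions.RichFlatMapFunction;
import org.apache.flink.api.java.tuple.Tuple2;
import org.apache.flink.configuration.Configuration;
import org.apache.flink.util.Collector;

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;

// Sketch: the JDBC URL travels with the serialized function as a plain
// String; the Connection and PreparedStatement are transient and are
// created on the task manager, inside open(), so nothing non-serializable
// has to be shipped from the client.
public class IgniteSinkFlatMap
        extends RichFlatMapFunction<Tuple2<EventSalesQuantity, Integer>, Object> {

    private final String jdbcUrl;                   // serializable config
    private transient Connection conn;              // created per task in open()
    private transient PreparedStatement insertStmt;

    public IgniteSinkFlatMap(String jdbcUrl) {
        this.jdbcUrl = jdbcUrl;
    }

    @Override
    public void open(Configuration parameters) throws Exception {
        // Runs on the task manager after deserialization, i.e. exactly
        // where flatMap() will later execute.
        conn = DriverManager.getConnection(jdbcUrl);
        insertStmt = conn.prepareStatement(
                "INSERT INTO Eventcache (eventtime, bayi, sales) VALUES (?, ?, ?)");
    }

    @Override
    public void flatMap(Tuple2<EventSalesQuantity, Integer> value,
                        Collector<Object> out) throws Exception {
        insertStmt.setString(1, value.f0.getTransactionDate());
        insertStmt.setString(2, value.f0.getDealer());
        insertStmt.setInt(3, value.f1);
        insertStmt.execute();
    }

    @Override
    public void close() throws Exception {
        if (insertStmt != null) insertStmt.close();
        if (conn != null) conn.close();
    }
}
```

It would then replace the anonymous function in the job, e.g. eventSinkStream.flatMap(new IgniteSinkFlatMap("jdbc:ignite:thin://127.0.0.1/")); and the static conn/Ignsql fields in the main class would go away.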

Have you tried debugging to see the values of all the variables/fields etc. at line 95? @user2478398 Yes, I have. There is no problem with the fields. I can also see the data in the table when I run the program in the IDE. That doesn't prove anything: running under a debugger can mask concurrency errors, such as bugs caused by insufficient synchronization. Unfortunately, the code snippet you provided is not sufficient for a proper diagnosis. @StephenC Sir, I have edited my code. Thanks for your reply.
            StreamExecutionEnvironment environment = StreamExecutionEnvironment.getExecutionEnvironment();
            environment.getConfig();
            environment.setStreamTimeCharacteristic(TimeCharacteristic.EventTime);
            environment.setParallelism(1);
            Properties props = new Properties();
            props.setProperty("bootstrap.servers", "localhost:9092");
            props.setProperty("group.id","event-group");

            FlinkKafkaConsumer<EventSalesQuantity> consumer = new FlinkKafkaConsumer<EventSalesQuantity>("EventTopic",new EventSerializationSchema(),props);
            DataStream<EventSalesQuantity> eventDataStream = environment.addSource(consumer);

            KeyedStream<EventSalesQuantity, String> keyedEventStream = eventDataStream.assignTimestampsAndWatermarks(
                    new AssignerWithPeriodicWatermarksImpl()
            ).
                    keyBy(new KeySelector<EventSalesQuantity, String>() {
                        @Override
                        public String getKey(EventSalesQuantity eventSalesQuantity) throws Exception {
                            return  eventSalesQuantity.getDealer();
                        }
                    });

            DataStream<Tuple2<EventSalesQuantity,Integer>> eventSinkStream = keyedEventStream.window(TumblingEventTimeWindows.of(Time.of(1, TimeUnit.DAYS),Time.hours(21))).aggregate(new AggregateImpl());
            ignite = Ignition.start();
            ClientConfiguration cfg = new ClientConfiguration().setAddresses("127.0.0.1:10800");
            igniteClient = Ignition.startClient(cfg);

            System.out.println(">>> Thin client put-get example started.");
            igniteClient.query(
                    new SqlFieldsQuery(String.format(
                            "CREATE TABLE IF NOT EXISTS Eventcache (eventtime VARCHAR PRIMARY KEY, bayi VARCHAR, sales INT ) WITH \"VALUE_TYPE=%s\"",
                            EventSalesQuantity.class.getName()
                    )).setSchema("PUBLIC")
            ).getAll();

            eventSinkStream.addSink(new FlinkKafkaProducer<Tuple2<EventSalesQuantity, Integer>>("localhost:9092","SinkEventTopic",new EventSinkSerializationSchema()));
            Class.forName("org.apache.ignite.IgniteJdbcThinDriver");

            conn = DriverManager.getConnection("jdbc:ignite:thin://127.0.0.1/");
            eventSinkStream.flatMap(new FlatMapFunction<Tuple2<EventSalesQuantity, Integer>, Object>() {
                @Override
                public void flatMap(Tuple2<EventSalesQuantity, Integer> eventSalesQuantityIntegerTuple2, Collector<Object> collector) throws Exception {
                    Ignsql= conn.prepareStatement(
                            "INSERT INTO Eventcache (eventtime, bayi, sales) VALUES (?, ?, ?)");

                    Ignsql.setString(1, eventSalesQuantityIntegerTuple2.f0.getTransactionDate());
                    Ignsql.setString(2, eventSalesQuantityIntegerTuple2.f0.getDealer());
                    Ignsql.setInt(3, eventSalesQuantityIntegerTuple2.f1);
                    Ignsql.execute();
                    Ignsql.close();
                }
            });

           // eventSinkStream.print();
            environment.execute();