Java: How do I save Flink time windows to a text file?


I'm getting started with Apache Flink in Java.

My goal is to consume an Apache Kafka topic in one-minute time windows, apply a very basic transformation to the messages, and write the result of each window to its own file.

So far I've managed to apply a simple text transformation to what I receive, but I don't know whether I should use apply or process to write the files; I'm a bit lost about how to get at each window's results.

This is my code so far:

package myflink;
import org.apache.flink.api.common.functions.FlatMapFunction;
import org.apache.flink.api.common.functions.MapFunction;
import org.apache.flink.api.common.serialization.SimpleStringSchema;
import java.time.ZoneId;
import java.util.Date;
import java.util.Properties;
import org.apache.flink.api.java.tuple.Tuple;
import org.apache.flink.api.java.tuple.Tuple2;
import org.apache.flink.api.java.tuple.Tuple3;
import org.apache.flink.shaded.akka.org.jboss.netty.channel.ExceptionEvent;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.functions.windowing.AllWindowFunction;
import org.apache.flink.streaming.api.functions.windowing.ProcessAllWindowFunction;
import org.apache.flink.streaming.api.functions.windowing.ProcessWindowFunction;
import org.apache.flink.streaming.api.functions.windowing.WindowFunction;
import org.apache.flink.streaming.api.watermark.Watermark;
import org.apache.flink.streaming.api.windowing.time.Time;
import org.apache.flink.streaming.api.windowing.windows.TimeWindow;
import org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumer;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.util.Collector;
import scala.util.parsing.json.JSONObject;
public class BatchJob {
    public static void main(String[] args) throws Exception {
        final StreamExecutionEnvironment  env = StreamExecutionEnvironment.getExecutionEnvironment();
        Properties properties = new Properties();
        properties.setProperty("bootstrap.servers", "localhost:9092");
        properties.setProperty("zookeeper.connect", "localhost:2181");
        properties.setProperty("group.id", "test");
        properties.setProperty("auto.offset.reset", "latest");
        FlinkKafkaConsumer<String> consumer = new FlinkKafkaConsumer<>("topic-basic-test", new SimpleStringSchema(), properties);
        DataStream<String> data = env.addSource(consumer);
        data.flatMap(new JSONparse())
                .timeWindowAll(Time.minutes(1))
                // "NEXT ??" -- does apply() or process() go here to write each window to a file?
                .print();
        System.out.println("Hola usuario 2");
        env.execute("Flink Batch Java API Skeleton");
    }
    public static class JSONparse implements FlatMapFunction<String, Tuple2<String, String>> {
        @Override
        public void flatMap(String s, Collector<Tuple2<String, String>> collector) throws Exception {
            System.out.println(s);
            s = s + "ACA PODES JUGAR NDEAH";
            collector.collect(new Tuple2<String,String>("M",s));
        }
    }
}

If you want each one-minute window's results to go to its own file, you can look into using a StreamingFileSink with one-minute buckets. That should do what you need, or come very close.
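
For example, here's a minimal sketch of such a sink (the output path /tmp/window-output is just a placeholder; the bucket assigner's format string puts each minute in its own bucket):

import org.apache.flink.api.common.serialization.SimpleStringEncoder;
import org.apache.flink.core.fs.Path;
import org.apache.flink.streaming.api.functions.sink.filesystem.StreamingFileSink;
import org.apache.flink.streaming.api.functions.sink.filesystem.bucketassigners.DateTimeBucketAssigner;

// A row-format sink that writes one line per record and starts a
// new bucket (directory) every minute.
StreamingFileSink<String> sink = StreamingFileSink
        .forRowFormat(new Path("/tmp/window-output"),
                new SimpleStringEncoder<String>("UTF-8"))
        .withBucketAssigner(new DateTimeBucketAssigner<>("yyyy-MM-dd--HH-mm"))
        .build();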

I believe you'll actually get a directory for each window, containing one file from each parallel instance of the window. But since timeWindowAll does not run in parallel, each bucket will hold just a single file, unless the results are so large that the file rolls over.
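
As for apply versus process: either works here, but process gives you a Context with access to the window metadata. A minimal sketch, assuming the sink defined above and the Tuple2<String, String> elements produced by JSONparse:

// Turn each window's elements into strings tagged with the window's
// end timestamp, then hand them to the StreamingFileSink defined above.
data.flatMap(new JSONparse())
        .timeWindowAll(Time.minutes(1))
        .process(new ProcessAllWindowFunction<Tuple2<String, String>, String, TimeWindow>() {
            @Override
            public void process(Context context,
                                Iterable<Tuple2<String, String>> elements,
                                Collector<String> out) {
                for (Tuple2<String, String> element : elements) {
                    out.collect(context.window().getEnd() + " -> " + element.f1);
                }
            }
        })
        .addSink(sink);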

By the way, doing the JSON parsing in a flatMap will perform rather poorly, because it ends up instantiating a new parser for every event, which in turn causes quite a lot of GC activity. It would be better to use a RichFlatMapFunction and create a single parser in its open() method, so it can be reused for every event. Better still, use a JSONKeyValueDeserializationSchema instead of SimpleStringSchema and let the Kafka connector handle the JSON parsing for you.
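
Here's a hedged sketch of both suggestions. The Jackson ObjectMapper is my choice of parser for the first variant; for the second, note that in recent Flink versions the ObjectNode produced by JSONKeyValueDeserializationSchema comes from Flink's shaded Jackson package rather than the import shown here:

import com.fasterxml.jackson.databind.ObjectMapper;
import com.fasterxml.jackson.databind.node.ObjectNode; // shaded inside Flink in newer versions
import org.apache.flink.api.common.functions.RichFlatMapFunction;
import org.apache.flink.configuration.Configuration;
import org.apache.flink.streaming.util.serialization.JSONKeyValueDeserializationSchema;

// Variant 1: create the parser once per task in open() and reuse it.
public static class JSONparse extends RichFlatMapFunction<String, Tuple2<String, String>> {
    private transient ObjectMapper mapper;

    @Override
    public void open(Configuration parameters) {
        mapper = new ObjectMapper(); // one parser per task instance, not per event
    }

    @Override
    public void flatMap(String s, Collector<Tuple2<String, String>> out) throws Exception {
        out.collect(new Tuple2<>("M", mapper.readTree(s).toString()));
    }
}

// Variant 2: let the Kafka connector do the parsing. The boolean argument
// controls whether Kafka metadata (offset, topic, partition) is included;
// each ObjectNode carries the message under its "key" and "value" fields.
FlinkKafkaConsumer<ObjectNode> consumer = new FlinkKafkaConsumer<>(
        "topic-basic-test",
        new JSONKeyValueDeserializationSchema(false),
        properties);
DataStream<ObjectNode> data = env.addSource(consumer);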