Java: committing offsets after processing multiple streams in Spark Streaming

I have a use case where we create multiple streams from one Kafka direct stream. I want to commit the offsets only after both streams have been processed successfully. Is this possible?

Current strategy:

1) create dstream one.
2) create dstream two.
3) process two streams in parallel by creating threads.
4) wait for all threads to complete using a countdown latch.
5) finally commit all offsets.
But one problem with the above strategy is how to keep track of the offsets of records that have not been fully processed. My current code:

import java.util.concurrent.CountDownLatch;

import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.spark.streaming.api.java.JavaDStream;
import org.apache.spark.streaming.api.java.JavaInputDStream;
import org.apache.spark.streaming.kafka010.ConsumerStrategies;
import org.apache.spark.streaming.kafka010.KafkaUtils;
import org.apache.spark.streaming.kafka010.LocationStrategies;

JavaInputDStream<ConsumerRecord<String, String>> telemetryStream = KafkaUtils.createDirectStream(
        streamingContext, LocationStrategies.PreferConsistent(),
        ConsumerStrategies.Subscribe(topics, kafkaParams));

// Extract the payloads and cache them so both consumers can share the batch.
JavaDStream<String> telemetryDStream = telemetryStream.map(record -> record.value());
telemetryDStream.cache();

CountDownLatch latch = new CountDownLatch(2);

Thread t1 = new Thread(new Runnable() {
    @Override
    public void run() {
        try {
            // processing logic here
        } finally {
            latch.countDown();
        }
    }
});
t1.start();

Thread t2 = new Thread(new Runnable() {
    @Override
    public void run() {
        try {
            // processing logic here
        } finally {
            latch.countDown();
        }
    }
});
t2.start();

// Block until both processing threads are done.
latch.await();

// now commit offsets here
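
For that final step, the spark-streaming-kafka-0-10 integration only exposes offset ranges on the RDDs of the original direct stream (the HasOffsetRanges cast works on telemetryStream's RDDs, not on derived streams such as telemetryDStream), and commits go back through CanCommitOffsets.commitAsync. A minimal sketch of that documented pattern, reusing telemetryStream from above:

import org.apache.spark.streaming.kafka010.CanCommitOffsets;
import org.apache.spark.streaming.kafka010.HasOffsetRanges;
import org.apache.spark.streaming.kafka010.OffsetRange;

telemetryStream.foreachRDD(rdd -> {
    // Offset ranges are only attached to the RDDs produced by createDirectStream.
    OffsetRange[] offsetRanges = ((HasOffsetRanges) rdd.rdd()).offsetRanges();

    // ... process this batch ...

    // commitAsync only enqueues the commit; the driver sends it on a later poll.
    ((CanCommitOffsets) telemetryStream.inputDStream()).commitAsync(offsetRanges);
});

Since the offsets are captured per batch, the decision to commit has to be made in that same per-batch scope, which is exactly what makes it hard to reconcile with the whole-job threads and latch above.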
Is there a better way to handle this?
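
One possible restructuring (a sketch, not a drop-in solution; processBranchOne and processBranchTwo are hypothetical placeholders for the two pieces of processing logic) is to run both branches inside a single foreachRDD and commit only once both have finished for that batch, so the offsets and the "both succeeded" condition live in the same scope:

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.spark.api.java.JavaRDD;
import org.apache.spark.streaming.kafka010.CanCommitOffsets;
import org.apache.spark.streaming.kafka010.HasOffsetRanges;
import org.apache.spark.streaming.kafka010.OffsetRange;

ExecutorService pool = Executors.newFixedThreadPool(2);

telemetryStream.foreachRDD(rdd -> {
    OffsetRange[] offsetRanges = ((HasOffsetRanges) rdd.rdd()).offsetRanges();

    // Cache the payloads so both branches can reuse this batch without re-reading Kafka.
    JavaRDD<String> values = rdd.map(ConsumerRecord::value).cache();

    // Spark jobs can be submitted concurrently from separate driver threads,
    // so the two branches of this batch run in parallel.
    Future<?> one = pool.submit(() -> processBranchOne(values)); // hypothetical helper
    Future<?> two = pool.submit(() -> processBranchTwo(values)); // hypothetical helper

    // get() rethrows any failure, so the commit below runs only if both branches succeed.
    one.get();
    two.get();

    ((CanCommitOffsets) telemetryStream.inputDStream()).commitAsync(offsetRanges);
    values.unpersist();
});

The trade-off is that the commit granularity becomes one micro-batch: a failure in either branch fails that batch and nothing is committed for it, which matches the "commit only after both succeed" requirement.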