Java: Transforming messages in a KStreams application takes unexpectedly long

Tags: java, apache-kafka-streams

I have a very basic use case for a KStreams application where I debounce messages for a few seconds and then use transform to either delete a message or keep it in a state store. I also have a punctuate method that fires every 30 seconds, iterates over the store, and forwards the messages.

What I'm finding is that the time from the application picking up a message to that message being passed to the transform function is much longer than I expected (I assumed the transform would happen fairly soon after the window expires). This isn't really a problem for my use case, but I'm curious what exactly causes it to take so long to reach the transform function.

    final StreamsBuilder builder = new StreamsBuilder();
    final StoreBuilder<KeyValueStore<String, Payload>> store = Stores.keyValueStoreBuilder(
            Stores.inMemoryKeyValueStore(keyValueStoreName),
            Serdes.String(),
            avroSerde
    );
    builder.addStateStore(store);

    final Consumed<String, Payload> consumed = Consumed.with(Serdes.String(), avroSerde)
            .withTimestampExtractor(new WallclockTimestampExtractor());
    final Produced<String, Payload> produced = Produced.with(Serdes.String(), avroSerde);
    final KStream<String, Payload> stream = builder.stream(inputTopic, consumed);
    final SessionWindows sessionWindows = SessionWindows
            .with(Duration.ofSeconds(2));
    final SessionWindowTransformerSupplier transformerSupplier =
            new SessionWindowTransformerSupplier(keyValueStoreName, scheduleTimeSeconds);
    final SessionBytesStoreSupplier sessionBytesStoreSupplier = Stores.persistentSessionStore(
            "debounce-window",
            Duration.ofSeconds(3));
    final Materialized<String, Payload, SessionStore<Bytes, byte[]>> materializedAs =
            Materialized.as(sessionBytesStoreSupplier);

    stream
            .selectKey((key, value) -> {
                logger.info("selecting key: " + key);
                return key;
            })
            .groupByKey()
            .windowedBy(sessionWindows)
            .reduce(payloadDebounceFunction::apply, materializedAs)
            .toStream()
            .transform(transformerSupplier, keyValueStoreName)
            .to(outputTopic, produced);

    return builder;
Is the KStream doing more work than what is shown here, such that it would take as long as it does? Hoping for some insight into whether this is a configuration/timing issue, or whether this is just normal behaviour for a KStreams application.
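
One way to see everything the topology actually contains (the internal repartition topic introduced by the key-changing selectKey before groupByKey, plus the changelog topics backing the session store and the key-value store) is to print the topology description. A minimal sketch, assuming the builder built above:

    // Build the Topology from the StreamsBuilder above and print its description,
    // which lists every processor node and the internal repartition/changelog topics.
    final org.apache.kafka.streams.Topology topology = builder.build();
    System.out.println(topology.describe());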

Edit: I think I've found where the problem lies, and it has to do with the default value of commit.interval.ms.

Changes are not committed to the internal topic until the commit interval elapses, so my transform function does not kick in until those changes reach the internal topic. I shortened the interval to one second and saw the difference immediately.
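
For reference, a minimal sketch of how the interval could be lowered when building the KafkaStreams instance; the bootstrap servers value is a placeholder and the cache setting is optional:

    import java.util.Properties;
    import org.apache.kafka.streams.KafkaStreams;
    import org.apache.kafka.streams.StreamsConfig;

    final Properties props = new Properties();
    props.put(StreamsConfig.APPLICATION_ID_CONFIG, "scheduler");
    props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // placeholder
    // Default is 30000 ms; commit (and flush the record cache) every second instead.
    props.put(StreamsConfig.COMMIT_INTERVAL_MS_CONFIG, 1000);
    // Optionally disable the record cache so windowed results are forwarded immediately.
    props.put(StreamsConfig.CACHE_MAX_BYTES_BUFFERING_CONFIG, 0);

    final KafkaStreams streams = new KafkaStreams(builder.build(), props);
    streams.start();

The transformer used in the topology above: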

@Override
public void init(ProcessorContext context) {
    this.processorContext = context;
    this.store = (KeyValueStore<String, Payload>) context.getStateStore(keyValueStoreName);
    context.schedule(ofSeconds(scheduleTime), WALL_CLOCK_TIME, timestamp -> punctuate());
}

@Override
public KeyValue<String, Payload> transform(Windowed<String> key, Payload value) {
    synchronized (this) {
        if(value != null) {
            BatchScanStatus status = extractStatus(value);
            boolean removeFromStoreStatus = BatchScanStatus.CANCELLED.equals(status)
                    || BatchScanStatus.FINALIZING.equals(status);

            if(removeFromStoreStatus) {
                logger.info("Deleting key from store: {}", key);
                store.delete(key.key());
            } else {
                logger.info("Adding key to store: {}", key);
                store.putIfAbsent(key.key(), value);
            }
            processorContext.commit();
        }
        return null;
    }
}

private void punctuate() {
    synchronized (this) {
        final KeyValueIterator<String, Payload> keyIter = store.all();
        while(keyIter.hasNext()) {
            final KeyValue<String, Payload> record = keyIter.next();
            logger.info("Forwarding key: {}", record.key);
            processorContext.forward(record.key, record.value);
        }

        keyIter.close();
    }
}
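
Log output showing the gap between selectKey handling a record and the transformer receiving it (about 24 seconds here):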
15:58:35.238 [scheduler-79112bd0-2310-482e-9aab-8bcaae746082-StreamThread-1] INFO  c.b.d.f.s.kstreams.Scheduler - selecting key: keykeykey
15:58:59.181 [scheduler-79112bd0-2310-482e-9aab-8bcaae746082-StreamThread-1] INFO  c.b.d.f.s.k.s.SessionTransformer - Adding key to store: [keykeykey@1570737515238/1570737515238]
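
That gap is consistent with the default commit.interval.ms of 30 seconds mentioned in the edit above: the result of the windowed reduce sits in the record cache and is only forwarded to the transformer when the cache is flushed on commit.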