Java record consumer not reading records correctly?

Tags: java, chronicle, chronicle-queue

I am using Chronicle Queue (5.16.13) to write and read JSON values to and from Chronicle files. To write the objects I use the following in a loop:

try (final DocumentContext dc = appender.writingDocument()) {
        dc.wire().write(() -> "msg").text("Hallo asdf");
        System.out.println("your data was store to index="+ dc.index());
        return true;
    } catch (Exception e) {
        logger.warn("Unable to store value to chronicle", e);
        return false;
    }
To read the records I make the following call in a loop:

DocumentContext documentContext;
    do {
        documentContext = tailer.readingDocument();
        currentOffset = documentContext.index();
        System.out.println("Current offset: " + currentOffset);
    } while (!documentContext.isData());

What I observe is that the variable currentOffset does not change, and after a while (which seems to depend on the payload size) the loop becomes endless and the current offset takes on wild values. The (shortened) output of the first loop is

Writing 0
your data was store to index=76385993359360
Writing 1
your data was store to index=76385993359361
Writing 2
your data was store to index=76385993359362
Writing 3
your data was store to index=76385993359363
Writing 4
your data was store to index=76385993359364
Writing 5
your data was store to index=76385993359365
Writing 6
your data was store to index=76385993359366
Writing 7
your data was store to index=76385993359367
Writing 8
your data was store to index=76385993359368
Writing 9
your data was store to index=76385993359369
Writing 10
your data was store to index=76385993359370
Writing 11
your data was store to index=76385993359371
Writing 12
your data was store to index=76385993359372
Writing 13
your data was store to index=76385993359373
Writing 14
your data was store to index=76385993359374
Writing 15
your data was store to index=76385993359375
Writing 16
your data was store to index=76385993359376
Writing 17
your data was store to index=76385993359377
Writing 18
your data was store to index=76385993359378
Writing 19
your data was store to index=76385993359379
Writing 20
your data was store to index=76385993359380
Writing 21
your data was store to index=76385993359381
Writing 22
your data was store to index=76385993359382
Writing 23
your data was store to index=76385993359383
Writing 24
your data was store to index=76385993359384
Writing 25
your data was store to index=76385993359385
Writing 26
your data was store to index=76385993359386
and that of the second loop

Reading 0
Current offset: 76385993359360
Reading 1
Current offset: 76385993359360
Reading 2
Current offset: 76385993359360
Reading 3
Current offset: 76385993359360
Reading 4
Current offset: 76385993359360
Reading 5
Current offset: 76385993359360
Reading 6
Current offset: 76385993359360
Reading 7
Current offset: 76385993359360
Reading 8
Current offset: 76385993359360
Reading 9
Current offset: 76385993359360
Reading 10
Current offset: 76385993359360
Reading 11
Current offset: 76385993359360
Reading 12
Current offset: 76385993359360
Reading 13
Current offset: 76385993359360
Reading 14
Current offset: 76385993359360
Reading 15
Current offset: 76385993359360
Reading 16
Current offset: 76385993359360
Reading 17
Current offset: 76385993359360
Reading 18
Current offset: 76385993359360
Reading 19
Current offset: 76385993359360
Reading 20
Current offset: 76385993359360
Reading 21
Current offset: 76385993359360
Reading 22
Current offset: 76385993359360
Reading 23
Current offset: 76385993359360
Reading 24
Current offset: 76385993359360
Reading 25
Current offset: -9223372036854775808
Am I doing something wrong? Can anybody show me the correct usage?

Many thanks.

Edit: added a minimal working example. The following unit test fails for me:

@Test
public void fails() throws Exception {
    String basePath = System.getProperty("java.io.tmpdir");
    String path = Files.createTempDirectory(Paths.get(basePath), "chronicle-")
            .toAbsolutePath()
            .toString();
    logger.info("Using temp path '{}'", path);

    SingleChronicleQueue chronicleQueue = SingleChronicleQueueBuilder
            .single()
            .path(path)
            .build();

    // Create Appender
    ExcerptAppender appender = chronicleQueue.acquireAppender();

    // Create Tailer
    ExcerptTailer tailer = chronicleQueue.createTailer();
    tailer.toStart();

    int numberOfRecords = 10;

    // Write
    for (int i = 0; i <= numberOfRecords; i++) {
        System.out.println("Writing " + i);
        try (final DocumentContext dc = appender.writingDocument()) {
            dc.wire().write(() -> "msg").text("Hello World!");
            System.out.println("your data was store to index=" + dc.index());
        } catch (Exception e) {
            logger.warn("Unable to store value to chronicle", e);
        }
    }
    // Read
    for (int i = 0; i <= numberOfRecords; i++) {
        System.out.println("Reading " + i);
        DocumentContext documentContext = tailer.readingDocument();
        long currentOffset = documentContext.index();
        System.out.println("Current offset: " + currentOffset);

        Wire wire = documentContext.wire();

        if (wire != null) {
            String msg = wire
                    .read("msg")
                    .text();
        }
    }

    chronicleQueue.close();
} 

I found the answer myself using @PeterLawrey's suggestion and wrapped the DocumentContext in a try-with-resources. That solved the problem. See the corrected snippet below:

@Test
public void works() throws Exception {
    String basePath = System.getProperty("java.io.tmpdir");
    String path = Files.createTempDirectory(Paths.get(basePath), "chronicle-")
            .toAbsolutePath()
            .toString();
    logger.info("Using temp path '{}'", path);

    SingleChronicleQueue chronicleQueue = SingleChronicleQueueBuilder
            .single()
            .path(path)
            .build();

    // Create Appender
    ExcerptAppender appender = chronicleQueue.acquireAppender();

    // Create Tailer
    ExcerptTailer tailer = chronicleQueue.createTailer();
    tailer.toStart();

    int numberOfRecords = 10;

    // Write
    for (int i = 0; i <= numberOfRecords; i++) {
        System.out.println("Writing " + i);
        try (final DocumentContext dc = appender.writingDocument()) {
            dc.wire().write(() -> "msg").text("Hello World!");
            System.out.println("your data was store to index=" + dc.index());
        } catch (Exception e) {
            logger.warn("Unable to store value to chronicle", e);
        }
    }
    // Read
    for (int i = 0; i <= numberOfRecords; i++) {
        System.out.println("Reading " + i);
        try (DocumentContext documentContext = tailer.readingDocument()) {
            long currentOffset = documentContext.index();
            System.out.println("Current offset: " + currentOffset);

            Wire wire = documentContext.wire();

            if (wire != null) {
                String msg = wire
                        .read("msg")
                        .text();
            }
        }
    }

    chronicleQueue.close();
}

Hope this helps somebody else.
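For completeness, the same effect should be achievable without try-with-resources by closing the context explicitly in a finally block. This is only a sketch of the equivalent form (it is not code from the original post) and reuses the appender and tailer variables from the tests above:

// Writing: the excerpt is only committed when the DocumentContext is closed.
DocumentContext dc = appender.writingDocument();
try {
    dc.wire().write(() -> "msg").text("Hello World!");
} finally {
    dc.close(); // without this the write is never committed and tailers will not see it
}

// Reading: closing the context lets the tailer move on to the next excerpt.
DocumentContext rdc = tailer.readingDocument();
try {
    Wire wire = rdc.wire();
    if (wire != null) {
        String msg = wire.read("msg").text();
    }
} finally {
    rdc.close();
}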

Using a DocumentContext is intended as one of the lower-level interfaces and it is not to everyone's taste. I favour using the MethodReader/MethodWriter approach unless you have a reason to work at the lower level.

@Test
public void works() {
    String path = OS.TMP + "/chronicle-" + System.nanoTime();
    System.out.println("Using temp path " + path);

    try (SingleChronicleQueue queue = SingleChronicleQueueBuilder
            .single()
            .path(path)
            .build()) {

        ExcerptAppender appender = queue.acquireAppender();
        Messager messager = appender.methodWriter(Messager.class);

        int numberOfRecords = 10;

        // Write
        for (int i = 0; i <= numberOfRecords; i++) {
            System.out.print("Writing " + i);
            messager.msg("Hello World!");
            System.out.println(", your data was stored at index=" + appender.lastIndexAppended());
        }

        ExcerptTailer tailer = queue.createTailer();
        MethodReader reader = tailer.methodReader((Messager) msg -> {
            System.out.println("Current offset: " + tailer.index()
                    + " msg: " + msg);
        });

        // Read
        while (reader.readOne()) {
            // busy wait.
        }
    }
}
Note: the data written this way is the same as in the original post.
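The test above assumes a Messager listener interface that is not shown in the snippet; a minimal sketch of what it could look like (the method name msg matches the calls in the test, everything else is an assumption):

// Hypothetical single-method listener used by methodWriter/methodReader above.
interface Messager {
    void msg(String text);
}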


One advantage of using this interface approach is that you can implement your business components purely in terms of method interfaces and DTOs, with no reference to Chronicle (or the transport) at all. This simplifies testing the business logic, since you remove the transport from the test.
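As a small illustration of that point (the UpperCaser component below is made up; only the Messager interface sketched earlier and the usual JUnit imports are assumed), the business logic can be unit-tested with a plain in-memory listener and later be fed from a queue without changing a line of it:

@Test
public void businessLogicWithoutTransport() {
    // Hypothetical business component: it depends only on the Messager interface.
    class UpperCaser implements Messager {
        private final Messager downstream; // e.g. a methodWriter, or a test stub

        UpperCaser(Messager downstream) {
            this.downstream = downstream;
        }

        @Override
        public void msg(String text) {
            downstream.msg(text.toUpperCase()); // the actual business rule
        }
    }

    // No queue involved at all: drive the component directly and capture its output.
    List<String> seen = new ArrayList<>();
    Messager component = new UpperCaser(seen::add);
    component.msg("Hello World!");
    assertEquals("HELLO WORLD!", seen.get(0));

    // In production the same component could be fed from a queue, e.g.
    // MethodReader reader = tailer.methodReader(new UpperCaser(downstreamWriter));
}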

You should use try-with-resources on the DocumentContext; however, in this case your mistake is not obvious. I suspect it is how you call this code. Can you provide a complete unit test?

@PeterLawrey, could you take a look at my comment on the answer?

What exactly was the problem? How does try-with-resources fix it?

@Bhaskar I am not sure... I suppose something important happens at the end. Try-with-resources calls it automatically, while my first version did not. I understand that the close performed by the try-with-resources causes the data to be written to the header, but only if it is done explicitly; I don't think this should just be black-box magic. Maybe @PeterLawrey can clarify?

@Bhaskar All the examples in the documentation show that you need try (final DocumentContext dc = appender.writingDocument()) or you have to call close() yourself. If you don't call close() it doesn't actually commit the write. In exactly the same way, all the reading examples show try (DocumentContext dc = tailer.readingDocument()), or again you have to close() it yourself. This is not just for consistency; it also allows the library to do its housekeeping efficiently in the right places. If you don't call close() it won't work correctly either.

Thanks Peter. But I just added nano timestamps to the Messager in this code (splitting the appender and the tailer into separate threads that start and run concurrently) and I am seeing latencies of more than 50000 microseconds (yes, 50000 micros). Is there something wrong with the way I am using it, or are the messages not written to the memory store as soon as possible? (Mac i5, 4 threads, 8 GB RAM.) I did let the JVM warm up: I increased the total number of messages to 10 million and ignored the first 100K.

@Bhaskar That most likely happens on a file roll. E.g. see the first log line included below for C:\Users\peter\AppData\Local\Temp\chronicle-412979753710181\20181226.cq4: net.openhft.chronicle.bytes.MappedFile - Allocation of 0 chunk took 25.061 ms. Two ways to avoid this are a) create a file with a block size that never fills, b) use a pretoucher to reduce the cost of such a delay.

@Bhaskar For production I suggest Linux with an SSD, or Windows if you have to, but none of our clients use Mac servers in production.

Thanks @PeterLawrey. I played with the block size and also added some busy waiting to the appender, so each append spin-waits 3 to 5 microseconds, and the latencies are now much lower. There is still some jitter, but I think that is down to my machine's CPU rather than to Chronicle itself. Thanks again for the guidance.

I suggest you run my MicroJitterSampler to see how much CPU scheduling jitter your machine has. Try it with no other load and also while running the benchmark.
Using temp path C:\Users\peter\AppData\Local\Temp\/chronicle-412979753710181
[main] DEBUG net.openhft.chronicle.bytes.MappedFile - Allocation of 0 chunk in C:\Users\peter\AppData\Local\Temp\chronicle-412979753710181\metadata.cq4t took 15.418 ms.
[main] DEBUG net.openhft.chronicle.bytes.MappedFile - Allocation of 0 chunk in C:\Users\peter\AppData\Local\Temp\chronicle-412979753710181\20181226.cq4 took 25.061 ms.
Writing 0, your data was stored at index=76841259892736
Writing 1, your data was stored at index=76841259892737
Writing 2, your data was stored at index=76841259892738
Writing 3, your data was stored at index=76841259892739
Writing 4, your data was stored at index=76841259892740
Writing 5, your data was stored at index=76841259892741
Writing 6, your data was stored at index=76841259892742
Writing 7, your data was stored at index=76841259892743
Writing 8, your data was stored at index=76841259892744
Writing 9, your data was stored at index=76841259892745
Writing 10, your data was stored at index=76841259892746

Current offset: 76841259892736 msg: Hello World!
Current offset: 76841259892737 msg: Hello World!
Current offset: 76841259892738 msg: Hello World!
Current offset: 76841259892739 msg: Hello World!
Current offset: 76841259892740 msg: Hello World!
Current offset: 76841259892741 msg: Hello World!
Current offset: 76841259892742 msg: Hello World!
Current offset: 76841259892743 msg: Hello World!
Current offset: 76841259892744 msg: Hello World!
Current offset: 76841259892745 msg: Hello World!
Current offset: 76841259892746 msg: Hello World!
[main] DEBUG net.openhft.chronicle.queue.impl.single.SingleChronicleQueueBuilder - File released C:\Users\peter\AppData\Local\Temp\chronicle-412979753710181\20181226.cq4
[main] DEBUG net.openhft.chronicle.queue.impl.single.SingleChronicleQueueBuilder - File released C:\Users\peter\AppData\Local\Temp\chronicle-412979753710181\20181226.cq4
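Regarding the file-roll delay discussed in the comments, a hedged sketch of option (a): raise the block size when building the queue so a single memory-mapped chunk covers the whole run (the 256 MB figure is an arbitrary assumption), and optionally pre-touch the appender up front so the first writes do not pay the allocation cost:

SingleChronicleQueue queue = SingleChronicleQueueBuilder
        .single()
        .path(path)
        .blockSize(256L << 20) // 256 MB; pick a size that never fills during the run
        .build();

ExcerptAppender appender = queue.acquireAppender();
appender.pretouch(); // normally called periodically from a background thread (the "pretoucher")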