Apache Camel: how to use a "done" file to signal that the file the records are written to is complete and can be moved


As the title says, I want to move a file to another folder after the DB records have been written to it. I have already gone through a few questions related to this:

But my problem is a bit different, because I use split, streaming and parallel processing to fetch the DB records and write them to the file. I don't know when and how to create the done file with parallel processing in the mix. Here are the code snippets:

The route that fetches the records and writes them to the file:

from(<ROUTE_FETCH_RECORDS_AND_WRITE>)
        .setHeader(Exchange.FILE_PATH, constant("<path to temp folder>"))
        .setHeader(Exchange.FILE_NAME, constant("<filename>.txt"))
        .setBody(constant("<sql to fetch records>&outputType=StreamList"))
        .to("jdbc:<endpoint>")
        .split(body(), <aggregation>).streaming().parallelProcessing()
            .<some processors>
            .aggregate(header(Exchange.FILE_NAME), (o, n) -> {
                <file aggregation>
                return o;
            }).completionInterval(<some time interval>)
                .toD("file://<to the temp file>")
            .end()
        .end()
        .to("file:"+<path to temp folder>+"?doneFileName=${file:header."+Exchange.FILE_NAME+"}.done"); //this line is just for trying out done filename 
In the splitter's aggregation strategy my code basically counts the processed records and prepares the response that is sent back to the caller. In the other, outer aggregation I have the code that aggregates the DB rows and builds the content that gets written to the file.
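For reference, a minimal sketch of what such a file aggregation strategy could look like; the class name FileAggregationStrategy and the assumption that each split part arrives as a String row are mine, not from the route above:

import org.apache.camel.Exchange;
import org.apache.camel.processor.aggregate.AggregationStrategy;

public class FileAggregationStrategy implements AggregationStrategy {
    @Override
    public Exchange aggregate(Exchange oldExchange, Exchange newExchange) {
        if (oldExchange == null) {
            return newExchange; // first record starts the file content
        }
        String soFar = oldExchange.getIn().getBody(String.class);
        String next = newExchange.getIn().getBody(String.class);
        // append the new row on its own line
        oldExchange.getIn().setBody(soFar + System.lineSeparator() + next);
        return oldExchange;
    }
}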

Below is the file listener that moves the file:

from("file://<path to temp folder>?delete=true&include=<filename>.*.TXT&doneFileName=done")
.to("file://<final filename with path>?fileExist=Append");
Doing this gives me errors like the following:

     Caused by: [org.apache.camel.component.file.GenericFileOperationFailedException - Cannot store file: <folder-path>/filename.TXT] org.apache.camel.component.file.GenericFileOperationFailedException: Cannot store file: <folder-path>/filename.TXT
    at org.apache.camel.component.file.FileOperations.storeFile(FileOperations.java:292)[209:org.apache.camel.camel-core:2.16.2]
    at org.apache.camel.component.file.GenericFileProducer.writeFile(GenericFileProducer.java:277)[209:org.apache.camel.camel-core:2.16.2]
    at org.apache.camel.component.file.GenericFileProducer.processExchange(GenericFileProducer.java:165)[209:org.apache.camel.camel-core:2.16.2]
    at org.apache.camel.component.file.GenericFileProducer.process(GenericFileProducer.java:79)[209:org.apache.camel.camel-core:2.16.2]
    at org.apache.camel.util.AsyncProcessorConverterHelper$ProcessorToAsyncProcessorBridge.process(AsyncProcessorConverterHelper.java:61)[209:org.apache.camel.camel-core:2.16.2]
    at org.apache.camel.processor.SendProcessor.process(SendProcessor.java:141)[209:org.apache.camel.camel-core:2.16.2]
    at org.apache.camel.management.InstrumentationProcessor.process(InstrumentationProcessor.java:77)[209:org.apache.camel.camel-core:2.16.2]
    at org.apache.camel.processor.RedeliveryErrorHandler.process(RedeliveryErrorHandler.java:460)[209:org.apache.camel.camel-core:2.16.2]
    at org.apache.camel.processor.CamelInternalProcessor.process(CamelInternalProcessor.java:190)[209:org.apache.camel.camel-core:2.16.2]
    at org.apache.camel.processor.Pipeline.process(Pipeline.java:121)[209:org.apache.camel.camel-core:2.16.2]
    at org.apache.camel.processor.Pipeline.process(Pipeline.java:83)[209:org.apache.camel.camel-core:2.16.2]
    at org.apache.camel.processor.CamelInternalProcessor.process(CamelInternalProcessor.java:190)[209:org.apache.camel.camel-core:2.16.2]
    at org.apache.camel.component.seda.SedaConsumer.sendToConsumers(SedaConsumer.java:298)[209:org.apache.camel.camel-core:2.16.2]
    at org.apache.camel.component.seda.SedaConsumer.doRun(SedaConsumer.java:207)[209:org.apache.camel.camel-core:2.16.2]
    at org.apache.camel.component.seda.SedaConsumer.run(SedaConsumer.java:154)[209:org.apache.camel.camel-core:2.16.2]
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)[:1.8.0_144]
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)[:1.8.0_144]
    at java.lang.Thread.run(Thread.java:748)[:1.8.0_144]
Caused by: org.apache.camel.InvalidPayloadException: No body available of type: java.io.InputStream but has value: Total number of records discovered: 5
What am I doing wrong? Any input would help.


PS: I'm new to Apache Camel. My guess is that the error comes from .toD("file://<to the temp file>") trying to write the file but finding a body of the wrong type: the String "Total number of records discovered: 5" instead of an InputStream.
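A quick way to verify that guess (a debugging sketch of my own, not part of the original route) is to log the body type right before the file endpoint:

.process(exchange -> {
    Object body = exchange.getIn().getBody();
    // print what actually reaches the file endpoint
    System.out.println("Body type: " + (body == null ? "null" : body.getClass().getName()));
})
.toD("file://<to the temp file>")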


I don't understand why you have a file destination inside the splitter and another one outside the splitter.

As suggested by Claus Ibsen, I tried removing the extra .aggregate(...) in the route. To split and re-aggregate, it is enough to reference the aggregation strategy in the splitter. Claus also pointed out:

from(<ROUTE_FETCH_RECORDS_AND_WRITE>)
    .setHeader(Exchange.FILE_PATH, constant("<path to temp folder>"))
    .setHeader(Exchange.FILE_NAME, constant("<filename>.txt"))
    .setBody(constant("<sql to fetch records>&outputType=StreamList"))
    .to("jdbc:<endpoint>")
    .split(body(), <aggregation>)
        .streaming().parallelProcessing()
        // the processors below get individual parts
        .<some processors>
    .end()
    // The end statement above ends split and aggregation. From here
    // you get the re-aggregated result of the splitter.
    // So you can simply write it to the file, and also write the done file
    .to(...);
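Since that route leaves writing both the data file and the done file to the final .to(...), here is a minimal sketch of how the producing and consuming file endpoints could pair up; the paths are placeholders and the shared ${file:name}.done naming on both sides is my assumption:

// write the finished file, then let Camel create <filename>.txt.done
from("direct:writeFinishedFile")
    .to("file://<path to temp folder>?doneFileName=${file:name}.done");

// only pick the file up once the matching .done file exists
from("file://<path to temp folder>?doneFileName=${file:name}.done&delete=true")
    .to("file://<final folder>?fileExist=Append");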
But if you need to control the aggregation size, you have to combine the splitter and the aggregator:

from(<ROUTE_FETCH_RECORDS_AND_WRITE>)
    .setHeader(Exchange.FILE_PATH, constant("<path to temp folder>"))
    .setHeader(Exchange.FILE_NAME, constant("<filename>.txt"))
    .setBody(constant("<sql to fetch records>&outputType=StreamList"))
    .to("jdbc:<endpoint>")
    // No aggregationStrategy here so it is a standard splitter
    .split(body())
        .streaming().parallelProcessing()
        // the processors below get individual parts 
        .<some processors>
    .end()
    // The end statement above ends split. From here 
    // you still got individual records from the splitter.
    .to("seda:aggregate");

// new route to do the controlled aggregation
from("seda:aggregate")
    // constant(true) is the correlation predicate => collect all messages in 1 aggregation
    .aggregate(constant(true), new YourAggregationStrategy())
        .completionSize(500)
    // not sure if this 'end' is needed
    .end()
    // write files with 500 aggregated records here
    .to("...");