Split: skip the first line of a CSV with the Camel EIP Splitter, process all other lines, and aggregate all lines, including the skipped one

Is there a simple way to skip the first line of a CSV (the header) with the Camel EIP Splitter, process all other lines, and then aggregate all lines, including the skipped one? I need to transform a date in each record of the CSV file, but skip the first line, the header. I am trying to use the Camel Splitter EIP. Thanks!
<route id="core.predix.consumer.route" autoStartup="true">
    <from id="predixConsumer" ref="predixConsumer" />
    <convertBodyTo type="java.lang.String" />
    <split streaming="true"> <!-- strategyRef="enrichmentAggregationStrategy" stopOnException="true" -->
        <tokenize token="\n" />
        <log message="Split line ${body}" />
        <!-- <process ref="EnrichementProcessor" /> -->
    </split>
    <to uri="{{fileDestinationEndpoint}}" />
</route>
This does not work. I thought it would, because the contents of the when should only execute if property.CamelSplitIndex > 0, but they execute for every line, even when CamelSplitIndex = 0. Is this a bug, or am I using it wrong?
<route id="core.pre.consumer.route" autoStartup="true">
    <from id="preConsumer" ref="preConsumer" />
    <convertBodyTo type="java.lang.String" />
    <split streaming="true"> <!-- strategyRef="enrichmentAggregationStrategy" stopOnException="true" -->
        <tokenize token="\n" />
        <log message="Split line ${body}" />
        <choice>
            <when>
                <simple>"${property.CamelSplitIndex} > 0"</simple>
                <process ref="timeStampEnrichmentProcessor" />
                <log message="Camel Split Index is greater than zero: ${property.CamelSplitIndex}" />
            </when>
        </choice>
    </split>
    <log message="body: ${body}" />
    <to uri="{{fileDestinationEndpoint}}" />
</route>
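For what it is worth, two things in the choice block above are worth checking: a Simple predicate wrapped in literal quotation marks may be evaluated as a plain (always-truthy) string rather than as a comparison, and recent Camel releases spell the property accessor exchangeProperty rather than property. A hedged sketch of the same block with both changes (not verified against the asker's route):

```xml
<choice>
    <when>
        <!-- unquoted predicate, exchangeProperty accessor -->
        <simple>${exchangeProperty.CamelSplitIndex} > 0</simple>
        <process ref="timeStampEnrichmentProcessor" />
        <log message="Camel Split Index is greater than zero: ${exchangeProperty.CamelSplitIndex}" />
    </when>
</choice>
```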
A bit late, but: put the logic into the aggregator.
<route id="fadec.core.PostFlightReportProducerRoute" autoStartup="true">
    <from uri="seda:postFlightReportProducer" />
    <split streaming="true" strategyRef="statusAggregationStrategy">
        <simple>${body}</simple>
        <log message="inbound Post Flight Summary Report message body: ${body}" />
        <process ref="postFlightReportMarshaler" />
        <to uri="velocity:templates/postFlightReportSummary.vm" />
        <log message="Velocity output: ${body}" loggingLevel="INFO" />
    </split>
    <wireTap uri="{{DFEndpointTest}}fileName=/${header.aircraftMetadata.outputPath}/${header.ship}_${date:now:yyyy-MM-dd_HH-mm-ss}.csv" />
    <choice>
        <when>
            <simple>${body} == null</simple>
            <log message="body is NULL, do not send NULL body!" />
            <stop/>
        </when>
        <otherwise>
            <process ref="xlsxProcessor" />
            <wireTap uri="{{DFEndpointTest}}fileName=/${header.aircraftMetadata.outputPath}/${header.ship}_${date:now:yyyy-MM-dd_HH-mm-ss}.xlsx" />
            <log message="Sending data packet: ${header.aircraftMetadata.outputPath}/${header.ship}_${date:now:yyyy-MM-dd_HH-mm-ss}.xlsx" />
            <stop/>
        </otherwise>
    </choice>
</route>
import java.util.Scanner;

import org.apache.camel.Exchange;
import org.apache.camel.processor.aggregate.AggregationStrategy;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class StatusAggregationStrategy implements AggregationStrategy {

    private final Logger log = LoggerFactory.getLogger(StatusAggregationStrategy.class);

    @Override
    public Exchange aggregate(Exchange oldExchange, Exchange newExchange) {
        //-------------------------------------------------------------------------------------
        // Arrived | oldExchange | newExchange | Description
        //-------------------------------------------------------------------------------------
        //    A    |    NULL     |      A      | first message arrives for the first group
        //    B    |     A       |      B      | second message arrives for the first group
        //    F    |    NULL     |      F      | first message arrives for the second group
        //    C    |     AB      |      C      | third message arrives for the first group
        //-------------------------------------------------------------------------------------
        log.info("Status Aggregation Strategy :: start");
        if (oldExchange == null) { // first exchange of the group keeps its header record
            log.info("old Exchange is Null");
            String body = newExchange.getIn().getBody(String.class);
            newExchange.getIn().setBody(body);
            return newExchange;
        }
        // Each new message exchange carries two records: a header and a data record
        String newBody = newExchange.getIn().getBody(String.class);
        String existingBody = oldExchange.getIn().getBody(String.class);
        StringBuilder osb = new StringBuilder();
        log.info("New Body exchange: " + newBody);
        log.info("Old Body exchange: " + existingBody);
        String skipRecord;
        String addRecord;
        Scanner osc = new Scanner(newBody).useDelimiter("\r\n|\n");
        while (osc.hasNextLine()) {
            skipRecord = osc.nextLine(); // move past the header
            log.info("aggregation: skip record: " + skipRecord);
            if (osc.hasNextLine()) {
                addRecord = osc.nextLine();
                log.info("aggregation addRecord: " + addRecord);
                osb.append(addRecord).append(System.lineSeparator());
            } else {
                log.error("bad newBody message exchange, has no data record!");
            }
        }
        osc.close();
        log.info("Joined exchange: Old body: " + existingBody + " New body: " + osb);
        oldExchange.getIn().setBody(existingBody + osb.toString());
        log.debug("Status Aggregation Strategy :: finish");
        return oldExchange;
    } //Exchange process
}
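The Camel-independent core of that strategy (keep the header from the first chunk, drop it from every later chunk, append the remaining data records) can be exercised as plain Java. The class and method names below are illustrative, not part of the original route:

```java
import java.util.Scanner;

// Standalone sketch of the string-handling core of StatusAggregationStrategy.
// Each later chunk is assumed to carry a header line followed by data records;
// the header is skipped and only the data records are appended.
public class HeaderSkipAggregator {

    // Append the data records of newBody to existing, dropping newBody's header line,
    // mirroring the Scanner loop in the aggregation strategy above.
    public static String aggregate(String existing, String newBody) {
        StringBuilder osb = new StringBuilder(existing);
        Scanner osc = new Scanner(newBody).useDelimiter("\r\n|\n");
        while (osc.hasNextLine()) {
            osc.nextLine(); // skip the header record
            if (osc.hasNextLine()) {
                osb.append(osc.nextLine()).append(System.lineSeparator());
            }
        }
        osc.close();
        return osb.toString();
    }

    public static void main(String[] args) {
        String first  = "date,value\n2020-01-01,1\n"; // header kept from the first chunk
        String second = "date,value\n2020-01-02,2\n"; // header dropped from later chunks
        System.out.println(aggregate(first, second));
    }
}
```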
I did not mention it, but it does not have to be CSV; .txt is acceptable. I just need to skip the file's header (the first line), process through to the end, and then aggregate the whole file, including the header. I was hoping for something that does not involve much Java code.

Regarding "${property.CamelSplitIndex} > 0": try printing the headers after tokenizing to see which headers are available and what their values are.

Thanks Rashti. Do you have an example of doing this in Blueprint XML? I found one in the Java DSL: .to("log:like to see all?level=INFO&showAll=true&multiline=true"). I can see CamelSplitIndex incrementing, but the statements inside the when execute even when the index is 0: CamelSplitIndex: 0, CamelSplitIndex: 1, ... CamelSplitIndex: 10.

Also, I found a Simple-language function that should do this, but I do not know how to invoke it. Does anyone have an example? Camel 2.19: the skip function iterates the message body and skips the first N items. It can be used with the Splitter EIP to split a message body and skip the first N items.

Have you tried exchangeProperty.CamelSplitIndex instead of property.CamelSplitIndex?
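Regarding the Camel 2.19 skip function mentioned above: in the Simple language it is written ${skip(n)}, and the Splitter can split on that expression directly. Two caveats: the body must be something Camel can iterate (for a plain String you would first convert it into a list of lines), and skip(1) drops the header outright rather than re-adding it to the aggregate, so an aggregation strategy like the one above would still be needed to restore it. A hedged Blueprint sketch, assuming the body is already a java.util.List of lines and reusing the bean names from this thread:

```xml
<route id="skip.header.route" autoStartup="true">
    <from uri="direct:start" />
    <!-- body is assumed to already be a List of lines here -->
    <split streaming="true" strategyRef="statusAggregationStrategy">
        <simple>${skip(1)}</simple>
        <log message="data line: ${body}" />
        <process ref="timeStampEnrichmentProcessor" />
    </split>
    <to uri="{{fileDestinationEndpoint}}" />
</route>
```

The route id and endpoint are illustrative; only the ${skip(1)} expression is the point of the sketch.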