
Java Apache Beam - RabbitMq read - fails to ack message and throws exception


I am implementing a pipeline to read from a RabbitMq queue.

I am running into a problem when reading it as an unbounded stream.

The log says the channel is already closed, so the ack is never sent to RabbitMQ and the message stays on the queue:

WARNING: Failed to finalize Finalization{expiryTime=2020-11-21T19:33:14.909Z, callback=org.apache.beam.sdk.io.Read$UnboundedSourceAsSDFWrapperFn$$Lambda$378/0x00000001007ee440@4ae82af9} for completed bundle CommittedImmutableListBundle{PCollection=Read RabbitMQ queue/Read(RabbitMQSource)/ParDo(UnboundedSourceAsSDFWrapper)/ParMultiDo(UnboundedSourceAsSDFWrapper)/ProcessKeyedElements/SplittableParDoViaKeyedWorkItems.GBKIntoKeyedWorkItems.out [PCollection], key=org.apache.beam.repackaged.direct_java.runners.local.StructuralKey$CoderStructuralKey@3607f949, elements=[ValueInGlobalWindow{value=ComposedKeyedWorkItem{key=[-55, 41, -123, 97, 13, 104, 92, 61, 92, 122, -19, 112, -90, 16, 7, -97, 89, 107, -80, 12, 9, 120, 10, -97, 72, 114, -62, -105, 101, -34, 96, 48, 30, -96, 8, -19, 23, -115, -9, 87, 1, -58, -127, 70, -59, -24, -40, -111, -63, -119, 51, -108, 126, 64, -4, -120, -41, 9, 56, -63, -18, -18, -1, 17, -82, 90, -32, 110, 67, -12, -97, 10, -107, -110, 13, -74, -47, -113, 122, 27, 52, 46, -111, -118, -8, 118, -3, 20, 71, -109, 65, -87, -94, 107, 114, 116, -110, -126, -79, -123, -67, 18, -33, 70, -100, 9, -81, -65, -2, 98, 33, -122, -46, 23, -103, -70, 79, -23, 74, 9, 5, -9, 65, -33, -52, 5, 9, 101], elements=[], timers=[TimerData{timerId=1:1605986594072, timerFamilyId=, namespace=Window(org.apache.beam.sdk.transforms.windowing.GlobalWindow@4958d651), timestamp=2020-11-21T19:23:14.072Z, outputTimestamp=2020-11-21T19:23:14.072Z, domain=PROCESSING_TIME}]}, pane=PaneInfo.NO_FIRING}], minimumTimestamp=-290308-12-21T19:59:05.225Z, synchronizedProcessingOutputWatermark=2020-11-21T19:23:14.757Z}
com.rabbitmq.client.AlreadyClosedException: channel is already closed due to clean channel shutdown; protocol method: #method<channel.close>(reply-code=200, reply-text=OK, class-id=0, method-id=0)
        at com.rabbitmq.client.impl.AMQChannel.ensureIsOpen(AMQChannel.java:258)
        at com.rabbitmq.client.impl.AMQChannel.transmit(AMQChannel.java:427)
        at com.rabbitmq.client.impl.AMQChannel.transmit(AMQChannel.java:421)
        at com.rabbitmq.client.impl.recovery.RecoveryAwareChannelN.basicAck(RecoveryAwareChannelN.java:93)
        at com.rabbitmq.client.impl.recovery.AutorecoveringChannel.basicAck(AutorecoveringChannel.java:428)
        at org.apache.beam.sdk.io.rabbitmq.RabbitMqIO$RabbitMQCheckpointMark.finalizeCheckpoint(RabbitMqIO.java:433)
        at org.apache.beam.runners.direct.EvaluationContext.handleResult(EvaluationContext.java:195)
        at org.apache.beam.runners.direct.QuiescenceDriver$TimerIterableCompletionCallback.handleResult(QuiescenceDriver.java:287)
        at org.apache.beam.runners.direct.DirectTransformExecutor.finishBundle(DirectTransformExecutor.java:189)
        at org.apache.beam.runners.direct.DirectTransformExecutor.run(DirectTransformExecutor.java:126)
        at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515)
        at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
        at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
        at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
        at java.base/java.lang.Thread.run(Thread.java:834)

However, if I include withMaxNumRecords, I do receive the message and the ack is sent to the RabbitMQ queue, but then the read works as bounded data (see the sketch below).
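
For reference, a minimal sketch of the bounded variant (same placeholder URI and queue name as in the full code below):

    // withMaxNumRecords turns the unbounded RabbitMQ source into a bounded
    // read; in that mode the ack does reach RabbitMQ.
    PCollection<RabbitMqMessage> bounded = p.apply("Read RabbitMQ queue (bounded)",
        RabbitMqIO.read()
            .withUri("amqp://guest:guest@localhost:5672")
            .withQueue("queue")
            .withMaxNumRecords(1)); // stop after one record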


Code

My code is as follows:

    Pipeline p = Pipeline.create(options);

    PCollection<RabbitMqMessage> messages = p.apply("Read RabbitMQ queue",
        RabbitMqIO.read()
            .withUri("amqp://guest:guest@localhost:5672")
            .withQueue("queue")
            //.withMaxNumRecords(1)  // TRANSFORM BOUND: uncomment to read as bounded data
        );

    PCollection<TableRow> rows = messages.apply("Transform Json to TableRow",
        ParDo.of(new DoFn<RabbitMqMessage, TableRow>() {

          @ProcessElement
          public void processElement(ProcessContext c) throws IOException {
            ObjectMapper objectMapper = new ObjectMapper();
            String jsonInString = new String(c.element().getBody(), StandardCharsets.UTF_8);
            LOG.info(jsonInString);
            // Bind the JSON onto a TableRow (a Jackson-compatible map) and emit
            // it; without c.output(...) nothing reaches the BigQuery write.
            TableRow row = objectMapper.readValue(jsonInString, TableRow.class);
            c.output(row);
          }
        }));

    rows.apply(
        "Write to BigQuery",
        BigQueryIO.writeTableRows()
            .to("livelo-analytics-dev:cart_idle.cart_idle_process")
            .withCreateDisposition(BigQueryIO.Write.CreateDisposition.CREATE_NEVER)
            .withWriteDisposition(BigQueryIO.Write.WriteDisposition.WRITE_APPEND));

Can anyone help?

I sent an email to the Apache dev thread and got a really great answer from 张伯元, which turned out to be the workaround for me:

As a workaround, you can add --experiments=use_deprecated_read when launching your pipeline to bypass the sdf unbounded source wrapper here.

Putting it as an argument on the command line worked well for me (see the sketch below).
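
For illustration, a minimal sketch of both ways to pass the flag (the jar and main class names are placeholders; ExperimentalOptions lives in org.apache.beam.sdk.options):

    // On the command line when launching the pipeline:
    //   java -cp my-pipeline.jar com.example.Main --experiments=use_deprecated_read
    //
    // Or programmatically, before building the pipeline:
    ExperimentalOptions.addExperiment(
        options.as(ExperimentalOptions.class), "use_deprecated_read");
    Pipeline p = Pipeline.create(options);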
