Exceptions in a Google Cloud Dataflow pipeline from BigQuery to Cloud Bigtable

Tags: google-bigquery, google-cloud-dataflow, google-cloud-bigtable

While running the Dataflow pipeline, we see these exceptions every once in a while. Is there anything we can do about them? We have a very simple flow that reads data from a BigQuery query and populates it into Bigtable.

Also, what happens to the data inside the pipeline? Is it reprocessed, or is it lost in transit to Bigtable?

CloudBigtableIO.initializeForWrite(p);

p.apply(BigQueryIO.Read.fromQuery(getQuery()))
    .apply(ParDo.of(new DoFn<TableRow, Mutation>() {
        @Override
        public void processElement(ProcessContext c) {
            // Convert each BigQuery TableRow into an HBase Mutation for Bigtable.
            Mutation output = convertDataToRow(c.element());
            c.output(output);
        }
    }))
    .apply(CloudBigtableIO.writeToTable(config));


private static Mutation convertDataToRow(TableRow element) {
    LOG.info("element: " + element);
    LOG.info("BASM_segment_id: " + element.get("BASM_segment_id"));
    if (element.get("BASM_AID") != null) {
        Put obj = new Put(getRowKey(element).getBytes())
                .addColumn(SEGMENT_FAMILY, SEGMENT_COLUMN_NAME, ((String) element.get("BAS_category")).getBytes());
        obj.addColumn(USER_FAMILY, "AID".getBytes(), ((String) element.get("BASM_AID")).getBytes());
        if (element.get("BASM_segment_id") != null) {
            obj.addColumn(SEGMENT_FAMILY, "segment_id".getBytes(), ((String) element.get("BASM_segment_id")).getBytes());
        }
        if (element.get("BAS_sub_category") != null) {
            obj.addColumn(SEGMENT_FAMILY, "sub_category".getBytes(), ((String) element.get("BAS_sub_category")).getBytes());
        }
        if (element.get("BAS_name") != null) {
            obj.addColumn(SEGMENT_FAMILY, "name".getBytes(), ((String) element.get("BAS_name")).getBytes());
        }
        if (element.get("BAS_description") != null) {
            obj.addColumn(SEGMENT_FAMILY, "description".getBytes(), ((String) element.get("BAS_description")).getBytes());
        }
        if (element.get("BAS_last_compute_day") != null) {
            obj.addColumn(USER_FAMILY, "Krux_User_id".getBytes(), ((String) element.get("BASM_krux_user_id")).getBytes());
            obj.addColumn(SEGMENT_FAMILY, "last_compute_day".getBytes(), ((String) element.get("BAS_last_compute_day")).getBytes());
        }
        if (element.get("BAS_type") != null) {
            obj.addColumn(SEGMENT_FAMILY, "type".getBytes(), ((String) element.get("BAS_type")).getBytes());
        }
        if (element.get("BASM_REGID") != null) {
            obj.addColumn(USER_FAMILY, "REGID".getBytes(), ((String) element.get("BASM_REGID")).getBytes());
        }
        return obj;
    } else {
        return null;
    }
}
2016-08-23 (13:17:54) java.lang.RuntimeException: com.google.cloud.dataflow.sdk.util.UserCodeException: org.apache.hadoop...
2016-08-23 (13:17:54) java.lang.RuntimeException: com.google.cloud.dataflow.sdk.util.UserCodeException: org.apache.hadoop...
2016-08-23 (13:17:54) java.lang.RuntimeException: com.google.cloud.dataflow.sdk.util.UserCodeException: org.apache.hadoop...
(the same truncated line repeats many more times in the job log)


Thanks in advance.

We talked offline. The problem here is that you have too many Dataflow workers compared to the number of Cloud Bigtable nodes in your cluster. You need to change that ratio, or contact our team to increase your Cloud Bigtable resources.

Bigtable was performing admirably relative to the number of Cloud Bigtable nodes you have, but the load from Dataflow was too high to handle reliably.

You can check your usage in the charts in the Google Cloud console. Anything over 80% of capacity is likely to cause problems. With more Bigtable quota, you can increase the number of nodes before running the Dataflow job and reduce it again once the job finishes.
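If it helps, one way to adjust the Dataflow side of that ratio is to cap the worker count when the pipeline is constructed. The sketch below is illustrative only, written against the Dataflow Java SDK 1.x used in the question; the project id, staging bucket, and worker counts are placeholders, and the right numbers depend on how many Bigtable nodes you have.

import com.google.cloud.dataflow.sdk.Pipeline;
import com.google.cloud.dataflow.sdk.options.DataflowPipelineOptions;
import com.google.cloud.dataflow.sdk.options.PipelineOptionsFactory;
import com.google.cloud.dataflow.sdk.runners.DataflowPipelineRunner;

public class CappedWorkerPipeline {
    public static void main(String[] args) {
        DataflowPipelineOptions options = PipelineOptionsFactory.fromArgs(args)
                .withValidation()
                .as(DataflowPipelineOptions.class);
        options.setRunner(DataflowPipelineRunner.class);
        options.setProject("my-project-id");                   // placeholder
        options.setStagingLocation("gs://my-bucket/staging");  // placeholder

        // Keep the Dataflow worker count small relative to the Bigtable node
        // count so the bulk writes do not overload the cluster.
        options.setNumWorkers(3);     // illustrative value
        options.setMaxNumWorkers(3);  // also keeps autoscaling from going higher

        Pipeline p = Pipeline.create(options);
        // ... build the BigQuery-to-Bigtable pipeline exactly as in the question ...
        p.run();
    }
}

Scaling the Bigtable cluster up before the job and back down afterwards, as described above, attacks the same ratio from the other side and lets the job finish faster.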

==

Regarding "What happens to the data inside the pipeline? Is it reprocessed, or is it lost in transit to Bigtable?":

Dataflow retries sending the data to Bigtable. In those cases, Dataflow's retry mechanism corrects temporary problems.


Unfortunately, when the problem turns out to be an overloaded Cloud Bigtable, the retries send even more traffic to Bigtable, which aggravates the problem.

What version of the client are you using? 0.9.1?

@LesVogel-GoogleDevRel Yes, we are using version 0.9.1 of bigtable-hbase-dataflow.

I've asked someone from engineering to weigh in; that should happen later today.

Can you search the logs for "exceptions occurred during a bulk operation"? That will give a more informative record of the actual problem; RetriesExhaustedWithDetailsException is too generic. You may also want to consider: if (output != null) { c.output(output); } — this kind of exception can occur with null values (see the sketch after the stack trace below).

Thanks @Solomon Duskis. I've reduced the number of workers for now and no longer see the earlier exceptions, but my job obviously takes more time to complete now.

Yes, reducing the workers will do that. I took a quick look at your charts, and it looks like you could add a couple more workers. Alternatively, you can ask us for more quota, add more Bigtable nodes before launching Dataflow, and reduce the count when it is done. More Bigtable nodes allow more throughput, so the job finishes faster.
(7e75740160102c05): java.lang.RuntimeException: com.google.cloud.dataflow.sdk.util.UserCodeException: org.apache.hadoop.hbase.client.RetriesExhaustedWithDetailsException: Failed 1 action: StatusRuntimeException: 1 time,
    at com.google.cloud.dataflow.sdk.runners.worker.SimpleParDoFn$1.output(SimpleParDoFn.java:162)
    at com.google.cloud.dataflow.sdk.util.DoFnRunnerBase$DoFnContext.outputWindowedValue(DoFnRunnerBase.java:287)
    at com.google.cloud.dataflow.sdk.util.DoFnRunnerBase$DoFnProcessContext.output(DoFnRunnerBase.java:449)
    at com.nytimes.adtech.dataflow.pipelines.BigTableSegmentData$2.processElement(BigTableSegmentData.java:70)
Caused by: com.google.cloud.dataflow.sdk.util.UserCodeException: org.apache.hadoop.hbase.client.RetriesExhaustedWithDetailsException: Failed 1 action: StatusRuntimeException: 1 time,
    at com.google.cloud.dataflow.sdk.util.UserCodeException.wrap(UserCodeException.java:35)
    at com.google.cloud.dataflow.sdk.util.UserCodeException.wrapIf(UserCodeException.java:40)
    at com.google.cloud.dataflow.sdk.util.DoFnRunnerBase.wrapUserCodeException(DoFnRunnerBase.java:368)
    at com.google.cloud.dataflow.sdk.util.SimpleDoFnRunner.invokeProcessElement(SimpleDoFnRunner.java:51)
    at com.google.cloud.dataflow.sdk.util.DoFnRunnerBase.processElement(DoFnRunnerBase.java:138)
    at com.google.cloud.dataflow.sdk.runners.worker.SimpleParDoFn.processElement(SimpleParDoFn.java:190)
    at com.google.cloud.dataflow.sdk.runners.worker.ForwardingParDoFn.processElement(ForwardingParDoFn.java:42)
    at com.google.cloud.dataflow.sdk.runners.worker.DataflowWorkerLoggingParDoFn.processElement(DataflowWorkerLoggingParDoFn.java:47)
    at com.google.cloud.dataflow.sdk.util.common.worker.ParDoOperation.process(ParDoOperation.java:53)
    at com.google.cloud.dataflow.sdk.util.common.worker.OutputReceiver.process(OutputReceiver.java:52)
    at com.google.cloud.dataflow.sdk.runners.worker.SimpleParDoFn$1.output(SimpleParDoFn.java:160)
    at com.google.cloud.dataflow.sdk.util.DoFnRunnerBase$DoFnContext.outputWindowedValue(DoFnRunnerBase.java:287)
    at com.google.cloud.dataflow.sdk.util.DoFnRunnerBase$DoFnProcessContext.output(DoFnRunnerBase.java:449)
    at com.nytimes.adtech.dataflow.pipelines.BigTableSegmentData$2.processElement(BigTableSegmentData.java:70)
    at com.google.cloud.dataflow.sdk.util.SimpleDoFnRunner.invokeProcessElement(SimpleDoFnRunner.java:49)
    at com.google.cloud.dataflow.sdk.util.DoFnRunnerBase.processElement(DoFnRunnerBase.java:138)
    at com.google.cloud.dataflow.sdk.runners.worker.SimpleParDoFn.processElement(SimpleParDoFn.java:190)
    at com.google.cloud.dataflow.sdk.runners.worker.ForwardingParDoFn.processElement(ForwardingParDoFn.java:42)
    at com.google.cloud.dataflow.sdk.runners.worker.DataflowWorkerLoggingParDoFn.processElement(DataflowWorkerLoggingParDoFn.java:47)
    at com.google.cloud.dataflow.sdk.util.common.worker.ParDoOperation.process(ParDoOperation.java:53)
    at com.google.cloud.dataflow.sdk.util.common.worker.OutputReceiver.process(OutputReceiver.java:52)
    at com.google.cloud.dataflow.sdk.util.common.worker.ReadOperation.runReadLoop(ReadOperation.java:226)
    at com.google.cloud.dataflow.sdk.util.common.worker.ReadOperation.start(ReadOperation.java:167)
    at com.google.cloud.dataflow.sdk.util.common.worker.MapTaskExecutor.execute(MapTaskExecutor.java:71)
    at com.google.cloud.dataflow.sdk.runners.worker.DataflowWorker.executeWork(DataflowWorker.java:288)
    at com.google.cloud.dataflow.sdk.runners.worker.DataflowWorker.doWork(DataflowWorker.java:221)
    at com.google.cloud.dataflow.sdk.runners.worker.DataflowWorker.getAndPerformWork(DataflowWorker.java:173)
    at com.google.cloud.dataflow.sdk.runners.worker.DataflowWorkerHarness$WorkerThread.doWork(DataflowWorkerHarness.java:193)
    at com.google.cloud.dataflow.sdk.runners.worker.DataflowWorkerHarness$WorkerThread.call(DataflowWorkerHarness.java:173)
    at com.google.cloud.dataflow.sdk.runners.worker.DataflowWorkerHarness$WorkerThread.call(DataflowWorkerHarness.java:160)
    at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)
Caused by: org.apache.hadoop.hbase.client.RetriesExhaustedWithDetailsException: Failed 1 action: StatusRuntimeException: 1 time,
    at com.google.cloud.bigtable.hbase.BigtableBufferedMutator.handleExceptions(BigtableBufferedMutator.java:389)
    at com.google.cloud.bigtable.hbase.BigtableBufferedMutator.mutate(BigtableBufferedMutator.java:274)
    at com.google.cloud.bigtable.dataflow.CloudBigtableIO$CloudBigtableSingleTableBufferedWriteFn.processElement(CloudBigtableIO.java:966)
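
For reference, the null guard suggested in the comments could be applied to the DoFn from the question roughly as follows. This is a sketch, not the asker's confirmed fix: convertDataToRow and config are the same names used above, and convertDataToRow returns null whenever BASM_AID is missing, so the guard keeps null mutations from reaching the Bigtable writer.

CloudBigtableIO.initializeForWrite(p);

p.apply(BigQueryIO.Read.fromQuery(getQuery()))
    .apply(ParDo.of(new DoFn<TableRow, Mutation>() {
        @Override
        public void processElement(ProcessContext c) {
            Mutation output = convertDataToRow(c.element());
            // Only emit non-null mutations; convertDataToRow returns null for
            // rows without a BASM_AID value.
            if (output != null) {
                c.output(output);
            }
        }
    }))
    .apply(CloudBigtableIO.writeToTable(config));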