
Google Cloud Dataflow: POutput cannot be converted to WriteResult

Tags: google-cloud-dataflow, apache-beam

I am trying to handle BigQuery's errors in my code:

    PCollection convertedTableRows =
        pipeline
            .apply("ReadFromKafka", buildReadToKafkaIO(options))
            .apply("ConvertMessageToTableRow", new Transform.TransformTableRow());
    WriteResult writeResult = convertedTableRows.apply("WriteRecords",
        BigQueryIO.writeTableRows());
    //          ...
    writeResult.getFailedInsertsWithErr()
        .apply("WrapInsertionErrors", MapElements.into(TypeDescriptors.strings())
            .via(ClassName::wrapBigQueryInsertError));
When compiling, I get this error:

org.apache.beam.sdk.values.POutput cannot be converted to org.apache.beam.sdk.io.gcp.bigquery.WriteResult

I can somewhat understand the error, since WriteResult implements POutput.


So is there an overridden apply method somewhere that can return a WriteResult?
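
For background, no override should be needed here: in Beam, the return type of apply() comes from the transform's generic signature, and BigQueryIO.writeTableRows() is a PTransform<PCollection<TableRow>, WriteResult>. The cast error typically shows up when the input collection is declared as a raw PCollection, because calling apply() on a raw type erases the generics and the call returns plain POutput. A minimal sketch of the typed version (mine, not from the thread; the table spec is a placeholder):

    import com.google.api.services.bigquery.model.TableRow;
    import org.apache.beam.sdk.Pipeline;
    import org.apache.beam.sdk.io.gcp.bigquery.BigQueryIO;
    import org.apache.beam.sdk.io.gcp.bigquery.TableRowJsonCoder;
    import org.apache.beam.sdk.io.gcp.bigquery.WriteResult;
    import org.apache.beam.sdk.options.PipelineOptionsFactory;
    import org.apache.beam.sdk.transforms.Create;
    import org.apache.beam.sdk.values.PCollection;

    public class TypedApplySketch {
      public static void main(String[] args) {
        Pipeline pipeline = Pipeline.create(PipelineOptionsFactory.fromArgs(args).create());

        // Declare the generic type explicitly: calling apply() on a raw
        // PCollection erases the transform's generics, so apply() returns
        // POutput and assigning it to WriteResult fails to compile.
        PCollection<TableRow> rows =
            pipeline.apply(
                Create.of(new TableRow().set("id", 1)).withCoder(TableRowJsonCoder.of()));

        // BigQueryIO.writeTableRows() is a
        // PTransform<PCollection<TableRow>, WriteResult>, so no cast is needed.
        // "my-project:my_dataset.my_table" is a placeholder table spec.
        WriteResult writeResult =
            rows.apply(BigQueryIO.writeTableRows().to("my-project:my_dataset.my_table"));

        pipeline.run();
      }
    }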

It looks like you missed a call to the .to() method. See the example from the template:

    WriteResult writeResult =
        convertedTableRows
            .get(TRANSFORM_OUT)
            .apply(
                "WriteSuccessfulRecords",
                BigQueryIO.writeTableRows()
                    .withoutValidation()
                    .withCreateDisposition(CreateDisposition.CREATE_NEVER)
                    .withWriteDisposition(WriteDisposition.WRITE_APPEND)
                    .withExtendedErrorInfo()
                    .withMethod(BigQueryIO.Write.Method.STREAMING_INSERTS)
                    .withFailedInsertRetryPolicy(InsertRetryPolicy.retryTransientErrors())
                    .to(options.getOutputTableSpec()));
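
Since the original goal was handling BigQuery insert errors, a sketch of consuming the failed-insert output from a write configured like the one above may also help (my assumption, not part of the answer). With .withExtendedErrorInfo() and STREAMING_INSERTS, failed rows come back from getFailedInsertsWithErr() as BigQueryInsertError elements:

    // Assumes imports for org.apache.beam.sdk.io.gcp.bigquery.BigQueryInsertError,
    // org.apache.beam.sdk.transforms.MapElements,
    // org.apache.beam.sdk.values.PCollection and
    // org.apache.beam.sdk.values.TypeDescriptors, plus the writeResult above.
    PCollection<String> failedInserts =
        writeResult
            .getFailedInsertsWithErr()
            .apply(
                "WrapInsertionErrors",
                MapElements.into(TypeDescriptors.strings())
                    .via(
                        (BigQueryInsertError err) ->
                            err.getRow().toString() + " -> " + err.getError().toString()));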

Thank you, 博登. Actually, I omitted part of the code.

I re-checked Transform.TransformTableRow() to make sure it returns a TableRow.

The PCollection<TableRow> declaration seems to be very important:

    PCollection<TableRow> convertedTableRows = inputFromKafka.apply("Convert",
        ParDo.of(new Transform.TransformTableRow()));
    WriteResult writeResultToBigQuery = convertedTableRows
        .apply("writeToBigQuery", buildWriteTableRowsToBigQueryIO(options));
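
For completeness, here is what the buildWriteTableRowsToBigQueryIO(options) helper might look like once it includes the .to() call the answer points out, modeled on the template snippet above. The helper name and options.getOutputTableSpec() come from the thread; everything else is a guess at the omitted code:

    import com.google.api.services.bigquery.model.TableRow;
    import org.apache.beam.sdk.io.gcp.bigquery.BigQueryIO;
    import org.apache.beam.sdk.io.gcp.bigquery.BigQueryIO.Write.CreateDisposition;
    import org.apache.beam.sdk.io.gcp.bigquery.BigQueryIO.Write.WriteDisposition;
    import org.apache.beam.sdk.io.gcp.bigquery.InsertRetryPolicy;

    // "Options" stands in for the pipeline's real options interface (hypothetical).
    static BigQueryIO.Write<TableRow> buildWriteTableRowsToBigQueryIO(Options options) {
      return BigQueryIO.writeTableRows()
          .withoutValidation()
          .withCreateDisposition(CreateDisposition.CREATE_NEVER)
          .withWriteDisposition(WriteDisposition.WRITE_APPEND)
          .withExtendedErrorInfo()
          .withMethod(BigQueryIO.Write.Method.STREAMING_INSERTS)
          .withFailedInsertRetryPolicy(InsertRetryPolicy.retryTransientErrors())
          // The missing piece: without .to(...), the write has no destination table.
          .to(options.getOutputTableSpec());
    }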