Google BigQuery / Apache Beam: converting an object containing a list of objects into multiple TableRows for writing to BigQuery


I am using a Beam pipeline to process JSON and write it to BigQuery. The JSON looks like this:

{
    "message": [{
        "name": "abc",
        "itemId": "2123",
        "itemName": "test"
    }, {
        "name": "vfg",
        "itemId": "56457",
        "itemName": "Chicken"
    }],
    "publishDate": "2017-10-26T04:54:16.207Z"
}

I parse it with Jackson into the following structure:

class Feed {
    List<Message> messages;
    TimeStamp publishDate;
}
I am trying to create the transform below, but I am not sure how it will emit multiple rows based on the list of messages:

private class BuildRowListFn extends DoFn<KV<String, Feed>, List<TableRow>> {

    @ProcessElement
    public void processElement(ProcessContext context) {
        Feed feed = context.element().getValue();

        List<Message> messages = feed.getMessage();
        List<TableRow> rows = new ArrayList<>();
        messages.forEach((message) -> {
            TableRow row = new TableRow();
            row.set("column1", feed.getPublishDate());
            row.set("column2", message.getEventItemMap().get("key1"));
            row.set("column3", message.getEventItemMap().get("key2"));
            rows.add(row);
        });
    }
}
I get the following exception:

Exception in thread "main" org.apache.beam.sdk.Pipeline$PipelineExecutionException: java.lang.NullPointerException
at org.apache.beam.runners.direct.DirectRunner$DirectPipelineResult.waitUntilFinish(DirectRunner.java:331)
at org.apache.beam.runners.direct.DirectRunner$DirectPipelineResult.waitUntilFinish(DirectRunner.java:301)
at org.apache.beam.runners.direct.DirectRunner.run(DirectRunner.java:200)
at org.apache.beam.runners.direct.DirectRunner.run(DirectRunner.java:63)
at org.apache.beam.sdk.Pipeline.run(Pipeline.java:297)
at org.apache.beam.sdk.Pipeline.run(Pipeline.java:283)
at com.chefd.gcloud.analytics.pipeline.MyPipeline.main(MyPipeline.java:284)


Caused by: java.lang.NullPointerException
at org.apache.beam.sdk.io.gcp.bigquery.BigQueryServicesImpl$DatasetServiceImpl.insertAll(BigQueryServicesImpl.java:759)
at org.apache.beam.sdk.io.gcp.bigquery.BigQueryServicesImpl$DatasetServiceImpl.insertAll(BigQueryServicesImpl.java:809)
at org.apache.beam.sdk.io.gcp.bigquery.StreamingWriteFn.flushRows(StreamingWriteFn.java:126)
at org.apache.beam.sdk.io.gcp.bigquery.StreamingWriteFn.finishBundle(StreamingWriteFn.java:96)
Please help.

Thank you!

You seem to be assuming that a DoFn can output at most one value per element. That is not the case: it can output any number of values per element (no values, one value, several values, and so on). A DoFn can even output values to several different PCollections.

In your case, simply call c.output(row) for every row in your @ProcessElement method, for example rows.forEach(c::output). Of course, you will also need to change the type of your DoFn to DoFn<KV<String, Feed>, TableRow>, because the element type of its output PCollection is TableRow rather than List<TableRow>: you are producing several elements into the collection for each input element, but that does not change the element type.

Alternatively, you could keep what you currently do, call c.output(rows), and then apply Flatten.iterables() to flatten the PCollection<List<TableRow>> into a PCollection<TableRow> (you may need to replace List with Iterable for this to work). But the first approach is simpler.
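The per-element fan-out described above is the same flatMap pattern found in other frameworks, and its mechanics can be sketched without the Beam SDK by standing in a plain Consumer for Beam's context::output. The Feed/Message records and the column names below are simplified stand-ins for the question's Jackson classes, not the real Beam API:

```java
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;
import java.util.function.Consumer;

public class FanOutSketch {
    // Simplified stand-ins for the question's Jackson-mapped classes.
    record Message(Map<String, String> eventItemMap) {}
    record Feed(List<Message> messages, String publishDate) {}

    // Mirrors the @ProcessElement body: instead of collecting rows into a
    // List<TableRow>, each row is handed to `output` as soon as it is built,
    // the way context.output(row) would emit it in a real DoFn.
    static void processElement(Feed feed, Consumer<Map<String, Object>> output) {
        for (Message message : feed.messages()) {
            Map<String, Object> row = new LinkedHashMap<>();
            row.put("column1", feed.publishDate());
            row.put("column2", message.eventItemMap().get("key1"));
            row.put("column3", message.eventItemMap().get("key2"));
            output.accept(row); // one output element per message
        }
    }

    public static void main(String[] args) {
        Feed feed = new Feed(
                List.of(new Message(Map.of("key1", "abc", "key2", "2123")),
                        new Message(Map.of("key1", "vfg", "key2", "56457"))),
                "2017-10-26T04:54:16.207Z");
        List<Map<String, Object>> rows = new ArrayList<>();
        processElement(feed, rows::add);
        System.out.println(rows.size()); // 2: one row per message in the feed
    }
}
```

One input element produced two output rows without the element type ever being a list, which is exactly why the DoFn's output type should be TableRow rather than List<TableRow>.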

Hi Eugene, I just added .withFailedInsertRetryPolicy(InsertRetryPolicy.alwaysRetry()), and that surfaced the problem: for one of the keys I was dynamically setting a Timestamp value where the schema expected a String. The inserts now work perfectly. Thank you so much for your help, it really saved me! Cheers.

Thanks, I thought so as well. I think this is quite similar to Flink's flatMap. Cheers.
The updated code, emitting each row directly and then writing the resulting PCollection to BigQuery:

List<Message> messages = feed.getMessage();
messages.forEach((message) -> {
    TableRow row = new TableRow();
    row.set("column2", message.getEventItemMap().get("key1"));
    context.output(row);
});

rows.apply(BigQueryIO.writeTableRows().to(getTable(projectId, datasetId, tableName))
        .withSchema(getSchema())
        .withCreateDisposition(CreateDisposition.CREATE_IF_NEEDED)
        .withWriteDisposition(WriteDisposition.WRITE_APPEND));