Google Cloud Dataflow: writing in Dataflow 2.X

The following code works with the Dataflow 1.9 SDK. After migrating to 2.X:

PCollection<TableRow> tableRow = ...

tableRow.apply(BigQueryIO.write()
                .to(String.format("%1$s:%2$s.%3$s",projectId, bqDataSet, bqTable))
                .withSchema(schema)
                .withWriteDisposition(BigQueryIO.Write.WriteDisposition.WRITE_APPEND));

I get:

The method apply(PTransform<? super PCollection<TableRow>,OutputT>) in the type PCollection<TableRow> is not applicable for the arguments (BigQueryIO.Write<Object>)

Have you tried using BigQueryIO.writeTableRows()?

See the Apache Beam 2.1.0 BigQueryIO documentation.
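
For example, a minimal sketch of the migrated write (assuming the same projectId, bqDataSet, bqTable, and schema variables from the question):

    import com.google.api.services.bigquery.model.TableRow;
    import org.apache.beam.sdk.io.gcp.bigquery.BigQueryIO;
    import org.apache.beam.sdk.values.PCollection;

    PCollection<TableRow> tableRow = ...;

    // writeTableRows() returns BigQueryIO.Write<TableRow>, so the
    // apply() call type-checks against PCollection<TableRow>.
    tableRow.apply(BigQueryIO.writeTableRows()
            .to(String.format("%1$s:%2$s.%3$s", projectId, bqDataSet, bqTable))
            .withSchema(schema)
            .withWriteDisposition(BigQueryIO.Write.WriteDisposition.WRITE_APPEND));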

You can try providing the TableRow type explicitly (BigQueryIO.<TableRow>write() ...), or use BigQueryIO.writeTableRows() as suggested above.

It looks like the interface became generic in 2.x, whereas earlier versions had TableRow hard-coded.
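
A minimal sketch of the explicitly typed variant (an assumption based on the Beam 2.x API; note that a plain write() also needs a format function mapping each element to a TableRow, which writeTableRows() otherwise supplies for you):

    import com.google.api.services.bigquery.model.TableRow;
    import org.apache.beam.sdk.io.gcp.bigquery.BigQueryIO;
    import org.apache.beam.sdk.values.PCollection;

    PCollection<TableRow> tableRow = ...;

    // Pin the type parameter so write() returns BigQueryIO.Write<TableRow>
    // rather than the BigQueryIO.Write<Object> from the error message.
    tableRow.apply(BigQueryIO.<TableRow>write()
            .withFormatFunction(row -> row) // identity: elements are already TableRows
            .to(String.format("%1$s:%2$s.%3$s", projectId, bqDataSet, bqTable))
            .withSchema(schema)
            .withWriteDisposition(BigQueryIO.Write.WriteDisposition.WRITE_APPEND));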