Google BigQuery: error in a BigQuery snippet


I am new to Dataflow and am trying to fetch a table's schema from BigQuery dynamically. I also need to resolve the destination table name dynamically, for which I am using a DynamicDestinations class in BigQueryIO.write().to(). The pipeline works when the destination table's schema is supplied before the pipeline executes. But to fetch the schema dynamically, I used a BigQuery snippet that takes a datasetId and tableId as input and returns the schema of the given table. When I try to run the pipeline with that snippet included, it fails with the error below.

Any help is appreciated. Thanks in advance.

Exception in thread "main" java.lang.NoSuchMethodError: com.google.api.client.googleapis.services.json.AbstractGoogleJsonClient$Builder.setBatchPath(Ljava/lang/String;)Lcom/google/api/client/googleapis/services/AbstractGoogleClient$Builder;
    at com.google.api.services.bigquery.Bigquery$Builder.setBatchPath(Bigquery.java:3519)
    at com.google.api.services.bigquery.Bigquery$Builder.<init>(Bigquery.java:3498)
    at org.apache.beam.sdk.io.gcp.bigquery.BigQueryServicesImpl.newBigQueryClient(BigQueryServicesImpl.java:881)
    at org.apache.beam.sdk.io.gcp.bigquery.BigQueryServicesImpl.access$200(BigQueryServicesImpl.java:79)
    at org.apache.beam.sdk.io.gcp.bigquery.BigQueryServicesImpl$DatasetServiceImpl.<init>(BigQueryServicesImpl.java:388)
    at org.apache.beam.sdk.io.gcp.bigquery.BigQueryServicesImpl$DatasetServiceImpl.<init>(BigQueryServicesImpl.java:345)
    at org.apache.beam.sdk.io.gcp.bigquery.BigQueryServicesImpl.getDatasetService(BigQueryServicesImpl.java:105)
    at org.apache.beam.sdk.io.gcp.bigquery.BigQueryIO$TypedRead.validate(BigQueryIO.java:676)
    at org.apache.beam.sdk.Pipeline$ValidateVisitor.enterCompositeTransform(Pipeline.java:640)
    at org.apache.beam.sdk.runners.TransformHierarchy$Node.visit(TransformHierarchy.java:656)
    at org.apache.beam.sdk.runners.TransformHierarchy$Node.visit(TransformHierarchy.java:660)
    at org.apache.beam.sdk.runners.TransformHierarchy$Node.access$600(TransformHierarchy.java:311)
    at org.apache.beam.sdk.runners.TransformHierarchy.visit(TransformHierarchy.java:245)
    at org.apache.beam.sdk.Pipeline.traverseTopologically(Pipeline.java:458)
    at org.apache.beam.sdk.Pipeline.validate(Pipeline.java:575)
    at org.apache.beam.sdk.Pipeline.run(Pipeline.java:310)
    at org.apache.beam.sdk.Pipeline.run(Pipeline.java:297)
    at project2.configTable.main(configTable.java:146)
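From the stack trace, the failing call is `AbstractGoogleJsonClient$Builder.setBatchPath`, which was added in `google-api-client` 1.23.0; an older version of that jar on the classpath produces exactly this `NoSuchMethodError`. Since the error appeared after adding the `google-cloud-bigquery` client library for the snippet, that dependency may be pulling in a conflicting `google-api-client`. A hypothetical Maven fragment to pin a single version (the version shown is an assumption; verify the one your Beam release expects with `mvn dependency:tree -Dincludes=com.google.api-client`):

```xml
<!-- Illustrative pom.xml fragment (version is an assumption):
     force the google-api-client that ships setBatchPath (added in 1.23.0).
     Check the version your Beam release resolves with:
       mvn dependency:tree -Dincludes=com.google.api-client -->
<dependency>
  <groupId>com.google.api-client</groupId>
  <artifactId>google-api-client</artifactId>
  <version>1.23.0</version>
</dependency>
```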
Code:

package project2;

import java.io.File;
import java.util.ArrayList;
import java.util.List;

import org.apache.avro.Schema;
import org.apache.beam.runners.dataflow.DataflowRunner;
import org.apache.beam.runners.dataflow.options.DataflowPipelineOptions;
import org.apache.beam.sdk.io.TextIO;
import org.apache.beam.sdk.io.gcp.bigquery.BigQueryIO;
import org.apache.beam.sdk.io.gcp.bigquery.DynamicDestinations;
import org.apache.beam.sdk.io.gcp.bigquery.TableDestination;
import org.apache.beam.sdk.io.gcp.bigquery.BigQueryIO.Write.CreateDisposition;
import org.apache.beam.sdk.io.gcp.bigquery.BigQueryIO.Write.WriteDisposition;
import org.apache.beam.sdk.options.PipelineOptionsFactory;
import org.apache.beam.sdk.options.ValueProvider.NestedValueProvider;
import org.apache.beam.sdk.transforms.DoFn;
import org.apache.beam.sdk.transforms.ParDo;
import org.apache.beam.sdk.transforms.SerializableFunction;
import org.apache.beam.sdk.transforms.View;
import org.apache.beam.sdk.transforms.DoFn.ProcessContext;
import org.apache.beam.sdk.transforms.DoFn.ProcessElement;
import org.apache.beam.sdk.values.PCollection;
import org.apache.beam.sdk.values.PCollectionView;
import org.apache.beam.sdk.values.ValueInSingleWindow;

import com.google.api.services.bigquery.model.Table;
import com.google.api.services.bigquery.model.TableFieldSchema;
import com.google.api.services.bigquery.model.TableRow;
import com.google.api.services.bigquery.model.TableSchema;
import com.google.cloud.bigquery.BigQuery;
import com.google.cloud.bigquery.BigQueryOptions;
import com.google.cloud.bigquery.Field;
import com.google.cloud.bigquery.FieldList;


import com.google.cloud.bigquery.DatasetInfo;
import com.google.cloud.bigquery.FieldValueList;
import com.google.cloud.bigquery.InsertAllRequest;
import com.google.cloud.bigquery.InsertAllResponse;
import com.google.cloud.bigquery.LegacySQLTypeName;
import com.google.cloud.bigquery.QueryJobConfiguration;

import com.google.cloud.bigquery.StandardTableDefinition;
import com.google.cloud.bigquery.TableId;
import com.google.cloud.bigquery.TableInfo;
import java.util.HashMap;
import java.util.Map;

import avro.shaded.com.google.common.collect.ImmutableList;

public class configTable {

    public static void main(String[] args) {
        // customInt is a custom PipelineOptions interface (not shown here)
        // extending DataflowPipelineOptions with a getInputFile() option.
        customInt op=PipelineOptionsFactory.as(customInt.class);
        op.setProject("my-new-project");
        op.setTempLocation("gs://train-10/projects");
        op.setWorkerMachineType("n1-standard-1");
        op.setTemplateLocation("gs://train-10/main-template-with-snippets");
        op.setRunner(DataflowRunner.class);
        org.apache.beam.sdk.Pipeline p=org.apache.beam.sdk.Pipeline.create(op);

        PCollection<TableRow> indata=p.apply("Taking side input",BigQueryIO.readTableRows().from("my-new-project:training.config"));

        PCollectionView<String> view=indata.apply("Convert to view",ParDo.of(new DoFn<TableRow, String>() {
            @ProcessElement
            public void processElement(ProcessContext c) {
                TableRow row=c.element();
                c.output(row.get("file").toString());
            }
        })).apply(View.asSingleton());  

    PCollection<TableRow> mainop =  p.apply("Taking input",TextIO.read().from(NestedValueProvider.of(op.getInputFile(), new SerializableFunction<String, String>() {

            public String apply(String input) {
                // TODO Auto-generated method stub
                return "gs://train-10/projects/"+input;
            }

        } ))).apply("Transform",ParDo.of(new DoFn<String, TableRow>() {
            @ProcessElement
            public void processElement(ProcessContext c ) {
                c.output(new TableRow().set("data", c.element()));
                }
            }));

    mainop.apply("Write data",BigQueryIO.writeTableRows().to(new DynamicDestinations<TableRow, String>() {
        @Override
        public String getDestination(ValueInSingleWindow<TableRow> element) {
            // TODO Auto-generated method stub
            String d=sideInput(view);
            String tablespec="my-new-project:training."+d;
            return tablespec;
        }
        @Override
         public List<PCollectionView<?>> getSideInputs() {
               return ImmutableList.of(view);
              }
        @Override
        public TableDestination getTable(String destination) {
            //String dest=String.format("%s:%s.%s","my-new-project","training", destination);
            String dest=destination;
            return new TableDestination(dest, dest);
        }
        @Override
        public TableSchema getSchema(String destination) {

            BigQuery bigquery = BigQueryOptions.getDefaultInstance().getService();

           com.google.cloud.bigquery.Table table=bigquery.getTable("training", destination);
           com.google.cloud.bigquery.Schema tbschema=table.getDefinition().getSchema();
           FieldList tfld=tbschema.getFields();
           List<TableFieldSchema> flds=new ArrayList<>();
            for (Field each : tfld) {
                flds.add(new TableFieldSchema().setName(each.getName()).setType(each.getType().toString()));
            }
            return new TableSchema().setFields(flds);

        }
    }).withCreateDisposition(CreateDisposition.CREATE_IF_NEEDED).withWriteDisposition(WriteDisposition.WRITE_TRUNCATE));

    p.run();
    }
}