
Python Dataflow batch job not scaling

My Dataflow job (job ID: 2020-08-18_07_55_15-14428306650890914471) is not scaling past 1 worker, even though Dataflow set the target number of workers to 1000.

The job is configured to query the Google Patents BigQuery dataset, tokenize the text using a ParDo custom function and the transformers (huggingface) library, serialize the results, and write everything to one giant parquet file.

My assumption (after running the job yesterday, which mapped a function instead of using a beam.DoFn class) was that the problem was some non-parallelizable object eliminating the scaling; hence the need to refactor the tokenization process into a class.
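For context, the difference between the two approaches looks roughly like this. This is a hypothetical sketch (the earlier Map-based job is not shown in the question, and `tokenize_record` is an illustrative name), not the actual previous pipeline:

    # Earlier approach (hypothetical reconstruction): a plain function passed to beam.Map,
    # closing over a tokenizer created at module level.
    tok = AutoTokenizer.from_pretrained('gpt2')

    def tokenize_record(x):
        txt = x['abs_text'] + ' ' + x['desc_text'] + ' ' + x['claims_text']
        return pickle.dumps(tok.encode(txt))

    # ... | beam.Map(tokenize_record) | ...

    # Refactored approach (used in the full script below): the tokenizer lives inside
    # a beam.DoFn subclass and is applied with beam.ParDo.
    # ... | beam.ParDo(TokDoFn(tok_version='gpt2', block_size=200)) | ...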

Here is the script, which is run from the command line with:

python bq_to_parquet_pipeline_w_class.py --extra_package transformers-3.0.2.tar.gz
The script:

    import os
    import re
    import argparse
    
    import google.auth
    import apache_beam as beam
    from apache_beam.options import pipeline_options
    from apache_beam.options.pipeline_options import GoogleCloudOptions
    from apache_beam.options.pipeline_options import PipelineOptions
    from apache_beam.options.pipeline_options import SetupOptions
    from apache_beam.runners import DataflowRunner
    
    
    from apache_beam.io.gcp.internal.clients import bigquery
    import pyarrow as pa
    import pickle
    from transformers import AutoTokenizer
    
    
    print('Defining TokDoFn')
    class TokDoFn(beam.DoFn):
        def __init__(self, tok_version, block_size=200):
            # Load the HuggingFace tokenizer for the requested model version.
            self.tok = AutoTokenizer.from_pretrained(tok_version)
            self.block_size = block_size
    
        def process(self, x):
            # Concatenate the abstract, description and claims text, tokenize it,
            # and emit pickled, overlapping windows of up to block_size token IDs,
            # one window starting at each token position.
            txt = x['abs_text'] + ' ' + x['desc_text'] + ' ' + x['claims_text']
            enc = self.tok.encode(txt)
    
            for idx, token in enumerate(enc):
                chunk = enc[idx:idx + self.block_size]
                serialized = pickle.dumps(chunk)
                yield serialized
    
    
    def run(argv=None, save_main_session=True):
        query_big = '''
        with data as (
          SELECT 
            (select text from unnest(abstract_localized) limit 1) abs_text,
            (select text from unnest(description_localized) limit 1) desc_text,
            (select text from unnest(claims_localized) limit 1) claims_text,
            publication_date,
            filing_date,
            grant_date,
            application_kind,
            ipc
          FROM `patents-public-data.patents.publications` 
        )
    
        select *
        FROM data
        WHERE
          abs_text is not null 
          AND desc_text is not null
          AND claims_text is not null
          AND ipc is not null
        '''
    
        query_sample = '''
        SELECT *
        FROM `client_name.patent_data.patent_samples`
        LIMIT 2;
        '''
    
        print('Start Run()')
        parser = argparse.ArgumentParser()
        known_args, pipeline_args = parser.parse_known_args(argv)
    
        '''
        Configure Options
        '''
        # Setting up the Apache Beam pipeline options.
        # We use the save_main_session option because one or more DoFn's in this
        # workflow rely on global context (e.g., a module imported at module level).
        options = PipelineOptions(pipeline_args)
        options.view_as(SetupOptions).save_main_session = save_main_session
    
        # Sets the project to the default project in your current Google Cloud environment.
        _, options.view_as(GoogleCloudOptions).project = google.auth.default()
    
        # Sets the Google Cloud Region in which Cloud Dataflow runs.
        options.view_as(GoogleCloudOptions).region = 'us-central1'
    
    
        # IMPORTANT! Adjust the following to choose a Cloud Storage location.
        dataflow_gcs_location = 'gs://client_name/dataset_cleaned_pq_classTok'
        # Dataflow Staging Location. This location is used to stage the Dataflow Pipeline and SDK binary.
        options.view_as(GoogleCloudOptions).staging_location = f'{dataflow_gcs_location}/staging'
    
        # Dataflow Temp Location. This location is used to store temporary files or intermediate results before finally outputting to the sink.
        options.view_as(GoogleCloudOptions).temp_location = f'{dataflow_gcs_location}/temp'
    
        # The directory to store the output files of the job.
        output_gcs_location = f'{dataflow_gcs_location}/output'
    
        print('Options configured per GCP Notebook Examples')
        print('Configuring BQ Table Schema for Beam')
    
    
        #Write Schema (to PQ):
        schema = pa.schema([
            ('block', pa.binary())
        ])
    
        print('Starting pipeline...')
        with beam.Pipeline(runner=DataflowRunner(), options=options) as p:
            res = (p
                   | 'QueryTable' >> beam.io.Read(beam.io.BigQuerySource(query=query_big, use_standard_sql=True))
                   | beam.ParDo(TokDoFn(tok_version='gpt2', block_size=200))
                   | beam.Map(lambda x: {'block': x})
                   | beam.io.WriteToParquet(os.path.join(output_gcs_location, f'pq_out'),
                                            schema,
                                            record_batch_size=1000)
                   )
            print('Pipeline built. Running...')
    
    if __name__ == '__main__':
        import logging
        logging.getLogger().setLevel(logging.INFO)
        logging.getLogger("transformers.tokenization_utils_base").setLevel(logging.ERROR)
        run()

The solution was twofold:

When I ran the job, the following quotas were being exceeded, all under the Compute Engine API (check your quotas here: ):

  • CPUs (I requested an increase to 50)
  • Persistent Disk Standard (GB) (I requested an increase to 12,500)
  • In-use IP addresses (I requested an increase to 50)
Note: if you read the console output while the job is running, any quota that is being exceeded should be printed as an INFO line.
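For reference, the relevant regional Compute Engine quotas can also be inspected programmatically. A minimal sketch, assuming the google-cloud-compute client library (which the pipeline itself does not use) and a placeholder project ID:

    # List Compute Engine quota usage vs. limit for the region the Dataflow job runs in.
    # Assumes application default credentials are configured; 'my-project' is a placeholder.
    from google.cloud import compute_v1

    client = compute_v1.RegionsClient()
    region = client.get(project='my-project', region='us-central1')
    for quota in region.quotas:
        print(f'{quota.metric}: {quota.usage} / {quota.limit}')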

Following Peter Kim's suggestion above, I passed --max_num_workers as part of my command:

python bq_to_parquet_pipeline_w_class.py --extra_package transformers-3.0.2.tar.gz --max_num_workers 22
And the job started scaling.
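The same cap can also be set in the pipeline code rather than on the command line. A minimal sketch, assuming the `options` object from the script above:

    # Equivalent to passing --max_num_workers 22 on the command line:
    # cap Dataflow autoscaling via WorkerOptions on the existing PipelineOptions.
    from apache_beam.options.pipeline_options import WorkerOptions

    options.view_as(WorkerOptions).max_num_workers = 22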


In summary, it would be great if there were a way to prompt the user through the Dataflow console when a quota is being reached, along with an easy way to request an increase to that quota (and the complementary quotas that should accompany it), including a suggestion for how large an increase to request.

It looks like you don't have enough quota to spin up 1000 machines.

May I ask where you are seeing that? Note that the target workers is set to 1000, while the number of actual workers stays at 1. I haven't received any notification saying I was trying to exceed a quota, so I'm not quite sure where to look to increase my quota.

Please check /iam-admin/quotas for details on whether you have enough Compute Engine CPU quota to launch 1000 workers. The number of target workers indicates how many machines Dataflow wants, and it is not limited by your quota.

@PeterKim My us-central-1 Compute Engine API CPU quota is 24, and the page shows that I am currently using 1. Could it be that, because Dataflow tried to scale straight to 1000, it never noticed it could scale to 24 and stop there?

Sounds like a bug someone has already caught.