Google Cloud Dataflow: why is my Python Dataflow job stuck at the write phase?

Tags: google-cloud-dataflow, apache-beam

I wrote a Python Dataflow job that successfully processed 300 files. Unfortunately, when I try to run it on 400 files, it gets stuck forever at the write phase.

The logs are not really helpful, but I think the problem comes from the write logic of the code. Initially, I only wanted a single output file, so I wrote:

    | 'Write' >> beam.io.WriteToText(
        known_args.output,
        file_name_suffix=".json",
        num_shards=1,
        shard_name_template=""))
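For context, here is a minimal sketch of the kind of pipeline this write sits in (the read step, the json.dumps transform, and the known_args.input option are placeholders I have assumed, not from the original post). Note that num_shards=1 forces Beam to gather every element into a single shard, which serializes the write step:

    import json

    import apache_beam as beam
    from apache_beam.options.pipeline_options import PipelineOptions

    def run(known_args, pipeline_args):
        with beam.Pipeline(options=PipelineOptions(pipeline_args)) as p:
            (p
             | 'Read' >> beam.io.ReadFromText(known_args.input)   # placeholder input
             | 'ToJson' >> beam.Map(json.dumps)                   # placeholder transform
             | 'Write' >> beam.io.WriteToText(
                 known_args.output,
                 file_name_suffix=".json",
                 num_shards=1,            # funnels all output into a single shard
                 shard_name_template=""))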
Then I removed num_shards=1 and shard_name_template="", and the job could handle more files, but it still got stuck.
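With those two arguments removed, the write looks like this (num_shards then defaults to 0, letting the runner choose the shard count and keep the write parallel):

    | 'Write' >> beam.io.WriteToText(
        known_args.output,
        file_name_suffix=".json"))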

Additional information

  • The files to process are small, each under 1 MB
  • When the num_shards and shard_name_template arguments are removed, I can see data being written to a temporary folder under the target path, but the job never completes
  • I hit the DEADLINE_EXCEEDED exception (full traceback at the end of this post) and tried to get past it by increasing --num_workers to 6 and --disk_size_gb to 30, but that did not work; a sample invocation with these flags follows this list
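For reference, this is roughly how those flags are passed on the command line (the script, project, region, and bucket names here are illustrative, not from the original post):

    python my_pipeline.py \
        --runner=DataflowRunner \
        --project=my-project \
        --region=us-central1 \
        --temp_location=gs://my-bucket/tmp \
        --num_workers=6 \
        --disk_size_gb=30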

Can you suggest an approach for troubleshooting this kind of problem?

After trying to throw more resources at the job, I finally solved this by enabling the Dataflow Shuffle service.


Just add --experiments=shuffle_mode=service to your PipelineOptions.
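A minimal sketch of setting this programmatically, assuming you construct the options in code rather than from sys.argv:

    from apache_beam.options.pipeline_options import PipelineOptions

    # Same effect as passing --experiments=shuffle_mode=service on the command line
    options = PipelineOptions(
        flags=[],
        experiments=['shuffle_mode=service'],
    )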

Comments:

In the Dataflow monitoring UI, do you still see worker activity (job metrics) when the job hits the problem? If so, things are still progressing, just taking a very long time. And later on (with no num_shards value set), do you see the autoscaler requesting more workers? Also, how much data is being processed, and is it many small files or a few very large ones?

Thanks @RezaRokni, I updated my question with answers in the additional information section. I see the activity on data processed drop to 0, and the job then fails with a timeout.

As a check on the write phase, you could run the job without writing any output. That would confirm the stall is not data-related, since problems in the data only fully show up when you run all the files; for example, I have seen regex statements that never return. If that run passes, then the processing may be causing a large fan-out of the data, which then needs to be shuffled. You may also want to consider enabling the Shuffle service for Dataflow; note that it has an associated cost based on the amount of data shuffled. A sketch of this check follows below.
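A minimal sketch of the check suggested in the last comment, with the write step dropped so that only the processing runs (ProcessFn and the input path are placeholders, not from the original post):

    import apache_beam as beam
    from apache_beam.options.pipeline_options import PipelineOptions

    with beam.Pipeline(options=PipelineOptions()) as p:
        (p
         | 'Read' >> beam.io.ReadFromText('gs://my-bucket/input/*')  # placeholder path
         | 'Process' >> beam.ParDo(ProcessFn()))  # same processing, no Write step
        # If this run completes, the stall is in the write/shuffle rather than the data.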
The DEADLINE_EXCEEDED traceback from the worker:

    Error message from worker: Traceback (most recent call last):
      File "/usr/local/lib/python3.7/site-packages/dataflow_worker/batchworker.py", line 638, in do_work
        work_executor.execute()
      File "/usr/local/lib/python3.7/site-packages/dataflow_worker/executor.py", line 179, in execute
        op.start()
      File "dataflow_worker/shuffle_operations.py", line 63, in dataflow_worker.shuffle_operations.GroupedShuffleReadOperation.start
      File "dataflow_worker/shuffle_operations.py", line 64, in dataflow_worker.shuffle_operations.GroupedShuffleReadOperation.start
      File "dataflow_worker/shuffle_operations.py", line 79, in dataflow_worker.shuffle_operations.GroupedShuffleReadOperation.start
      File "dataflow_worker/shuffle_operations.py", line 80, in dataflow_worker.shuffle_operations.GroupedShuffleReadOperation.start
      File "dataflow_worker/shuffle_operations.py", line 82, in dataflow_worker.shuffle_operations.GroupedShuffleReadOperation.start
      File "/usr/local/lib/python3.7/site-packages/dataflow_worker/shuffle.py", line 441, in __iter__
        for entry in entries_iterator:
      File "/usr/local/lib/python3.7/site-packages/dataflow_worker/shuffle.py", line 282, in __next__
        return next(self.iterator)
      File "/usr/local/lib/python3.7/site-packages/dataflow_worker/shuffle.py", line 240, in __iter__
        chunk, next_position = self.reader.Read(start_position, end_position)
      File "third_party/windmill/shuffle/python/shuffle_client.pyx", line 133, in shuffle_client.PyShuffleReader.Read
    OSError: Shuffle read failed: b'DEADLINE_EXCEEDED: (g)RPC timed out when extract-fields-three-mont-10090801-dlaj-harness-fj4v talking to extract-fields-three-mont-10090801-dlaj-harness-1f7r:12346. Server unresponsive (ping error: Deadline Exceeded, {"created":"@1602260204.931126454","description":"Deadline Exceeded","file":"third_party/grpc/src/core/ext/filters/deadline/deadline_filter.cc","file_line":69,"grpc_status":4}). Typically one can self manage this issue, please read: https://cloud.google.com/dataflow/docs/guides/common-errors#tsg-rpc-timeout'