Pandas error when extracting 1.7 million rows from Snowflake to a CSV file


I have a case where I must fetch 1.7 million records from a Snowflake table and push them to a CSV file.

But I get the following error:

File "D:\Cloud\call_test.py", line 20, in
snowflakeConnect.export_to_csv(query, path_to_csv)
File "D:\Cloud\snowflake_to_dataframe.py", line 28, in export_to_csv
df = self._cursor.fetch_pandas_all()
File "C:\AMi-space\airflow\venv\lib\site-packages\snowflake\connector\cursor.py", line 855, in fetch_pandas_all
return self._result._fetch_pandas_all(**kwargs)
File "src\snowflake\connector\arrow_result.pyx", line 259, in snowflake.connector.arrow_result.ArrowResult._fetch_pandas_all
File "pyarrow\array.pxi", line 751, in pyarrow.lib._PandasConvertible.to_pandas
File "pyarrow\table.pxi", line 1668, in pyarrow.lib.Table._to_pandas
File "C:\AMi-space\airflow\venv\lib\site-packages\pyarrow\pandas_compat.py", line 792, in table_to_blockmanager
blocks = _table_to_blocks(options, table, categories, ext_columns_dtypes)
File "C:\AMi-space\airflow\venv\lib\site-packages\pyarrow\pandas_compat.py", line 1131, in _table_to_blocks
result = pa.lib.table_to_blocks(options, block_table, categories,
File "pyarrow\table.pxi", line 1117, in pyarrow.lib.table_to_blocks
File "pyarrow\error.pxi", line 116, in pyarrow.lib.check_status
pyarrow.lib.ArrowException: Unknown error: Wrapping FEBRAMAT Federa��o Brasileira De Redes failed
Below is the code I am trying:

import snowflake.connector as sfc


class ExportToCsv:
    _cursor = None

    def __init__(self, account, user, password, role, warehouse):
        print("Connecting to Snowflake.....")
        connection = sfc.connect(
            account=account,
            user=user,
            password=password,
            role=role,
            warehouse=warehouse
        )
        self._cursor = connection.cursor()

    def export_to_csv(self, query, path_to_csv):
        try:
            self._cursor.execute(query)

            # Materializes the ENTIRE result set as one in-memory DataFrame
            df = self._cursor.fetch_pandas_all()

            df.to_csv(path_to_csv, index=False)
        finally:
            self._cursor.close()
        
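One way to sidestep the failure would be to stream the result in chunks rather than materializing all 1.7 million rows at once: the Snowflake connector's cursor also offers `fetch_pandas_batches()`, which yields one DataFrame per result chunk, and each batch can be appended to the CSV as it arrives. A minimal sketch of the append logic, using small in-memory DataFrames to stand in for the batches the cursor would yield:

```python
import pandas as pd


def write_batches_to_csv(batches, path_to_csv):
    """Append each DataFrame batch to a single CSV, writing the header only once."""
    first = True
    for df in batches:
        df.to_csv(path_to_csv, mode="w" if first else "a",
                  header=first, index=False)
        first = False


# Small in-memory frames standing in for what
# cursor.fetch_pandas_batches() would yield over a live connection.
batches = [pd.DataFrame({"id": [1, 2]}), pd.DataFrame({"id": [3, 4]})]
write_batches_to_csv(batches, "out.csv")
```

In the class above, `export_to_csv` would call `write_batches_to_csv(self._cursor.fetch_pandas_batches(), path_to_csv)` instead of `fetch_pandas_all()`, keeping peak memory at one batch rather than the whole table.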

It runs fine until about 100,000 rows are fetched, but fails after that.

Comments:

If you are writing a 1.7-million-row result to a CSV file, why not have Snowflake write it to S3 (or your platform's storage) and then move/copy the file with Python?

That is a call management has to make. For now, this is the job I have been given.

Well, management should say "there is a problem, try to solve it", and development should say "approach one did not work, now starting approach two".

@SimeonPilgrim Hahaha, I wish I could talk to management like that.
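The unload approach suggested in the comments would use Snowflake's COPY INTO a stage, so the CSV is produced server-side and never flows through pandas on the client. The table name and stage path below are hypothetical placeholders, and the file-format options shown are only one plausible combination; a small helper that just builds the SQL string, so the shape can be seen without a live connection:

```python
def build_copy_into_sql(table, stage_path):
    # table and stage_path are placeholder names for illustration.
    # COPY INTO @<stage> has Snowflake write the result files itself,
    # which avoids pulling 1.7M rows through the Arrow/pandas path.
    return (
        f"COPY INTO @{stage_path} "
        f"FROM {table} "
        "FILE_FORMAT = (TYPE = CSV COMPRESSION = NONE) "
        "HEADER = TRUE "
        "OVERWRITE = TRUE"
    )


sql = build_copy_into_sql("my_db.my_schema.big_table",
                          "my_stage/exports/big_table_")
```

The resulting string would then be passed to `cursor.execute(sql)`, and the staged files fetched afterwards (e.g. with a GET command, or directly from S3 if the stage is external).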