Pandas to_parquet fails on large datasets


I'm trying to save a very large dataset using pandas to_parquet, and it seems to fail once the data exceeds a certain size, with both the 'pyarrow' and 'fastparquet' engines. I reproduced the errors I am getting with the code below, and would be happy to hear ideas on how to overcome the issue:

Using pyarrow:

import numpy as np
import pandas as pd
from time import time

tmp_file = '/tmp/test.parquet'  # placeholder path for the output file

low = 3
high = 8
for n in np.logspace(low, high, high-low+1):
    t0 = time()
    df = pd.DataFrame.from_records([(f'ind_{x}', ''.join(['x'] * 50)) for x in range(int(n))],
                                   columns=['a', 'b']).set_index('a')
    df.to_parquet(tmp_file, engine='pyarrow', compression='gzip')
    pd.read_parquet(tmp_file, engine='pyarrow')
    print(f'10^{np.log10(int(n))} read-write took {time()-t0} seconds')

10^3.0 read-write took 0.012851715087890625 seconds
10^4.0 read-write took 0.05722832679748535 seconds
10^5.0 read-write took 0.46846866607666016 seconds
10^6.0 read-write took 4.4494054317474365 seconds
10^7.0 read-write took 43.0602171421051 seconds
---------------------------------------------------------------------------
ArrowIOError                              Traceback (most recent call last)
<ipython-input-51-cad917a26b91> in <module>()
      5     df = pd.DataFrame.from_records([(f'ind_{x}', ''.join(['x']*50)) for x in range(int(n))], columns=['a', 'b']).set_index('a')
      6     df.to_parquet(tmp_file, engine='pyarrow', compression='gzip')
----> 7     pd.read_parquet(tmp_file, engine='pyarrow')
      8     print(f'10^{np.log10(int(n))} read-write took {time()-t0} seconds')

~/.conda/envs/anaconda3/lib/python3.6/site-packages/pandas/io/parquet.py in read_parquet(path, engine, columns, **kwargs)
    255 
    256     impl = get_engine(engine)
--> 257     return impl.read(path, columns=columns, **kwargs)

~/.conda/envs/anaconda3/lib/python3.6/site-packages/pandas/io/parquet.py in read(self, path, columns, **kwargs)
    128         kwargs['use_pandas_metadata'] = True
    129         return self.api.parquet.read_table(path, columns=columns,
--> 130                                            **kwargs).to_pandas()
    131 
    132     def _validate_write_lt_070(self, df):

~/.conda/envs/anaconda3/lib/python3.6/site-packages/pyarrow/parquet.py in read_table(source, columns, nthreads, metadata, use_pandas_metadata)
    939     pf = ParquetFile(source, metadata=metadata)
    940     return pf.read(columns=columns, nthreads=nthreads,
--> 941                    use_pandas_metadata=use_pandas_metadata)
    942 
    943 

~/.conda/envs/anaconda3/lib/python3.6/site-packages/pyarrow/parquet.py in read(self, columns, nthreads, use_pandas_metadata)
    148             columns, use_pandas_metadata=use_pandas_metadata)
    149         return self.reader.read_all(column_indices=column_indices,
--> 150                                     nthreads=nthreads)
    151 
    152     def scan_contents(self, columns=None, batch_size=65536):

_parquet.pyx in pyarrow._parquet.ParquetReader.read_all()

error.pxi in pyarrow.lib.check_status()
ArrowIOError: Arrow error: Invalid: BinaryArray cannot contain more than 2147483646 bytes, have 2147483650
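
The number in the error, 2147483646 bytes, is just under 2^31: Arrow's BinaryArray addresses its string data with 32-bit offsets, so a single array cannot hold more than about 2 GiB of string bytes, and read_table fails once the column has to be reassembled in one piece. A possible workaround (a sketch only, not from the original post; it assumes the installed pyarrow forwards row_group_size from to_parquet through to write_table and exposes ParquetFile.read_row_group) is to write smaller row groups and read them back one at a time:

# Write with smaller row groups so no single column chunk approaches the
# ~2 GB offset limit (the row count below is only illustrative).
df.to_parquet(tmp_file, engine='pyarrow', compression='gzip',
              row_group_size=1_000_000)

# Read the file back one row group at a time instead of in a single call.
import pyarrow.parquet as pq

pf = pq.ParquetFile(tmp_file)
for i in range(pf.num_row_groups):
    chunk = pf.read_row_group(i).to_pandas()
    # ... process chunk here ...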

Using fastparquet:

low = 3
high = 8
for n in np.logspace(low, high, high-low+1):
    t0 = time()
    df = pd.DataFrame.from_records([(f'ind_{x}', ''.join(['x']*50)) for x in range(int(n))], columns=['a', 'b']).set_index('a')
    df.to_parquet(tmp_file, engine='fastparquet', compression='gzip')
    pd.read_parquet(tmp_file, engine='fastparquet')
    print(f'10^{np.log10(int(n))} read-write took {time()-t0} seconds')

10^3.0 read-write took 0.17770028114318848 seconds
10^4.0 read-write took 0.06351733207702637 seconds
10^5.0 read-write took 0.46896958351135254 seconds
10^6.0 read-write took 5.464379549026489 seconds
10^7.0 read-write took 50.26520347595215 seconds
---------------------------------------------------------------------------
OverflowError                             Traceback (most recent call last)
<ipython-input-49-234a889ae790> in <module>()
      4     t0 = time()
      5     df = pd.DataFrame.from_records([(f'ind_{x}', ''.join(['x']*50)) for x in range(int(n))], columns=['a', 'b']).set_index('a')
----> 6     df.to_parquet(tmp_file, engine='fastparquet', compression='gzip')
      7     pd.read_parquet(tmp_file, engine='fastparquet')
      8     print(f'10^{np.log10(int(n))} read-write took {time()-t0} seconds')

~/.conda/envs/anaconda3/lib/python3.6/site-packages/pandas/core/frame.py in to_parquet(self, fname, engine, compression, **kwargs)
   1647         from pandas.io.parquet import to_parquet
   1648         to_parquet(self, fname, engine,
-> 1649                    compression=compression, **kwargs)
   1650 
   1651     @Substitution(header='Write out the column names. If a list of strings '

~/.conda/envs/anaconda3/lib/python3.6/site-packages/pandas/io/parquet.py in to_parquet(df, path, engine, compression, **kwargs)
    225     """
    226     impl = get_engine(engine)
--> 227     return impl.write(df, path, compression=compression, **kwargs)
    228 
    229 

~/.conda/envs/anaconda3/lib/python3.6/site-packages/pandas/io/parquet.py in write(self, df, path, compression, **kwargs)
    198         with catch_warnings(record=True):
    199             self.api.write(path, df,
--> 200                            compression=compression, **kwargs)
    201 
    202     def read(self, path, columns=None, **kwargs):

~/.conda/envs/anaconda3/lib/python3.6/site-packages/fastparquet/writer.py in write(filename, data, row_group_offsets, compression, file_scheme, open_with, mkdirs, has_nulls, write_index, partition_on, fixed_text, append, object_encoding, times)
    846     if file_scheme == 'simple':
    847         write_simple(filename, data, fmd, row_group_offsets,
--> 848                      compression, open_with, has_nulls, append)
    849     elif file_scheme in ['hive', 'drill']:
    850         if append:

~/.conda/envs/anaconda3/lib/python3.6/site-packages/fastparquet/writer.py in write_simple(fn, data, fmd, row_group_offsets, compression, open_with, has_nulls, append)
    715                    else None)
    716             rg = make_row_group(f, data[start:end], fmd.schema,
--> 717                                 compression=compression)
    718             if rg is not None:
    719                 fmd.row_groups.append(rg)

~/.conda/envs/anaconda3/lib/python3.6/site-packages/fastparquet/writer.py in make_row_group(f, data, schema, compression)
    612                 comp = compression
    613             chunk = write_column(f, data[column.name], column,
--> 614                                  compression=comp)
    615             rg.columns.append(chunk)
    616     rg.total_byte_size = sum([c.meta_data.total_uncompressed_size for c in

~/.conda/envs/anaconda3/lib/python3.6/site-packages/fastparquet/writer.py in write_column(f, data, selement, compression)
    545                                    data_page_header=dph, crc=None)
    546 
--> 547     write_thrift(f, ph)
    548     f.write(bdata)
    549 

~/.conda/envs/anaconda3/lib/python3.6/site-packages/fastparquet/thrift_structures.py in write_thrift(fobj, thrift)
     49     pout = TCompactProtocol(fobj)
     50     try:
---> 51         thrift.write(pout)
     52         fail = False
     53     except TProtocolException as e:

~/.conda/envs/anaconda3/lib/python3.6/site-packages/fastparquet/parquet_thrift/parquet/ttypes.py in write(self, oprot)
   1028     def write(self, oprot):
   1029         if oprot._fast_encode is not None and self.thrift_spec is not None:
-> 1030             oprot.trans.write(oprot._fast_encode(self, [self.__class__, self.thrift_spec]))
   1031             return
   1032         oprot.writeStructBegin('PageHeader')

OverflowError: int out of range
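
Here it is the write itself that fails: at ~10^8 rows of 50-character strings a single data page presumably grows past what the 32-bit size fields of the Parquet page header can represent, and serializing that header with thrift raises the overflow. A possible workaround (again a sketch, assuming the installed fastparquet accepts row_group_offsets and that pandas forwards extra keyword arguments to the engine, as the traceback above suggests it does) is to split the data into many smaller row groups at write time:

# Ask fastparquet for smaller row groups so each page stays well below the
# 32-bit limits in the page header (the row count is only illustrative).
df.to_parquet(tmp_file, engine='fastparquet', compression='gzip',
              row_group_offsets=1_000_000)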
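The snippet below appears to be the workaround settled on: write the file as before with pyarrow, but read it back one row group at a time with fastparquet's ParquetFile.iter_row_groups, so the whole column never has to be materialized as a single in-memory array:
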
from fastparquet import ParquetFile

# Write the full dataset with pyarrow as before ...
df.to_parquet(tmp_file, engine='pyarrow', compression='gzip')

# ... but read it back one row group at a time with fastparquet
# instead of loading the whole file into a single DataFrame.
pf = ParquetFile(tmp_file)
for df in pf.iter_row_groups():
    print(df.head(n=10))
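
Each iteration only has to hold one row group in memory, so writing the file with smaller row groups in the first place (row_group_size with pyarrow, row_group_offsets with fastparquet) keeps the per-chunk footprint bounded as well.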