Python: merging multiple CSVs into a single CSV


I'm trying to merge about 5000 CSV tables into a single CSV. The individual CSV files all have the same structure, so the code should be simple, but I keep getting a "file not found" error.

Here is the code:

import glob
import os

import pandas as pd

csv_paths = set(glob.glob("folder_containing_csvs/*.csv"))
full_csv_path = "folder_containing_csvs/full_df.csv"
csv_paths -= set([full_csv_path])
for csv_path in csv_paths:
    print("csv_path", csv_path)
    df = pd.read_csv(csv_path, sep="\t")
    df[sorted(list(df.columns.values))].to_csv(full_csv_path, mode="a", header=not os.path.isfile(full_csv_path), sep="\t", index=False)
full_df = pd.read_csv(full_csv_path, sep="\t", encoding='utf-8')
full_df
The code produces the following error message:

---------------------------------------------------------------------------
FileNotFoundError                         Traceback (most recent call last)
<ipython-input-47-11ffadd03e3e> in <module>
----> 1 full_df = pd.read_csv(full_csv_path, sep="\t", encoding='utf-8')
      2 full_df

~/.local/lib/python3.6/site-packages/pandas/io/parsers.py in read_csv(filepath_or_buffer,
sep, delimiter, header, names, index_col, usecols, squeeze, prefix, mangle_dupe_cols, dtype, 
engine, converters, true_values, false_values, skipinitialspace, skiprows, skipfooter, 
nrows, na_values, keep_default_na, na_filter, verbose, skip_blank_lines, parse_dates, 
infer_datetime_format, keep_date_col, date_parser, dayfirst, cache_dates, iterator, 
chunksize, compression, thousands, decimal, lineterminator, quotechar, quoting, doublequote, 
escapechar, comment, encoding, dialect, error_bad_lines, warn_bad_lines, delim_whitespace, 
low_memory, memory_map, float_precision)
    686     )
    687 
--> 688     return _read(filepath_or_buffer, kwds)
    689 
    690 

~/.local/lib/python3.6/site-packages/pandas/io/parsers.py in _read(filepath_or_buffer, kwds)
    452 
    453     # Create the parser.
--> 454     parser = TextFileReader(fp_or_buf, **kwds)
    455 
    456     if chunksize or iterator:

~/.local/lib/python3.6/site-packages/pandas/io/parsers.py in __init__(self, f, engine, **kwds)
    946             self.options["has_index_names"] = kwds["has_index_names"]
    947 
--> 948         self._make_engine(self.engine)
    949 
    950     def close(self):

~/.local/lib/python3.6/site-packages/pandas/io/parsers.py in _make_engine(self, engine)
   1178     def _make_engine(self, engine="c"):
   1179         if engine == "c":
-> 1180             self._engine = CParserWrapper(self.f, **self.options)
   1181         else:
   1182             if engine == "python":

~/.local/lib/python3.6/site-packages/pandas/io/parsers.py in __init__(self, src, **kwds)
   1991         if kwds.get("compression") is None and encoding:
   1992             if isinstance(src, str):
-> 1993                 src = open(src, "rb")
   1994                 self.handles.append(src)
   1995 

FileNotFoundError: [Errno 2] No such file or directory: 'folder_containing_csvs/full_df.csv'

The paths that glob gives you are relative to where the script is executed from.

If your file structure looks like this:

~/code/ |
       | merge.py
       | folder_containing_csvs/  |
                                  | file1.csv
                                  | file2.csv
then you must execute the merge.py file from inside the ~/code folder.

E.g. running it like this:

~/$ python ./code/merge.py

will result in:

FileNotFoundError: [Errno 2] No such file or directory: 'folder_containing_csvs/full_df.csv'
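
If you'd rather not depend on the working directory at all, one option is to build the paths from the script's own location. This is only a minimal sketch (not part of the original code) and assumes merge.py sits next to folder_containing_csvs:

import glob
import os

import pandas as pd

# Resolve the data folder relative to this script file instead of the
# current working directory (assumes merge.py sits next to the folder).
script_dir = os.path.dirname(os.path.abspath(__file__))
csv_dir = os.path.join(script_dir, "folder_containing_csvs")

full_csv_path = os.path.join(csv_dir, "full_df.csv")
csv_paths = set(glob.glob(os.path.join(csv_dir, "*.csv"))) - {full_csv_path}

for csv_path in sorted(csv_paths):
    df = pd.read_csv(csv_path, sep="\t")
    # Append to the combined file; write the header only on the first write.
    df[sorted(df.columns)].to_csv(
        full_csv_path,
        mode="a",
        header=not os.path.isfile(full_csv_path),
        sep="\t",
        index=False,
    )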

Try the following:

import os

import pandas as pd

loc_path = '/path/to/folder/of/csvs/'   # note the trailing slash, since paths are built as loc_path + file
files = os.listdir(loc_path)
files = [file for file in files if '.csv' in file]

# now load them into a list
dfs = []
for file in files:
    dfs.append(pd.read_csv(loc_path + file, sep='\t'))

# concat the dfs list:
df = pd.concat(dfs)
# Send this df.to_csv to a location of your choice.
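
A variant of the same idea, as a sketch only: it assumes the files are tab-separated and writes the combined file outside the input folder so a re-run doesn't pick it up as input.

import glob
import os

import pandas as pd

loc_path = '/path/to/folder/of/csvs'        # example input folder
out_path = '/path/to/output/full_df.csv'    # example output location

# Read every csv in the folder and concatenate into one DataFrame.
csv_files = sorted(glob.glob(os.path.join(loc_path, '*.csv')))
df = pd.concat((pd.read_csv(f, sep='\t') for f in csv_files), ignore_index=True)

# Writing outside loc_path keeps the result out of the next glob/listdir.
df.to_csv(out_path, sep='\t', index=False)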
Just read the part about 5000 csv tables. How many rows do you need?


If they are csv files, why not
open('merge.csv', 'w').write(open('file1.csv').read() + open('file2.csv').read())
? If there are headers, strip them first.

The code works fine after moving the data into the ~/code folder. Thanks for explaining the file structure that glob needs.
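
For reference, the plain-file concatenation suggested in the comment above can be written as a short loop. A minimal sketch, assuming every file has exactly one header line and only the first header should be kept:

import glob

files = sorted(glob.glob('folder_containing_csvs/*.csv'))

with open('merge.csv', 'w') as out:
    for i, name in enumerate(files):
        with open(name) as f:
            lines = f.readlines()
        # Keep the header line only from the first file.
        out.writelines(lines if i == 0 else lines[1:])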