How to merge 200 CSV files in Python
Folks, I have 200 separate CSV files here, named from SH(1) to SH(200). I want to merge them into a single CSV file. How can I do that?

If the merged CSV is going to be used in Python, then just build the list of files to pass to the files argument of fileinput.input(), and use the csv module to read it all in one go. It depends on what you mean by "merging" -- do they have the same columns? Do they have headers? For example, if they all have the same columns and no headers, simple concatenation is sufficient: open the destination file for writing, loop over the sources opening each one for reading, copy from the opened-for-reading source into the opened-for-writing destination, close the source, and keep looping (use the with
statement to do the closing on your behalf). If they have the same columns but also have header rows, then every source file except the first needs a readline
after you open it for reading and before you copy it into the destination, to skip the header line.
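The first suggestion above (hand a file list to fileinput.input() via its files argument and read the combined stream with the csv module) could be sketched as follows; this is a minimal sketch, and the two tiny demo files are made up to stand in for SH(1).csv ... SH(200).csv:

```python
import csv
import fileinput
import glob

# Create two tiny demo files standing in for the 200 real ones.
for name, row in [("SH1.csv", "1,a"), ("SH2.csv", "2,b")]:
    with open(name, "w") as f:
        f.write(row + "\n")

# fileinput chains every file in the list into one line stream,
# and the csv module then parses the whole stream in one go.
files = sorted(glob.glob("SH*.csv"))
with fileinput.input(files=files) as merged_stream:
    rows = list(csv.reader(merged_stream))

print(rows)  # [['1', 'a'], ['2', 'b']]
```

This works for the headerless case; with headers you would still need to skip the first line of every file but the first, as described above.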
fout = open("out.csv", "a")
for num in range(1, 201):
    for line in open("sh" + str(num) + ".csv"):
        fout.write(line)
fout.close()
If the CSV files don't all have the same columns, then you need to define in what sense you want to "merge" them (like a SQL join? or "horizontally", if they all have the same number of lines? etc.) -- in that case it's hard for us to guess what you mean. You can import csv and then loop over all the CSV files, reading each one into a list. Then write the list back out to disk:
import csv

rows = []
for f in (file1, file2, ...):
    reader = csv.reader(open(f, "rb"))
    for row in reader:
        rows.append(row)

writer = csv.writer(open("some.csv", "wb"))
writer.writerows(rows)
The above is not very robust, as it has no error handling and doesn't close any of the open files.
This should work whether or not the individual files have one or more rows of CSV data in them. Also, I did not run this code, but it should give you an idea of what to do. As ghostdog74 said, but this time with headers:
fout = open("out.csv", "a")
# first file:
for line in open("sh1.csv"):
    fout.write(line)
# now the rest:
for num in range(2, 201):
    f = open("sh" + str(num) + ".csv")
    f.next()  # skip the header
    for line in f:
        fout.write(line)
    f.close()  # not really needed
fout.close()
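The "horizontal" merge mentioned earlier (every file has the same number of rows and the columns are pasted side by side) was never shown in the thread; a minimal sketch using zip could look like this, with made-up demo file names:

```python
import csv

# Demo inputs: two files with the same number of rows.
with open("left.csv", "w", newline="") as f:
    csv.writer(f).writerows([["1", "a"], ["2", "b"]])
with open("right.csv", "w", newline="") as f:
    csv.writer(f).writerows([["x"], ["y"]])

# Read each file fully, then zip the row lists together so that
# row i of the output is row i of every input, side by side.
tables = []
for name in ("left.csv", "right.csv"):
    with open(name, newline="") as f:
        tables.append(list(csv.reader(f)))

merged = [sum(rows, []) for rows in zip(*tables)]
with open("wide.csv", "w", newline="") as f:
    csv.writer(f).writerows(merged)

print(merged)  # [['1', 'a', 'x'], ['2', 'b', 'y']]
```

Note that zip silently truncates to the shortest file, so this only makes sense when the row counts really do match.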
Why can't you just do:
sed 1d sh*.csv > merged.csv
Sometimes you don't even have to use python! I just wanted to put another code sample in the basket:
from glob import glob

with open('singleDataFile.csv', 'a') as singleFile:
    for csvFile in glob('*.csv'):
        for line in open(csvFile, 'r'):
            singleFile.write(line)
A slight change to the above code, because it doesn't actually work correctly. It should be as follows:
from glob import glob

with open('main.csv', 'a') as singleFile:
    for csv in glob('*.csv'):
        if csv == 'main.csv':
            pass
        else:
            for line in open(csv, 'r'):
                singleFile.write(line)
It's easy to combine all the files in one directory and merge them:
import glob
import csv

# Open result file
with open('output.txt', 'wb') as fout:
    wout = csv.writer(fout, delimiter=',')
    interesting_files = glob.glob("*.csv")
    h = True
    for filename in interesting_files:
        print 'Processing', filename
        # Open and process file
        with open(filename, 'rb') as fin:
            if h:
                h = False
            else:
                fin.next()  # skip header
            for line in csv.reader(fin, delimiter=','):
                wout.writerow(line)
Create the list of CSV files you want to append (as filenames), then run the following code:
import pandas as pd
combined_csv = pd.concat( [ pd.read_csv(f) for f in filenames ] )
If you want to export it to a single CSV file, use this:
combined_csv.to_csv( "combined_csv.csv", index=False )
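One way to build the filenames list assumed above is glob; here is a sketch with two made-up demo files standing in for the real inputs:

```python
import glob
import pandas as pd

# Demo inputs standing in for the 200 real files.
pd.DataFrame({"a": [1]}).to_csv("chunk1.csv", index=False)
pd.DataFrame({"a": [2]}).to_csv("chunk2.csv", index=False)

# Build the filenames list with glob, then concatenate as above.
filenames = sorted(glob.glob("chunk*.csv"))
combined_csv = pd.concat([pd.read_csv(f) for f in filenames])
print(list(combined_csv["a"]))  # [1, 2]
```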
I modified what @wisty said to work with Python 3.x; for those who have an encoding problem, I also use the os module to avoid hard coding.
import os

def merge_all():
    os.chdir('C:\\python\\data\\')
    fout = open("merged_files.csv", "ab")
    # first file:
    for line in open("file_1.csv", 'rb'):
        fout.write(line)
    # now the rest:
    file_list = os.listdir(os.getcwd())  # os.chdir returns None, so list the cwd
    number_files = len(file_list)
    for num in range(2, number_files):
        f = open("file_" + str(num) + ".csv", 'rb')
        f.__next__()  # skip the header
        for line in f:
            fout.write(line)
        f.close()  # not really needed
    fout.close()
Here is a script that:
- Concatenates the CSV files named SH1.csv to SH200.csv
- Keeps the headers

import glob
import re

# Looking for filenames like 'SH1.csv' ... 'SH200.csv'
pattern = re.compile("^SH([1-9]|[1-9][0-9]|1[0-9][0-9]|200).csv$")
file_parts = [name for name in glob.glob('*.csv') if pattern.match(name)]

with open("file_merged.csv", "wb") as file_merged:
    for (i, name) in enumerate(file_parts):
        with open(name, "rb") as file_part:
            if i != 0:
                next(file_part)  # skip header if not first file
            file_merged.write(file_part.read())
An update of wisty's answer for python 3:
fout = open("out.csv", "a")
# first file:
for line in open("sh1.csv"):
    fout.write(line)
# now the rest:
for num in range(2, 201):
    f = open("sh" + str(num) + ".csv")
    next(f)  # skip the header
    for line in f:
        fout.write(line)
    f.close()  # not really needed
fout.close()
If you are working on linux/mac, you can do this:
from subprocess import call

script = "cat *.csv > merge.csv"
call(script, shell=True)
Let's say you have 2 CSV files like these:
csv1.csv:
id,name
1,Armin
2,Sven
csv2.csv:
id,place,year
1,Reykjavik,2017
2,Amsterdam,2018
3,Berlin,2019
And you want the result to look like this: csv3.csv:
id,name,place,year
1,Armin,Reykjavik,2017
2,Sven,Amsterdam,2018
3,,Berlin,2019
Then you can use the following snippet to do it:
import csv
import pandas as pd

# the file names
f1 = "csv1.csv"
f2 = "csv2.csv"
out_f = "csv3.csv"

# read the files
df1 = pd.read_csv(f1)
df2 = pd.read_csv(f2)

# get the keys
keys1 = list(df1)
keys2 = list(df2)

# merge both files
for idx, row in df2.iterrows():
    data = df1[df1['id'] == row['id']]
    # if a row with such an id does not exist, add the whole row
    if data.empty:
        next_idx = len(df1)
        for key in keys2:
            df1.at[next_idx, key] = df2.at[idx, key]
    # if a row with such an id exists, add only the missing keys with their values
    else:
        i = int(data.index[0])
        for key in keys2:
            if key not in keys1:
                df1.at[i, key] = df2.at[idx, key]

# save the merged files
df1.to_csv(out_f, index=False, encoding='utf-8', quotechar="", quoting=csv.QUOTE_NONE)
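For this particular two-file layout (a shared 'id' column and otherwise disjoint columns), pandas' own merge produces the same shape of result in one call; a sketch, with the demo data inlined via StringIO, and NaN handling may differ slightly from the manual loop above:

```python
import io
import pandas as pd

# Same demo data as csv1.csv / csv2.csv above, inlined for the sketch.
csv1 = io.StringIO("id,name\n1,Armin\n2,Sven\n")
csv2 = io.StringIO("id,place,year\n1,Reykjavik,2017\n2,Amsterdam,2018\n3,Berlin,2019\n")

df1 = pd.read_csv(csv1)
df2 = pd.read_csv(csv2)

# An outer join on 'id' keeps rows from both sides; the row with
# id 3 gets NaN in the 'name' column.
merged = df1.merge(df2, on="id", how="outer")
print(merged.shape)  # (3, 4)
```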
With the help of a loop, you could achieve the same result for multiple files, as in your case (200 CSV files). If the files are not numbered in order, take the hassle-free approach below. Python 3.6 on a windows machine:
import pandas as pd
from glob import glob

interesting_files = glob("C:/temp/*.csv")  # grabs all the csv files from the directory you mention here

df_list = []
for filename in sorted(interesting_files):
    df_list.append(pd.read_csv(filename))
full_df = pd.concat(df_list)

# save the final file in the same/a different directory:
full_df.to_csv("C:/temp/merged_pandas.csv", index=False)
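One caveat with sorted() on names like SH1.csv ... SH200.csv: plain alphabetical order puts SH10.csv before SH2.csv. If row order matters, a numeric sort key avoids that; a sketch, extracting the number with a regex over a made-up unsorted list:

```python
import re

# Hypothetical unsorted glob result.
names = ["SH10.csv", "SH2.csv", "SH1.csv"]

def file_number(name):
    # Pull the integer out of names like 'SH10.csv'.
    return int(re.search(r"\d+", name).group())

filenames = sorted(names, key=file_number)
print(filenames)  # ['SH1.csv', 'SH2.csv', 'SH10.csv']
```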
Or, you could just do:
cat sh*.csv > merged.csv
An easy-to-use function:
def csv_merge(destination_path, *source_paths):
    '''
    Merges all csv files on source_paths to destination_path.
    :param destination_path: Path of a single csv file, doesn't need to exist
    :param source_paths: Paths of csv files to be merged into, needs to exist
    :return: None
    '''
    source_paths = list(source_paths)  # *args is a tuple, which has no pop()
    with open(destination_path, "a") as dest_file:
        with open(source_paths[0]) as src_file:
            for src_line in src_file:  # iterate lines, not characters
                dest_file.write(src_line)
        source_paths.pop(0)
        for i in range(len(source_paths)):
            with open(source_paths[i]) as src_file:
                next(src_file)  # skip the header
                for src_line in src_file:
                    dest_file.write(src_line)
Using @Adders' solution, later improved by @varun, I implemented a little improvement too, leaving the whole merged CSV with only the main header:
from glob import glob

filename = 'main.csv'
with open(filename, 'a') as singleFile:
    first_csv = True
    for csv in glob('*.csv'):
        if csv == filename:
            pass
        else:
            header = True
            for line in open(csv, 'r'):
                if first_csv and header:
                    singleFile.write(line)
                    first_csv = False
                    header = False
                elif header:
                    header = False
                else:
                    singleFile.write(line)
# no explicit close needed; the with statement closes the file
Best regards!!!

You could simply use the pandas library. This solution will work even if some of your CSV files have slightly different column names or headers than the others.
import pandas as pd
import os

df = pd.read_csv("e:\\data science\\kaggle assign\\monthly sales\\Pandas-Data-Science-Tasks-master\\SalesAnalysis\\Sales_Data\\Sales_April_2019.csv")
files = [file for file in os.listdir("e:\\data science\\kaggle assign\\monthly sales\\Pandas-Data-Science-Tasks-master\\SalesAnalysis\\Sales_Data")]
for file in files:
    print(file)

all_data = pd.DataFrame()
for file in files:
    df = pd.read_csv("e:\\data science\\kaggle assign\\monthly sales\\Pandas-Data-Science-Tasks-master\\SalesAnalysis\\Sales_Data\\" + file)
    all_data = pd.concat([all_data, df])
all_data.head()
import csv
import glob

filenames = [i for i in glob.glob("SH*.csv")]

header_keys = []
merged_rows = []

for filename in filenames:
    with open(filename) as f:
        reader = csv.DictReader(f)
        merged_rows.extend(list(reader))
        header_keys.extend([key for key in reader.fieldnames if key not in header_keys])

with open("combined.csv", "w") as f:
    w = csv.DictWriter(f, fieldnames=header_keys)
    w.writeheader()
    w.writerows(merged_rows)
The merged file will contain all possible columns (header_keys) that can be found in the files. Any missing columns in a file are rendered as blank/empty (but the rest of the file's data is preserved).

Note:
- This won't work if your CSV files have no headers. In that case you can still use the csv library, but instead of DictReader and DictWriter you'll have to use the basic reader and writer.
- This may run into problems when you are dealing with massive data, since the entirety of the content is stored in memory (the merged_rows list).
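For the headerless case mentioned in the note, a minimal sketch with the basic reader and writer could look like this (the demo file names are made up):

```python
import csv

# Demo headerless inputs.
with open("nh1.csv", "w", newline="") as f:
    csv.writer(f).writerows([["1", "a"]])
with open("nh2.csv", "w", newline="") as f:
    csv.writer(f).writerows([["2", "b"]])

# No headers to skip: read every row from every file and write
# them all out with the plain reader/writer pair.
all_rows = []
for name in ("nh1.csv", "nh2.csv"):
    with open(name, newline="") as f:
        all_rows.extend(csv.reader(f))

with open("nh_combined.csv", "w", newline="") as f:
    csv.writer(f).writerows(all_rows)

print(all_rows)  # [['1', 'a'], ['2', 'b']]
```

This only makes sense when all files share the same column layout, since there are no field names to align on.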
def concat_files_with_header(output_file, *paths):
    for i, path in enumerate(paths):
        with open(path) as input_file:
            if i > 0:
                next(input_file)  # Skip header
            output_file.writelines(input_file)
Example usage of the function:
if __name__ == "__main__":
    paths = [f"sh{i}.csv" for i in range(1, 201)]
    with open("output.csv", "w") as output_file:
        concat_files_with_header(output_file, *paths)
Comments:
- In what way do you want to merge them? (concatenating the lines, ...)
- How do you want to merge them? Each row in a CSV file is a line, so a simple option is just concatenating all the files together.
- Each file has two columns. I want to merge them into one continuous two-column file.
- @Chuck: how about updating your question with the approaches from your comments (to the question and to answers), so all the replies are covered?
- The question should be named "How to concatenate..." rather than "How to merge...".
- Each file has two columns with headers. I want to merge them into one continuous two-column file.
- On windows: C:\> copy *.csv merged.csv
- To copy the header info from one file: sed -n 1p some_file.csv > merged_file.csv. To copy everything except the first line from all the other files: sed 1d *.csv >> merged_file.csv
- @blinsay It adds the header in each CSV file to the merged file, too. How do I use this command