How to read multiple CSV files faster with Python

My program needs to read about 400,000 CSV files, and it takes a very long time. The code I am using is:

        for file in self.files:
            size = 2048
            # Read only 10 rows from the middle of each file: skip the first
            # half, then drop everything past the first 10 remaining rows.
            csvData = pd.read_csv(file, sep='\t', names=['acol', 'bcol'],
                                  header=None, skiprows=range(0, int(size/2)),
                                  skipfooter=(int(size/2) - 10))

            s = 0
            for index in range(0, 10):
                s = s + float(csvData['bcol'][index])
            s = s / 10
            averages.append(s)
            # Pull the numeric timestamp out of the file name.
            time = file.rpartition('\\')[2]
            time = int(re.search(r'\d+', time).group())
            times.append(time)

Is there any way to speed this up?

You can use threading. I took the code from your use case and adapted it as follows:

    import re
    from threading import Thread

    import pandas as pd

    times = []
    averages = []

    def my_func(file):
        size = 2048
        csvData = pd.read_csv(file, sep='\t', names=['acol', 'bcol'],
                              header=None, skiprows=range(0, int(size/2)),
                              skipfooter=(int(size/2) - 10))

        s = 0
        for index in range(0, 10):
            s = s + float(csvData['bcol'][index])
        s = s / 10
        averages.append(s)
        time = file.rpartition('\\')[2]
        time = int(re.search(r'\d+', time).group())
        times.append(time)

    threads = []
    # In this case 'self.files' is the list of files to be read.
    for file in self.files:
        # We start one thread per file present.
        process = Thread(target=my_func, args=[file])
        process.start()
        threads.append(process)
    # We now pause execution on the main thread by 'joining' all of our
    # started threads. This ensures each has finished processing its file.
    for process in threads:
        process.join()
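
One caveat: spawning one thread per file will not scale to ~400,000 files, and appending to shared lists from many threads leaves `times` and `averages` in arbitrary order. A bounded pool that returns results is the usual fix. Below is a minimal sketch using the standard-library `concurrent.futures.ThreadPoolExecutor`; the `read_file` helper, the `files` list, and the worker count are illustrative assumptions, not part of the original code:

    import re
    from concurrent.futures import ThreadPoolExecutor

    import pandas as pd

    def read_file(file):
        # Hypothetical helper: return (timestamp, average) for one file.
        size = 2048
        data = pd.read_csv(file, sep='\t', names=['acol', 'bcol'],
                           header=None, skiprows=range(0, int(size/2)),
                           skipfooter=(int(size/2) - 10),
                           engine='python')  # skipfooter needs the python engine
        avg = float(data['bcol'][:10].astype(float).mean())
        stamp = int(re.search(r'\d+', file.rpartition('\\')[2]).group())
        return stamp, avg

    # 'files' is assumed to be your list of ~400,000 paths.
    with ThreadPoolExecutor(max_workers=8) as pool:
        results = list(pool.map(read_file, files))

    times = [t for t, _ in results]
    averages = [a for _, a in results]

`pool.map` preserves the input order, so `times` and `averages` stay aligned with the file list without any locking.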

You can use multithreading or multiple processes to speed this up. Have a look for similar existing questions; they may help as well.
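
Since pandas parsing is partly CPU-bound and Python threads share the GIL, a process pool can parallelize the parsing itself. Here is a minimal sketch using the standard-library `multiprocessing.Pool`; the `read_file` helper, the glob pattern, and the `chunksize` value are illustrative assumptions:

    import glob
    import re
    from multiprocessing import Pool

    import pandas as pd

    def read_file(file):
        # Hypothetical helper: return (timestamp, average) for one file.
        size = 2048
        data = pd.read_csv(file, sep='\t', names=['acol', 'bcol'],
                           header=None, skiprows=range(0, int(size/2)),
                           skipfooter=(int(size/2) - 10),
                           engine='python')  # skipfooter needs the python engine
        stamp = int(re.search(r'\d+', file.rpartition('\\')[2]).group())
        return stamp, float(data['bcol'][:10].astype(float).mean())

    if __name__ == '__main__':  # required so worker processes can import this module
        files = glob.glob(r'C:\data\*.csv')  # assumed location of the CSV files
        # One worker per CPU core by default; a large chunksize amortises the
        # inter-process overhead of ~400,000 tiny tasks.
        with Pool() as pool:
            results = pool.map(read_file, files, chunksize=256)
        times, averages = map(list, zip(*results))

Whether threads or processes win depends on where the time goes: if the bottleneck is disk I/O, threads are usually enough; if it is parsing, processes help more.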