Reading files with multiprocessing in Python 3
I have very large files, each almost 2 GB, so I would like to process several files in parallel. I can do that because all the files have a similar format, so reading them can happen in parallel. I know I should use the multiprocessing library, but I am really confused about how to use it in my code. My file-reading code is:
def file_reading(file, num_of_sample, segsites, positions, snp_matrix):
    with open(file, buffering=2000009999) as f:
        ### I read the file here. I am not putting that code here.
        try:
            assert len(snp_matrix) == len(positions)
            return positions, snp_matrix  ## return statement
        except:
            print('length of snp matrix and length of position vector not the same.')
            sys.exit(1)
My main function is:
if __name__ == "__main__":
    segsites = []
    positions = []
    snp_matrix = []
    path_to_directory = '/dataset/example/'
    extension = '*.msOut'
    num_of_samples = 162
    filename = glob.glob(path_to_directory + extension)
    ### How can I use multiprocessing with the function file_reading?
    number_of_workers = 10
    x, y, z = [], [], []
    array_of_number_tuple = [(filename[file], segsites, positions, snp_matrix) for file in range(len(filename))]
    with multiprocessing.Pool(number_of_workers) as p:
        pos, snp = p.map(file_reading, array_of_number_tuple)
        x.extend(pos)
        y.extend(snp)
So with this input the call to the function fails with:

TypeError: file_reading() missing 3 required positional arguments: 'segsites', 'positions', and 'snp_matrix'

The elements of the list you pass to Pool.map are not unpacked automatically, so your file_reading function can in general take only a single argument. That argument can of course be a tuple, so it is no problem to unpack it yourself:
import sys
import glob
import multiprocessing

def file_reading(args):
    file, num_of_sample, segsites, positions, snp_matrix = args
    with open(file, buffering=2000009999) as f:
        ### I read the file here. I am not putting that code here.
        try:
            assert len(snp_matrix) == len(positions)
            return positions, snp_matrix  ## return statement
        except:
            print('length of snp matrix and length of position vector not the same.')
            sys.exit(1)
if __name__ == "__main__":
    segsites = []
    positions = []
    snp_matrix = []
    path_to_directory = '/dataset/example/'
    extension = '*.msOut'
    num_of_samples = 162
    filename = glob.glob(path_to_directory + extension)
    number_of_workers = 10
    x, y, z = [], [], []
    array_of_number_tuple = [(filename[file], num_of_samples, segsites, positions, snp_matrix) for file in range(len(filename))]
    with multiprocessing.Pool(number_of_workers) as p:
        pos, snp = p.map(file_reading, array_of_number_tuple)
        x.extend(pos)
        y.extend(snp)
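Note that p.map returns a list with one (positions, snp_matrix) tuple per input file, so unpacking its return value into pos, snp only works when there happen to be exactly two files. A minimal sketch of one way to collect the results instead (the names results, all_positions and all_matrices are illustrative, not from the original post):

with multiprocessing.Pool(number_of_workers) as p:
    # one (positions, snp_matrix) tuple per file, in the same order as the input
    results = p.map(file_reading, array_of_number_tuple)
# split the list of tuples into two parallel lists
all_positions = [pos for pos, snp in results]
all_matrices = [snp for pos, snp in results]

Alternatively, Pool.starmap unpacks each tuple into separate arguments, so file_reading could keep its original five-parameter signature:

with multiprocessing.Pool(number_of_workers) as p:
    results = p.starmap(file_reading, array_of_number_tuple)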
For the future, it might help to read the following: (I think the question has now changed quite a lot from the original one; although you probably want to ask the same thing, what is written is somewhat different.) I have therefore deleted my answer, since it no longer makes sense... Also, you should read that and try to make your question match what is described there as closely as possible.