Python: merge all h5 files into one using h5py
I am new to coding. Could someone help with a Python script using h5py that reads all directories and subdirectories and merges multiple h5 files into a single h5 file?

What you need is a list of all the datasets in the file. I think the notion of a recursive function is what is needed here. It allows you to extract all "datasets" from a group and, whenever one of the entries turns out to be a group itself, to do the same thing recursively until every dataset has been found. For example:
/
|- dataset1
|- group1
   |- dataset2
   |- dataset3
|- dataset4
In pseudocode, the function should look like this:
def getdatasets(key, file):
    out = []
    for name in file[key]:
        path = join(key, name)
        if file[path] is dataset: out += [path]
        else:                     out += getdatasets(path, file)
    return out
For our example:
/dataset1 is a dataset: add its path to the output, giving
out = ['/dataset1']
/group1 is not a dataset: call getdatasets('/group1', file)
/group1/dataset2 is a dataset: add its path to the nested output, giving
nested_out = ['/group1/dataset2']
/group1/dataset3 is a dataset: add its path to the nested output, giving
nested_out = ['/group1/dataset2', '/group1/dataset3']
The nested output is then appended to what we already had:
out = ['/dataset1', '/group1/dataset2', '/group1/dataset3']
/dataset4 is a dataset: add its path to the output, giving
out = ['/dataset1', '/group1/dataset2', '/group1/dataset3', '/dataset4']
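If you want to follow along, here is a minimal sketch that creates an old.hdf5 file with exactly this layout (the file name and the dummy array contents are assumptions, chosen to match the clone script below):

import h5py
import numpy as np

# build the example layout: two root-level datasets and a group with two datasets
with h5py.File('old.hdf5', 'w') as f:
    f.create_dataset('dataset1', data=np.arange(3))
    group = f.create_group('group1')
    group.create_dataset('dataset2', data=np.arange(3))
    group.create_dataset('dataset3', data=np.arange(3))
    f.create_dataset('dataset4', data=np.arange(3))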
To make a simple clone, you could do the following:
import h5py
import numpy as np

# function to return a list of paths to each dataset
def getdatasets(key, archive):
    if key[-1] != '/': key += '/'
    out = []
    for name in archive[key]:
        path = key + name
        if isinstance(archive[path], h5py.Dataset):
            out += [path]
        else:
            out += getdatasets(path, archive)
    return out
# open the HDF5 files
data     = h5py.File('old.hdf5', 'r')
new_data = h5py.File('new.hdf5', 'w')

# read the paths of all datasets in the old HDF5 file
datasets = getdatasets('/', data)

# get the group names from the list of datasets
groups = list(set([i[::-1].split('/', 1)[1][::-1] for i in datasets]))
groups = [i for i in groups if len(i) > 0]

# sort the groups based on depth, so parents are created before children
idx    = np.argsort(np.array([len(i.split('/')) for i in groups]))
groups = [groups[i] for i in idx]

# create all groups that contain a dataset that will be copied
for group in groups:
    new_data.create_group(group)

# copy datasets
for path in datasets:
    # - get the group name
    group = path[::-1].split('/', 1)[1][::-1]
    # - minimal group name
    if len(group) == 0: group = '/'
    # - copy the data
    data.copy(path, new_data[group])
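As a side note, instead of the recursive helper you could let h5py walk the hierarchy for you with its built-in visititems; here is a sketch of the same dataset listing (the collect name is just an illustration):

# alternative: h5py visits every object and passes its path (without the leading '/')
datasets = []

def collect(name, obj):
    if isinstance(obj, h5py.Dataset):
        datasets.append('/' + name)

data.visititems(collect)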
Of course, further customization is possible depending on your needs. You describe combining several files. In that case you would have to use
new_data = h5py.File('new.hdf5','a')
and probably add something to the paths.

What have you tried? Where did you get stuck?

Hi Tom, the create_aggregate_file.py provided on GitHub uses HDF5_utils, and no such package exists. There is a reference on Stack Exchange; I tried the code below, but it failed while reading all the directories and subdirectories for the .h5 extension:

d_names = os.listdir(os.getcwd())
d_struct = {}
for i in d_names:
    f = HDF5.File(i, 'r+')
    d_struct[i] = f.keys()
    f.close()
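To cover the "all directories and subdirectories" part of the question, a minimal sketch of the merge could look as follows. The output name merged.hdf5, the .h5 extension filter, and the per-file group naming are all assumptions; copying each file into its own group keeps identically named datasets from colliding:

import os
import h5py

# walk all directories and subdirectories and merge every .h5 file found
with h5py.File('merged.hdf5', 'w') as merged:
    for root, dirs, files in os.walk('.'):
        for fname in files:
            if not fname.endswith('.h5'):
                continue
            path = os.path.join(root, fname)
            # one group per source file, named after its relative path (assumption)
            dest = merged.create_group(os.path.splitext(os.path.relpath(path))[0])
            with h5py.File(path, 'r') as src:
                for name in src:
                    src.copy(name, dest)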