
Python: extracting patches and reconstructing the image


I'm attempting a segmentation task. The images are 3D volumes, and since I can't process them whole due to GPU memory limits, I'm extracting patches from each image and operating on those.

To extract the patches I'm using:

    import numpy as np
    from numpy.lib import stride_tricks

    def cutup(data, blck, strd):
        sh = np.array(data.shape)
        blck = np.asanyarray(blck)
        strd = np.asanyarray(strd)
        nbl = (sh - blck) // strd + 1          # number of blocks per axis
        strides = np.r_[data.strides * strd, data.strides]
        dims = np.r_[nbl, blck]
        data6 = stride_tricks.as_strided(data, strides=strides, shape=dims)
        return data6.reshape(-1, *blck)        # flat list of patches

    # (assumes os, tqdm, numpy are imported and read_image_and_seg is defined elsewhere)
    def make_patches(image_folder, mask_folder):
        '''
        Given .nii.gz image and mask files, save the non-uniform patches as .npy files.
        '''
        for mask_file in tqdm.tqdm(os.listdir(mask_folder)):
            # The image filename is the part of the mask filename before the first '_'.
            image_name = mask_file.split('_')[0]
            image, mask = read_image_and_seg(os.path.join(image_folder, image_name),
                                             os.path.join(mask_folder, mask_file))
            if image.shape[1] > 600:
                image = image[:, :600, :]
            # Zero-pad every volume up to one fixed size so they all yield the same patch grid.
            desired_size_w = 896
            desired_size_h = 600
            desired_size_z = 600
            delta_w = desired_size_w - image.shape[0]
            delta_h = desired_size_h - image.shape[1]
            delta_z = desired_size_z - image.shape[2]

            padded_image = np.pad(image, ((0, delta_w), (0, delta_h), (0, delta_z)), 'constant')
            padded_mask  = np.pad(mask,  ((0, delta_w), (0, delta_h), (0, delta_z)), 'constant')
            # Non-overlapping patches; a smaller stride would extract more (overlapping) patches.
            y  = cutup(padded_image, (128, 128, 128), (128, 128, 128))
            y_ = cutup(padded_mask,  (128, 128, 128), (128, 128, 128))
            print(image_name)
            for index, (im, label) in enumerate(zip(y, y_)):
                # Skip patches that contain only a single value (e.g. all background).
                if len(np.unique(im)) == 1:
                    continue
                out_name = image_name.split('.')[0] + str(index)
                if not os.path.exists(os.path.join('../data/patches/images/', out_name)):
                    np.save(os.path.join('../data/patches/images/', out_name), im)
                    np.save(os.path.join('../data/patches/masks/', out_name), label)
Now, this extracts non-overlapping patches and gives me the patches as numpy arrays. I am converting the image to shape (896, 640, 640) (padding with zeros) so that I can extract all the patches.
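One way to check that the extraction is doing the right thing is a round trip on a small toy volume: reshape the flat patch list back into the block grid, interleave the block axes with the within-block axes, and compare against the input. A minimal sketch (assuming non-overlapping patches, i.e. stride equal to block size, and sides divisible by the block size; `cutup` is copied from the question to keep this self-contained):

```python
import numpy as np
from numpy.lib import stride_tricks

# Copy of cutup from the question, for a self-contained check.
def cutup(data, blck, strd):
    sh = np.array(data.shape)
    blck = np.asanyarray(blck)
    strd = np.asanyarray(strd)
    nbl = (sh - blck) // strd + 1
    strides = np.r_[data.strides * strd, data.strides]
    dims = np.r_[nbl, blck]
    data6 = stride_tricks.as_strided(data, strides=strides, shape=dims)
    return data6.reshape(-1, *blck)

# Small volume whose sides are exact multiples of the block size.
vol = np.arange(4 * 4 * 4, dtype=np.float32).reshape(4, 4, 4)
patches = cutup(vol, (2, 2, 2), (2, 2, 2))      # shape (8, 2, 2, 2)

# Invert: flat patch list -> block grid -> interleave block/within-block axes.
grid = patches.reshape(2, 2, 2, 2, 2, 2)
recon = grid.transpose(0, 3, 1, 4, 2, 5).reshape(4, 4, 4)

assert np.array_equal(recon, vol)
```

The same reshape/transpose/reshape inversion scales to the real sizes, e.g. `patches.reshape(7, 5, 5, 128, 128, 128)` for a (896, 640, 640) volume with 128-voxel blocks.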

The problem is that I don't know whether the code above actually works! To test it, I want to extract the patches and then reconstruct the image from those patches, and now I'm not sure how to do that.

This is what I have so far:

    def reconstruct_image(folder_path_of_npy_files):
        recon_image = np.array([])
        for file in os.listdir(folder_path_of_npy_files):
            read_image = np.load(os.path.join(folder_path_of_npy_files, file))
            recon_image = np.append(recon_image, read_image)
        return recon_image
But this doesn't work: it produces an (x, 128, 128) array and just keeps appending along dimension 0.

So my question is: how do I reconstruct the image? Or is there a better way of extracting and reconstructing the patches altogether?
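One pitfall in reconstructing from saved files is ordering: `os.listdir` guarantees no particular order, and lexicographic sorting puts index 10 before index 2. A sketch of a reconstruction that instead places each patch by its flat index (assuming non-overlapping patches and that the index appended to each filename is the flat position produced by the extraction; patches skipped at save time stay zero). It uses an in-memory dict in place of files to stay self-contained:

```python
import numpy as np

def reconstruct(patches_by_index, padded_shape, blck):
    """Place patches back by flat index; missing (skipped) patches stay zero.

    patches_by_index: {flat_index: patch array of shape blck}
    """
    nbl = tuple(s // b for s, b in zip(padded_shape, blck))  # block-grid shape
    recon = np.zeros(padded_shape,
                     dtype=next(iter(patches_by_index.values())).dtype)
    for idx, patch in patches_by_index.items():
        b0, b1, b2 = np.unravel_index(idx, nbl)
        recon[b0*blck[0]:(b0+1)*blck[0],
              b1*blck[1]:(b1+1)*blck[1],
              b2*blck[2]:(b2+1)*blck[2]] = patch
    return recon

# Round trip on a toy volume.
vol = np.arange(64, dtype=np.float32).reshape(4, 4, 4)
blck = (2, 2, 2)
patches = {i: vol[a:a+2, b:b+2, c:c+2]
           for i, (a, b, c) in enumerate(
               (a, b, c) for a in (0, 2) for b in (0, 2) for c in (0, 2))}
assert np.array_equal(reconstruct(patches, (4, 4, 4), blck), vol)
```

With real files you would build `patches_by_index` by parsing the numeric suffix out of each filename before `np.load`-ing it.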


Thanks in advance.

If your case is fairly simple (non-overlapping blocks, not a sliding window), you can use `skimage.util.view_as_blocks`. For example:

import numpy as np
from skimage.util import view_as_blocks

# Create example
data = np.random.random((200, 200, 200))

blocks = view_as_blocks(data, (10, 10, 10))  # shape (20, 20, 20, 10, 10, 10)

# Do the processing on the blocks here.
processed_blocks = blocks

# Interleave the block axes with the within-block axes before reshaping;
# a plain reshape of the block array would scramble the voxel order.
new_data = processed_blocks.transpose(0, 3, 1, 4, 2, 5).reshape(200, 200, 200)
However, if memory constraints are your problem, this may not be the best approach, because you will be copying the original data several times (data, blocks, new_data, and so on), and you will probably need to be a bit smarter than my example here.
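If the copies are what hurts, one memory-light alternative (a sketch, not part of the original answer) is to process the blocks through slices of the original array. Slices are views, so no second full-size array is ever allocated:

```python
import numpy as np

data = np.random.random((200, 200, 200))
orig_sum = data.sum()
blck = 10

# Each slice is a view into `data`; processing happens in place, no full-size copy.
for i in range(0, data.shape[0], blck):
    for j in range(0, data.shape[1], blck):
        for k in range(0, data.shape[2], blck):
            block = data[i:i+blck, j:j+blck, k:k+blck]
            block *= 2.0  # stand-in for the real per-block processing
```

This only works when the processing can be done in place and per block; anything that needs a differently-shaped output still has to allocate it.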

Another thing you can do, very carefully, if you have memory problems is to change the underlying datatype of your data. For example, when I worked with MRI data, most of the raw data was integer-ish, but Python represented it as float64. If some rounding of the data is acceptable, you can do something like:

import numpy as np
from skimage.util import view_as_blocks

# Create example
data = 200 * np.random.random((200, 200, 200)).astype(np.float16)  # 2-byte floats

blocks = view_as_blocks(data, (10, 10, 10))

# Do the processing on the blocks here.

new_data = blocks.transpose(0, 3, 1, 4, 2, 5).reshape(200, 200, 200)
This version uses:

In [2]: whos
Variable   Type       Data/Info
-------------------------------
blocks     ndarray    20x20x20x10x10x10: 8000000 elems, type `float16`, 16000000 bytes (15.2587890625 Mb)
data       ndarray    200x200x200: 8000000 elems, type `float16`, 16000000 bytes (15.2587890625 Mb)
new_data   ndarray    200x200x200: 8000000 elems, type `float16`, 16000000 bytes (15.2587890625 Mb)
compared with the first version:

In [2]: whos
Variable   Type       Data/Info
-------------------------------
blocks     ndarray    20x20x20x10x10x10: 8000000 elems, type `float64`, 64000000 bytes (61.03515625 Mb)
data       ndarray    200x200x200: 8000000 elems, type `float64`, 64000000 bytes (61.03515625 Mb)
new_data   ndarray    200x200x200: 8000000 elems, type `float64`, 64000000 bytes (61.03515625 Mb)
So switching to `np.float16` saves roughly a factor of 4 in RAM.

However, making this kind of change amounts to making assumptions about your data and your algorithm (possible rounding problems, and so on), so be careful.
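The rounding risk is easy to quantify before committing to the cast. float16 has a 10-bit mantissa (relative rounding error around 5e-4) and a maximum finite value of about 65504, above which values overflow to inf. A quick check on values in the range used above:

```python
import numpy as np

# Values bounded away from zero so relative error is meaningful.
data = 1.0 + 199.0 * np.random.random((100, 100))   # float64 original
cast = data.astype(np.float16)

rel_err = np.max(np.abs(cast.astype(np.float64) - data) / data)
print(f"max relative error after float16 cast: {rel_err:.1e}")  # on the order of 5e-4

# float16 has a narrow range too: values above ~65504 overflow to inf.
print(np.float16(70000.0))
```

If the measured error is acceptable for your segmentation labels and intensities, the 4x memory saving is essentially free; if not, float32 is a middle ground at 2x.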