
Machine learning: denoising autoencoder for large images


I want to create a denoising autoencoder that works on images of any shape. Most of the solutions I have found handle images no larger than (500, 500), while my images are document scans of shape (3000, 20000). I tried reshaping the images and building the model, but the predictions were wrong. Can anyone help me?


I tried to build the model with the code from here, adapting it to my image shape, but the prediction failed.

I already have a document denoiser. For a large image you don't need a separate model: you can simply split the image into chunks, feed each chunk to the model, and merge the predicted chunks back together. My model accepts images of shape 512x512, so I have to split every image into 512x512 chunks. The image must be at least 512x512; if it is smaller, just resize or pad it to a 512x512 shape (see the padding sketch after the code below).

import numpy as np

def split_page(page):
    # Tile a page into 512x512 chunks. Full chunks are cut on a regular grid;
    # if the page size is not a multiple of 512, one extra chunk per row/column
    # is anchored to the right/bottom edge, overlapping its neighbour.
    chunk_size = (512, 512)
    main_size = page.shape[:2]
    chunks = []
    chunk_grid = tuple(np.array(main_size) // np.array(chunk_size))   # full chunks per axis
    extra_chunk = tuple(np.array(main_size) % np.array(chunk_size))   # leftover pixels per axis
    for yc in range(chunk_grid[0]):
        row = []
        for xc in range(chunk_grid[1]):
            chunk = page[yc*chunk_size[0]:(yc+1)*chunk_size[0], xc*chunk_size[1]:(xc+1)*chunk_size[1]]
            row.append(chunk)
        if extra_chunk[1]:
            # extra column chunk, anchored to the right edge
            chunk = page[yc*chunk_size[0]:(yc+1)*chunk_size[0], main_size[1]-chunk_size[1]:main_size[1]]
            row.append(chunk)
        chunks.append(row)
    if extra_chunk[0]:
        # extra row of chunks, anchored to the bottom edge
        row = []
        for xc in range(chunk_grid[1]):
            chunk = page[main_size[0]-chunk_size[0]:main_size[0], xc*chunk_size[1]:(xc+1)*chunk_size[1]]
            row.append(chunk)
        if extra_chunk[1]:
            chunk = page[main_size[0]-chunk_size[0]:main_size[0], main_size[1]-chunk_size[1]:main_size[1]]
            row.append(chunk)
        chunks.append(row)
    return chunks, page.shape[:2]
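
As an illustration, for the (3000, 20000) scan from the question, split_page yields a 6x40 grid: 5 full rows of chunks plus one extra row anchored at the bottom edge, and 39 full columns plus one extra anchored at the right edge. A minimal check, assuming numpy is imported as np:

chunks, osize = split_page(np.zeros((3000, 20000)))
print(len(chunks), len(chunks[0]))   # 6 40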

def merge_chunks(chunks, osize):
    # Reassemble chunks onto a page of the original size. chunks is expected
    # to be a numpy array (as built in denoise_page), so 2-D indexing like
    # chunks[i, -1] works. Because the extra row/column chunks were cut from
    # the bottom/right edges, writing them at osize-512 simply overwrites the
    # overlap region with identical data.
    page = np.ones(osize)
    for i, row in enumerate(chunks[:-1]):
        for j, chunk in enumerate(row[:-1]):
            page[i*512:i*512+512, j*512:j*512+512] = chunk
        page[i*512:i*512+512, osize[1]-512:osize[1]] = chunks[i, -1]
    # The last row always goes at the bottom edge: when the height is a
    # multiple of 512, osize[0]-512 coincides with the regular grid position,
    # so the extra-row and no-extra-row cases reduce to the same slice.
    for j, chunk in enumerate(chunks[-1][:-1]):
        page[osize[0]-512:osize[0], j*512:j*512+512] = chunk
    page[osize[0]-512:osize[0], osize[1]-512:osize[1]] = chunks[-1, -1]
    return page
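
A quick way to convince yourself that split_page and merge_chunks are mutually consistent is a round trip on a synthetic page: splitting and merging without any denoising in between should reproduce the input exactly, since the overlap regions are overwritten with identical data. A minimal sketch, assuming numpy is imported as np:

test_page = np.random.randint(0, 256, size=(1300, 2100)).astype(np.float64)
chunks, osize = split_page(test_page)
restored = merge_chunks(np.array(chunks), osize)
assert np.array_equal(restored, test_page)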

def denoise(chunk):
    # Normalise to [0, 1], add batch and channel axes (grayscale), predict,
    # then scale back to [0, 255]. `model` is the trained 512x512 denoiser.
    chunk = chunk.reshape(1, 512, 512, 1) / 255.
    denoised = model.predict(chunk).reshape(512, 512) * 255.
    return denoised

def denoise_page(page):
    # Split the page, denoise every chunk independently, then merge.
    chunks, osize = split_page(page)
    chunks = np.array(chunks)   # shape (rows, cols, 512, 512), as merge_chunks expects
    denoised_chunks = np.ones(chunks.shape)
    for i, row in enumerate(chunks):
        for j, chunk in enumerate(row):
            denoised_chunks[i][j] = denoise(chunk)
    denoised_page = merge_chunks(denoised_chunks, osize)
    return denoised_page
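
Finally, a hedged sketch of end-to-end usage, including the pad-to-512x512 fallback mentioned above for pages smaller than a single chunk. The file names and the use of OpenCV for grayscale I/O are assumptions for illustration; model is whatever trained 512x512 denoiser you already have loaded:

import cv2  # assumption: any grayscale image loader would work here

def pad_to_chunk(page, size=512):
    # Pad with white (255) on the bottom/right so both sides reach at least `size`.
    pad_y = max(0, size - page.shape[0])
    pad_x = max(0, size - page.shape[1])
    return np.pad(page, ((0, pad_y), (0, pad_x)), constant_values=255)

page = cv2.imread('scan.png', cv2.IMREAD_GRAYSCALE).astype(np.float64)  # 'scan.png' is a placeholder
original_size = page.shape[:2]
padded = pad_to_chunk(page)                          # no-op for pages already >= 512x512
clean = denoise_page(padded)
clean = clean[:original_size[0], :original_size[1]]  # crop the padding back off
cv2.imwrite('scan_denoised.png', clean.astype(np.uint8))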


Try reading this and then editing your question: how to ask the perfect question: