
How to resize and translate a masked image onto a background with OpenCV and Python


Through my own Googling and a tutorial I found, I created the Python script below. It finds the most dominant (common) color in an image and replaces it with another "background" image. It basically creates a mask and places it on top of the background image. My question is how to resize and translate that mask. I am a complete beginner with OpenCV in Python, so some code examples with explanations would be very helpful :)

Here is the script:

import os
#from colorthief import ColorThief
from PIL import Image
import cv2
import matplotlib.pyplot as plt
import numpy as np

imgDirec = "/Users/.../images"

def find_dominant_color(filename):
        #Resizing parameters
        width, height = 150,150
        image = Image.open(filename)
        image = image.resize((width, height),resample = 0)
        #Get colors from image object
        pixels = image.getcolors(width * height)
        #Sort them by count number(first element of tuple)
        sorted_pixels = sorted(pixels, key=lambda t: t[0])
        #Get the most frequent color
        dominant_color = sorted_pixels[-1][1]
        return dominant_color



filepath = "/Users/.../image.jpg" #Foreground Image
dominant_color = find_dominant_color(filepath)
#dominant_color = color_thief.get_color(quality=1)
print(dominant_color)
image = cv2.imread(filepath)
image_copy = np.copy(image)
image_copy = cv2.cvtColor(image_copy, cv2.COLOR_BGR2RGB)
lower_blue = np.array([dominant_color[0]-20, dominant_color[1]-20, dominant_color[2]-20])     ##[R value, G value, B value]
upper_blue = np.array([dominant_color[0]+20, dominant_color[1]+20, dominant_color[2]+20])
#plt.imshow(image_copy)


mask = cv2.inRange(image_copy, lower_blue, upper_blue)
#plt.imshow(mask, cmap='gray')

masked_image = np.copy(image_copy)
masked_image[mask != 0] = [0, 0, 0]
#plt.imshow(masked_image)
background_image = cv2.imread('/Users/.../background1.jpg')
background_image = cv2.cvtColor(background_image, cv2.COLOR_BGR2RGB)

crop_background = background_image[0:image_copy.shape[0], 0:image_copy.shape[1]]

crop_background[mask == 0] = [0, 0, 0]

#plt.imshow(crop_background)

#These Transformations do not work as intended.
newImg = cv2.resize(crop_background, (0,0), fx=2, fy=2)

height, width = masked_image.shape[:2]
quarter_height, quarter_width = height / 4, width / 4
T = np.float32([[1, 0, quarter_width], [0, 1, quarter_height]])
img_translation = cv2.warpAffine(masked_image, T, (width, height)) 


final_image = crop_background + masked_image
plt.imshow(final_image)
plt.show()
This is image.jpg:

And here is background1.jpg:

The script runs correctly as far as I can tell.

I want to be able to make the person smaller and translate him around the background. How can I do that? Also, is there a way to keep the background image at its original size while placing the smaller picture of the person on top of it? I am a beginner here too (mostly an iOS developer), so there may be a very obvious solution. Please enlighten me.


Thanks in advance.

To answer this, you have to look at two places in the code. First, where does the background get cropped? That happens in the following line:

crop_background = background_image[0:image_copy.shape[0], 0:image_copy.shape[1]]
So, to translate the person across the background, you have to define two offsets that shift the crop within the background image. I would do it like this:

x_offset=100 # translate in x-axis
y_offset=200  # translate in y-axis
crop_background = background_image[y_offset:image_copy.shape[0]+y_offset, x_offset:image_copy.shape[1]+x_offset]
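One caveat with these offsets (my addition, not part of the original answer): numpy slicing does not raise an error when a slice runs past the edge of the array, it silently returns a smaller crop, and the later `crop_background[mask == 0] = [0, 0, 0]` then fails with a shape mismatch. A minimal sketch of clamping the offsets first; the helper name `safe_crop` is hypothetical, while the other names mirror the answer's snippet:

```python
import numpy as np

def safe_crop(background, fg_h, fg_w, x_offset, y_offset):
    """Clamp the offsets so a (fg_h, fg_w) crop stays fully inside
    the background, then return the crop and the clamped offsets."""
    bg_h, bg_w = background.shape[:2]
    # Largest offsets that still leave room for the full crop.
    x_offset = max(0, min(x_offset, bg_w - fg_w))
    y_offset = max(0, min(y_offset, bg_h - fg_h))
    crop = background[y_offset:y_offset + fg_h, x_offset:x_offset + fg_w]
    return crop, x_offset, y_offset

# Example: a 600x800 background, a 200x100 foreground, and an
# x_offset that would run 50 pixels past the right edge.
bg = np.zeros((600, 800, 3), dtype=np.uint8)
crop, x, y = safe_crop(bg, 200, 100, x_offset=750, y_offset=200)
print(crop.shape)  # (200, 100, 3) - x_offset was clamped to 700
```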
So far we have added translation, but how can we see the whole background instead of just the crop? To do that, overlay final_image back onto the background at the exact position of the crop:

background_image[y_offset:image_copy.shape[0]+y_offset, x_offset:image_copy.shape[1]+x_offset]=final_image
With this line added, the person appears translated on the full-size background.


So, how do you resize the image? OpenCV has a function named cv2.resize that resizes an image to any size you want. I reshaped your image to (100, 200) and re-ran the code:

image = cv2.resize(image,(100,200))
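A detail worth knowing here: cv2.resize takes the target size as (width, height), which is the reverse of numpy's (height, width) shape order. To shrink the person without distorting him, you can derive both dimensions from a single scale factor; a small sketch (the helper name `scaled_size` is my own, not part of the answer):

```python
def scaled_size(src_w, src_h, scale):
    """Return the (width, height) tuple cv2.resize expects,
    scaled uniformly so the aspect ratio is preserved."""
    return (max(1, round(src_w * scale)), max(1, round(src_h * scale)))

# Example: shrink a 652x960 (width x height) image to half size.
size = scaled_size(652, 960, 0.5)
print(size)  # (326, 480)
# image = cv2.resize(image, size)  # then proceed as in the answer
```

Equivalently, `cv2.resize(image, (0, 0), fx=scale, fy=scale)` computes the target size from the scale factors directly, as in the `newImg` line of the original script.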
This gives the person at the smaller size.

The whole code looks like this:

import os
#from colorthief import ColorThief
from PIL import Image
import cv2
import matplotlib.pyplot as plt
import numpy as np

imgDirec = "/home/isv/Desktop/"

def find_dominant_color(filename):
        #Resizing parameters
        width, height = 150,150
        image = Image.open(filename)
        image = image.resize((width, height),resample = 0)
        #Get colors from image object
        pixels = image.getcolors(width * height)
        #Sort them by count number(first element of tuple)
        sorted_pixels = sorted(pixels, key=lambda t: t[0])
        #Get the most frequent color
        dominant_color = sorted_pixels[-1][1]
        return dominant_color





filepath = "/home/isv/Desktop/image.jpg" #Foreground Image
dominant_color = find_dominant_color(filepath)
#dominant_color = color_thief.get_color(quality=1)
print(dominant_color)
image = cv2.imread(filepath)
image = cv2.resize(image,(100,200))    #added line
image_copy = np.copy(image)
image_copy = cv2.cvtColor(image_copy, cv2.COLOR_BGR2RGB)
lower_blue = np.array([dominant_color[0]-20, dominant_color[1]-20, dominant_color[2]-20])     ##[R value, G value, B value]
upper_blue = np.array([dominant_color[0]+20, dominant_color[1]+20, dominant_color[2]+20])
#plt.imshow(image_copy)


mask = cv2.inRange(image_copy, lower_blue, upper_blue)
#plt.imshow(mask, cmap='gray')

masked_image = np.copy(image_copy)
masked_image[mask != 0] = [0, 0, 0]
#plt.imshow(masked_image)
background_image = cv2.imread('/home/isv/Desktop/background1.jpg')
background_image = cv2.cvtColor(background_image, cv2.COLOR_BGR2RGB)

x_offset=100    #added line
y_offset=200    #added line
crop_background = background_image[y_offset:image_copy.shape[0]+y_offset, x_offset:image_copy.shape[1]+x_offset]   #change line

crop_background[mask == 0] = [0, 0, 0]

#plt.imshow(crop_background)

#These Transformations do not work as intended.
newImg = cv2.resize(crop_background, (0,0), fx=2, fy=2)

height, width = masked_image.shape[:2]
quarter_height, quarter_width = height / 4, width / 4
T = np.float32([[1, 0, quarter_width], [0, 1, quarter_height]])
img_translation = cv2.warpAffine(masked_image, T, (width, height)) 


final_image = crop_background + masked_image
background_image[y_offset:image_copy.shape[0]+y_offset, x_offset:image_copy.shape[1]+x_offset]=final_image   #added line
plt.imshow(final_image)
plt.show()

plt.figure()                        # added line
plt.imshow(background_image)        # added line
plt.show()                          # added line
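One last note (my addition): the final `crop_background + masked_image` works because each pixel is black in exactly one of the two arrays. An equivalent formulation selects per pixel with `np.where`, which cannot overflow uint8 even if the two regions ever overlap; a self-contained sketch with toy 2x2 arrays:

```python
import numpy as np

# Toy 2x2 "images": mask is nonzero where the dominant color was found.
foreground = np.array([[[10, 10, 10], [20, 20, 20]],
                       [[30, 30, 30], [40, 40, 40]]], dtype=np.uint8)
background = np.full((2, 2, 3), 200, dtype=np.uint8)
mask = np.array([[255, 0],
                 [0, 255]], dtype=np.uint8)

# Where the mask fired, show the background; elsewhere keep the person.
composite = np.where(mask[..., None] != 0, background, foreground)
print(composite[0, 0])  # [200 200 200] - masked pixel replaced by background
print(composite[0, 1])  # [20 20 20]    - foreground pixel kept
```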
I hope this code helps you.