
Drawing a region based on a color filter in Python

Tags: python, opencv, machine-learning, deep-learning, computer-vision

I am developing a script that detects two laser lines sketched on the floor.

For example:

With the following code I can isolate the light in the detection area and draw the lines:

vermelho_inicio = np.array([0, 9, 178])  #131,72,208
vermelho_fim = np.array([255, 60, 255])
mask = cv2.inRange(img, vermelho_inicio, vermelho_fim)

edges = cv2.Canny(mask, 100, 200)
# Draw the lines on the laser (cone)
lines = cv2.HoughLinesP(edges, 5, np.pi/180, 0, maxLineGap=100)
if lines is not None:
    for line in lines:
        x1, y1, x2, y2 = line[0]
        cv2.line(img, (x1, y1), (x2, y2), (0, 255, 0), 5)
My result is:

What I need

I need to fill in the detected red region and get the position x1, y1, x2, y2 of the drawn area. The result I hope for is the one below, or something similar:

My full code:

# -*- coding: utf-8 -*-
import numpy as np
import cv2
import time
import math

#STREAMINGS
#http://68.116.13.142:82/mjpg/video.mjpg INDUSTRIAL
#http://95.255.38.86:8080/mjpg/video.mjpg ITALY STREET
#http://81.198.213.128:82/mjpg/video.mjpg BUSY CORRIDOR

cap = cv2.VideoCapture("VideoCone.MOV")
while True:
    r, img = cap.read()
    # Define the area of the video the model will act on
    #img = img[10:1280, 230:1280]
    img = cv2.resize(img, (800, 600))
    # Frame for red-zone detection
    #frame = cv2.GaussianBlur(img, (5, 5), 0)
    vermelho_inicio = np.array([0, 9, 178])
    #131,72,208
    vermelho_fim = np.array([255, 60, 255])
    mask = cv2.inRange(img, vermelho_inicio, vermelho_fim)

    edges = cv2.Canny(mask, 100, 200)
    # Draw the lines on the laser (cone)
    lines = cv2.HoughLinesP(edges, 5, np.pi/180, 0, maxLineGap=100)
    if lines is not None:
        for line in lines:
            x1, y1, x2, y2 = line[0]
            cv2.line(img, (x1, y1), (x2, y2), (0, 255, 0), 5)

    # Create the overlay used for transparency on the danger-area rectangle
    overlay = img.copy()

    # Draw the danger area
    #x1,y1 ------
    #|          |
    #|          |
    #|          |
    #--------x2,y2
    # Grab the frame dimensions
    height, width, channels = img.shape
    # Divide to find the center of the image
    upper_left = (int(width / 4), int(height / 4))
    bottom_right = (int(width * 3 / 4), int(height * 3 / 4))
    # Draw the rectangle at the center of the video
    #DangerArea = cv2.rectangle(overlay, upper_left, bottom_right, (0,0,255), -1)
    # Write the text in the danger area
    #cv2.putText(DangerArea, 'Danger Area', (int(width / 4), int(height * 3 / 4)), cv2.FONT_HERSHEY_SIMPLEX, 0.5, (255,255,255), 2, cv2.LINE_AA)
    #cv2.addWeighted(overlay, 0.3, img, 1-0.4, 0, img)
    # Print the center of the image to the console
    print('Upper_Left: ' + str(upper_left) + ' bottom_right: ' + str(bottom_right))

    # Show the video
    cv2.imshow("edges", edges)
    cv2.imshow("Detectar Pessoas", img)

    key = cv2.waitKey(1)
    if key & 0xFF == ord('q'):
        break

The closest I could get is the convex hull:


Nice, that already covers a good part of what I need! Just one more question: is it possible to color the entire interior of the red polygon? — Yes, if you change cv2.polylines to cv2.fillPoly. And if you want transparency on top of that, you can use cv2.addWeighted. — That's a good idea.
mask = cv2.inRange(img, vermelho_inicio, vermelho_fim)
np_points = np.transpose(np.nonzero(mask))
points = np.fliplr(np_points).astype(np.int32)  # OpenCV uses flipped x,y coordinates
approx = cv2.convexHull(points)
cv2.polylines(img, [approx], True, (0, 255, 255), 5)