Python: save bounding box coordinates and images into different folders based on class type


I am applying a deep learning model with OpenCV + Python to classify objects into 8 classes (animal types): cat, dog, horse, deer, bear, lizard, monkey, and no object detected (for when no object is detected in the image).

I have a folder with pictures of various animals. I read all the images in that folder and then apply the deep learning model to extract the bounding box coordinates of each object in each image.

First, I want to classify every image by putting each animal picture into its relevant folder. Second, I want to save the coordinates of that image's bounding box in the same folder. For example, if the network detects a cat, I want to save that image and the corresponding coordinates (as a .txt text file) in the cat folder; if none of these objects is found in an image, it should go into the no-object-detected folder.

My question is how to save the original image and the bounding box coordinates of that object into the corresponding one of the 8 class folders.

Here is my code:

import cv2
import numpy as np
import os
import glob
import argparse
import time

img_dir="/path/imgt/"
data_path=os.path.join(img_dir,'*g')
files=glob.glob(data_path)
data=[]

i = 0
for f1 in files:
     image=cv2.imread(f1)
     data.append(image)

     # construct the argument parse and parse the arguments
     ap = argparse.ArgumentParser()
     ap.add_argument("-i", "--image", required=True,
                     help="path to input image")
     ap.add_argument("-y", "--yolo", required=True,
                     help="base path to YOLO directory")
     ap.add_argument("-c", "--confidence", type=float, default=0.5,
                     help="minimum probability to filter weak detections")
     ap.add_argument("-t", "--threshold", type=float, default=0.3,
                     help="threshold when applyong non-maxima suppression")
     args = vars(ap.parse_args())

     # load the COCO class labels our YOLO model was trained on
     labelsPath = os.path.sep.join([args["yolo"], "obj.names"])
     LABELS = open(labelsPath).read().strip().split("\n")

     # initialize a list of colors to represent each possible class label
     np.random.seed(42)
     COLORS = np.random.randint(0, 255, size=(len(LABELS), 3),
                                dtype="uint8")

     # derive the paths to the YOLO weights and model configuration
     weightsPath = os.path.sep.join([args["yolo"], "yolo-obj_last.weights"])
     configPath = os.path.sep.join([args["yolo"], "yolo-obj.cfg"])

     # load our YOLO object detector trained on COCO dataset (80 classes)
     print("[INFO] loading YOLO from disk...")
     net = cv2.dnn.readNetFromDarknet(configPath, weightsPath)

     # load our input image and grab its spatial dimensions
      # image = cv2.imread(args["image"])
     (H, W) = image.shape[:2]

     # determine only the *output* layer names that we need from YOLO
     ln = net.getLayerNames()
     ln = [ln[i[0] - 1] for i in net.getUnconnectedOutLayers()]

     # construct a blob from the input image and then perform a forward
     # pass of the YOLO object detector, giving us our bounding boxes and
     # associated probabilities
     blob = cv2.dnn.blobFromImage(image, 1 / 255.0, (416, 416),
                                  swapRB=True, crop=False)
     net.setInput(blob)
     start = time.time()
     layerOutputs = net.forward(ln)
     end = time.time()

     # show timing information on YOLO
     print("[INFO] YOLO took {:.6f} seconds".format(end - start))

     # initialize our lists of detected bounding boxes, confidences, and
     # class IDs, respectively
     boxes = []
     confidences = []
     classIDs = []

     # loop over each of the layer outputs
     for output in layerOutputs:
          # loop over each of the detections
          for detection in output:
               # extract the class ID and confidence (i.e., probability) of
               # the current object detection
               scores = detection[5:]
               classID = np.argmax(scores)
               confidence = scores[classID]

               # filter out weak predictions by ensuring the detected
               # probability is greater than the minimum probability
               if confidence > args["confidence"]:
                    # scale the bounding box coordinates back relative to the
                    # size of the image, keeping in mind that YOLO actually
                    # returns the center (x, y)-coordinates of the bounding
                    # box followed by the boxes' width and height
                    box = detection[0:4] * np.array([W, H, W, H])
                    (centerX, centerY, width, height) = box.astype("int")

                    # use the center (x, y)-coordinates to derive the top and
                    # and left corner of the bounding box
                    x = int(centerX - (width / 2))
                    y = int(centerY - (height / 2))

                    # update our list of bounding box coordinates, confidences,
                    # and class IDs
                    boxes.append([x, y, int(width), int(height)])
                    confidences.append(float(confidence))
                    classIDs.append(classID)

     # apply non-maxima suppression to suppress weak, overlapping bounding
     # boxes
     idxs = cv2.dnn.NMSBoxes(boxes, confidences, args["confidence"],
                             args["threshold"])

     # ensure at least one detection exists
     if len(idxs) > 0:
          # loop over the indexes we are keeping
          for i in idxs.flatten():
               # extract the bounding box coordinates
               (x, y) = (boxes[i][0], boxes[i][1])
               (w, h) = (boxes[i][2], boxes[i][3])

               # draw a bounding box rectangle and label on the image
               color = [int(c) for c in COLORS[classIDs[i]]]
               cv2.rectangle(image, (x, y), (x + w, y + h), color, 2)
               text = "{}: {:.4f}".format(LABELS[classIDs[i]], confidences[i])
               cv2.putText(image, text, (x, y - 7), cv2.FONT_HERSHEY_SIMPLEX, 0.6, color, 2)
               path = '/path/imgr/' + LABELS[classIDs[i]] + '/'
               cv2.imwrite(os.path.join(path, 'image' + str(i) + '.jpg'), image)
               with open(os.path.join(path, 'image' + str(i) + '.txt'), 'a+') as f:
                 f.write(str(classIDs[i]) + ' ' + str(x) + ' ' + str(y) + ' ' + str(w) + ' ' + str(h))
What should the text files look like?

One .txt file for each .jpg image file, in the same directory and with the same name, but with the .txt extension. Into that file go the object number and the object coordinates on the image, one line per object:

<object-class> <x_center> <y_center> <width> <height>

Where:

- <object-class> - integer object number, from 0 to (classes-1)
- <x_center> <y_center> <width> <height> - float values relative to the width and height of the image, in the range (0.0 to 1.0]
- for example: <x_center> = <absolute_x> / <image_width>, or <height> = <absolute_height> / <image_height>
- note: <x_center> <y_center> are the center of the rectangle (not the top-left corner)

For example, for img1.jpg you would create img1.txt containing:

1 0.716797 0.395833 0.216406 0.147222
0 0.687109 0.379167 0.255469 0.158333
1 0.420312 0.395833 0.140625 0.166667
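
For reference, a minimal sketch of how absolute pixel coordinates would be converted into a line of this format (the helper name to_yolo_line and the example numbers are mine):

def to_yolo_line(class_id, x, y, w, h, img_w, img_h):
    # convert the top-left corner back to the box center, then normalize
    # everything by the image dimensions
    x_center = (x + w / 2) / img_w
    y_center = (y + h / 2) / img_h
    return "{} {:.6f} {:.6f} {:.6f} {:.6f}".format(
        class_id, x_center, y_center, w / img_w, h / img_h)

# e.g. to_yolo_line(1, 560, 322, 222, 106, 1024, 720)
# gives "1 0.655273 0.520833 0.216797 0.147222"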

Maybe something like this:

 path = os.path.join('/path/imgr/', LABELS[classID], image_name)
 cv2.imwrite(path + '.jpg', image)
 with open(path + '.txt', 'a+') as f:
    f.write(str(classID) + ' ' + str(detection[0]) + ' ' + str(detection[1]) + ' ' + str(detection[2]) + ' ' + str(detection[3]) + '\n')
There may be multiple objects in an image, in which case the image should be written to each relevant class folder, and the coordinates appended to the text file if it already exists.

image_name is up to you to generate; you can use the name of the file being read, or a counter.
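
One way to handle the multi-object case is to group the detections by class first, so each class folder gets one copy of the image and one label line per detection. A rough sketch, assuming the (classID, box) pairs for the current image have already been collected in a list (the names detections and by_class are mine):

from collections import defaultdict

by_class = defaultdict(list)
for classID, box in detections:
    by_class[classID].append(box)

for classID, class_boxes in by_class.items():
    class_dir = os.path.join('/path/imgr/', LABELS[classID])
    os.makedirs(class_dir, exist_ok=True)
    path = os.path.join(class_dir, image_name)
    cv2.imwrite(path + '.jpg', image)       # one copy of the image per class
    with open(path + '.txt', 'a+') as f:    # one label line per detection
        for box in class_boxes:
            f.write('{} {} {} {} {}\n'.format(classID, box[0], box[1], box[2], box[3]))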

Either way, the write-out code should go somewhere under this if statement:

if confidence > args["confidence"]: 
I would put it at the end. You may need to make some minor adjustments, but that is the gist of it.


To be more explicit:

import cv2
import numpy as np
import os
import glob
import argparse
import time

# construct the argument parse and parse the arguments
ap = argparse.ArgumentParser()
#ap.add_argument("-i", "--image", required=True,
#                help="path to input image")
ap.add_argument("-y", "--yolo", required=True,
                help="base path to YOLO directory")  
ap.add_argument("-c", "--confidence", type=float, default=0.5,
                help="minimum probability to filter weak detections")
ap.add_argument("-t", "--threshold", type=float, default=0.3,
                help="threshold when applyong non-maxima suppression")
args = vars(ap.parse_args())

# load the COCO class labels our YOLO model was trained on
labelsPath = os.path.sep.join([args["yolo"], "obj.names"])
LABELS = open(labelsPath).read().strip().split("\n")

# derive the paths to the YOLO weights and model configuration
weightsPath = os.path.sep.join([args["yolo"], "yolo-obj_last.weights"])
configPath = os.path.sep.join([args["yolo"], "yolo-obj.cfg"])

# load our YOLO object detector trained on COCO dataset (80 classes)
print("[INFO] loading YOLO from disk...")
net = cv2.dnn.readNetFromDarknet(configPath, weightsPath)

# determine only the *output* layer names that we need from YOLO
ln = net.getLayerNames()
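# note: on OpenCV >= 4.5.4, getUnconnectedOutLayers() returns a flat array,
# in which case the line below would be ln[i - 1] instead of ln[i[0] - 1]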
ln = [ln[i[0] - 1] for i in net.getUnconnectedOutLayers()]

img_dir="/path/imgt/"
data_path=os.path.join(img_dir,'*g')
files=glob.glob(data_path)

for f1 in files:
    # load our input image and grab its spatial dimensions
    image=cv2.imread(f1)

    # construct a blob from the input image and then perform a forward
    # pass of the YOLO object detector, giving us our bounding boxes and
    # associated probabilities
    blob = cv2.dnn.blobFromImage(image, 1 / 255.0, (416, 416),
                                 swapRB=True, crop=False)
    net.setInput(blob)
    layerOutputs = net.forward(ln)

    # loop over each of the layer outputs
    for output in layerOutputs:
         # loop over each of the detections
         for detection in output:
              # extract the class ID and confidence (i.e., probability) of
              # the current object detection
              scores = detection[5:]
              classID = np.argmax(scores)
              confidence = scores[classID]
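              # detection[0:4] holds (x_center, y_center, width, height) as
              # fractions of the image size, so the values written out below
              # are already in the normalized 0-1 range of the YOLO label format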
              box = detection[0:4]
              # get upper left corner
              box[0] = box[0] - box[2]/2
              box[1] = box[1] - box[3]/2

              # filter out weak predictions by ensuring the detected
              # probability is greater than the minimum probability
              if confidence > args["confidence"]:
                  # write output files
                  class_dir = os.path.join('/path/imgr/', LABELS[classID])
                  if not os.path.exists(class_dir):
                      os.makedirs(class_dir)
                  path = os.path.join(class_dir, f1.split('/')[-1][:-4])
                  cv2.imwrite(path + '.jpg', image)
                  with open(path + '.txt', 'a+') as f:
                      f.write(str(classID) + ' ' + str(box[0]) + ' ' + str(box[1]) + ' ' + str(box[2]) + ' ' + str(box[3]) + '\n')

Read through it and make sure you understand what each part inside the for loop is doing. Once you are comfortable with this minimal example, you can add the non-maximum suppression back in and draw the bounding boxes if you like.
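
As a rough sketch, adding the suppression and drawing back in could look like this (assuming boxes, confidences and classIDs are collected in pixel coordinates inside the detection loop, as in the question's code):

idxs = cv2.dnn.NMSBoxes(boxes, confidences, args["confidence"], args["threshold"])
if len(idxs) > 0:
    for i in idxs.flatten():
        (x, y, w, h) = boxes[i]
        cv2.rectangle(image, (x, y), (x + w, y + h), (0, 255, 0), 2)
        cv2.putText(image, LABELS[classIDs[i]], (x, y - 7),
                    cv2.FONT_HERSHEY_SIMPLEX, 0.6, (0, 255, 0), 2)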

I tried it, and it does not work properly. The problem is that it puts one image into the folder and then the text file of the next image. The text files it generates are not correct. I have updated my post, please check it.

Your whitespace is wrong: indent the last few lines to line up with your call to cv2.putText; they need to be part of the for loop over i that you iterate in.

I did that as well; unfortunately, it still does not output the coordinates and images correctly. I have edited the code, please take a look.

You say the text files it generates are not correct; can you provide an example of what they look like?

They should look the way I explained in the post, but right now they look like this: 0 88 4 724 6210 50 5 668 5610 64 -1 721 6170 73 0 716 5910 22 3 599 5100 40 1 732 6180 90 5 727 6230 123 10 900 6710 69 7 871 6810 399 58 731 678

To me this does not represent a real question. You show some code that does a few things and essentially ask the SO community to finish your project for you. See the link: for code exactly the same as yours. You should not present other people's code as your own, but simply ask this amazing community to write the rest of the code for you.