Automatically splitting an image dataset with Tensorflow


Suppose I have a directory like this:

full_dataset
|---horse <= 40 images of horse
|---donkey <= 50 images of donkey
|---cow <= 80 images of cow
|---zebra <= 30 images of zebra
But I want to split this dataset automatically, without manually moving the images into separate train and test folders. I don't want to split it by hand.

What I tried, and what failed:

(x_train, y_train),(x_test, y_test) = my_dataset.load_data()

You don't have to use tensorflow or keras to split your dataset. If the sklearn package is installed, you can simply use it:

from sklearn.model_selection import train_test_split
X = ...
Y = ...
x_train, x_test, y_train, y_test = train_test_split(X, Y, test_size=0.2)
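For reference, here is a minimal self-contained sketch of the same call on synthetic data; the array shapes below are illustrative assumptions, not taken from the question:

import numpy as np
from sklearn.model_selection import train_test_split

# Synthetic stand-ins: 100 small "images" with labels from 4 classes
X = np.random.rand(100, 28, 28, 3)
Y = np.random.randint(0, 4, size=100)

x_train, x_test, y_train, y_test = train_test_split(X, Y, test_size=0.2)
print(x_train.shape, x_test.shape)  # (80, 28, 28, 3) (20, 28, 28, 3)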
You can also use numpy for the same purpose:

import numpy
X = ...
Y = ...
test_size = 0.2
# The sample count must be an integer before it can be used as an index
train_nsamples = int((1 - test_size) * len(Y))
x_train, x_test = X[:train_nsamples], X[train_nsamples:]
y_train, y_test = Y[:train_nsamples], Y[train_nsamples:]
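Note that plain slicing keeps the original sample order, so all examples of one class can land in the same split. A minimal sketch of shuffling first, assuming X and Y are numpy arrays of equal length as above:

import numpy as np

perm = np.random.permutation(len(Y))   # one random order over all samples
X, Y = X[perm], Y[perm]                # shuffle features and labels together
x_train, x_test = X[:train_nsamples], X[train_nsamples:]
y_train, y_test = Y[:train_nsamples], Y[train_nsamples:]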
In Keras:

from keras.datasets import mnist
import numpy as np
from sklearn.model_selection import train_test_split

(x_train, y_train), (x_test, y_test) = mnist.load_data()
x = np.concatenate((x_train, x_test))
y = np.concatenate((y_train, y_test))

train_size = 0.7
x_train, x_test, y_train, y_test = train_test_split(x, y, train_size=train_size)
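Since the classes in the question are imbalanced (40, 50, 80 and 30 images), it may be worth passing stratify so each split keeps the class proportions. A sketch of the same call with that option, reusing the x, y and train_size defined above:

# stratify=y keeps the class ratios identical in the train and test splits
x_train, x_test, y_train, y_test = train_test_split(
    x, y, train_size=train_size, stratify=y)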

After some trial and error and a day of struggle, I found a solution.

First way

import glob
import numpy as np
import tensorflow as tf

horse = glob.glob('full_dataset/horse/*.*')
donkey = glob.glob('full_dataset/donkey/*.*')
cow = glob.glob('full_dataset/cow/*.*')
zebra = glob.glob('full_dataset/zebra/*.*')

data = []
labels = []

# One integer label per class: horse=0, donkey=1, cow=2, zebra=3
for label, paths in enumerate([horse, donkey, cow, zebra]):
    for i in paths:
        # color_mode must be lowercase 'rgb'; 'RGB' raises a ValueError
        image = tf.keras.preprocessing.image.load_img(
            i, color_mode='rgb', target_size=(280, 280))
        data.append(np.array(image))
        labels.append(label)

data = np.array(data)
labels = np.array(labels)

from sklearn.model_selection import train_test_split
X_train, X_test, ytrain, ytest = train_test_split(data, labels, test_size=0.2,
                                                random_state=42)
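One difference from the generator approach below: these arrays still hold raw 0-255 pixel values, while the second way rescales by 1/255. A sketch of applying the same scaling here (an assumption about the intended preprocessing, not part of the original answer):

# Match the generator's rescale=1/255 by normalizing the raw pixel arrays
X_train = X_train.astype('float32') / 255.0
X_test = X_test.astype('float32') / 255.0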
Second way

from tensorflow.keras.preprocessing.image import ImageDataGenerator

image_generator = ImageDataGenerator(rescale=1/255, validation_split=0.2)

train_dataset = image_generator.flow_from_directory(batch_size=32,
                                                 directory='full_dataset',
                                                 shuffle=True,
                                                 target_size=(280, 280), 
                                                 subset="training",
                                                 class_mode='categorical')

validation_dataset = image_generator.flow_from_directory(batch_size=32,
                                                 directory='full_dataset',
                                                 shuffle=True,
                                                 target_size=(280, 280), 
                                                 subset="validation",
                                                 class_mode='categorical')

The main drawback of the second way is that it cannot be indexed to display a picture. If you write validation_dataset[1], you get an error, whereas with the first way X_test[1] works.
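That said, the generator can still yield images for display; each indexed element is a whole batch rather than a single picture. A minimal sketch, assuming matplotlib is available:

import matplotlib.pyplot as plt

# Indexing the generator returns one batch: (images, labels)
images, labels = validation_dataset[0]
plt.imshow(images[1])   # show the second image of the first batch
plt.show()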

Comments:

- Hi, thanks for your answer, but I'm sorry: how do I supply X and Y when my input is the images in the 'full_dataset' folder? If it were a pandas dataframe it would be easier, but these are images.
- @Ichsan, I updated the answer with the mnist example that uses X and Y as input.
- How can the load_data() function be used with a folder named 'full_dataset'? It works with the Mnist dataset, but I want to try it with my own 'full_dataset' folder, as shown in the question.