
Python 3.x: the model predicts only one class for all test images


I am new to TensorFlow (I am using version 2.1.0) and I have run into a problem. I want to train a model that classifies images into four classes. I use a CNN, and during training I get 25% accuracy (one quarter, i.e. chance level for four classes), and I noticed that on the test set the model always predicts class 1. I have tried everything: scaling the images up and down, and using both colour and black-and-white photos. My dataset has 3,400 32x32 photos per class. Please help.

My code is below:

import os
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import shutil
import plotly.graph_objects as go
from sklearn.metrics import confusion_matrix, classification_report

from tensorflow.keras.preprocessing import image
from tensorflow.keras.preprocessing.image import ImageDataGenerator
from tensorflow.keras.models import Sequential
from tensorflow.keras import layers
from tensorflow.keras import optimizers
from tensorflow.keras.callbacks import TensorBoard
from tensorflow.keras.applications import VGG19
from tensorflow.keras import losses

np.set_printoptions(precision=6, suppress=True)

image_size = 32

base_dir = 'resize\\train'
raw_no_of_files = {}
classes = ['car', 'motorcycle', 'truck', 'building']
for dir in classes:
    raw_no_of_files[dir] = len(os.listdir(os.path.join(base_dir, dir)))

print(raw_no_of_files.items())

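# Build the output directory structure: res/{train,valid,test}/{car,motorcycle,truck,building}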
data_dir = 'res'

if not os.path.exists(data_dir):
    os.mkdir(data_dir)

train_dir = os.path.join(data_dir, 'train')
valid_dir = os.path.join(data_dir, 'valid')
test_dir = os.path.join(data_dir, 'test')

train_car_dir = os.path.join(train_dir, 'car')
train_motorcycle_dir = os.path.join(train_dir, 'motorcycle')
train_truck_dir = os.path.join(train_dir, 'truck')
train_building_dir = os.path.join(train_dir, 'building')

valid_car_dir = os.path.join(valid_dir, 'car')
valid_motorcycle_dir = os.path.join(valid_dir, 'motorcycle')
valid_truck_dir = os.path.join(valid_dir, 'truck')
valid_building_dir = os.path.join(valid_dir, 'building')

test_car_dir = os.path.join(test_dir, 'car')
test_motorcycle_dir = os.path.join(test_dir, 'motorcycle')
test_truck_dir = os.path.join(test_dir, 'truck')
test_building_dir = os.path.join(test_dir, 'building')

for directory in (train_dir, valid_dir, test_dir):
    if not os.path.exists(directory):
        os.mkdir(directory)

dirs = [train_car_dir, train_motorcycle_dir, train_truck_dir, train_building_dir,
        valid_car_dir, valid_motorcycle_dir, valid_truck_dir, valid_building_dir,
        test_car_dir, test_motorcycle_dir, test_truck_dir, test_building_dir]

for dir in dirs:
    if not os.path.exists(dir):
        os.mkdir(dir)

car_fnames = os.listdir(os.path.join(base_dir, 'car'))
motorcycle_fnames = os.listdir(os.path.join(base_dir, 'motorcycle'))
truck_fnames = os.listdir(os.path.join(base_dir, 'truck'))
building_fnames = os.listdir(os.path.join(base_dir, 'building'))

size = min(len(car_fnames), len(motorcycle_fnames), len(truck_fnames), len(building_fnames))

train_size = int(np.floor(0.7 * size))
valid_size = int(np.floor(0.2 * size))
test_size = size - train_size - valid_size

train_idx = train_size
valid_idx = train_size + valid_size
test_idx = train_size + valid_size + test_size

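# Copy each class into the train/valid/test folders using roughly a 70/20/10 split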
for i, fname in enumerate(car_fnames):
    if i <= train_idx:
        src = os.path.join(base_dir, 'car', fname)
        dst = os.path.join(train_car_dir, fname)
        shutil.copyfile(src, dst)
    elif train_idx < i <= valid_idx:
        src = os.path.join(base_dir, 'car', fname)
        dst = os.path.join(valid_car_dir, fname)
        shutil.copyfile(src, dst)
    elif valid_idx < i < test_idx:
        src = os.path.join(base_dir, 'car', fname)
        dst = os.path.join(test_car_dir, fname)
        shutil.copyfile(src, dst)

for i, fname in enumerate(motorcycle_fnames):
    if i <= train_idx:
        src = os.path.join(base_dir, 'motorcycle', fname)
        dst = os.path.join(train_motorcycle_dir, fname)
        shutil.copyfile(src, dst)
    elif train_idx < i <= valid_idx:
        src = os.path.join(base_dir, 'motorcycle', fname)
        dst = os.path.join(valid_motorcycle_dir, fname)
        shutil.copyfile(src, dst)
    elif valid_idx < i < test_idx:
        src = os.path.join(base_dir, 'motorcycle', fname)
        dst = os.path.join(test_motorcycle_dir, fname)
        shutil.copyfile(src, dst)

for i, fname in enumerate(truck_fnames):
    if i <= train_idx:
        src = os.path.join(base_dir, 'truck', fname)
        dst = os.path.join(train_truck_dir, fname)
        shutil.copyfile(src, dst)
    elif train_idx < i <= valid_idx:
        src = os.path.join(base_dir, 'truck', fname)
        dst = os.path.join(valid_truck_dir, fname)
        shutil.copyfile(src, dst)
    elif valid_idx < i < test_idx:
        src = os.path.join(base_dir, 'truck', fname)
        dst = os.path.join(test_truck_dir, fname)
        shutil.copyfile(src, dst)

for i, fname in enumerate(building_fnames):
    if i <= train_idx:
        src = os.path.join(base_dir, 'building', fname)
        dst = os.path.join(train_building_dir, fname)
        shutil.copyfile(src, dst)
    elif train_idx < i <= valid_idx:
        src = os.path.join(base_dir, 'building', fname)
        dst = os.path.join(valid_building_dir, fname)
        shutil.copyfile(src, dst)
    elif valid_idx < i < test_idx:
        src = os.path.join(base_dir, 'building', fname)
        dst = os.path.join(test_building_dir, fname)
        shutil.copyfile(src, dst)

print('samochód - zbiór treningowy', len(os.listdir(train_car_dir)))
print('samochód - zbiór walidacyjny', len(os.listdir(valid_car_dir)))
print('samochód - zbiór testowy', len(os.listdir(test_car_dir)))

print('motocykl - zbiór treningowy', len(os.listdir(train_motorcycle_dir)))
print('motocykl - zbiór walidacyjny', len(os.listdir(valid_motorcycle_dir)))
print('motocykl - zbiór testowy', len(os.listdir(test_motorcycle_dir)))

print('tir - zbiór treningowy', len(os.listdir(train_truck_dir)))
print('tir - zbiór walidacyjny', len(os.listdir(valid_truck_dir)))
print('tir - zbiór testowy', len(os.listdir(test_truck_dir)))

print('budowlane - zbiór treningowy', len(os.listdir(train_building_dir)))
print('budowlane - zbiór walidacyjny', len(os.listdir(valid_building_dir)))
print('budowlane - zbiór testowy', len(os.listdir(test_building_dir)))


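# Data generators: augmentation for the training set, none for validation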
train_datagen = ImageDataGenerator(
    rotation_range=40,
    width_shift_range=0.2,
    height_shift_range=0.2,
    shear_range=0.2,
    zoom_range=0.2,
    horizontal_flip=True, 
    fill_mode='nearest'
)

valid_datagen = ImageDataGenerator()

train_generator = train_datagen.flow_from_directory(directory=train_dir,
                                                   target_size=(image_size, image_size),
                                                   batch_size=32,
                                                   class_mode='categorical')

valid_generator = valid_datagen.flow_from_directory(directory=valid_dir,
                                                   target_size=(image_size, image_size),
                                                   batch_size=32,
                                                   class_mode='categorical')



batch_size = 32
steps_per_epoch = train_size // batch_size
validation_steps = valid_size // batch_size


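# A small CNN for 32x32 RGB images with 4 output classes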
model = Sequential()
model.add(layers.Conv2D(32, (3, 3), activation='relu', input_shape=(32, 32, 3)))
model.add(layers.MaxPooling2D((2, 2)))
model.add(layers.Conv2D(64, (3, 3), activation='relu'))
model.add(layers.MaxPooling2D((2, 2)))
model.add(layers.Conv2D(64, (3, 3), activation='relu'))

model.summary()

model.add(layers.Flatten())
model.add(layers.Dense(64, activation='relu'))
model.add(layers.Dense(4))

model.summary()

model.compile(optimizer=optimizers.RMSprop(lr=1e-5),
             loss='categorical_crossentropy',
             metrics=['acc'])

model.summary()

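# Train the model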
history = model.fit_generator(generator=train_generator,
                             steps_per_epoch=steps_per_epoch,
                             epochs=30,    # 100
                             validation_data=valid_generator,
                             validation_steps=validation_steps)


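# Test generator: no augmentation, shuffle=False so predictions line up with test_generator.classes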
test_datagen = ImageDataGenerator()
test_generator = test_datagen.flow_from_directory(
    test_dir,
    target_size=(image_size, image_size),
    batch_size=1,
    class_mode='categorical',
    shuffle=False
)


model.save('cars_' + str(size) + '.h5')

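# Plot training/validation accuracy and loss curves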
import matplotlib.pyplot as plt

acc = history.history['acc']
val_acc = history.history['val_acc']
loss = history.history['loss']
val_loss = history.history['val_loss']

epochs = range(len(acc))

plt.plot(epochs, acc, label='Dokładność trenowania')
plt.plot(epochs, val_acc, label='Dokładność walidacji')
plt.xlabel('Epoka')
plt.ylabel('Dokładność')
plt.title('Dokładność trenowania i walidacji')
plt.legend()

plt.figure()

plt.plot(epochs, loss, label='Strata trenowania')
plt.plot(epochs, val_loss, label='Strata walidacji')
plt.title('Strata trenowania i walidacji')
plt.legend()

plt.show()

y_prob = model.predict_generator(test_generator, test_generator.samples)
print(type(y_prob))
print(y_prob)
#
#
y_pred = np.argmax(y_prob, axis=1)
print(type(y_pred))
print(y_pred)
#
predictions  = pd.DataFrame({'class': y_pred})
print(predictions)

y_true = test_generator.classes
print(y_true)

y_pred = predictions['class'].values
print(y_pred)

print(test_generator.class_indices)

classes = list(test_generator.class_indices.keys())
print(classes)

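# Confusion matrix and classification report on the test set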
cm = confusion_matrix(y_true, y_pred)
print(cm)
#
#
def plot_confusion_matrix(cm):
    cm = cm[::-1]
    cm = pd.DataFrame(cm, columns=classes, index=classes[::-1])

    fig = ff.create_annotated_heatmap(z=cm.values, x=list(cm.columns), y=list(cm.index), colorscale='ice', showscale=True, reversescale=True)
    fig.update_layout(width=500, height=500, title='Confusion Matrix', font_size=16)
    fig.show()

import plotly.figure_factory as ff
plot_confusion_matrix(cm)
#
print(classification_report(y_true, y_pred, target_names=test_generator.class_indices.keys()))


errors = pd.DataFrame({'y_true': y_true, 'y_pred': y_pred}, index=test_generator.filenames)
print(errors)

errors['is_incorrect'] = (errors['y_true'] != errors['y_pred']) * 1
print(errors)

print(errors[errors['is_incorrect'] == 1].index)

I wonder about your output Dense layer with 4 nodes:

model.add(layers.Dense(4))

Make this change in the last layer:

model.add(layers.Dense(4, activation='softmax'))
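Here is a minimal sketch of that change, assuming the rest of the architecture from the question stays the same. A plain Dense(4) outputs raw logits, while loss='categorical_crossentropy' expects probabilities, so either add the softmax or switch the loss to expect logits:

from tensorflow.keras import layers, losses, optimizers
from tensorflow.keras.models import Sequential

model = Sequential([
    layers.Conv2D(32, (3, 3), activation='relu', input_shape=(32, 32, 3)),
    layers.MaxPooling2D((2, 2)),
    layers.Conv2D(64, (3, 3), activation='relu'),
    layers.MaxPooling2D((2, 2)),
    layers.Conv2D(64, (3, 3), activation='relu'),
    layers.Flatten(),
    layers.Dense(64, activation='relu'),
    layers.Dense(4, activation='softmax')   # probabilities for the 4 classes
])

model.compile(optimizer=optimizers.RMSprop(learning_rate=1e-5),
              loss='categorical_crossentropy',
              metrics=['acc'])

# Equivalent alternative: keep a linear Dense(4) and tell the loss it receives logits:
#   layers.Dense(4)
#   loss=losses.CategoricalCrossentropy(from_logits=True)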
I put the 4 nodes in the last layer because I have 4 classes. I added this activation and I still get 25% accuracy.
Pass the rescale=1./255 argument to all three ImageDataGenerator calls and see whether that solves the problem.
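A minimal sketch of that change, reusing the generator setup from the question; without it the network sees raw pixel values in the 0-255 range, which often makes training with such a small learning rate ineffective:

from tensorflow.keras.preprocessing.image import ImageDataGenerator

train_datagen = ImageDataGenerator(
    rescale=1./255,          # map pixel values from [0, 255] to [0, 1]
    rotation_range=40,
    width_shift_range=0.2,
    height_shift_range=0.2,
    shear_range=0.2,
    zoom_range=0.2,
    horizontal_flip=True,
    fill_mode='nearest'
)

valid_datagen = ImageDataGenerator(rescale=1./255)
test_datagen = ImageDataGenerator(rescale=1./255)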