How do I train a Keras CNN on 2 classes in Python?

Tags: python, tensorflow, keras, computer-vision, conv-neural-network

I have two classes, cats and dogs, each in its own subfolder inside my train folder.

Do I have to load them into x_train and y_train or something similar? Please show me a few ways to do this. Here is my code so far:

from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import (Conv2D, Activation, MaxPooling2D,
                                     Dropout, Flatten, Dense)
import tensorflow as tf

train = 'D:/xyz/train'
model = Sequential()
model.add(Conv2D(32, (3, 3), input_shape=(150,150,3)))
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Dropout(0.25))

model.add(Conv2D(32, (3, 3)))
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Dropout(0.25))

model.add(Conv2D(64, (3, 3)))
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Dropout(0.25))

model.add(Flatten()) # flatten the feature maps into a 1D vector for the dense layers
model.add(Dense(64))
model.add(Activation('relu'))
model.add(Dropout(0.5))
model.add(Dense(2))
model.add(Activation('sigmoid'))

model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])
model.fit(train, batch_size=10, epochs=50)
model.add(Activation('softmax')) # changed because compiled as categorical
model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])
rlronp = tf.keras.callbacks.ReduceLROnPlateau(monitor="val_loss", factor=0.5,
                                              patience=1, verbose=1)
The simplest way is probably to use ImageDataGenerator.flow_from_directory. I assume your directory structure looks like this:

D:/xyz/train
----dogs
--------dog_image_1
--------dog_image_2

----cats
--------cat_image_1
--------cat_image_2
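To sanity-check that a folder actually matches this layout before pointing flow_from_directory at it, here is a small stdlib sketch (the helper name and the temporary demo tree are my own, not part of Keras):

```python
from pathlib import Path
import tempfile

def summarize_image_folders(root):
    """Return {class_name: image_count} for each subfolder of root."""
    root = Path(root)
    return {d.name: sum(1 for f in d.iterdir() if f.is_file())
            for d in sorted(root.iterdir()) if d.is_dir()}

# Build a tiny example tree to demonstrate (a stand-in for D:/xyz/train)
tmp = Path(tempfile.mkdtemp())
for cls, n in [('cats', 2), ('dogs', 3)]:
    (tmp / cls).mkdir()
    for i in range(n):
        (tmp / cls / f'{cls[:-1]}_image_{i}.jpg').touch()

print(summarize_image_folders(tmp))  # {'cats': 2, 'dogs': 3}
```

flow_from_directory infers one class per subfolder, so each class name should appear exactly once in this summary.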
For your model, change the final Activation from 'sigmoid' to 'softmax' and compile with loss='categorical_crossentropy', since the generators below use class_mode='categorical'. The code to create the train and validation generators is:

from tensorflow.keras.preprocessing.image import ImageDataGenerator

sdir = r'D:/xyz/train'
img_height=150 # set this to desired image height
img_width=150 # set this to the desired image width
batch_size=32 # set this to desired batch size
channels=3 # number of color channels
v_split=.2 # set this to the percent of images you want to use for validation
gen=ImageDataGenerator(rescale=1/255, validation_split=v_split)
train_gen=gen.flow_from_directory(sdir,  target_size=(img_height, img_width),
                                 color_mode="rgb",  classes=None, 
                                 class_mode="categorical", batch_size=batch_size,
                                 shuffle=True,  seed=123,subset='training')
valid_gen=gen.flow_from_directory(sdir,  target_size=(img_height, img_width),
                                 color_mode="rgb",  classes=None, 
                                 class_mode="categorical", batch_size=batch_size,
                                 shuffle=False, subset='validation')
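For reference, with v_split=.2 the generator reserves about 20% of each class's files for validation. Assuming hypothetical counts of 1000 images per class, the split and the steps per epoch work out like this:

```python
import math

images_per_class = 1000   # hypothetical count, not from the question
num_classes = 2
v_split = 0.2
batch_size = 32

valid_per_class = int(images_per_class * v_split)      # images held out per class
train_per_class = images_per_class - valid_per_class   # images trained on per class
train_total = train_per_class * num_classes
valid_total = valid_per_class * num_classes
steps_per_epoch = math.ceil(train_total / batch_size)  # batches per epoch

print(train_total, valid_total, steps_per_epoch)  # 1600 400 50
```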
Now, to get better performance out of your model, I suggest using two Keras callbacks. The first is ReduceLROnPlateau; its documentation shows that it adjusts your learning rate based on the validation loss. The recommended code is shown below.

rlronp = tf.keras.callbacks.ReduceLROnPlateau(monitor="val_loss", factor=0.5,
                                              patience=1, verbose=1)
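To see what factor=0.5 with patience=1 actually does to the learning rate, here is a pure-Python sketch of the schedule logic (a simplified illustration, not the real Keras implementation):

```python
def simulate_reduce_lr(val_losses, lr=1e-3, factor=0.5, patience=1, min_delta=0.0):
    """Track the lr ReduceLROnPlateau-style: after `patience` epochs with no
    improvement in val_loss, multiply lr by `factor`."""
    best = float('inf')
    wait = 0
    history = []
    for loss in val_losses:
        if loss < best - min_delta:   # improvement: reset the counter
            best = loss
            wait = 0
        else:
            wait += 1
            if wait >= patience:      # plateau: reduce the learning rate
                lr *= factor
                wait = 0
        history.append(lr)
    return history

# val_loss improves, stalls for two epochs (two reductions), then improves again
print(simulate_reduce_lr([0.9, 0.7, 0.8, 0.75, 0.6], lr=1.0))
# [1.0, 1.0, 0.5, 0.25, 0.25]
```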
The second callback is EarlyStopping. Its documentation shows that it monitors the validation loss and halts training if it fails to decrease for `patience` consecutive epochs, and with restore_best_weights=True it loads the weights from the epoch with the lowest validation loss back into your model. The recommended code is shown below.

Now use this code:

es = tf.keras.callbacks.EarlyStopping(monitor="val_loss", patience=4,
                                      verbose=1, restore_best_weights=True)
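A pure-Python sketch of the patience and best-epoch logic (a simplified illustration, not the Keras source):

```python
def simulate_early_stopping(val_losses, patience=4):
    """Return (stop_epoch, best_epoch): training halts once val_loss has failed
    to improve for `patience` consecutive epochs; best_epoch is the one whose
    weights restore_best_weights would reload."""
    best = float('inf')
    best_epoch = 0
    wait = 0
    for epoch, loss in enumerate(val_losses):
        if loss < best:
            best = loss
            best_epoch = epoch
            wait = 0
        else:
            wait += 1
            if wait >= patience:
                return epoch, best_epoch    # stopped early
    return len(val_losses) - 1, best_epoch  # ran to completion

# val_loss bottoms out at epoch 2, then never improves again
print(simulate_early_stopping([0.9, 0.6, 0.5, 0.55, 0.6, 0.58, 0.7]))  # (6, 2)
```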
Then include the generators and both callbacks in model.fit, and that should do it:

history = model.fit(train_gen, epochs=50, validation_data=valid_gen,
                    callbacks=[rlronp, es])

Be sure to read the Keras documentation carefully.