
Deep learning: VGG16 on a spectrogram dataset

Tags: deep-learning, image-recognition, transfer-learning

I followed Rajsha's guide:

The idea is to apply VGG16 to my dataset, which consists of spectrograms, and have it decide between two classes: normal and abnormal.

However, the model is not learning; my top model is stuck at a val_acc of 0.5.

Am I doing something wrong? I'll leave my code below:

```python
# Keras / NumPy imports (not shown in the original snippet)
import numpy as np
from keras import applications
from keras.applications.vgg16 import preprocess_input
from keras.layers import Dense, Dropout, Flatten
from keras.models import Sequential
from keras.preprocessing.image import ImageDataGenerator

# dimensions of our images
img_width, img_height = 240, 240

train_data_dir = '/content/gdrive/My Drive/Melspec/melspecimages/train'
validation_data_dir = '/content/gdrive/My Drive/Melspec/melspecimages/val'

batch_size = 32
epochs = 50  # `epochs` was undefined in the original snippet; assumed value
datagen = ImageDataGenerator(preprocessing_function=preprocess_input)

# Frozen VGG16 convolutional base, used only for bottleneck-feature extraction
model_vgg = applications.VGG16(include_top=False, weights='imagenet', input_shape=(240, 240, 3))
model_vgg.trainable = False

train_generator_bottleneck = datagen.flow_from_directory(
        train_data_dir,
        target_size=(img_width, img_height),
        batch_size=batch_size,
        class_mode='binary',
        shuffle=True)

validation_generator_bottleneck = datagen.flow_from_directory(
        validation_data_dir,
        target_size=(img_width, img_height),
        batch_size=batch_size,
        class_mode='binary',
        shuffle=False)

train_samples = 30272
validation_samples = 7584

# Extract the bottleneck features and cache them on disk
bottleneck_features_train = model_vgg.predict_generator(train_generator_bottleneck, train_samples // batch_size)
np.save(open('/content/gdrive/My Drive/Melspec/spec_vgg_bottleneck_features_train.npy', 'wb'), bottleneck_features_train)

bottleneck_features_validation = model_vgg.predict_generator(validation_generator_bottleneck, validation_samples // batch_size)
np.save(open('/content/gdrive/My Drive/Melspec/spec_vgg_bottleneck_features_validation.npy', 'wb'), bottleneck_features_validation)

train_data = np.load(open('/content/gdrive/My Drive/Melspec/spec_vgg_bottleneck_features_train.npy', 'rb'))
train_labels = np.array([0] * (train_samples // 2) + [1] * (train_samples // 2))

validation_data = np.load(open('/content/gdrive/My Drive/Melspec/spec_vgg_bottleneck_features_validation.npy', 'rb'))
validation_labels = np.array([0] * (validation_samples // 2) + [1] * (validation_samples // 2))

# Small classifier trained on top of the cached features
model_top = Sequential()
model_top.add(Flatten(input_shape=train_data.shape[1:]))
model_top.add(Dense(256, activation='relu'))
model_top.add(Dropout(0.5))
model_top.add(Dense(1, activation='sigmoid'))

model_top.compile(optimizer='rmsprop', loss='binary_crossentropy', metrics=['accuracy'])

model_top.fit(train_data, train_labels,
        epochs=epochs,
        batch_size=batch_size,
        validation_data=(validation_data, validation_labels))
```

Found the answer: my labels were wrong.

I had read online that you should use shuffle=True when feeding the train generator, but the generator only shuffles the files; the hand-built label array is not reordered to match, so the classes no longer line up with the features, leading to wrong labels.
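A toy sketch of the bug (hypothetical file names; "normal"/"abnormal" stand in for my two classes): shuffling the files while keeping a hand-built, sorted-order label array misaligns features and labels.

```python
# The hand-built label array assumes all class-0 files come first.
files = ["normal_1", "normal_2", "abnormal_1", "abnormal_2"]
assumed_labels = [0, 0, 1, 1]  # labels built by hand, in sorted order

# With shuffle=True the generator yields the files in some shuffled
# order instead, e.g.:
shuffled = ["abnormal_2", "normal_1", "abnormal_1", "normal_2"]

# Pairing the i-th extracted feature with assumed_labels[i] now
# assigns the wrong class to some files.
def true_label(fname):
    return 0 if fname.startswith("normal") else 1

mismatches = sum(true_label(f) != lab for f, lab in zip(shuffled, assumed_labels))
print("wrong labels:", mismatches, "out of", len(files))  # → wrong labels: 2 out of 4
```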

I switched to shuffle=False, with class_mode=None.
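A pure-Python sketch of why this works (no Keras required; hypothetical file names): with shuffle=False, flow_from_directory walks the class sub-directories and their files in sorted order, so the i-th feature row from predict_generator corresponds to the i-th entry of the generator's own classes array.

```python
# Simulate flow_from_directory(..., shuffle=False, class_mode=None):
# class sub-directories and their files are visited in sorted order.
directory = {
    "abnormal": ["a1.png", "a2.png", "a3.png"],  # class index 0 ("abnormal" sorts first)
    "normal":   ["n1.png", "n2.png"],            # class index 1
}

visit_order = []  # the order in which predict_generator would see the images
classes = []      # what the generator's classes array would hold
for class_idx, class_name in enumerate(sorted(directory)):
    for fname in sorted(directory[class_name]):
        visit_order.append((class_name, fname))
        classes.append(class_idx)

# Without shuffling, feature row i belongs to visit_order[i], whose
# true class is exactly classes[i]: features and labels stay aligned.
aligned = all((name == "abnormal") == (idx == 0)
              for (name, _), idx in zip(visit_order, classes))
print(classes, aligned)  # → [0, 0, 0, 1, 1] True
```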

I also had to make sure the two classes contained the same number of files, and that the file counts were divisible by my batch size.
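The divisibility requirement matters because predict_generator is called with steps = samples // batch_size, so any remainder of images would be silently dropped while the hand-built label array still covers all of them. With the counts from the question, the division happens to be exact:

```python
batch_size = 32
train_samples = 30272
validation_samples = 7584

for n in (train_samples, validation_samples):
    steps = n // batch_size       # steps passed to predict_generator
    covered = steps * batch_size  # images actually predicted
    print(n, "->", steps, "steps,", n - covered, "images dropped")
# → 30272 -> 946 steps, 0 images dropped
# → 7584 -> 237 steps, 0 images dropped
```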


I hope this helps other beginners.

Hi, please try editing your question and posting it to the Stack Overflow community.