Python training accuracy and validation accuracy remain constant from the first epoch


This is my first time posting a question, so please excuse it if it is poorly written or structured.


The dataset consists of images in TIF format, which is why I am running a 3D CNN.

The images are simulated X-ray images, and the dataset has 2 classes: normal and abnormal. Their labels are '0' for normal and '1' for abnormal.

My folder tree looks like this:

  • Train

    • Normal

    • Abnormal

  • Validation

    • Normal
    • Abnormal

  • What I did was to initialize 2 arrays: train and y_train.

    I ran a FOR loop that imports the normal images and appends them to train, and for each appended image appends a '0' to y_train. So if I have 10 normal images in train, I also have 10 '0's in y_train.

    This is repeated for the abnormal images, which are appended to train while '1's are appended to y_train. This means train consists of the normal images followed by the abnormal images, and y_train consists of '0's followed by '1's.

    Another FOR loop does the same for the validation folder, where my arrays are test and y_test.
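    The loading loop described above can be sketched as follows. This is only an illustration of the append logic and label ordering, not the original code: the tiny random stand-in volumes replace the real 128×128×128 TIF stacks, and the helper name `load_split` is made up.

    ```python
    import numpy as np

    def load_split(normal_images, abnormal_images):
        """Build (x, y) the way described above: normal volumes first with
        label 0, then abnormal volumes with label 1."""
        x, y = [], []
        for img in normal_images:      # e.g. volumes read from Train/Normal
            x.append(img)
            y.append(0)
        for img in abnormal_images:    # e.g. volumes read from Train/Abnormal
            x.append(img)
            y.append(1)
        return np.asarray(x), np.asarray(y)

    # Stand-in data: tiny "volumes" instead of 128x128x128 TIF stacks.
    normal = [np.zeros((4, 4, 4)) for _ in range(3)]
    abnormal = [np.ones((4, 4, 4)) for _ in range(2)]
    x_train, y_train = load_split(normal, abnormal)
    print(x_train.shape)  # (5, 4, 4, 4)
    print(y_train)        # [0 0 0 1 1]
    ```

    Note that this ordering leaves all '0's before all '1's, so shuffling (or letting `model.fit` shuffle, which it does by default) matters before training.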


    This is my neural network code:

    from keras.models import Sequential
    from keras.layers import Conv3D, MaxPooling3D, Dropout, Flatten, Dense

    def vgg1():
        model = Sequential()
        model.add(Conv3D(16, (3, 3, 3), activation="relu", padding="same", name="block1_conv1", input_shape=(128, 128, 128, 1), data_format="channels_last")) # 64
        model.add(Conv3D(16, (3, 3, 3), activation="relu", padding="same", name="block1_conv2", data_format="channels_last")) # 64
        model.add(MaxPooling3D((2, 2, 2), strides=(2, 2, 2), padding='same', name='block1_pool'))
        model.add(Dropout(0.5))

        model.add(Conv3D(32, (3, 3, 3), activation="relu", padding="same", name="block2_conv1", data_format="channels_last")) # 128
        model.add(Conv3D(32, (3, 3, 3), activation="relu", padding="same", name="block2_conv2", data_format="channels_last")) # 128
        model.add(MaxPooling3D((2, 2, 2), strides=(2, 2, 2), padding='same', name='block2_pool'))
        model.add(Dropout(0.5))

        model.add(Conv3D(64, (3, 3, 3), activation="relu", padding="same", name="block3_conv1", data_format="channels_last")) # 256
        model.add(Conv3D(64, (3, 3, 3), activation="relu", padding="same", name="block3_conv2", data_format="channels_last")) # 256
        model.add(Conv3D(64, (3, 3, 3), activation="relu", padding="same", name="block3_conv3", data_format="channels_last")) # 256
        model.add(MaxPooling3D((2, 2, 2), strides=(2, 2, 2), padding='same', name='block3_pool'))
        model.add(Dropout(0.5))

        model.add(Conv3D(128, (3, 3, 3), activation="relu", padding="same", name="block4_conv1", data_format="channels_last")) # 512
        model.add(Conv3D(128, (3, 3, 3), activation="relu", padding="same", name="block4_conv2", data_format="channels_last")) # 512
        model.add(Conv3D(128, (3, 3, 3), activation="relu", padding="same", name="block4_conv3", data_format="channels_last")) # 512
        model.add(MaxPooling3D((2, 2, 2), strides=(2, 2, 2), padding='same', name='block4_pool'))
        model.add(Dropout(0.5))

        model.add(Conv3D(128, (3, 3, 3), activation="relu", padding="same", name="block5_conv1", data_format="channels_last")) # 512
        model.add(Conv3D(128, (3, 3, 3), activation="relu", padding="same", name="block5_conv2", data_format="channels_last")) # 512
        model.add(Conv3D(128, (3, 3, 3), activation="relu", padding="same", name="block5_conv3", data_format="channels_last")) # 512
        model.add(MaxPooling3D((2, 2, 2), strides=(2, 2, 2), padding='same', name='block5_pool'))
        model.add(Dropout(0.5))

        model.add(Flatten(name='flatten'))
        model.add(Dense(4096, activation='relu', name='fc1'))

        model.add(Dense(4096, activation='relu', name='fc2'))

        model.add(Dense(2, activation='softmax', name='predictions'))
        model.summary()
        return model
    
    

    The following code initializes x_train, y_train, x_test, and y_test, and compiles the model with model.compile.

    I also converted my labels to one-hot encoding:

    from keras.utils import to_categorical
    
    
    model = vgg1()
    
    model.compile(
      loss='categorical_crossentropy', 
      optimizer='adam',
      metrics=['accuracy']
    )
    
    x_train = np.load('/content/drive/My Drive/3D Dataset v2/x_train.npy')
    y_train = np.load('/content/drive/My Drive/3D Dataset v2/y_train.npy')
    y_train = to_categorical(y_train)
    x_test = np.load('/content/drive/My Drive/3D Dataset v2/x_test.npy')
    y_test = np.load('/content/drive/My Drive/3D Dataset v2/y_test.npy')
    y_test = to_categorical(y_test)
    
    x_train = x_train.astype('float32') / 255.
    x_test = x_test.astype('float32') / 255.
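    What `to_categorical` produces for the integer labels above can be checked with a plain-NumPy equivalent ('0' becomes `[1, 0]`, '1' becomes `[0, 1]`). This sketch mirrors the output shape, not the Keras implementation:

    ```python
    import numpy as np

    def one_hot(labels, num_classes):
        # Row i of the identity matrix is the one-hot vector for class i.
        return np.eye(num_classes)[labels]

    y = np.array([0, 0, 1, 1])
    print(one_hot(y, 2))
    # [[1. 0.]
    #  [1. 0.]
    #  [0. 1.]
    #  [0. 1.]]
    ```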
    

    This is the problem I want to highlight: the training accuracy and validation accuracy stay constant.

    Train on 127 samples, validate on 31 samples
    Epoch 1/25
    127/127 [==============================] - 1700s 13s/step - loss: 1.0030 - accuracy: 0.7480 - val_loss: 0.5842 - val_accuracy: 0.7419
    Epoch 2/25
    127/127 [==============================] - 1708s 13s/step - loss: 0.5813 - accuracy: 0.7480 - val_loss: 0.5728 - val_accuracy: 0.7419
    Epoch 3/25
    127/127 [==============================] - 1693s 13s/step - loss: 0.5758 - accuracy: 0.7480 - val_loss: 0.5720 - val_accuracy: 0.7419
    Epoch 4/25
    127/127 [==============================] - 1675s 13s/step - loss: 0.5697 - accuracy: 0.7480 - val_loss: 0.5711 - val_accuracy: 0.7419
    Epoch 5/25
    127/127 [==============================] - 1664s 13s/step - loss: 0.5691 - accuracy: 0.7480 - val_loss: 0.5785 - val_accuracy: 0.7419
    Epoch 6/25
    127/127 [==============================] - 1666s 13s/step - loss: 0.5716 - accuracy: 0.7480 - val_loss: 0.5710 - val_accuracy: 0.7419
    Epoch 7/25
    127/127 [==============================] - 1676s 13s/step - loss: 0.5702 - accuracy: 0.7480 - val_loss: 0.5718 - val_accuracy: 0.7419
    Epoch 8/25
    127/127 [==============================] - 1664s 13s/step - loss: 0.5775 - accuracy: 0.7480 - val_loss: 0.5718 - val_accuracy: 0.7419
    Epoch 9/25
    127/127 [==============================] - 1660s 13s/step - loss: 0.5753 - accuracy: 0.7480 - val_loss: 0.5711 - val_accuracy: 0.7419
    Epoch 10/25
    127/127 [==============================] - 1681s 13s/step - loss: 0.5756 - accuracy: 0.7480 - val_loss: 0.5714 - val_accuracy: 0.7419
    Epoch 11/25
    127/127 [==============================] - 1679s 13s/step - loss: 0.5675 - accuracy: 0.7480 - val_loss: 0.5710 - val_accuracy: 0.7419
    Epoch 12/25
    127/127 [==============================] - 1681s 13s/step - loss: 0.5779 - accuracy: 0.7480 - val_loss: 0.5741 - val_accuracy: 0.7419
    Epoch 13/25
    127/127 [==============================] - 1682s 13s/step - loss: 0.5763 - accuracy: 0.7480 - val_loss: 0.5723 - val_accuracy: 0.7419
    Epoch 14/25
    127/127 [==============================] - 1685s 13s/step - loss: 0.5732 - accuracy: 0.7480 - val_loss: 0.5714 - val_accuracy: 0.7419
    Epoch 15/25
    127/127 [==============================] - 1685s 13s/step - loss: 0.5701 - accuracy: 0.7480 - val_loss: 0.5710 - val_accuracy: 0.7419
    Epoch 16/25
    127/127 [==============================] - 1678s 13s/step - loss: 0.5704 - accuracy: 0.7480 - val_loss: 0.5733 - val_accuracy: 0.7419
    Epoch 17/25
    127/127 [==============================] - 1663s 13s/step - loss: 0.5692 - accuracy: 0.7480 - val_loss: 0.5710 - val_accuracy: 0.7419
    Epoch 18/25
    127/127 [==============================] - 1657s 13s/step - loss: 0.5731 - accuracy: 0.7480 - val_loss: 0.5717 - val_accuracy: 0.7419
    Epoch 19/25
    127/127 [==============================] - 1674s 13s/step - loss: 0.5708 - accuracy: 0.7480 - val_loss: 0.5712 - val_accuracy: 0.7419
    Epoch 20/25
    127/127 [==============================] - 1666s 13s/step - loss: 0.5795 - accuracy: 0.7480 - val_loss: 0.5730 - val_accuracy: 0.7419
    Epoch 21/25
    127/127 [==============================] - 1671s 13s/step - loss: 0.5635 - accuracy: 0.7480 - val_loss: 0.5753 - val_accuracy: 0.7419
    Epoch 22/25
    127/127 [==============================] - 1672s 13s/step - loss: 0.5713 - accuracy: 0.7480 - val_loss: 0.5718 - val_accuracy: 0.7419
    Epoch 23/25
    127/127 [==============================] - 1672s 13s/step - loss: 0.5666 - accuracy: 0.7480 - val_loss: 0.5711 - val_accuracy: 0.7419
    Epoch 24/25
    127/127 [==============================] - 1669s 13s/step - loss: 0.5695 - accuracy: 0.7480 - val_loss: 0.5724 - val_accuracy: 0.7419
    Epoch 25/25
    127/127 [==============================] - 1663s 13s/step - loss: 0.5675 - accuracy: 0.7480 - val_loss: 0.5721 - val_accuracy: 0.7419
    


    What is wrong, and what can I do to correct it? I don't believe a constant accuracy is desirable.

    First, adjust the learning rate from 0.1 down to 0.0001. Also, your sample size is very small; you will have to increase it somehow. — I have reduced the learning rate to 0.0001, but the accuracy stays the same. I will also try using data augmentation.
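    Why a smaller step size can make the difference between diverging and converging can be seen on a toy problem. This is plain gradient descent on a badly scaled quadratic, not the Adam optimizer the question actually uses, and the scale factor is invented for illustration:

    ```python
    def descend(lr, scale, steps=200, w0=1.0):
        """Minimize f(w) = scale * w**2 by gradient descent."""
        w = w0
        for _ in range(steps):
            w -= lr * (2 * scale * w)  # gradient of scale * w**2
        return w

    # With scale=100, a step size of 0.1 multiplies w by -19 every step,
    # so |w| explodes; 1e-4 shrinks w by 2% per step and converges.
    print(abs(descend(0.1, 100.0)))   # huge
    print(abs(descend(1e-4, 100.0)))  # near 0
    ```

    The same loss landscape is solvable or unsolvable depending only on the step size, which is why sweeping the learning rate (e.g. 1e-3, 1e-4, 1e-5) is usually the first thing to try when the loss plateaus immediately.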