
Keras unsupervised convolutional autoencoder always gives blank output


I want to train an autoencoder on unlabeled images. I have about 300 training images and 100 validation images, but when I feed an unseen image to the trained autoencoder, the output is completely blank.

import os
import cv2
import numpy as np

train_images = os.listdir('./Data/train')
val_images = os.listdir('./Data/val')

X_train = []
X_val = []

# Load each training image, convert to grayscale and resize to 224x224
for i in range(len(train_images)):
    img = cv2.imread('./Data/train/'+train_images[i])
    img = cv2.cvtColor(img,cv2.COLOR_BGR2GRAY)
    resized = cv2.resize(img, (224,224), interpolation = cv2.INTER_AREA)
    X_train.append(resized)

X_train = np.asarray(X_train)
X_train = X_train.astype('float32')/255.
X_train = np.reshape(X_train, (len(X_train), 224, 224, 1))

# Same preprocessing for the validation images
for i in range(len(val_images)):
    img = cv2.imread('./Data/val/'+val_images[i])
    img = cv2.cvtColor(img,cv2.COLOR_BGR2GRAY)
    resized = cv2.resize(img, (224,224), interpolation = cv2.INTER_AREA)
    X_val.append(resized)

X_val = np.asarray(X_val)
X_val = X_val.astype('float32')/255.
X_val = np.reshape(X_val, (len(X_val), 224, 224, 1))

print(len(X_train))
print(len(X_val))
Here X_train.shape and X_val.shape are (300, 224, 224, 1) and (100, 224, 224, 1) respectively.

Here is my upconv_concat function:

def upconv_concat(bottom_a, bottom_b, n_filter, pool_size, stride, padding='VALID'):
    # Upsample bottom_a with a transposed convolution, then concatenate the result
    # with the skip-connection tensor bottom_b along the channel axis.
    up_conv = Conv2DTranspose(filters=n_filter, kernel_size=[pool_size, pool_size],
                              strides=stride, padding=padding)(bottom_a)
    return Concatenate(axis=-1)([up_conv, bottom_b])
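For reference, with pool_size=2 and stride=2 the transposed convolution doubles the spatial dimensions before the concatenation. A standalone shape check (a sketch using placeholder inputs a and b, not tensors from the model below):

a = Input(shape=(56, 56, 1024))
b = Input(shape=(112, 112, 512))
out = upconv_concat(a, b, n_filter=512, pool_size=2, stride=2)
print(out.shape)  # (None, 112, 112, 1024), matching concatenate_1 in the model summary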
Here are some parameters:

input_img = Input(shape=(224, 224, 1))
droprate=0.25
num_classes = 1
Here is my model:

# Encoder
conv_1_1 = Conv2D(filters = 64, kernel_size = 3, activation='relu', padding='same')(input_img)
conv_1_1_bn = BatchNormalization()(conv_1_1)
conv_1_1_do = Dropout(droprate)(conv_1_1_bn)

pool_1 = MaxPooling2D(pool_size= 2, strides = 2)(conv_1_1_do)

conv_4_1 = SeparableConv2D(filters = 512, kernel_size = 3, activation='relu', padding='same')(pool_1)
conv_4_1_bn = BatchNormalization()(conv_4_1)
conv_4_1_do = Dropout(droprate)(conv_4_1_bn)

pool_4 = MaxPooling2D(pool_size= 2, strides = 2)(conv_4_1_do)

conv_5_1 = SeparableConv2D(filters = 1024, kernel_size = 3, activation='relu', padding='same')(pool_4)
conv_5_1_bn = BatchNormalization()(conv_5_1)
conv_5_1_do = Dropout(droprate)(conv_5_1_bn)

# Decoder: upsample and concatenate with the corresponding encoder features
upconv_1 = upconv_concat(conv_5_1_do, conv_4_1_do, n_filter=512, pool_size=2, stride=2)

conv_6_1 = SeparableConv2D(filters = 512, kernel_size = 3, activation='relu', padding='same')(upconv_1)
conv_6_1_bn = BatchNormalization()(conv_6_1)
conv_6_1_do = Dropout(droprate)(conv_6_1_bn)


upconv_2 = upconv_concat(conv_6_1_do, conv_1_1_do, n_filter=64, pool_size=2, stride=2) 

conv_9_1 = SeparableConv2D(filters = 64, kernel_size = 3, activation='relu', padding='same')(upconv_2)
conv_9_1_bn = BatchNormalization()(conv_9_1)
conv_9_1_do = Dropout(droprate)(conv_9_1_bn)


ae_output = Conv2D(num_classes, kernel_size=1, strides = (1,1), activation="softmax")(conv_9_1_do)
Here is the training part:

ae_model = Model(input_img, ae_output)
ae_model.compile(optimizer='adadelta', loss='binary_crossentropy')
ae_model.fit(X_train, X_train,
                epochs=5,
                batch_size=16,
                shuffle=True,
                validation_data=(X_val, X_val))
In case anyone needs it, here is the model summary:

__________________________________________________________________________________________________
Layer (type)                    Output Shape         Param #     Connected to                     
==================================================================================================
input_1 (InputLayer)            (None, 224, 224, 1)  0                                            
__________________________________________________________________________________________________
conv2d_1 (Conv2D)               (None, 224, 224, 64) 640         input_1[0][0]                    
__________________________________________________________________________________________________
batch_normalization_1 (BatchNor (None, 224, 224, 64) 256         conv2d_1[0][0]                   
__________________________________________________________________________________________________
dropout_1 (Dropout)             (None, 224, 224, 64) 0           batch_normalization_1[0][0]      
__________________________________________________________________________________________________
max_pooling2d_1 (MaxPooling2D)  (None, 112, 112, 64) 0           dropout_1[0][0]                  
__________________________________________________________________________________________________
separable_conv2d_1 (SeparableCo (None, 112, 112, 512 33856       max_pooling2d_1[0][0]            
__________________________________________________________________________________________________
batch_normalization_2 (BatchNor (None, 112, 112, 512 2048        separable_conv2d_1[0][0]         
__________________________________________________________________________________________________
dropout_2 (Dropout)             (None, 112, 112, 512 0           batch_normalization_2[0][0]      
__________________________________________________________________________________________________
max_pooling2d_2 (MaxPooling2D)  (None, 56, 56, 512)  0           dropout_2[0][0]                  
__________________________________________________________________________________________________
separable_conv2d_2 (SeparableCo (None, 56, 56, 1024) 529920      max_pooling2d_2[0][0]            
__________________________________________________________________________________________________
batch_normalization_3 (BatchNor (None, 56, 56, 1024) 4096        separable_conv2d_2[0][0]         
__________________________________________________________________________________________________
dropout_3 (Dropout)             (None, 56, 56, 1024) 0           batch_normalization_3[0][0]      
__________________________________________________________________________________________________
conv2d_transpose_1 (Conv2DTrans (None, 112, 112, 512 2097664     dropout_3[0][0]                  
__________________________________________________________________________________________________
concatenate_1 (Concatenate)     (None, 112, 112, 102 0           conv2d_transpose_1[0][0]         
                                                                 dropout_2[0][0]                  
__________________________________________________________________________________________________
separable_conv2d_3 (SeparableCo (None, 112, 112, 512 534016      concatenate_1[0][0]              
__________________________________________________________________________________________________
batch_normalization_4 (BatchNor (None, 112, 112, 512 2048        separable_conv2d_3[0][0]         
__________________________________________________________________________________________________
dropout_4 (Dropout)             (None, 112, 112, 512 0           batch_normalization_4[0][0]      
__________________________________________________________________________________________________
conv2d_transpose_2 (Conv2DTrans (None, 224, 224, 64) 131136      dropout_4[0][0]                  
__________________________________________________________________________________________________
concatenate_2 (Concatenate)     (None, 224, 224, 128 0           conv2d_transpose_2[0][0]         
                                                                 dropout_1[0][0]                  
__________________________________________________________________________________________________
separable_conv2d_4 (SeparableCo (None, 224, 224, 64) 9408        concatenate_2[0][0]              
__________________________________________________________________________________________________
batch_normalization_5 (BatchNor (None, 224, 224, 64) 256         separable_conv2d_4[0][0]         
__________________________________________________________________________________________________
dropout_5 (Dropout)             (None, 224, 224, 64) 0           batch_normalization_5[0][0]      
__________________________________________________________________________________________________
conv2d_2 (Conv2D)               (None, 224, 224, 1)  65          dropout_5[0][0]                  
==================================================================================================
Total params: 3,345,409
Trainable params: 3,341,057
Non-trainable params: 4,352
__________________________________________________________________________________________________
I have gone through the images in X_train thoroughly to check whether I was accidentally feeding in blank images. I am not; only valid data is being sent.

The problem is that when I try to test the model, it gives a completely blank image.

# Preprocess a single unseen test image
img = cv2.imread('./test/a184.jpg')
img = cv2.cvtColor(img,cv2.COLOR_BGR2GRAY)
resized = cv2.resize(img, (224,224), interpolation = cv2.INTER_AREA)
resized = resized/255
resized = resized[:, :, np.newaxis]
resized = resized[np.newaxis, :, :] 
Now resized has shape (1, 224, 224, 1). Feeding it to the model gives me this image: [the output shown is a completely blank image], but the values in the image variable are all 1s.
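The prediction call itself is not shown in the question; it presumably looks something like this (a sketch that assumes the trained ae_model and the resized array above, with image as the variable the question refers to; the output path is only an example):

reconstruction = ae_model.predict(resized)   # shape (1, 224, 224, 1)
image = reconstruction[0, :, :, 0]           # 2D array of values in [0, 1]
print(image.min(), image.max())              # both 1.0 here
cv2.imwrite('./test/reconstruction.png', (image * 255).astype('uint8'))  # example path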

I am using tf.keras.


Please help me with this. I cannot figure out what the problem is or how to debug it.

The blank image comes from the activation you put on the last conv layer.
Here you want to predict a value between 0 and 1 for each pixel, so you need a sigmoid activation, not softmax. With num_classes = 1 the softmax is computed over a single channel, so it outputs exactly 1 for every pixel, which is why the reconstruction is uniformly blank.
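A quick way to see this numerically (a minimal check, not part of the original answer):

import numpy as np

# Softmax over the channel axis when there is only one channel:
# exp(x) / exp(x) == 1 for every pixel, so the output is all ones.
x = np.random.randn(4, 4, 1)
softmax = np.exp(x) / np.exp(x).sum(axis=-1, keepdims=True)
print(softmax.min(), softmax.max())  # both 1.0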

Try the following instead:

ae_output = Conv2D(num_classes, kernel_size=1, strides = (1,1), activation="sigmoid")(conv_9_1_do)

Comments:

The error was caused by copy-and-paste. If you had a single function doing the image preprocessing, instead of several copies of that code with the bug in one of them, you would not have had this problem.

OK, thanks, I will try it.

It doesn't work.

Thank you for your help. I can now see the reconstructed image. Any ideas on how to use this as a segmentation model, e.g. predicting for each pixel the probability of belonging to class 0, 1 or 2 (3 classes)? It is more of a semantic segmentation problem, which is why I put the softmax layer there. For that I introduced the num_classes variable, but I am a bit unsure how to include it in the model.

I have never implemented a segmentation model, so I cannot really help you with that, sorry :/
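The single preprocessing function suggested in the first comment could look roughly like this (a sketch; load_folder is a hypothetical name, not code from the question or answer):

def load_folder(folder):
    # Read every image in the folder, convert to grayscale, resize to 224x224,
    # scale to [0, 1] and add the channel dimension -- the same steps for
    # training, validation and test data, written only once.
    images = []
    for name in os.listdir(folder):
        img = cv2.imread(os.path.join(folder, name))
        img = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
        img = cv2.resize(img, (224, 224), interpolation=cv2.INTER_AREA)
        images.append(img)
    images = np.asarray(images).astype('float32') / 255.
    return np.reshape(images, (len(images), 224, 224, 1))

X_train = load_folder('./Data/train')
X_val = load_folder('./Data/val')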
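For the 3-class segmentation follow-up raised in the comments, which the answer leaves open, a common pattern is a softmax over three output channels trained with a sparse categorical loss. This is only a hedged sketch on top of the model above; y_train and y_val are hypothetical integer label masks that do not exist in the original question:

num_classes = 3

# Per-pixel 3-class head on top of the existing decoder output conv_9_1_do
seg_output = Conv2D(num_classes, kernel_size=1, activation='softmax')(conv_9_1_do)
seg_model = Model(input_img, seg_output)
seg_model.compile(optimizer='adadelta', loss='sparse_categorical_crossentropy')

# y_train / y_val would be integer masks of shape (N, 224, 224, 1) with values
# in {0, 1, 2}; they are not part of the original question.
# seg_model.fit(X_train, y_train, validation_data=(X_val, y_val),
#               epochs=5, batch_size=16, shuffle=True)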