Python: Why does Keras give the error "Graph disconnected" when I want to split the whole network into two models?


I have an autoencoder in Keras, and I need to define a separate model for each part: my network has two outputs, and during testing I want two separate networks, one for each output. But when I do this, it produces the following error:

Traceback (most recent call last):

  File "", line 99, in <module>
    wext = Model(inputs=decoded_noise, outputs=pred_w)

  File "D:\software\Anaconda3\envs\py36\lib\site-packages\keras\legacy\interfaces.py", line 91, in wrapper
    return func(*args, **kwargs)

  File "D:\software\Anaconda3\envs\py36\lib\site-packages\keras\engine\network.py", line 93, in __init__
    self._init_graph_network(*args, **kwargs)

  File "D:\software\Anaconda3\envs\py36\lib\site-packages\keras\engine\network.py", line 231, in _init_graph_network
    self.inputs, self.outputs)

  File "D:\software\Anaconda3\envs\py36\lib\site-packages\keras\engine\network.py", line 1443, in _map_graph_network
    str(layers_with_complete_input))

ValueError: Graph disconnected: cannot obtain value for tensor Tensor("input_8:0", shape=(?, 28, 28, 1), dtype=float32) at layer "input_8". The following previous layers were accessed without issue: []

I want to have two networks during testing: one running from the encoder to the end of the decoder, and a second one for the w extraction part. What is the problem? Thank you.
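For context, Keras can only build a `Model` whose declared inputs are `Input` layers; a tensor produced by another layer cannot serve as a model input. A minimal sketch (a hypothetical two-layer toy network, not part of the code below) that reproduces the same error:

from keras.layers import Input, Dense
from keras.models import Model

a = Input((4,))
h = Dense(3)(a)     # intermediate tensor, not an Input layer
out = Dense(2)(h)

# Raises 'Graph disconnected: cannot obtain value for tensor ... input_1',
# because tracing back from `out` reaches the Input `a`, which is not
# among the declared inputs.
m = Model(inputs=h, outputs=out)

The full code: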

#-----------------------encoder------------------------------------------------
wtm=Input((28,28,1))
image = Input((28, 28, 1))
conv1 = Conv2D(64, (5, 5), activation='relu', padding='same', name='convl1e',dilation_rate=(2,2))(image)
conv2 = Conv2D(64, (5, 5), activation='relu', padding='same', name='convl2e',dilation_rate=(2,2))(conv1)
conv3 = Conv2D(64, (5, 5), activation='relu', padding='same', name='convl3e',dilation_rate=(2,2))(conv2)
BN=BatchNormalization()(conv3)
encoded =  Conv2D(1, (5, 5), activation='relu', padding='same',name='encoded_I',dilation_rate=(2,2))(BN)


# merge the w input (wtm) into the encoded image by element-wise addition
add_const = Kr.layers.Lambda(lambda x: x[0] + x[1])
encoded_merged = add_const([encoded,wtm])

#-----------------------decoder------------------------------------------------
deconv1 = Conv2D(64, (5, 5), activation='relu', padding='same', name='convl1d',dilation_rate=(2,2))(encoded_merged)
deconv2 = Conv2D(64, (5, 5), activation='relu', padding='same', name='convl2d',dilation_rate=(2,2))(deconv1)
deconv3 = Conv2D(64, (5, 5), activation='relu',padding='same', name='convl3d',dilation_rate=(2,2))(deconv2)
deconv4 = Conv2D(64, (5, 5), activation='relu',padding='same', name='convl4d',dilation_rate=(2,2))(deconv3)
BNd=BatchNormalization()(deconv3)

decoded = Conv2D(1, (5, 5), activation='sigmoid', padding='same', name='decoder_output',dilation_rate=(2,2))(BNd) 

model1=Model(inputs=[image,wtm],outputs=decoded)

decoded_noise = GaussianNoise(0.5)(decoded)

#----------------------w extraction------------------------------------
convw1 = Conv2D(64, (3,3), activation='relu', padding='same', name='conl1w',dilation_rate=(2,2))(decoded_noise)
convw2 = Conv2D(64, (3, 3), activation='relu', padding='same', name='convl2w',dilation_rate=(2,2))(convw1)
convw3 = Conv2D(64, (3, 3), activation='relu', padding='same', name='conl3w',dilation_rate=(2,2))(convw2)
convw4 = Conv2D(64, (3, 3), activation='relu', padding='same', name='conl4w',dilation_rate=(2,2))(convw3)
convw5 = Conv2D(64, (3, 3), activation='relu', padding='same', name='conl5w',dilation_rate=(2,2))(convw4)
convw6 = Conv2D(64, (3, 3), activation='relu', padding='same', name='conl6w',dilation_rate=(2,2))(convw5)
pred_w = Conv2D(1, (1, 1), activation='sigmoid', padding='same', name='reconstructed_W',dilation_rate=(2,2))(convw6)  
wext=Model(inputs=decoded_noise,outputs=pred_w)
final=Model(inputs=[image,wtm],outputs=[decoded,pred_w])
Modified code:

from keras.layers import Input, Concatenate, GaussianNoise,Cropping2D,Activation,Dropout,BatchNormalization,MaxPool2D,AveragePooling2D,ZeroPadding2D
from keras.layers import Conv2D, AtrousConv2D
from keras.models import Model
from keras.datasets import mnist
from keras.callbacks import TensorBoard
from keras import backend as K
from keras import layers
import matplotlib.pyplot as plt
import tensorflow as tf
import keras as Kr
from keras.optimizers import SGD,RMSprop,Adam
from keras.callbacks import ReduceLROnPlateau
from keras.callbacks import EarlyStopping
from keras.callbacks import ModelCheckpoint
import numpy as np
import pylab as pl
import matplotlib.cm as cm
import keract
from matplotlib import pyplot
from keras import optimizers
from keras import regularizers

from tensorflow.python.keras.layers import Lambda
# random 4x4 binary w patterns (49999 train / 9999 validation), placed in
# the top-left corner of zeroed 28x28 planes
w_expand=np.zeros((49999,28,28),dtype='float32')
wv_expand=np.zeros((9999,28,28),dtype='float32')
wt_random=np.random.randint(2, size=(49999,4,4))
wt_random=wt_random.astype(np.float32)
wv_random=np.random.randint(2, size=(9999,4,4))
wv_random=wv_random.astype(np.float32)
w_expand[:,:4,:4]=wt_random
wv_expand[:,:4,:4]=wv_random
x,y,z=w_expand.shape
w_expand=w_expand.reshape((x,y,z,1))
x,y,z=wv_expand.shape
wv_expand=wv_expand.reshape((x,y,z,1))

#-----------------building w test---------------------------------------------
w_test = np.random.randint(2,size=(1,4,4))
w_test=w_test.astype(np.float32)
wt_expand=np.zeros((1,28,28),dtype='float32')
wt_expand[:,0:4,0:4]=w_test
wt_expand=wt_expand.reshape((1,28,28,1))
#-----------------------encoder------------------------------------------------
#------------------------------------------------------------------------------
wtm=Input((28,28,1))
image = Input((28, 28, 1))
conv1 = Conv2D(64, (5, 5), activation='relu', padding='same', name='convl1e',dilation_rate=(2,2))(image)
conv2 = Conv2D(64, (5, 5), activation='relu', padding='same', name='convl2e',dilation_rate=(2,2))(conv1)
conv3 = Conv2D(64, (5, 5), activation='relu', padding='same', name='convl3e',dilation_rate=(2,2))(conv2)
BN=BatchNormalization()(conv3)
encoded =  Conv2D(1, (5, 5), activation='relu', padding='same',name='encoded_I',dilation_rate=(2,2))(BN)


add_const = Kr.layers.Lambda(lambda x: x[0] + x[1])
encoded_merged = add_const([encoded,wtm])

#-----------------------decoder------------------------------------------------
deconv1 = Conv2D(64, (5, 5), activation='relu', padding='same', name='convl1d',dilation_rate=(2,2))(encoded_merged)
deconv2 = Conv2D(64, (5, 5), activation='relu', padding='same', name='convl2d',dilation_rate=(2,2))(deconv1)
deconv3 = Conv2D(64, (5, 5), activation='relu',padding='same', name='convl3d',dilation_rate=(2,2))(deconv2)
deconv4 = Conv2D(64, (5, 5), activation='relu',padding='same', name='convl4d',dilation_rate=(2,2))(deconv3)
BNd=BatchNormalization()(deconv3)

decoded = Conv2D(1, (5, 5), activation='sigmoid', padding='same', name='decoder_output',dilation_rate=(2,2))(BNd) 

model1=Model(inputs=[image,wtm],outputs=decoded)
decoded_input=Input((28,28,1))

#----------------------w extraction------------------------------------
convw1 = Conv2D(64, (3,3), activation='relu', padding='same', name='conl1w',dilation_rate=(2,2))(decoded_input)
convw2 = Conv2D(64, (3, 3), activation='relu', padding='same', name='convl2w',dilation_rate=(2,2))(convw1)
convw3 = Conv2D(64, (3, 3), activation='relu', padding='same', name='conl3w',dilation_rate=(2,2))(convw2)
convw4 = Conv2D(64, (3, 3), activation='relu', padding='same', name='conl4w',dilation_rate=(2,2))(convw3)
convw5 = Conv2D(64, (3, 3), activation='relu', padding='same', name='conl5w',dilation_rate=(2,2))(convw4)
convw6 = Conv2D(64, (3, 3), activation='relu', padding='same', name='conl6w',dilation_rate=(2,2))(convw5)
pred_w = Conv2D(1, (1, 1), activation='sigmoid', padding='same', name='reconstructed_W',dilation_rate=(2,2))(convw6)  
decoded_noise = GaussianNoise(0.5)(decoded)
wext=Model(inputs=decoded_input, outputs=pred_w)
pred_w = wext(decoded_noise)

w_extraction=Model(inputs=[image,wtm],outputs=[decoded,pred_w])
#----------------------training the model-----------------------------------

(x_train, _), (x_test, _) = mnist.load_data()
x_validation=x_train[1:10000,:,:]
x_train=x_train[10001:60000,:,:]
#
x_train = x_train.astype('float32') / 255.
x_test = x_test.astype('float32') / 255.
x_validation = x_validation.astype('float32') / 255.
x_train = np.reshape(x_train, (len(x_train), 28, 28, 1))  # adapt this if using `channels_first` image data format
x_test = np.reshape(x_test, (len(x_test), 28, 28, 1))  # adapt this if using `channels_first` image data format
x_validation = np.reshape(x_validation, (len(x_validation), 28, 28, 1))

#---------------------compile and train the model------------------------------
opt=SGD(momentum=0.99,lr=0.0001)
w_extraction.compile(optimizer='adam', loss={'imageprim':'mse','wprimmain':'binary_crossentropy'}, loss_weights={'imageprim': 1.0, 'wprimmain': 1.0},metrics=['mae'])
es = EarlyStopping(monitor='val_loss', mode='min', verbose=1, patience=40)
#rlrp = ReduceLROnPlateau(monitor='val_loss', factor=0.1, patience=20, min_delta=1E-4, verbose=1)
mc = ModelCheckpoint('los4x4_con_tile_convolw_FBN_SigAct_SandPAttack.h5', monitor='val_loss', mode='min', verbose=1, save_best_only=True)
history=w_extraction.fit([x_train,w_expand], [x_train,w_expand],
          epochs=1,
          batch_size=32, 
          validation_data=([x_validation,wv_expand], [x_validation,wv_expand]),
          callbacks=[TensorBoard(log_dir='/home/jamalm8/tensorboardGNWLoss/', histogram_freq=0, write_graph=False),es,mc])
w_extraction.summary()
The error it produces:

Traceback (most recent call last):

  File "", line 113, in <module>
    w_extraction.compile(optimizer='adam', loss={'imageprim':'mse','wprimmain':'binary_crossentropy'}, loss_weights={'imageprim': 1.0, 'wprimmain': 1.0}, metrics=['mae'])

  File "D:\software\Anaconda3\envs\py36\lib\site-packages\keras\engine\training.py", line 119, in compile
    str(self.output_names))

ValueError: Unknown entry in loss dictionary: "imageprim". Only expected the following keys: ['decoder_output', 'model_29']


The problem is that `decoded_noise` is not an input layer, so you cannot use it as an input when defining the `wext` model. Instead, define a new input layer for your `wext` model:

#----------------------w extraction------------------------------------
# Here we define a new input to be used by the wext model
decoded_input = Input((28,28,1)) 
convw1 = Conv2D(64, (3,3), ...)(decoded_input)
convw2 = ...
...
pred_w = ...

wext=Model(inputs=decoded_input, outputs=pred_w)

# Final model: pass the gaussian noise through the wext model
decoded_noise = GaussianNoise(0.5)(decoded)
pred_w = wext(decoded_noise)

final=Model(inputs=[image, wtm], outputs=[decoded, pred_w])
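
With this wiring, training the combined `final` model also trains the two halves, because they reuse the same layer instances. At test time they can then be run separately; a sketch, reusing variables defined in the code above (`model1` from the question code, `x_test` and `wt_expand` from the modified code):

decoded_img = model1.predict([x_test[:1], wt_expand])  # encoder -> decoder half
w_recovered = wext.predict(decoded_img)                # w-extraction half alone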


I modified the code according to your suggestion, but it produces the error above. I know why this error occurs, but I'm not sure whether the model behaves the same as before, when I had not separated the networks. Also, is it possible to send the output of a model to a loss function? We usually send the output of a layer to the loss function?!

You should post that as a separate question, since it is unrelated to your original one. If you want to use multiple losses, you can have a look at this guide: . Short answer: you need to give the output layers a name (e.g. 'imageprim' and 'wprimmain') and then use those names when defining the losses. When fitting the model, you should also provide these names for the data.
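
To illustrate the naming advice with this network's actual output names: the first output already carries the layer name 'decoder_output', and the second key ('model_29' in the error) is the auto-generated name of the nested wext model, so giving that model an explicit name makes the loss keys stable. A sketch, reusing the variables from the modified code; the name 'watermark_extractor' is an arbitrary choice:

# name the nested model so its output key in the loss dict is predictable
wext = Model(inputs=decoded_input, outputs=pred_w, name='watermark_extractor')
pred_w = wext(decoded_noise)

w_extraction = Model(inputs=[image, wtm], outputs=[decoded, pred_w])
w_extraction.compile(optimizer='adam',
                     loss={'decoder_output': 'mse',
                           'watermark_extractor': 'binary_crossentropy'},
                     loss_weights={'decoder_output': 1.0,
                                   'watermark_extractor': 1.0},
                     metrics=['mae'])

# targets can then also be passed as a dict keyed by the same names
history = w_extraction.fit([x_train, w_expand],
                           {'decoder_output': x_train,
                            'watermark_extractor': w_expand},
                           epochs=1, batch_size=32)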