
Python: Error when checking target: expected lambda_1 to have 1 dimensions, but got array with shape (60000, 10)

Tags: python, tensorflow, keras, tensor

I'm trying to create an invertible network in which, on the backward pass, the weight matrix is the transpose of the weight matrix used in the forward pass. So I defined a custom layer:

from keras import backend as K
from keras.layers import Dense


class Backwardlayer(Dense):
    def __init__(self, output_dim, b_layer, activation=None, use_bias=True,
                 kernel_initializer='glorot_uniform', bias_initializer='zeros',
                 kernel_regularizer=None, bias_regularizer=None,
                 activity_regularizer=None, kernel_constraint=None,
                 bias_constraint=None, **kwargs):
        self.output_dim = output_dim
        self.b_layer = b_layer  # the forward layer whose kernel this layer reuses

        super(Backwardlayer, self).__init__(output_dim, **kwargs)

    def build(self, input_shape):
        assert len(input_shape) >= 2
        input_dim = input_shape[-1]

        # Tie the weights: use the transpose of the forward layer's kernel.
        # The inherited Dense.call() will then multiply inputs by this kernel,
        # so b_layer must already be built (i.e. have a .kernel) at this point.
        self.kernel = K.transpose(self.b_layer.kernel)

        if self.use_bias:
            self.bias = self.add_weight(shape=(self.output_dim,),
                                        initializer=self.bias_initializer,
                                        name='bias',
                                        regularizer=self.bias_regularizer,
                                        constraint=self.bias_constraint)
        else:
            self.bias = None
        self.built = True

def direction_cosine(x):
    return K.sqrt(K.sum(x, axis=-1, keepdims=None))
def abs(x):
    return K.abs(x)
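
The snippet never shows where direction_cosine enters the graph, but the lambda_1 in the error message and the normalised_output used in the Model call below suggest a Lambda layer roughly like this (my guess at the missing line, not code from the question):

normalised_output = Lambda(direction_cosine)(encoder_layer_5)

Since K.sum is called without keepdims, direction_cosine collapses the last axis, so that output has shape (batch,). This is why Keras expects a 1-dimensional target for lambda_1, while y_train has shape (60000, 10).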




input_img = Input(shape=(784,))
layer_1 = Dense(512, activation=abs)
layer_2 = Dense(512, activation=abs)
layer_3 = Dense(256, activation=abs)
layer_4 = Dense(128, activation=abs)
layer_5 = Dense(10, activation=abs)

encoder_layer_1 = layer_1(input_img)
encoder_layer_2 = layer_2(encoder_layer_1)
encoder_layer_3 = layer_3(encoder_layer_2)
encoder_layer_4 = layer_4(encoder_layer_3)
encoder_layer_5 = layer_5(encoder_layer_4)

decoder_layer_1 = Backwardlayer(128, b_layer=encoder_layer_5, activation=abs, name='dl1')(encoder_layer_5)
decoder_layer_2 = Backwardlayer(256, b_layer=layer_4, activation=abs, name='dl2')(decoder_layer_1)
decoder_layer_3 = Backwardlayer(512, b_layer=layer_3, activation=abs, name='dl3')(decoder_layer_2)
decoder_layer_4 = Backwardlayer(512, b_layer=layer_2, activation=abs, name='dl4')(decoder_layer_3)
reconstructed_img = Backwardlayer(784, b_layer=layer_1, activation=abs, name='dl5')
rms = RMSprop()
(x_train,y_train), (x_test,y_test) = mnist.load_data()

x_train = x_train.reshape(-1,784)
x_test = x_test.reshape(-1,784)  

x_train = x_train.astype('float32')
x_test = x_test.astype('float32')
x_train /= 255
x_test /= 255
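
The label preprocessing is also not shown, but the (60000, 10) target shape in the error implies the labels were one-hot encoded somewhere, presumably along these lines (again, a guess at the missing step):

y_train = to_categorical(y_train, 10)   # from keras.utils
y_test = to_categorical(y_test, 10)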

AANN = Model(input=input_img, output=[reconstructed_img,normalised_output])

AANN.summary()

AANN.compile(optimizer=rms,loss=['mse','categorical_crossentropy'],loss_weights=[1,1])

history = AANN.fit(x_train,[x_train,y_train],epochs=3,batch_size=128,verbose=2,validation_data=(x_test,y_test))
And here is the error: Error when checking target: expected lambda_1 to have 1 dimensions, but got array with shape (60000, 10)

Traceback (most recent call last):

  File "AANN.py", line 95, in <module>
    history = AANN.fit(x_train, [x_train, y_train], epochs=3, batch_size=128, verbose=2, validation_data=(x_test, y_test))
  File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/keras/engine/training.py", line 955, in fit
    batch_size=batch_size)
  File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/keras/engine/training.py", line 792, in _standardize_user_data
    exception_prefix='target')
  File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/keras/engine/training_utils.py", line 126, in standardize_input_data
    'with shape ' + str(data_shape))
ValueError: Error when checking target: expected lambda_1 to have 1 dimensions, but got array with shape (60000, 10)

The error is self-explanatory: "a tensor has no kernel".

Layers have kernels.

This is not true (the names suggest layers, but these variables hold the layers' output tensors):

encoder_layer_1 = layer_1(input_img)
encoder_layer_2 = layer_2(encoder_layer_1)
encoder_layer_3 = layer_3(encoder_layer_2)
encoder_layer_4 = layer_4(encoder_layer_3)
encoder_layer_5 = layer_5(encoder_layer_4)
This is true:

layer_1_output_tensor = layer_1(input_img)
layer_2_output_tensor = layer_2(layer_1_output_tensor)
layer_3_output_tensor = layer_3(layer_2_output_tensor)
layer_4_output_tensor = layer_4(layer_3_output_tensor)
layer_5_output_tensor = layer_5(layer_4_output_tensor) 
So what you need is:

decoder_layer_output_1 = Backwardlayer(128,b_layer=layer_5,activation=abs,name='dl1')(layer_5_output_TENSOR)
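
Putting those two points together, the decoder stack would presumably be rebuilt so that every b_layer is a layer object and the final Backwardlayer is actually called on the previous tensor. A sketch of that reading of the fix (my extrapolation, not code from the answer):

decoder_layer_1 = Backwardlayer(128, b_layer=layer_5, activation=abs, name='dl1')(layer_5_output_tensor)
decoder_layer_2 = Backwardlayer(256, b_layer=layer_4, activation=abs, name='dl2')(decoder_layer_1)
decoder_layer_3 = Backwardlayer(512, b_layer=layer_3, activation=abs, name='dl3')(decoder_layer_2)
decoder_layer_4 = Backwardlayer(512, b_layer=layer_2, activation=abs, name='dl4')(decoder_layer_3)
reconstructed_img = Backwardlayer(784, b_layer=layer_1, activation=abs, name='dl5')(decoder_layer_4)  # note the trailing call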

Honestly, though, I don't think your approach will work. What you need is "matrix division", not matrix multiplication.


Maybe it would work if you managed to use kernel^(-1) instead of the transpose, and subtracted the bias before applying it.
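
For what it's worth, a minimal sketch of that idea, assuming the TensorFlow backend and a Moore-Penrose pseudo-inverse (tf.linalg.pinv) since the kernels here are not square; the layer name and structure are mine, not the answerer's:

import tensorflow as tf
from keras.layers import Layer

class InverseDenseLayer(Layer):
    """Hypothetical layer that approximately undoes b_layer's affine map."""
    def __init__(self, b_layer, **kwargs):
        self.b_layer = b_layer
        super(InverseDenseLayer, self).__init__(**kwargs)

    def call(self, inputs):
        x = inputs
        # Subtract the forward bias before undoing the matrix multiplication.
        if self.b_layer.use_bias:
            x = x - self.b_layer.bias
        # Multiplying by pinv(W) approximately recovers the forward layer's input.
        return tf.matmul(x, tf.linalg.pinv(self.b_layer.kernel))

    def compute_output_shape(self, input_shape):
        return (input_shape[0], int(self.b_layer.kernel.shape[0]))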

Could you have a look at my other error? I've posted an update. Thanks.

The shape of your data does not match the output shape of your model, but you are ignoring the relevant part (y_train and normalised_output). Compare the output shapes shown by model.summary() with x_train.shape and y_train.shape.
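
A quick way to make that comparison before calling fit, for example (hypothetical helper lines, not from the thread):

AANN.summary()
print('x_train:', x_train.shape)        # e.g. (60000, 784)
print('y_train:', y_train.shape)        # e.g. (60000, 10)
for t in AANN.outputs:                  # the model's output tensors, in order
    print(t.name, K.int_shape(t))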