Dimension mismatch in Keras during model.fit


I assembled a VAE with dense neural networks in Keras. During model.fit I get a dimension mismatch, but I'm not sure what is causing the code to fail. Here is my code:

from keras.layers import Lambda, Input, Dense
from keras.models import Model
from keras.datasets import mnist
from keras.losses import mse, binary_crossentropy
from keras.utils import plot_model
from keras import backend as K
import keras

import numpy as np
import matplotlib.pyplot as plt
import argparse
import os

(x_train, y_train), (x_test, y_test) = mnist.load_data()

image_size = x_train.shape[1]
original_dim = image_size * image_size
x_train = np.reshape(x_train, [-1, original_dim])
x_test = np.reshape(x_test, [-1, original_dim])
x_train = x_train.astype('float32') / 255
x_test = x_test.astype('float32') / 255

# network parameters
input_shape = (original_dim, )
intermediate_dim = 512
batch_size = 128
latent_dim = 2
epochs = 50


x = Input(batch_shape=(batch_size, original_dim))
h = Dense(intermediate_dim, activation='relu')(x)
z_mean = Dense(latent_dim)(h)
z_log_sigma = Dense(latent_dim)(h)

def sampling(args):
    z_mean, z_log_sigma = args
    #epsilon = K.random_normal(shape=(batch, dim))
    epsilon = K.random_normal(shape=(batch_size, latent_dim))
    return z_mean + K.exp(z_log_sigma) * epsilon

# note that "output_shape" isn't necessary with the TensorFlow backend
# so you could write `Lambda(sampling)([z_mean, z_log_sigma])`
z = Lambda(sampling, output_shape=(latent_dim,))([z_mean, z_log_sigma])

decoder_h = Dense(intermediate_dim, activation='relu')
decoder_mean = Dense(original_dim, activation='sigmoid')
h_decoded = decoder_h(z)
x_decoded_mean = decoder_mean(h_decoded)

print('X Decoded Mean shape: ', x_decoded_mean.shape)

# end-to-end autoencoder
vae = Model(x, x_decoded_mean)

# encoder, from inputs to latent space
encoder = Model(x, z_mean)

# generator, from latent space to reconstructed inputs
decoder_input = Input(shape=(latent_dim,))
_h_decoded = decoder_h(decoder_input)
_x_decoded_mean = decoder_mean(_h_decoded)
generator = Model(decoder_input, _x_decoded_mean)

def vae_loss(x, x_decoded_mean):
    xent_loss = keras.metrics.binary_crossentropy(x, x_decoded_mean)
    kl_loss = - 0.5 * K.mean(1 + z_log_sigma - K.square(z_mean) - K.exp(z_log_sigma), axis=-1)
    return xent_loss + kl_loss

vae.compile(optimizer='rmsprop', loss=vae_loss)


print('X train shape: ', x_train.shape)
print('X test shape: ', x_test.shape)

vae.fit(x_train, x_train,
        shuffle=True,
        epochs=epochs,
        batch_size=batch_size,
        validation_data=(x_test, x_test)) 
Here is the stack trace seen when calling model.fit:

File "/home/asattar/workspace/projects/keras-examples/blogautoencoder/VariationalAutoEncoder.py", line 81, in <module>
    validation_data=(x_test, x_test))
  File "/usr/local/lib/python2.7/dist-packages/Keras-2.2.4-py2.7.egg/keras/engine/training.py", line 1047, in fit
    validation_steps=validation_steps)
  File "/usr/local/lib/python2.7/dist-packages/Keras-2.2.4-py2.7.egg/keras/engine/training_arrays.py", line 195, in fit_loop
    outs = fit_function(ins_batch)
  File "/usr/local/lib/python2.7/dist-packages/Keras-2.2.4-py2.7.egg/keras/backend/tensorflow_backend.py", line 2897, in __call__
    return self._call(inputs)
  File "/usr/local/lib/python2.7/dist-packages/Keras-2.2.4-py2.7.egg/keras/backend/tensorflow_backend.py", line 2855, in _call
    fetched = self._callable_fn(*array_vals)
  File "/home/asattar/.local/lib/python2.7/site-packages/tensorflow/python/client/session.py", line 1439, in __call__
    run_metadata_ptr)
  File "/home/asattar/.local/lib/python2.7/site-packages/tensorflow/python/framework/errors_impl.py", line 528, in __exit__
    c_api.TF_GetCode(self.status.status))
tensorflow.python.framework.errors_impl.InvalidArgumentError: Incompatible shapes: [128,784] vs. [96,784]
     [[{{node training/RMSprop/gradients/loss/dense_5_loss/logistic_loss/mul_grad/BroadcastGradientArgs}} = BroadcastGradientArgs[T=DT_INT32, _class=["loc:@train...ad/Reshape"], _device="/job:localhost/replica:0/task:0/device:CPU:0"](training/RMSprop/gradients/loss/dense_5_loss/logistic_loss/mul_grad/Shape, training/RMSprop/gradients/loss/dense_5_loss/logistic_loss/mul_grad/Shape_1)]]
Note the "Incompatible shapes: [128,784] vs. [96,784]" near the end of the stack trace.
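For context, the 96 matches the size of the last training batch: the 60000 MNIST training samples do not divide evenly by a batch size of 128, so the final batch of each epoch holds only 96 samples. A quick check, reusing the variables defined in the code above:

print(x_train.shape[0])               # 60000
print(x_train.shape[0] % batch_size)  # 96 -> size of the last, partial batch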

According to this, it is better to use model.fit_generator rather than model.fit. To use model.fit_generator, you should define your own generator object. Here is an example:

from keras.utils import Sequence
import math
import numpy as np

class Generator(Sequence):
    # Class is a dataset wrapper for better training performance
    def __init__(self, x_set, y_set, batch_size=256):
        self.x, self.y = x_set, y_set
        self.batch_size = batch_size
        self.indices = np.arange(self.x.shape[0])

    def __len__(self):
        return math.floor(self.x.shape[0] / self.batch_size)

    def __getitem__(self, idx):
        inds = self.indices[idx * self.batch_size:(idx + 1) * self.batch_size]
        batch_x = self.x[inds]
        batch_y = self.y[inds]
        return batch_x, batch_y

    def on_epoch_end(self):
        np.random.shuffle(self.indices)

train_datagen = Generator(x_train, x_train, batch_size)
test_datagen = Generator(x_test, x_test, batch_size)

vae.fit_generator(train_datagen,
    steps_per_epoch=len(x_train)//batch_size,
    validation_data=test_datagen,
    validation_steps=len(x_test)//batch_size,
    epochs=epochs)

Code adapted from here.
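Note that __len__ uses math.floor, so the final partial batch is dropped and every batch the generator yields contains exactly batch_size samples, which is what sidesteps the shape mismatch. A quick sanity check, reusing x_train and batch_size from the question:

gen = Generator(x_train, x_train, batch_size)
print(len(gen))         # 468 == 60000 // 128; the 96-sample remainder is dropped
print(gen[0][0].shape)  # (128, 784): every yielded batch matches the fixed batch size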

Just tried to reproduce this, and found that when you define

x = Input(batch_shape=(batch_size, original_dim))

you are fixing the batch size, which causes a mismatch once validation starts. Change it to

x = Input(shape=input_shape)

and you should be good to go.
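For reference, a minimal sketch of what that change looks like in the question's code. Deriving the batch size inside sampling from the tensor itself (as the commented-out line in the question hints at) is an optional extra adjustment, not something this answer requires:

# Input no longer pins the batch dimension, so a smaller final batch is accepted.
x = Input(shape=(original_dim,))
h = Dense(intermediate_dim, activation='relu')(x)
z_mean = Dense(latent_dim)(h)
z_log_sigma = Dense(latent_dim)(h)

def sampling(args):
    z_mean, z_log_sigma = args
    # Take the batch size from the incoming tensor instead of the global
    # batch_size, so sampling also works for partial batches.
    batch = K.shape(z_mean)[0]
    epsilon = K.random_normal(shape=(batch, latent_dim))
    return z_mean + K.exp(z_log_sigma) * epsilon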

@anand_v.singh: does your comment imply that stateless networks should use fit() and avoid fit_generator()? Or is that the wrong conclusion?

@Markus the conclusion you drew is correct, and yes, the statement I made at the time was also wrong; I have removed it. Thanks for pointing it out. I wonder what other mistakes I've made in the comments section; maybe I'll go through my comments over the next few weekends and remove any inaccurate information.