Python: how to fix validation_data being passed to the input_1 argument in model.fit_generator?

Tags: python, numpy, tensorflow, keras

I am trying to build a hybrid model with multiple inputs using the functional API. The problem is that when training the model, I get a ValueError after one epoch:

ValueError: Error when checking input: expected input_1 to have shape (168, 5) but got array with shape (5808, 5)
What confuses me is how the validation_data (shape (5808, 5)) ends up being passed to the input_1 (shape (168, 5)) argument of model.fit_generator.

I rolled back to a Sequential model to check whether the problem persisted there, but it trains fine.

Here is the model fitting call:

%%time
model.fit_generator(generator=generator,
                    epochs=10,
                    steps_per_epoch=30,
                    validation_data=validation_data,
                    callbacks=callbacks)
It is the same call as in the Sequential case, where it works fine.

The model itself:

# assumes TensorFlow 2.x-style imports
from tensorflow.keras.layers import Input, Dense, LSTM, concatenate
from tensorflow.keras.models import Model

# first input model
input_1 = Input(shape=(168, 5))
dense_1 = Dense(50)(input_1)

# second input model
input_2 = Input(shape=(168, 7))
lstm_1 = LSTM(units=64, return_sequences=True)(input_2)

# merge input models
merge = concatenate([dense_1, lstm_1])
output = Dense(num_y_signals, activation='sigmoid')(merge)
model = Model(inputs=[input_1, input_2], outputs=output)
# summarize layers
print(model.summary())
Model summary:

Layer (type)                    Output Shape         Param #     Connected to                     
==================================================================================================
input_1 (InputLayer)            [(None, 168, 5)]     0                                            
__________________________________________________________________________________________________
input_2 (InputLayer)            [(None, 168, 7)]     0                                            
__________________________________________________________________________________________________
dense (Dense)                   (None, 168, 50)      300         input_1[0][0]                    
__________________________________________________________________________________________________
lstm (LSTM)                     (None, 168, 64)      18432       input_2[0][0]                    
__________________________________________________________________________________________________
concatenate (Concatenate)       (None, 168, 114)     0           dense[0][0]                      
                                                                 lstm[0][0]                       
__________________________________________________________________________________________________
dense_1 (Dense)                 (None, 168, 1)       115         concatenate[0][0]                
==================================================================================================
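As a sanity check, the parameter counts in the summary are consistent with the layer shapes; they can be recomputed by hand with the standard Keras formulas:

```python
# Dense: params = in_features * units + units
dense_params = 5 * 50 + 50                # input_1 has 5 features, 50 units
# LSTM: params = 4 * (units * (units + in_features) + units)
lstm_params = 4 * (64 * (64 + 7) + 64)    # input_2 has 7 features, 64 units
# final Dense on the 50 + 64 = 114 concatenated features, 1 output
out_params = 114 * 1 + 1
print(dense_params, lstm_params, out_params)  # 300 18432 115
```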
The validation data:

validation_data = ([np.expand_dims(x_test1_scaled, axis=0),
                    np.expand_dims(x_test2_scaled, axis=0)],
                   np.expand_dims(y_test_scaled, axis=0))
Note that I have to pass the entire test set, which contains 5808 observations.
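One way to make the validation arrays match the per-sample shape (168, 5) that the model expects is to slice the test series into fixed-length windows instead of wrapping a batch axis around the whole thing. A minimal numpy sketch, assuming the (5808, features) test arrays from the question (the array names here are stand-ins; the remainder that does not fill a full window is dropped):

```python
import numpy as np

def to_windows(arr, seq_len):
    """Split a (timesteps, features) array into non-overlapping
    (n_windows, seq_len, features) windows, dropping the remainder."""
    n = arr.shape[0] // seq_len
    return arr[:n * seq_len].reshape(n, seq_len, arr.shape[1])

# dummy stand-ins for x_test1_scaled, x_test2_scaled, y_test_scaled
x_test1 = np.random.rand(5808, 5).astype(np.float32)
x_test2 = np.random.rand(5808, 7).astype(np.float32)
y_test  = np.random.rand(5808, 1).astype(np.float32)

validation_data = ([to_windows(x_test1, 168), to_windows(x_test2, 168)],
                   to_windows(y_test, 168))
print(validation_data[0][0].shape)  # (34, 168, 5)
```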

The data generator:

def batch_generator(batch_size, sequence_length):
    """
    Generator function for creating random batches of training-data.
    """

    # Infinite loop.
    while True:
        # Allocate a new array for the batch of input-signals.
        x_shape = (batch_size, sequence_length, num_x_signals)
        x_batch = np.zeros(shape=x_shape, dtype=np.float16)

        # Allocate a new array for the batch of output-signals.
        y_shape = (batch_size, sequence_length, num_y_signals)
        y_batch = np.zeros(shape=y_shape, dtype=np.float16)

        # Fill the batch with random sequences of data.
        for i in range(batch_size):
            # Get a random start-index.
            # This points somewhere into the training-data.
            idx = np.random.randint(num_train - sequence_length)

            # Copy the sequences of data starting at this index.
            x_batch[i] = x_train_scaled[idx:idx+sequence_length]
            y_batch[i] = y_train_scaled[idx:idx+sequence_length]

        x_batch_1 = x_batch[ :, :, 0:5]
        x_batch_2 = x_batch[ :, :, 5:12]
        yield ([x_batch_1, x_batch_2], y_batch)
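The generator itself already yields batches of the right shape, which can be confirmed in isolation. A self-contained smoke test of the same logic, with dummy arrays standing in for x_train_scaled / y_train_scaled:

```python
import numpy as np

num_x_signals, num_y_signals, num_train = 12, 1, 34896
x_train_scaled = np.random.rand(num_train, num_x_signals).astype(np.float16)
y_train_scaled = np.random.rand(num_train, num_y_signals).astype(np.float16)

def batch_generator(batch_size, sequence_length):
    """Yield random fixed-length windows of the training series."""
    while True:
        x_batch = np.zeros((batch_size, sequence_length, num_x_signals), np.float16)
        y_batch = np.zeros((batch_size, sequence_length, num_y_signals), np.float16)
        for i in range(batch_size):
            idx = np.random.randint(num_train - sequence_length)
            x_batch[i] = x_train_scaled[idx:idx + sequence_length]
            y_batch[i] = y_train_scaled[idx:idx + sequence_length]
        # split the 12 input signals into the two model inputs
        yield ([x_batch[:, :, 0:5], x_batch[:, :, 5:12]], y_batch)

(x1, x2), y = next(batch_generator(batch_size=32, sequence_length=168))
print(x1.shape, x2.shape, y.shape)  # (32, 168, 5) (32, 168, 7) (32, 168, 1)
```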


I expected the validation data to be passed to the validation_data argument, not to the input data argument.

When you fit with validation data, that data is passed forward through the network (which is why the validation set goes through the input at some point). It therefore has to conform to the model's requirements: you specified that the input must have shape (168, 5), which for a numpy array means (batch_size, 168, 5). I think you may have mistakenly included the batch size in the network's input shape (I can tell from your question, and also because you apply a Dense layer to what is really 2D input). In Keras you don't do that; you only specify the shape of a single instance.

I think if you do this, you will no longer have to expand the dimensions of your input.
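The mismatch in the error message can be reproduced with plain numpy: expanding a batch axis around the whole (5808, 5) test array produces a single sample of shape (5808, 5), which Keras then compares against the declared per-sample shape (168, 5). A sketch (array names are illustrative):

```python
import numpy as np

expected = (168, 5)               # per-sample shape given to Input(...)
val_x = np.random.rand(5808, 5)   # the full test array from the question

batched = np.expand_dims(val_x, axis=0)  # shape (1, 5808, 5)
# Keras strips the leading batch axis and checks the rest against (168, 5):
print(batched.shape[1:])  # (5808, 5) -> mismatch, hence the ValueError
```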

Sorry, I don't quite follow. As I posted above, the input shape really is (168, 5). I need the Dense layers (an MLP concatenated with an LSTM), and I'm not sure what you mean by specifying the shape on a per-instance basis.

Can you describe your data in words? For example, how many samples are in your training set, and what are the inputs? As I said, since you apply a Dense layer to your input, the input shape is probably 1D.

I have 34896 samples in the training set. x_train_scaled and y_train_scaled are the inputs to my data generator, with shapes (34896, 12) and (34896, 1) respectively. What the generator feeds to the model is [x_batch_1, x_batch_2], y_batch, with shapes (5808, 5), (5808, 7) and (5808, 1) respectively. 5808 is the number of samples in the test set.

OK, then the input dimension of input_1 is (5,). When you specify an input shape in Keras you don't include the batch size, only the shape of a single sample; the batch size is specified afterwards.

I changed the input shapes of input_1 and input_2 to (5,) and (7,) respectively, but now I get the error "Input 0 of layer lstm is incompatible with the layer: expected ndim=3, found ndim=2". Is tf.expand_dims the way to go? Specifically: input_2 = tf.expand_dims(input_2, axis=-1)
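On the follow-up error in the last comment: Input(shape=(5,)) produces 2-D tensors of shape (batch, 5), while an LSTM requires 3-D input (batch, timesteps, features). Adding a trailing axis, as the commenter suggests with tf.expand_dims, does give ndim=3; the numpy equivalent shows the shape arithmetic:

```python
import numpy as np

x = np.random.rand(32, 5)        # what Input(shape=(5,)) carries per batch: ndim=2
x3 = np.expand_dims(x, axis=-1)  # (32, 5, 1): ndim=3, acceptable to an LSTM
print(x.ndim, x3.shape)  # 2 (32, 5, 1)
```

That said, for windowed time-series data the cleaner fix is the one in the answer above: keep shape=(168, 5) as the per-sample shape and feed batches of windows rather than single time steps.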
batch_size = 32
sequence_length = 24 * 7
generator = batch_generator(batch_size=batch_size,
                            sequence_length=sequence_length)