
Python: error when using a custom Keras layer in a model with two outputs


I wrote a custom layer to process the results of a TimeDistributed Dense layer.

I chose to use a layer, rather than post-processing the network's results, because I want to use the processed results in a metric and, later, as part of the loss function. (Note that for now I do not actually use the processed results, so I assigned a loss weight of 0.0 to the custom layer's output.)

I modified train_generator and val_generator to yield the labels twice (in a list), to match the presence of two outputs.
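The generator change described above can be sketched as follows. This is a minimal illustration, not the original code (which is not shown): the item unpacking and array shapes are assumptions, but the key point is that a two-output model needs a list of two target arrays per batch.

```python
import numpy as np

def train_generator(train_list):
    """Yield (inputs, targets) batches for a model with two outputs.

    Because the second output (the custom layer) is trained on the same
    labels, the targets are yielded twice, in a list.
    """
    while True:
        for item in train_list:
            x, y = item  # assumption: each item is already an (x, y) pair
            # Two model outputs -> Keras expects a list of two target arrays.
            yield x, [y, y]

# Minimal demonstration with placeholder data (shapes are illustrative).
x = np.zeros((1, 5, 8))   # (batch, timesteps, features)
y = np.zeros((1, 5, 10))  # (batch, timesteps, classes)
gen = train_generator([(x, y)])
inputs, targets = next(gen)
print(len(targets))  # 2
```

The ValueError in the traceback below is exactly what Keras raises when the generator yields a single target array instead of such a two-element list.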

However, I get the following error:

  File "/home/user/experiments/LSTM/2/S1B.py", line 324, in <module>
    main()
  File "/home/user/experiments/LSTM/2/S1B.py", line 118, in main
    history=model.fit_generator(train_generator(train_list), steps_per_epoch=len(train_list), epochs=30, verbose=1,validation_data=val_generator(val_list),validation_steps=len(val_list),callbacks=callbacks_list)
  File "/home/user/.local/lib/python3.6/site-packages/keras/legacy/interfaces.py", line 91, in wrapper
    return func(*args, **kwargs)
  File "/home/user/.local/lib/python3.6/site-packages/keras/engine/training.py", line 1418, in fit_generator
    initial_epoch=initial_epoch)
  File "/home/user/.local/lib/python3.6/site-packages/keras/engine/training_generator.py", line 217, in fit_generator
    class_weight=class_weight)
  File "/home/user/.local/lib/python3.6/site-packages/keras/engine/training.py", line 1211, in train_on_batch
    class_weight=class_weight)
  File "/home/user/.local/lib/python3.6/site-packages/keras/engine/training.py", line 789, in _standardize_user_data
    exception_prefix='target')
  File "/home/user/.local/lib/python3.6/site-packages/keras/engine/training_utils.py", line 102, in standardize_input_data
    str(len(data)) + ' arrays: ' + str(data)[:200] + '...')
ValueError: Error when checking model target: the list of Numpy arrays that you are passing to your model is not the size the model expected. Expected to see 2 array(s), but instead got the following list of 1 arrays: [array([[[0, 0, 0, ..., 0, 0, 0],
        [0, 0, 0, ..., 0, 0, 0],
        [0, 0, 0, ..., 0, 0, 0],
        ...,
        [0, 0, 0, ..., 0, 0, 0],
        [0, 0, 0, ..., 0, 0, 0],
        [0, 0, 0, ......

So that does not seem to be the source of the problem.

My mistake was not using the right tool for the matrix multiplication. Specifically, in ScoringLayer I should have used

answer = max_val * x

instead of

answer = K.batch_dot(max_val, x)

and that solved the problem.
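To see why the two operations behave differently, here is a small NumPy sketch (the shapes are illustrative, since the original ScoringLayer code is not shown): elementwise multiplication preserves the input shape, whereas a batched dot product contracts an axis and changes the output shape, which can break the downstream target matching.

```python
import numpy as np

# Hypothetical shapes mirroring a TimeDistributed Dense output:
# (batch, timesteps, features)
max_val = np.random.rand(2, 3, 4)
x = np.random.rand(2, 3, 4)

# Elementwise multiplication (the fix): output shape matches the input.
elementwise = max_val * x
print(elementwise.shape)  # (2, 3, 4)

# A batched dot product contracts an axis (here the feature axis);
# K.batch_dot performs a contraction of this kind for matching axes.
batched = np.einsum('bij,bkj->bik', max_val, x)
print(batched.shape)  # (2, 3, 3)
```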

The new architecture changed the names used in history, so I also had to update my reporting, replacing

acc = history.history['acc']
val_acc = history.history['val_acc']


loss = history.history['loss']
val_loss = history.history['val_loss']

acc_5=history.history['my_3D_top_5']
val_acc_5=history.history['val_my_3D_top_5']

acc_10=history.history['my_3D_top_10']
val_acc_10=history.history['val_my_3D_top_10']

with:
acc = history.history['SoftDense_acc']
val_acc = history.history['val_SoftDense_acc']

loss = history.history['SoftDense_loss']
val_loss = history.history['val_SoftDense_loss']

acc_5=history.history['SoftDense_my_3D_top_5']
val_acc_5=history.history['val_SoftDense_my_3D_top_5']

acc_10=history.history['SoftDense_my_3D_top_10']
val_acc_10=history.history['val_SoftDense_my_3D_top_10']
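When outputs are named, Keras prefixes each per-output metric key in history.history with the output layer's name (here SoftDense). A defensive lookup that handles both naming schemes can be sketched like this; a plain dict stands in for history.history, and the values are placeholders:

```python
# Stand-in for history.history; real values come from model.fit_generator().
history_dict = {
    'SoftDense_acc': [0.71, 0.78],
    'val_SoftDense_acc': [0.65, 0.70],
}

def get_metric(hist, name, output_name='SoftDense'):
    """Look up a metric key, trying the plain name first and then the
    layer-prefixed name used by multi-output models."""
    if name in hist:
        return hist[name]
    is_val = name.startswith('val_')
    suffix = name[len('val_'):] if is_val else name
    prefix = 'val_' if is_val else ''
    return hist[prefix + output_name + '_' + suffix]

print(get_metric(history_dict, 'acc'))      # [0.71, 0.78]
print(get_metric(history_dict, 'val_acc'))  # [0.65, 0.70]
```

Printing history.history.keys() after training is the quickest way to confirm which names Keras actually used.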