
Python Keras documentation: can't follow the multi-input and multi-output model example


I am following the example on this page:

The model is supposed to predict how many retweets and likes a news headline will receive. So main_output predicts the number of retweets, and aux_output predicts the number of likes.

import keras  # needed below for keras.layers.concatenate
from keras.layers import Input, Embedding, LSTM, Dense
from keras.models import Model

headline_data=[[i for i in range(100)]]
additional_data=[[100,200]]
labels=[1,2]
# Headline input: meant to receive sequences of 100 integers, between 1 and 10000.

# Note that we can name any layer by passing it a "name" argument.
main_input = Input(shape=(100,), dtype='int32', name='main_input')

# This embedding layer will encode the input sequence
# into a sequence of dense 512-dimensional vectors.
x = Embedding(output_dim=512, input_dim=10000, input_length=100)(main_input)

# A LSTM will transform the vector sequence into a single vector,
# containing information about the entire sequence
lstm_out = LSTM(32)(x)


auxiliary_output = Dense(1, activation='sigmoid', name='aux_output')(lstm_out)

auxiliary_input = Input(shape=(5,), name='aux_input')
x = keras.layers.concatenate([lstm_out, auxiliary_input])

# We stack a deep densely-connected network on top
x = Dense(64, activation='relu')(x)
x = Dense(64, activation='relu')(x)
x = Dense(64, activation='relu')(x)

# And finally we add the main logistic regression layer
main_output = Dense(1, activation='sigmoid', name='main_output')(x)


# This defines a model with two inputs and two outputs:
model = Model(inputs=[main_input, auxiliary_input], outputs=[main_output, auxiliary_output])

# We compile the model and assign a weight of 0.2 to the auxiliary loss. 
# To specify different  loss_weights or loss for each different output, 
# you can use a list or a dictionary. Here we pass a single loss as the loss argument, 
# so the same loss will be used on all outputs.

# Since our inputs and outputs are named (we passed them a "name" argument),
# we can also compile the model via:
model.compile(optimizer='rmsprop',
              loss={'main_output': 'binary_crossentropy', 'aux_output': 'binary_crossentropy'},
              loss_weights={'main_output': 1., 'aux_output': 0.2})

# And trained it via:
model.fit({'main_input': headline_data, 'aux_input': additional_data},
          {'main_output': labels, 'aux_output': labels},
          epochs=50, batch_size=32)

I get the following error: AttributeError: 'list' object has no attribute 'ndim'

Your inputs/outputs must be NumPy arrays whose first dimension is the batch size. For example:

import numpy as np

headline_data = np.random.randint(1, 10000 + 1, size=(32, 100))
additional_data = np.random.randint(1, 10000 + 1, size=(32, 5))
labels = np.random.randint(0, 1 + 1, size=(32, 1))
Note that this is a toy example where the inputs are generated randomly.
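To see why the question's code fails, note that Keras inspects the `.ndim` attribute of each input, which NumPy arrays have and plain Python lists don't. A minimal sketch of the fix for the question's own data (note: shapes are assumptions for illustration; the question's `additional_data` has 2 values per sample, while `aux_input` was declared with `shape=(5,)`, so it would also need reshaping to fit the model):

```python
import numpy as np

# The lists from the question lack the .ndim attribute that Keras checks,
# which is what raises AttributeError: 'list' object has no attribute 'ndim'.
headline_data = [[i for i in range(100)]]
print(hasattr(headline_data, 'ndim'))       # False

# Converting to a NumPy array puts the batch dimension first:
headline_arr = np.asarray(headline_data)
print(headline_arr.ndim)                    # 2
print(headline_arr.shape)                   # (1, 100): batch of 1, 100 timesteps

# aux_input was declared with shape=(5,), so its data must be (batch, 5),
# e.g. a random batch of one sample:
additional_arr = np.random.randint(1, 10000 + 1, size=(1, 5))
print(additional_arr.shape)                 # (1, 5)

# Labels likewise need shape (batch, 1) to match the Dense(1) outputs:
labels_arr = np.asarray([[1]])
print(labels_arr.shape)                     # (1, 1)
```

With arrays shaped this way, the `model.fit(...)` call from the question runs without the `'list' object has no attribute 'ndim'` error, since every input and target is a proper array whose first axis is the batch size.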