Multiple outputs in Python Keras
I have a problem where I want to predict two outputs given a vector of predictors. Assume that a predictor vector looks like x1, y1, att1, att2, …, attn, where x1, y1 are coordinates and the att's are additional attributes attached to the occurrence of the x1, y1 coordinates. Based on this predictor set I want to predict x2, y2. This is a time series problem that I am trying to solve with multivariate regression.

My question is: how do I set up Keras so that it gives me two outputs in the final layer?

You can use the functional API:
from keras.models import Model
from keras.layers import *
#inp is a "tensor", that can be passed when calling other layers to produce an output
inp = Input((10,)) #supposing you have ten numeric values as input
#here, SomeLayer() is defining a layer,
#and calling it with (inp) produces the output tensor x
x = SomeLayer(blablabla)(inp)
x = SomeOtherLayer(blablabla)(x) #here, I just replace x, because this intermediate output is not interesting to keep
#here, I want to keep the two different outputs for defining the model
#notice that both left and right are called with the same input x, creating a fork
out1 = LeftSideLastLayer(balbalba)(x)
out2 = RightSideLastLayer(banblabala)(x)
#here, you define which path you will follow in the graph you've drawn with layers
#notice the two outputs passed in a list, telling the model I want it to have two outputs.
model = Model(inp, [out1,out2])
model.compile(optimizer = ...., loss = ....) #loss can be one for both sides or a list with different loss functions for out1 and out2
model.fit(inputData,[outputYLeft, outputYRight], epochs=..., batch_size=...)
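Applied to the question's setup (predict x2, y2 from a vector of x1, y1 and attributes), a minimal runnable sketch could look like the following; the 10-feature input, the layer sizes, and the dummy data are assumptions for illustration only:

import numpy as np
from keras.models import Model
from keras.layers import Input, Dense

inp = Input((10,))                       # assumed: x1, y1 plus 8 attributes
h = Dense(64, activation='relu')(inp)
h = Dense(64, activation='relu')(h)
out_x2 = Dense(1, name='x2')(h)          # regression head for x2
out_y2 = Dense(1, name='y2')(h)          # regression head for y2

model = Model(inp, [out_x2, out_y2])
model.compile(optimizer='adam', loss='mse')

# dummy data, only to show the expected shapes
X = np.random.rand(100, 10)
x2 = np.random.rand(100, 1)
y2 = np.random.rand(100, 1)
model.fit(X, [x2, y2], epochs=2, batch_size=8)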
Here is an example using the functional tf.keras.Model API, with one regression output and one classification output trained on the Iris data:
from sklearn.datasets import load_iris
from tensorflow.keras.layers import Dense
from tensorflow.keras import Input, Model
import tensorflow as tf
data, target = load_iris(return_X_y=True)
X = data[:, (0, 1, 2)]
Y = data[:, 3]
Z = target
inputs = Input(shape=(3,), name='input')
x = Dense(16, activation='relu', name='16')(inputs)
x = Dense(32, activation='relu', name='32')(x)
output1 = Dense(1, name='cont_out')(x)
output2 = Dense(3, activation='softmax', name='cat_out')(x)
model = Model(inputs=inputs, outputs=[output1, output2])
model.compile(loss={'cont_out': 'mean_absolute_error',
                    'cat_out': 'sparse_categorical_crossentropy'},
              optimizer='adam',
              metrics={'cat_out': tf.metrics.SparseCategoricalAccuracy(name='acc')})
history = model.fit(X, {'cont_out': Y, 'cat_out': Z}, epochs=10, batch_size=8)
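Since this model has two output heads, model.predict returns one array per head; a short usage sketch (the variable names below are just illustrative):

cont_pred, cat_pred = model.predict(X)
print(cont_pred.shape)  # (150, 1) regression predictions from the 'cont_out' head
print(cat_pred.shape)   # (150, 3) softmax probabilities from the 'cat_out' head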
Here is a simplified version:
from sklearn.datasets import load_iris
from tensorflow.keras.layers import Dense
from tensorflow.keras import Input, Model
data, target = load_iris(return_X_y=True)
X = data[:, (0, 1, 2)]
Y = data[:, 3]
Z = target
inputs = Input(shape=(3,))
x = Dense(16, activation='relu')(inputs)
x = Dense(32, activation='relu')(x)
output1 = Dense(1)(x)
output2 = Dense(3, activation='softmax')(x)
model = Model(inputs=inputs, outputs=[output1, output2])
model.compile(loss=['mae', 'sparse_categorical_crossentropy'], optimizer='adam')
history = model.fit(X, [Y, Z], epochs=10, batch_size=8)
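With the list-style compile above, model.evaluate reports the total loss followed by the per-output losses; for example:

results = model.evaluate(X, [Y, Z], batch_size=8)
print(results)  # [total_loss, mae_for_output1, crossentropy_for_output2]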
Below is the same example, subclassing tf.keras.Model and using a custom training loop:
import tensorflow as tf
from tensorflow.keras.layers import Dense
from tensorflow.keras import Model
from sklearn.datasets import load_iris
tf.keras.backend.set_floatx('float64')
iris, target = load_iris(return_X_y=True)
X = iris[:, :3]
y = iris[:, 3]
z = target
ds = tf.data.Dataset.from_tensor_slices((X, y, z)).shuffle(150).batch(8)
class MyModel(Model):
    def __init__(self):
        super(MyModel, self).__init__()
        self.d0 = Dense(16, activation='relu')
        self.d1 = Dense(32, activation='relu')
        # two heads: regression (1 unit) and 3-way softmax classification
        self.d2 = Dense(1)
        self.d3 = Dense(3, activation='softmax')

    def call(self, x, training=None, **kwargs):
        x = self.d0(x)
        x = self.d1(x)
        a = self.d2(x)
        b = self.d3(x)
        return a, b

model = MyModel()

loss_obj_reg = tf.keras.losses.MeanAbsoluteError()
loss_obj_cat = tf.keras.losses.SparseCategoricalCrossentropy()

optimizer = tf.keras.optimizers.Adam(learning_rate=1e-3)

# running means of the two losses, plus per-task error metrics, for logging
loss_reg = tf.keras.metrics.Mean(name='regression loss')
loss_cat = tf.keras.metrics.Mean(name='categorical loss')

error_reg = tf.keras.metrics.MeanAbsoluteError()
error_cat = tf.keras.metrics.SparseCategoricalAccuracy()

@tf.function
def train_step(inputs, y_reg, y_cat):
    with tf.GradientTape() as tape:
        pred_reg, pred_cat = model(inputs)
        reg_loss = loss_obj_reg(y_reg, pred_reg)
        cat_loss = loss_obj_cat(y_cat, pred_cat)

    # gradient of the (summed) losses w.r.t. all trainable weights
    gradients = tape.gradient([reg_loss, cat_loss], model.trainable_variables)
    optimizer.apply_gradients(zip(gradients, model.trainable_variables))

    loss_reg(reg_loss)
    loss_cat(cat_loss)
    error_reg(y_reg, pred_reg)
    error_cat(y_cat, pred_cat)

for epoch in range(50):
    for xx, yy, zz in ds:
        train_step(xx, yy, zz)

    template = 'Epoch {:>2}, SCCE: {:>5.2f},' \
               ' MAE: {:>4.2f}, SAcc: {:>5.1%}'
    print(template.format(epoch+1,
                          loss_cat.result(),
                          error_reg.result(),
                          error_cat.result()))

    loss_reg.reset_states()
    loss_cat.reset_states()
    error_reg.reset_states()
    error_cat.reset_states()
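After training, calling the subclassed model directly returns both outputs; a brief inference sketch with a made-up batch of new samples:

import numpy as np

new_samples = np.random.rand(5, 3)      # 5 hypothetical flowers, 3 features each
pred_reg, pred_cat = model(new_samples, training=False)
print(pred_reg.shape)   # (5, 1) regression output (petal width)
print(pred_cat.shape)   # (5, 3) class probabilities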
So if I understand correctly, you mean something like:

InputShape = (10,)
model_1 = Sequential()
model_1.add(Dense(250, activation='tanh', input_shape=InputShape))
model_1.add(Dense(2, activation='relu'))
model_1.compile(optimizer='adam', loss='mse', metrics=['accuracy'])
model_1.fit(predictors, targets, epochs=whatever, ...)

My question is: how does this differ from your case, where you only specify two outputs?

Added comments in my answer :) -- You cannot create branches with a Sequential model; it is simply not possible.

@Daniel Hi Daniel, could you elaborate on that? What I want is one network that tries to predict two different things, so I imagined a fork at the second-to-last layer that feeds into two different softmax layers; I would then concatenate the results of those two layers and backpropagate. Is this not possible in Keras?

If you know the true values for both sides, you don't need to concatenate them; the model does everything automatically. (The only reasons I can think of for concatenating the two branches are: 1 - your true data is already concatenated; 2 - you want to add more layers that take the concatenation as input.)

I'm just curious what the benefit of the dual output is. Would it be better to have two separate models (classification and regression)? Thx.

The benefit is that the neural network can learn structure in the data that is useful for both tasks, so there are fewer parameters.

I guess I just want to see how it performs compared with two separate tasks; I'll give it a try. Thanks for the code.

How does the network account for the fact that the mean absolute error can be much smaller than the cross entropy, especially when the output is normalized to the 0-1 range (MAE < 1)?
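One common way to deal with the two losses being on different scales is the loss_weights argument of compile; a minimal sketch against the named-output model above, where the 5.0 weight is an arbitrary illustration rather than a recommended value:

model.compile(optimizer='adam',
              loss={'cont_out': 'mean_absolute_error',
                    'cat_out': 'sparse_categorical_crossentropy'},
              # hypothetical weights: up-weight the small MAE term until the two
              # contributions to the total loss are roughly comparable
              loss_weights={'cont_out': 5.0, 'cat_out': 1.0})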