Is there a model with sequential inputs in Python TensorFlow?


I am trying to build a model that splits its input across multiple levels, so that it can accept inputs of different sizes. Since I could not get that to work, I split the model into five parts, which lets me handle 3 different input sizes and 3 different output sizes.

What I want to do is use the output of one model as the input of another model.
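Chaining one Keras model's output into another is supported directly; a minimal sketch (with hypothetical layer sizes, not the models from the question):

```python
import tensorflow as tf

# Two small stand-in models; the real models in the question are larger.
model_a = tf.keras.Sequential([tf.keras.layers.Dense(8, activation="relu")])
model_b = tf.keras.Sequential([tf.keras.layers.Dense(1)])

x = tf.random.normal([4, 16])
hidden = model_a(x)   # output of the first model ...
y = model_b(hidden)   # ... fed as input to the second model
```

Both calls happen eagerly (or inside the same `tf.function`), so gradients flow through the whole chain as long as both forward passes are recorded on the same tape.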

@tf.function
def train_step(images, labels):
    with tf.GradientTape() as tape:
        inputs = images
        if images.shape[2] >= 256:
            tape.watch(model_l1.trainable_variables)
            fwd_1, inputs = model_l1(inputs, training=True)
        if images.shape[2] >= 128:
            tape.watch(model_l2.trainable_variables)
            fwd_2, inputs = model_l2(inputs, training=True)
        tape.watch(model_m.trainable_variables)
        inputs, predictions = model_m(inputs, training=True)
        if images.shape[2] <= 128:
            tape.watch(model_r2.trainable_variables)
            inputs, predictions = model_r2(fwd_2, inputs, training=True)
        if images.shape[2] <= 256:
            tape.watch(model_r1.trainable_variables)
            inputs, predictions = model_r1(fwd_1, inputs, training=True)
        
        loss = loss_func(labels, predictions)
    gradients = tape.gradient(loss, model_m.trainable_variables)
    optimizer.apply_gradients(zip(G_SCALE * gradients, model_m.trainable_variables))

    if images.shape[2] <= 128:
        gradients = tape.gradient(loss, model_l2.trainable_variables)
        optimizer.apply_gradients(zip(G_SCALE * gradients, model_l2.trainable_variables))

        gradients = tape.gradient(loss, model_r2.trainable_variables)
        optimizer.apply_gradients(zip(G_SCALE * gradients, model_r2.trainable_variables))

    if images.shape[2] <= 256:
        gradients = tape.gradient(loss, model_l1.trainable_variables)
        optimizer.apply_gradients(zip(G_SCALE * gradients, model_l1.trainable_variables))

        gradients = tape.gradient(loss, model_r1.trainable_variables)
        optimizer.apply_gradients(zip(G_SCALE * gradients, model_r1.trainable_variables))

    return loss
I am guessing that variables like fwd_1/fwd_2 are becoming None. Has anyone run into a similar problem, or can anyone see what is wrong here? Thank you!
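One thing worth checking, independent of the fwd_1/fwd_2 question: the code above calls `tape.gradient()` several times on the same tape, but a non-persistent `tf.GradientTape` only allows one call. A minimal sketch with toy variables (not the question's models):

```python
import tensorflow as tf

v1 = tf.Variable(2.0)
v2 = tf.Variable(3.0)

# persistent=True allows tape.gradient() to be called more than once;
# with the default non-persistent tape the second call raises RuntimeError.
with tf.GradientTape(persistent=True) as tape:
    loss = v1 * v2

g1 = tape.gradient(loss, v1)  # d(loss)/d(v1) = v2 = 3.0
g2 = tape.gradient(loss, v2)  # d(loss)/d(v2) = v1 = 2.0
del tape  # release the tape's resources once done
```

Note also that `tape.gradient` returns a Python list when given a list of variables, so `G_SCALE * gradients` scales nothing if `G_SCALE` is an integer; it repeats the list. Scaling would have to be applied per-tensor, e.g. `[G_SCALE * g for g in gradients]`.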

x = self.conc([fwd_2, inputs])
    /N/soft/rhel7/deeplearning/Python-3.7.6/lib/python3.7/site-packages/tensorflow_core/python/keras/engine/base_layer.py:887 __call__
        self._maybe_build(inputs)
    /N/soft/rhel7/deeplearning/Python-3.7.6/lib/python3.7/site-packages/tensorflow_core/python/keras/engine/base_layer.py:2141 _maybe_build
        self.build(input_shapes)
    /N/soft/rhel7/deeplearning/Python-3.7.6/lib/python3.7/site-packages/tensorflow_core/python/keras/utils/tf_utils.py:306 wrapper
        output_shape = fn(instance, input_shape)
    /N/soft/rhel7/deeplearning/Python-3.7.6/lib/python3.7/site-packages/tensorflow_core/python/keras/layers/merge.py:378 build
        raise ValueError('A `Concatenate` layer should be called '

    ValueError: A `Concatenate` layer should be called on a list of at least 2 inputs
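This ValueError is raised when a `Concatenate` layer receives anything other than a list of at least two tensors, which would happen here if, say, `fwd_2` were `None` when `self.conc([fwd_2, inputs])` runs. A minimal sketch of the expected call shape (toy tensors, not the question's):

```python
import tensorflow as tf

a = tf.zeros([2, 3])
b = tf.ones([2, 3])

# Concatenate must be called on a list of >= 2 tensors of compatible shapes.
merged = tf.keras.layers.Concatenate(axis=-1)([a, b])  # shape (2, 6)

# tf.keras.layers.Concatenate()(a)  # would raise the ValueError above
```

So the first thing to verify is that every entry in the list passed to `self.conc` is an actual tensor on the code path being executed.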