
Python RLlib TensorFlow custom LSTM model giving InvalidArgumentError: Incompatible shapes (in LSTM layer)

Tags: python, tensorflow, keras, ray, rllib

I have been trying to set up a custom LSTM model with RLlib, but for some reason I get an incompatible shapes error in the LSTM layer when I try to train. In particular, the error seems to be related to the batch size, since the dimensions listed in the incompatible shapes change linearly with the train batch size value. My model code is below:

import numpy as np
import tensorflow as tf

from ray.rllib.models.modelv2 import ModelV2
from ray.rllib.models.tf.recurrent_net import RecurrentNetwork
from ray.rllib.utils.annotations import override


class CustomLSTMModel(RecurrentNetwork):
    """Example of using the Keras functional API to define a RNN model."""

    def __init__(
        self,
        obs_space,
        action_space,
        num_outputs,
        model_config,
        name,
        hiddens_size=64,
        cell_size=64,
    ):
        super(CustomLSTMModel, self).__init__(
            obs_space, action_space, num_outputs, model_config, name
        )
        self.cell_size = cell_size

        # Define input layers
        input_layer = tf.keras.layers.Input(
            shape=(obs_space.shape[0], obs_space.shape[1]), name="inputs"
        )
        state_in_h = tf.keras.layers.Input(shape=(cell_size,), name="h")
        state_in_c = tf.keras.layers.Input(shape=(cell_size,), name="c")

        # FC layer
        dense1 = tf.keras.layers.Dense(hiddens_size, name="dense_1")(input_layer)
  
        # LSTM layer
        lstm_1_out, state_1_h, state_1_c = tf.keras.layers.LSTM(
            cell_size,
            return_state=True,
            name="lstm_1",
        )(
            inputs=dense1,
            initial_state=[state_in_h, state_in_c],
        )

        # Postprocess 
        logits = tf.keras.layers.Dense(
            self.num_outputs,
            activation=None,
            name="logits",
        )(lstm_1_out)
        values = tf.keras.layers.Dense(
            1,
            activation=None,
            name="values",
        )(lstm_1_out)

        # Create the RNN model
        self.rnn_model = tf.keras.Model(
            inputs=[input_layer, state_in_h, state_in_c],
            outputs=[logits, values, state_1_h, state_1_c],
        )
        self.register_variables(self.rnn_model.variables)
        self.rnn_model.summary()

    @override(ModelV2)
    def forward(self, input_dict, state, seq_lens):
        """Custom forward pass for inputs that already have time dimension."""
        inputs = input_dict["obs"]
        output, new_state = self.forward_rnn(inputs, state, seq_lens)
        return tf.reshape(output, [-1, self.num_outputs]), new_state

    @override(RecurrentNetwork)
    def forward_rnn(self, inputs, state, seq_lens):
        model_out, self._value_out, h, c = self.rnn_model([inputs] + state)
        return model_out, [h, c]

    @override(ModelV2)
    def get_initial_state(self):
        return [
            np.zeros(self.cell_size, np.float32),
            np.zeros(self.cell_size, np.float32),
        ]

    @override(ModelV2)
    def value_function(self):
        return tf.reshape(self._value_out, [-1])
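
For completeness, the model is registered with RLlib and selected through the trainer config in roughly the following way (a simplified sketch; the environment and the rest of my config are omitted):

from ray.rllib.models import ModelCatalog

# Make the custom model available to RLlib under a chosen name.
ModelCatalog.register_custom_model("custom_lstm", CustomLSTMModel)

config = {
    "framework": "tf",
    "model": {
        "custom_model": "custom_lstm",
        # Maximum length of the padded RNN sequences.
        "max_seq_len": 20,
    },
    # ... env, train_batch_size, etc. omitted ...
}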
The model class above is very close to RLlib's existing example code for RNN models, which can be found here:
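
For comparison, as best I can reconstruct it, the forward() that the base class / example provides (and that I am overriding) looks roughly like this; the exact import path and signature of add_time_dimension() vary between Ray versions, so treat this as a sketch rather than the exact upstream code:

# Sketch of the base RecurrentNetwork.forward() that my override replaces.
# The import path/signature of add_time_dimension may differ by Ray version.
from ray.rllib.policy.rnn_sequencing import add_time_dimension

@override(ModelV2)
def forward(self, input_dict, state, seq_lens):
    """Reshapes the flat batch into (batch, time, features) before forward_rnn()."""
    flat_inputs = input_dict["obs_flat"]
    max_seq_len = tf.shape(flat_inputs)[0] // tf.shape(seq_lens)[0]
    output, new_state = self.forward_rnn(
        add_time_dimension(flat_inputs, max_seq_len=max_seq_len, framework="tf"),
        state,
        seq_lens,
    )
    return tf.reshape(output, [-1, self.num_outputs]), new_state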

The main change I had to make is in the overridden forward() function: the inputs coming into that function already have the time dimension added, so their dimensions are (?, 5, 50), (?, 10, 50), and so on. Because of that, I do not need the add_time_dimension call (sketched above) that the original example uses to add a new time dimension. I think the other important change is how the shape of the input layer is defined in the model class. The error I actually get is essentially the same as this one, just with different values in the mismatched shapes:

Any ideas would be greatly appreciated.