Machine learning: a deep Keras RNN using subclassing


I am working on a speech recognition project and want to build a deep RNN with a variable number of RNN layers.

I successfully built and tested this simple model.

An experimental training run produced a promising loss curve (the model_1 curve).

So I decided to use the subclassing approach, to make it easier to build such models with L RNN layers in the future.

This approach has worked for me before on a simpler problem, but I am not sure why it does not work with the definition below:

class DeepRNN_Transcription(tf.keras.Model):
    
    def __init__(self, input_dimension = 161, output_dimension = 29, 
                 RNN_Mode = ['GRU'], HiddenNeurons = [200], 
                 HiddenActivations = ['tanh']):
        """
        Initialize a fully customized deep neural network for Voice Recognition. 
        The default numbers are for English Transcription.
        
        Input >> RNN1 >> Batch Normalizer 1 >> .... >>
                    RNN L >> Batch Normalizer L >> TimeDistributed >> Softmax
        

        Parameters
        ----------
        input_dimension : int, optional
            Total number of features for the given record. 
            The default is 161, extracted using Spectrograms. 
            Another possibility could be N using MFCC (e.g., N = 13)
            
        output_dimension : int, optional
            Total number of target characters. 
            The default is 29 (26 characters plus space, apostrophe, and an empty character for padding purposes)
        
        RNN_Mode : list, optional
            Type of RNN Cell. The default is ['GRU']. Other options are 'LSTM' and 'SimpleRNN'
        HiddenNeurons : list, optional
            Number of neurons per layer. The default is [200].
            
        HiddenActivations : list, optional
            Activation function per layer. The default is ['tanh'].

        Returns
        -------
        None.

        """
        super().__init__()
        tf.keras.backend.clear_session() 
        self.inputlayer = tf.keras.layers.Input(name='the_input', 
                                                shape=(None, input_dimension))
        self.hidden_layers = dict()

        for i, (d, a) in enumerate(zip(HiddenNeurons, HiddenActivations)):
            self.hidden_layers['hidden_'+str(i)] = tf.keras.layers.GRU(
                            units = d, activation = a, 
                            return_sequences=True, 
                            implementation=2)
            self.hidden_layers['Batch_'+str(i)] = tf.keras.layers.BatchNormalization()
            
        self.TimeDistributed = tf.keras.layers.TimeDistributed(tf.keras.layers.Dense(output_dimension))    
        self.outputlayer = tf.keras.layers.Dense(output_dimension, activation = 'softmax')
        self.num_layers = len(HiddenNeurons) + 3
        
    def call(self, inputtensor):
        """
        Perform forward evaluation for the constructed Neural network given 
        input tensor. This method is called whenever you build the model.

        Parameters
        ----------
        inputtensor : tensor
            The input tensor of shape (batch, time steps, features).

        Returns
        -------
        x : tensor
            The output generated from the forward evaluation.

        """
        x = self.inputlayer(inputtensor)
        
        for k, v in self.hidden_layers.items():
            x = v(x)
            
        x = self.outputlayer(x)
        
        return x
    
    def detailed_print(self):
        """
        Print weights and biases details of the constructed model.

        Returns
        -------
        None.

        """
        for i, layer in enumerate(self.layers):
    
            if len(layer.get_weights()) > 0:
                w = layer.get_weights()[0]
                b = layer.get_weights()[1]
                
                print('\nLayer {}: {}\n'.format(i, layer.name))
                print('\u2022 Weights:\n', w)
                print('\n\u2022 Biases:\n', b)
                print('\nThis layer has a total of {:,} weights and {:,} biases'.format(w.size, b.size))
                print('\n------------------------')
            
            else:
                print('\nLayer {}: {}\n'.format(i, layer.name))
                print('This layer has no weights or biases.')
                print('\n------------------------')
When I try to build it:

Model1 = DeepRNN_Transcription()
Model1.build((None,161))
it throws this error:

ValueError: You cannot build your model by calling `build` if your layers do not support float type inputs. Instead, in order to instantiate and build your model, `call` your model on real tensor data (of the correct dtype).
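The error message suggests calling the model on real tensor data instead of `build`. A minimal sketch of that pattern (the `TinyRNN` class, layer sizes, and shapes here are illustrative, not the full model above):

```python
import numpy as np
import tensorflow as tf

# Illustrative subclassed model: layers are created in __init__ and wired
# together in call(); no tf.keras.layers.Input object is stored on the instance.
class TinyRNN(tf.keras.Model):
    def __init__(self, input_dim=161, output_dim=29):
        super().__init__()
        self.gru = tf.keras.layers.GRU(200, return_sequences=True)
        self.out = tf.keras.layers.Dense(output_dim, activation='softmax')

    def call(self, x):
        return self.out(self.gru(x))

model = TinyRNN()
# "call your model on real tensor data": a dummy batch of one 50-step record
dummy = np.random.rand(1, 50, 161).astype('float32')
y = model(dummy)
print(y.shape)  # (1, 50, 29)
```

Calling the model on a dummy batch like this builds all the weights, after which `model.summary()` also works.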
I somewhat suspect the input layer is the problematic part here, but I am not sure.
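One quick way to test that suspicion (assuming TF 2.x): `tf.keras.layers.Input` is a function that returns a symbolic tensor, not a `Layer` instance, so storing its result as `self.inputlayer` and then calling it in `call` does not behave like calling a layer:

```python
import tensorflow as tf

x = tf.keras.layers.Input(shape=(None, 161))
# Input() returns a symbolic KerasTensor; it is not a layer you can
# later call on data, unlike GRU or Dense.
print(type(x))
print(isinstance(x, tf.keras.layers.Layer))  # False
```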

So, can anyone help?
