NoneType error in a Keras custom multi-input model


I am trying to build a multi-input model in Keras that combines a CNN and a Transformer using the functional API, but every time I try to define the model I get the error below. I am using Google Colab, so the Keras, TensorFlow and Python versions are the ones Colab provides.

Below are the code and the error, followed by the Transformer model classes used to run the code.


```

    import numpy as np
    import cv2
    from keras.models import Sequential
    from keras.layers import Dense, Dropout, Flatten
    from keras.layers import Conv2D
    from keras.optimizers import Adam
    from keras.layers import MaxPooling2D
    from keras.preprocessing.image import ImageDataGenerator
    from keras.layers import Input
    from keras.layers.merge import add
    # used below but missing from the snippet; presumably imported elsewhere in
    # the original notebook, since the traceback shows the model being built
    import tensorflow as tf
    from tensorflow import keras
    from tensorflow.keras import layers
    from keras.models import Model
    from tensorflow.keras.utils import plot_model


    embed_dim = 32      # embedding size for each token
    num_heads_text = 2  # number of attention heads
    ff_dim = 32         # hidden layer size in the feed-forward network inside the transformer


    def define_model(vocab_size, max_length):
        # CNN branch over the image input
        inputs1 = Input(shape=(299, 299, 3))
        x = Conv2D(32, kernel_size=(3, 3), activation='relu')(inputs1)
        x = Conv2D(32, kernel_size=(3, 3), activation='relu')(x)
        features = MaxPooling2D(pool_size=(2, 2))(x)
        encoder = Dense(256, activation='relu')(features)
        encoder1 = Dense(16, activation='relu')
        # Transformer branch over the text input
        inputs2 = layers.Input(shape=(max_length,))
        embedding_layer = TokenAndPositionEmbedding(max_length, vocab_size, embed_dim)
        x = embedding_layer(inputs2)
        transformer_block = TransformerBlock(embed_dim, num_heads_text, ff_dim)
        x = transformer_block(x)
        x = layers.GlobalAveragePooling1D()(x)
        x = layers.Dropout(0.1)(x)
        encoder2 = layers.Dense(16, activation="relu")(x)
        # merging both models
        decoder1 = add([encoder1, encoder2])
        decoder2 = Dense(16, activation='relu')(decoder1)
        outputs = Dense(vocab_size, activation='softmax')(decoder2)
        # tie it together [image, seq] -> [word]
        model = Model(inputs=(inputs1, inputs2), outputs=outputs)
        model.compile(loss='categorical_crossentropy', optimizer='adam')
        # summarize model
        print(model.summary())
        plot_model(model, to_file='model.png', show_shapes=True)
        return model


    vocab_size = 7577
    max_length = 32

    model = define_model(vocab_size, max_length)
    ```
    
and I am getting the `'NoneType' object is not subscriptable` error:
    
    
    ---------------------------------------------------------------------------
    TypeError                                 Traceback (most recent call last)
    <ipython-input-18-cc4e62ad8c8c> in <module>()
    ----> 1 model = define_model(vocab_size, max_length)
          2 
    
    8 frames
    /usr/local/lib/python3.7/dist-packages/tensorflow/python/keras/layers/merge.py in build(self, input_shape)
         90   def build(self, input_shape):
         91     # Used purely for shape validation.
    ---> 92     if not isinstance(input_shape[0], tuple):
         93       raise ValueError('A merge layer should be called on a list of inputs.')
         94     if len(input_shape) < 2:
    
    TypeError: 'NoneType' object is not subscriptable
    
    
    
The Transformer model code is as follows:
    
```

    class TokenAndPositionEmbedding(layers.Layer):
        def __init__(self, maxlen, vocab_size, embed_dim):
            super(TokenAndPositionEmbedding, self).__init__()
            self.token_emb = layers.Embedding(input_dim=vocab_size, output_dim=embed_dim)
            self.pos_emb = layers.Embedding(input_dim=maxlen, output_dim=embed_dim)

        def call(self, x):
            maxlen = tf.shape(x)[-1]
            positions = tf.range(start=0, limit=maxlen, delta=1)
            positions = self.pos_emb(positions)
            x = self.token_emb(x)
            return x + positions


    class TransformerBlock(layers.Layer):
        def __init__(self, embed_dim, num_heads_text, ff_dim, rate=0.1):
            super(TransformerBlock, self).__init__()
            self.att = layers.MultiHeadAttention(num_heads=num_heads_text, key_dim=embed_dim)
            self.ffn = keras.Sequential(
                [layers.Dense(ff_dim, activation="relu"), layers.Dense(embed_dim)]
            )
            self.layernorm1 = layers.LayerNormalization(epsilon=1e-6)
            self.layernorm2 = layers.LayerNormalization(epsilon=1e-6)
            self.dropout1 = layers.Dropout(rate)
            self.dropout2 = layers.Dropout(rate)

        def call(self, inputs, training):
            attn_output = self.att(inputs, inputs)
            attn_output = self.dropout1(attn_output, training=training)
            out1 = self.layernorm1(inputs + attn_output)
            ffn_output = self.ffn(out1)
            ffn_output = self.dropout2(ffn_output, training=training)
            return self.layernorm2(out1 + ffn_output)

    ```
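
As a quick sanity check, the two custom classes can be exercised in isolation. This is a minimal sketch (the batch size of 4 and the random token ids are made up for illustration); both layers produce the expected `(batch, max_length, embed_dim)` output, which points the failure at the merge step rather than at the Transformer branch:

    ```
    import tensorflow as tf

    emb = TokenAndPositionEmbedding(maxlen=32, vocab_size=7577, embed_dim=32)
    block = TransformerBlock(embed_dim=32, num_heads_text=2, ff_dim=32)

    tokens = tf.random.uniform((4, 32), maxval=7577, dtype=tf.int32)  # fake token ids
    x = emb(tokens)               # -> (4, 32, 32)
    y = block(x, training=False)  # -> (4, 32, 32)
    print(y.shape)
    ```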

A precise answer about the error would be appreciated; please keep the solution to the point.
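
For what it's worth, the traceback points at the likely cause: `encoder1 = Dense(16, activation='relu')` creates a Dense layer object but never calls it on a tensor, so `add([encoder1, encoder2])` receives a `Layer` instance instead of a tensor. Keras cannot infer an input shape from it, `input_shape` arrives as `None` in `merge.py`'s `build`, and subscripting it raises the `TypeError`. A second, related problem is that the CNN branch is never flattened, so even a called `encoder1` would be a 4-D tensor that cannot be added to the 2-D `encoder2`. Below is a hedged sketch of a corrected CNN branch (layer sizes copied from the question; the choice of `Flatten` is an assumption, and `GlobalAveragePooling2D` would be a much lighter alternative):

    ```
    # sketch of a corrected CNN branch for define_model; not a confirmed fix
    inputs1 = Input(shape=(299, 299, 3))
    x = Conv2D(32, kernel_size=(3, 3), activation='relu')(inputs1)
    x = Conv2D(32, kernel_size=(3, 3), activation='relu')(x)
    x = MaxPooling2D(pool_size=(2, 2))(x)
    x = Flatten()(x)  # collapse the 4-D feature map; GlobalAveragePooling2D() is cheaper
    x = Dense(256, activation='relu')(x)
    encoder1 = Dense(16, activation='relu')(x)  # the layer is now *called*, giving a (None, 16) tensor

    # ... transformer branch unchanged, ending in encoder2 with shape (None, 16) ...

    decoder1 = add([encoder1, encoder2])  # add() now receives two tensors of matching shape
    ```

With both branches reduced to `(None, 16)` tensors, the merge, the following Dense layers, and `Model(inputs=(inputs1, inputs2), outputs=outputs)` should build without the NoneType error.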


