Tensorflow: Understanding Keras model architecture (tensor index)


The script produces the following output:

____________________________________________________________________________________________________
Layer (type)                     Output Shape          Param #     Connected to                     
====================================================================================================
A_input (InputLayer)             (None, 100)           0                                            
____________________________________________________________________________________________________
B_dense (Dense)                  (None, 20)            2020        A_input[0][0]                    
____________________________________________________________________________________________________
C_dense_shared (Dense)           (None, 20)            420         B_dense[0][0]                    
                                                                   B_dense[0][0]                    
____________________________________________________________________________________________________
D_concat (Concatenate)           (None, 40)            0           C_dense_shared[0][0]             
                                                                   C_dense_shared[1][0]             
====================================================================================================
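The script itself is missing from the question as extracted, but the layer names and parameter counts in the summary suggest something like the following sketch. This is a hedged reconstruction, not the asker's actual code: in particular, calling the shared `Dense` layer twice on the same tensor is an assumption inferred from the two `B_dense[0][0]` inbound entries.

```python
from tensorflow.keras.layers import Input, Dense, Concatenate
from tensorflow.keras.models import Model

# Plausible reconstruction of the script behind the summary above.
A = Input(shape=(100,), name='A_input')
B = Dense(20, name='B_dense')(A)            # 100*20 + 20 = 2020 params

shared = Dense(20, name='C_dense_shared')   # 20*20 + 20 = 420 params
C0 = shared(B)   # first call  -> node 0 of C_dense_shared
C1 = shared(B)   # second call -> node 1 of C_dense_shared

D = Concatenate(name='D_concat')([C0, C1])  # output shape (None, 40)
model = Model(A, D)
model.summary()
```

Each call to `shared` creates a new inbound node on `C_dense_shared`, which is why `D_concat` lists `C_dense_shared[0][0]` and `C_dense_shared[1][0]`.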
My question concerns the contents of the Connected to column.

I understand that, in this case, C_dense_shared has two nodes, and D_concat is connected to both of them (C_dense_shared[0][0] and C_dense_shared[1][0]). So the first index (the node_index) is clear to me. But what does the second index mean? From the source I gather that it is the tensor index:

    layer_name[node_index][tensor_index]

But what does the tensor index mean, and under what circumstances can it have a value other than 0?

I think the docstring of the Node class attributes makes it quite clear:

    tensor_indices: a list of integers,
        the same length as `inbound_layers`.
        `tensor_indices[i]` is the index of `input_tensors[i]` within the
        output of the inbound layer
        (necessary since each inbound layer might
        have multiple tensor outputs, with each one being
        independently manipulable).
If a layer has multiple output tensors, tensor_index will be nonzero. This is different from having multiple "data streams" (e.g. layer sharing), where a layer has multiple outbound nodes. For example, an LSTM layer will return 3 tensors if given return_state=True:

  • The hidden state of the last time step, or all hidden states if return_sequences=True
  • The hidden state of the last time step
  • The memory cell of the last time step
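In code, a single call to such an LSTM layer creates one node whose output consists of three tensors, so they get tensor_index 0, 1 and 2 within that node. A minimal sketch (shapes are arbitrary):

```python
import numpy as np
from tensorflow.keras.layers import Input, LSTM
from tensorflow.keras.models import Model

inp = Input(shape=(5, 8))
# With return_state=True, one call returns three tensors:
# output, last hidden state, last cell state -> tensor_index 0, 1, 2
output, state_h, state_c = LSTM(16, return_state=True)(inp)
model = Model(inp, [output, state_h, state_c])

preds = model.predict(np.zeros((2, 5, 8)))  # list of three arrays
```

Since return_sequences is left at its default of False here, the first tensor is just the last hidden state, so the first two outputs coincide.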
As another example, a feature transformation can be implemented as a Lambda layer:

    from keras import backend as K
    from keras.layers import Input, Lambda, Concatenate, Dense
    from keras.models import Model

    def generate_powers(x):
        # One input tensor in, a list of three output tensors out
        return [x, K.sqrt(x), K.square(x)]

    model_input = Input(shape=(10,))
    powers = Lambda(generate_powers)(model_input)
    x = Concatenate()(powers)
    x = Dense(10, activation='relu')(x)
    x = Dense(1, activation='sigmoid')(x)
    model = Model(model_input, x)

    model.summary()
In the summary, you can see that concatenate_5 is connected to lambda_7[0][0], lambda_7[0][1] and lambda_7[0][2]:

    ____________________________________________________________________________________________________
    Layer (type)                     Output Shape          Param #     Connected to                     
    ====================================================================================================
    input_7 (InputLayer)             (None, 10)            0                                            
    ____________________________________________________________________________________________________
    lambda_7 (Lambda)                [(None, 10), (None, 1 0           input_7[0][0]                    
    ____________________________________________________________________________________________________
    concatenate_5 (Concatenate)      (None, 30)            0           lambda_7[0][0]                   
                                                                       lambda_7[0][1]                   
                                                                       lambda_7[0][2]                   
    ____________________________________________________________________________________________________
    dense_8 (Dense)                  (None, 10)            310         concatenate_5[0][0]              
    ____________________________________________________________________________________________________
    dense_9 (Dense)                  (None, 1)             11          dense_8[0][0]                    
    ====================================================================================================
    Total params: 321 
    Trainable params: 321 
    Non-trainable params: 0
    ____________________________________________________________________________________________________
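The three tensor indices can also be observed directly, without building a full model: calling the Lambda layer once creates a single node whose output is a list of three tensors, addressed as lambda[0][0] through lambda[0][2]. A small sketch (using tf.keras here, which is an assumption; the snippet above used standalone Keras):

```python
from tensorflow.keras import backend as K
from tensorflow.keras.layers import Input, Lambda

def generate_powers(x):
    return [x, K.sqrt(x), K.square(x)]

inp = Input(shape=(10,))
powers_layer = Lambda(generate_powers)
powers = powers_layer(inp)  # one call -> one node, three output tensors

# powers[0], powers[1], powers[2] correspond to
# lambda[0][0], lambda[0][1], lambda[0][2] in the summary
```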
    

Thanks, that makes sense. I haven't used Lambda or LSTM before. I'll award you the bounty in 14 hours; it's not possible to do so earlier.