Deep learning: LIME image classification explanation for a multi-input DNN

Tags: deep-learning, conv-neural-network, lime, multiple-input

I am fairly new to deep learning, but I managed to build a multi-branch image classification architecture that produces quite satisfactory results.

Less important: I am working on KKBox customer churn (), where I convert customer behavior, transaction and static data into heatmaps and try to classify customers based on those heatmaps.

The classification itself works fine. My problem arises when I try to use LIME to see where the results come from. When running the code from here, with the only difference that I pass a list of inputs [members[0], transactions[0], user_logs[0]], I get the following error: AttributeError: 'list' object has no attribute 'shape'.
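
As far as I can tell, lime_image.LimeImageExplainer.explain_instance expects a single numpy image array and reads its .shape attribute, while I am passing a plain Python list. A minimal illustration of the mismatch (array shapes taken from my inputs, the zero data is just a placeholder):

import numpy as np

single_image = np.zeros((61, 39, 3))    # a numpy array has a .shape attribute
print(single_image.shape)               # (61, 39, 3)

input_list = [np.zeros((61, 4, 3)), np.zeros((61, 39, 3)), np.zeros((61, 7, 3))]
# input_list.shape                      # AttributeError: 'list' object has no attribute 'shape'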

What comes to mind is that LIME may simply not be designed for multi-input architectures like mine. On the other hand, Microsoft Azure also has a multi-branch architecture (), and they reportedly use LIME to explain the results ().

I have tried concatenating the images into a single input, but that approach produces considerably worse results than the multi-input approach. LIME does work with that approach, though (even if not as easy to interpret as with ordinary image recognition).

The DNN architecture:

# Imports assumed for this snippet (standalone Keras API)
import keras
from keras.models import Model
from keras.layers import Input, Dropout, Conv2D, GlobalMaxPooling2D, Dense, Activation

# Members
members_input = Input(shape=(61,4,3), name='members_input')
x1 = Dropout(0.2)(members_input)
x1 = Conv2D(32, kernel_size = (61,4), padding='valid', activation='relu', strides=1)(x1)
x1 = GlobalMaxPooling2D()(x1)

# Transactions
transactions_input = Input(shape=(61,39,3), name='transactions_input')
x2 = Dropout(0.2)(transactions_input)
x2 = Conv2D(32, kernel_size = (61,1,), padding='valid', activation='relu', strides=1)(x2)
x2 = Conv2D(32, kernel_size = (1,39,), padding='valid', activation='relu', strides=1)(x2)
x2 = GlobalMaxPooling2D()(x2)

# User logs
userlogs_input = Input(shape=(61,7,3), name='userlogs_input')
x3 = Dropout(0.2)(userlogs_input)
x3 = Conv2D(32, kernel_size = (61,1,), padding='valid', activation='relu', strides=1)(x3)
x3 = Conv2D(32, kernel_size = (1,7,), padding='valid', activation='relu', strides=1)(x3)
x3 = GlobalMaxPooling2D()(x3)

# User_logs + Transactions + Members
merged = keras.layers.concatenate([x1,x2,x3]) # Merged layer
out = Dense(2)(merged)
out_2 = Activation('softmax')(out)

model = Model(inputs=[members_input, transactions_input, userlogs_input], outputs=out_2)
model.compile(optimizer="adam", loss='categorical_crossentropy', metrics=['accuracy'])
model.summary()
Attempted LIME usage:

from lime import lime_image

explainer = lime_image.LimeImageExplainer()

explanation = explainer.explain_instance([members_test[0],transactions_test[0],user_logs_test[0]], model.predict, top_labels=2, hide_color=0, num_samples=1000)
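
One workaround I am considering (only a sketch, not verified on my data): keep the multi-input model as it is, but hand LIME a single image in which the three heatmaps are stacked side by side, and split that image back into the three inputs inside the prediction function. The column offsets below are taken from the input shapes above; the stacking order and the cast to double are my own assumptions:

import numpy as np
from lime import lime_image

# widths of the three heatmaps, taken from the Input shapes above
W_MEMBERS, W_TRANSACTIONS, W_USERLOGS = 4, 39, 7

def predict_from_stacked(images):
    # LIME passes a batch of perturbed (61, 50, 3) images;
    # split the columns back into the three model inputs
    members = images[:, :, :W_MEMBERS, :]
    transactions = images[:, :, W_MEMBERS:W_MEMBERS + W_TRANSACTIONS, :]
    user_logs = images[:, :, W_MEMBERS + W_TRANSACTIONS:, :]
    return model.predict([members, transactions, user_logs])

# stack the three heatmaps of one customer side by side -> shape (61, 50, 3)
stacked_image = np.concatenate(
    [members_test[0], transactions_test[0], user_logs_test[0]], axis=1).astype('double')

explainer = lime_image.LimeImageExplainer()
explanation = explainer.explain_instance(stacked_image, predict_from_stacked,
                                         top_labels=2, hide_color=0, num_samples=1000)
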
Model summary:

__________________________________________________________________________________________________
Layer (type)                    Output Shape         Param #     Connected to                     
==================================================================================================
transactions_input (InputLayer) (None, 61, 39, 3)    0                                            
__________________________________________________________________________________________________
userlogs_input (InputLayer)     (None, 61, 7, 3)     0                                            
__________________________________________________________________________________________________
members_input (InputLayer)      (None, 61, 4, 3)     0                                            
__________________________________________________________________________________________________
dropout_2 (Dropout)             (None, 61, 39, 3)    0           transactions_input[0][0]         
__________________________________________________________________________________________________
dropout_3 (Dropout)             (None, 61, 7, 3)     0           userlogs_input[0][0]             
__________________________________________________________________________________________________
dropout_1 (Dropout)             (None, 61, 4, 3)     0           members_input[0][0]              
__________________________________________________________________________________________________
conv2d_2 (Conv2D)               (None, 1, 39, 32)    5888        dropout_2[0][0]                  
__________________________________________________________________________________________________
conv2d_4 (Conv2D)               (None, 1, 7, 32)     5888        dropout_3[0][0]                  
__________________________________________________________________________________________________
conv2d_1 (Conv2D)               (None, 1, 1, 32)     23456       dropout_1[0][0]                  
__________________________________________________________________________________________________
conv2d_3 (Conv2D)               (None, 1, 1, 32)     39968       conv2d_2[0][0]                   
__________________________________________________________________________________________________
conv2d_5 (Conv2D)               (None, 1, 1, 32)     7200        conv2d_4[0][0]                   
__________________________________________________________________________________________________
global_max_pooling2d_1 (GlobalM (None, 32)           0           conv2d_1[0][0]                   
__________________________________________________________________________________________________
global_max_pooling2d_2 (GlobalM (None, 32)           0           conv2d_3[0][0]                   
__________________________________________________________________________________________________
global_max_pooling2d_3 (GlobalM (None, 32)           0           conv2d_5[0][0]                   
__________________________________________________________________________________________________
concatenate_1 (Concatenate)     (None, 96)           0           global_max_pooling2d_1[0][0]     
                                                                 global_max_pooling2d_2[0][0]     
                                                                 global_max_pooling2d_3[0][0]     
__________________________________________________________________________________________________
dense_1 (Dense)                 (None, 2)            194         concatenate_1[0][0]              
__________________________________________________________________________________________________
activation_1 (Activation)       (None, 2)            0           dense_1[0][0]                    
==================================================================================================
Hence my question: does anyone have experience with multi-input DNN architectures and LIME? Is there a workaround I am not seeing? Or is there another interpretable model I could use?

Thank you very much.