Python 3.x: Mapping the output of a GlobalMaxPooling1D pooling layer back to input positions in Keras

Tags: python-3.x, keras, deep-learning, nlp, conv-neural-network

I am trying to map the output of Keras's GlobalMaxPooling1D layer back to positions in the input sequence, but convolving the sample input by hand does not seem to give the same result as the GlobalMaxPooling1D output. The expected and obtained outputs are shown below.

The main goal here is to locate the important positions in the input and identify them as positively contributing positions. A similar problem is discussed in this paper, which uses max pooling over time ().

Our network has three layers: Conv1D, GlobalMaxPooling1D (GMP), and Dense. After training the model, I extracted the weights of the 20 filters used in the Conv layer, and for a test sample I obtained the output of the GMP layer. I then tried to recreate the GMP layer's output by convolving the filter weights with the input, to check whether it matches.
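
For reference, here is a minimal NumPy sketch (an illustration, not taken from the model code below) of the arithmetic a Conv1D layer with kernel_size=2, 'valid' padding and ReLU performs at a single output position j for filter f; x, W and b are hypothetical placeholders for one input sample of shape (1803, 4), the kernel weights of shape (2, 4, 20) and the biases of shape (20,):

import numpy as np

def conv1d_value(x, W, b, j, f):
    # x: (seq_len, channels), W: (kernel_size, channels, filters), b: (filters,)
    window = x[j:j + W.shape[0]]                          # slice of length kernel_size
    pre_activation = np.sum(window * W[:, :, f]) + b[f]   # bias is added once per output position
    return max(0.0, pre_activation)                       # ReLU

GlobalMaxPooling1D then keeps, for each filter f, the maximum of these values over all positions j.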

# Packages
import keras
import tensorflow as tf
from keras import backend as k
session_conf = tf.ConfigProto(intra_op_parallelism_threads=1, 
inter_op_parallelism_threads=1)
sess = tf.Session(graph = tf.get_default_graph(), config = session_conf)
k.set_session(sess)

import numpy as np
np.random.seed(37)
import pandas as pd
tf.set_random_seed(89)
import random as rn
rn.seed(1254)

import matplotlib.pyplot as plt
from keras.layers import *
from keras.layers import Activation
from keras.layers.core import Dense, Flatten
from keras.optimizers import Adam
from keras.metrics import categorical_crossentropy
from keras.layers.normalization import BatchNormalization
from keras.layers.convolutional import *
import itertools
import math
from keras.models import Sequential, Model
from keras.layers import Input, Flatten, Dense, Dropout, Convolution2D, Conv2D, MaxPooling2D, Lambda, GlobalMaxPooling2D, GlobalAveragePooling2D, BatchNormalization, Activation, AveragePooling2D, Concatenate
from keras.callbacks import EarlyStopping, ModelCheckpoint, ReduceLROnPlateau
from keras.utils import np_utils
from keras.callbacks import CSVLogger
#%matplotlib inline
keras.backend.set_image_data_format('channels_last')
from keras import initializers

# DATA
# 1000 training examples: each example is a sequence of length 1803,
# where each sequence element is encoded as a 4-dimensional one-hot vector.
# For demonstration, let's use randomly generated integers.
X = np.random.random_integers(0, 1, size=(1000, 1803, 4))
y = np.random.random_integers(0, 1, size=(1000, 1))

print("X.shape : ", X.shape) ## (2237, 95, 95, 1)
print("y.shape : ", y.shape) ## (2237, 1)

train_index = 950
test_index = 950

# HYPERPARAMETERS
ACTIVATION = 'relu'
KERNEL_INITIALIZER = initializers.glorot_uniform(seed=90)
BIAS_INITIALIZER='zeros'
INPUT_SHAPE=X.shape[1:]
OUTPUT_ACTIVATION = 'sigmoid'

OPTIMIZER = Adam
LEARNING_RATE = 0.0001
BETA_1 = 0.9
BETA_2 = 0.999
EPSILON = None
DECAY = 0.0
AMSGRAD = False
LOSS = 'binary_crossentropy'
METRICS = ['accuracy']

BATCH_SIZE = 64
EPOCHS = 20
SHUFFLE = True
VALIDATION_SPLIT = 0.05

FIRST_CONV_LYR = 'conv_lyr_1'
FIRST_CONV_LYR_FILTERS = 20

FIRST_DENSE_LYR = 'dense_lyr_1'
FIRST_DENSE_LYR_UNITS = 50

VALID_PADDING = 'valid'
SAME_PADDING = 'same'
USE_BIAS = True
VERBOSE = 2
DILATION_RATE=1

# MODEL
model_21 = Sequential()
model_21.add(Conv1D(filters=FIRST_CONV_LYR_FILTERS, kernel_size = (2), strides=(1), activation = ACTIVATION, 
                    padding = VALID_PADDING, input_shape =INPUT_SHAPE , kernel_initializer=KERNEL_INITIALIZER, 
                    dilation_rate=DILATION_RATE, bias_initializer=BIAS_INITIALIZER, name =FIRST_CONV_LYR ))

model_21.add(GlobalMaxPooling1D()) # input_shape=(None, 1802, 20)

#model_21.add(Flatten()) ::: GMP does not need flatten()

model_21.add(Dense(1, activation = OUTPUT_ACTIVATION))

model_21.compile(optimizer=OPTIMIZER(lr = LEARNING_RATE, beta_1=BETA_1, beta_2=BETA_2, epsilon=EPSILON, decay=DECAY, 
                                     amsgrad=AMSGRAD), loss = LOSS, metrics = METRICS,) 

mdl_21 = model_21.fit(X[:train_index], y[:train_index], batch_size = BATCH_SIZE, epochs = EPOCHS, verbose = VERBOSE, 
                      shuffle=SHUFFLE, validation_split=VALIDATION_SPLIT) 

accuracy_21 = model_21.evaluate(X[test_index:], y[test_index:])

print(model_21.metrics_names[1], accuracy_21[1]*100)

print("The accuracy of Model 21 is : ", accuracy_21)

# PREDICTION
prediction_21 = model_21.predict(X[test_index:]) # predict() takes <class 'numpy.ndarray'> as input
predicted_classes_21 = model_21.predict_classes(X[test_index:])
print("The predicted classes are : ", predicted_classes_21[:, 0])
print("Actual classes are : ", y[test_index:][:, 0]) # [:, 0]
Below are the output of the GMP layer on the test sample, the weights and biases of the conv and dense layers, and then the part where the problem shows up: the actual and expected values do not match. Am I computing the GMP value correctly by taking the max() of the recreated convolution output?

Once this is corrected, I want to map the classification output obtained at the final Dense layer back to the positions in the input sequences (across all sequences) that produced it. This should help me find the sequence positions that correlate positively with the output. Please let me know if there is existing code I could use to find such positions or to generate a heatmap for this kind of problem. Sorry for the long post, and thanks in advance.
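
For illustration, here is a minimal sketch of one way such positions and a heatmap could be obtained directly from the trained model above (this assumes the per-filter argmax of the Conv layer's feature map is what counts as an "important" position; model_21, FIRST_CONV_LYR, X and test_index are from the code above):

from keras.models import Model
import numpy as np
import matplotlib.pyplot as plt

# Intermediate model that exposes the Conv layer's full feature map
conv_model = Model(inputs=model_21.input,
                   outputs=model_21.get_layer(FIRST_CONV_LYR).output)
feature_map = conv_model.predict(X[test_index:test_index + 1])[0]  # shape (1802, 20)

# One candidate "important" position per filter
max_positions = np.argmax(feature_map, axis=0)
print("Per-filter argmax positions :", max_positions)

# Simple heatmap of activations: filters on the y-axis, sequence positions on the x-axis
plt.imshow(feature_map.T, aspect='auto', cmap='hot')
plt.xlabel('sequence position')
plt.ylabel('filter')
plt.colorbar()
plt.show()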

The output of training the above model is:

    Train on 902 samples, validate on 48 samples
    Epoch 1/20
     - 3s - loss: 0.7790 - acc: 0.4878 - val_loss: 0.7631 - val_acc: 0.5000
    Epoch 2/20
     - 3s - loss: 0.7682 - acc: 0.4878 - val_loss: 0.7537 - val_acc: 0.5000
    Epoch 3/20
     - 3s - loss: 0.7588 - acc: 0.4878 - val_loss: 0.7463 - val_acc: 0.5000
    Epoch 4/20
     - 3s - loss: 0.7503 - acc: 0.4878 - val_loss: 0.7384 - val_acc: 0.5000
    Epoch 5/20
     - 3s - loss: 0.7422 - acc: 0.4878 - val_loss: 0.7313 - val_acc: 0.5000
    Epoch 6/20
     - 3s - loss: 0.7348 - acc: 0.4878 - val_loss: 0.7249 - val_acc: 0.5000
    Epoch 7/20
     - 3s - loss: 0.7282 - acc: 0.4878 - val_loss: 0.7192 - val_acc: 0.5000
    Epoch 8/20
     - 3s - loss: 0.7227 - acc: 0.4878 - val_loss: 0.7148 - val_acc: 0.5000
    Epoch 9/20
     - 3s - loss: 0.7180 - acc: 0.4878 - val_loss: 0.7109 - val_acc: 0.5000
    Epoch 10/20
     - 3s - loss: 0.7137 - acc: 0.4878 - val_loss: 0.7075 - val_acc: 0.5000
    Epoch 11/20
     - 3s - loss: 0.7106 - acc: 0.4878 - val_loss: 0.7050 - val_acc: 0.5000
    Epoch 12/20
     - 3s - loss: 0.7074 - acc: 0.4878 - val_loss: 0.7026 - val_acc: 0.5000
    Epoch 13/20
     - 3s - loss: 0.7048 - acc: 0.4878 - val_loss: 0.7006 - val_acc: 0.5000
    Epoch 14/20
     - 3s - loss: 0.7026 - acc: 0.4878 - val_loss: 0.6988 - val_acc: 0.5000
    Epoch 15/20
     - 3s - loss: 0.7006 - acc: 0.4878 - val_loss: 0.6975 - val_acc: 0.5000
    Epoch 16/20
     - 3s - loss: 0.6992 - acc: 0.4878 - val_loss: 0.6966 - val_acc: 0.5000
    Epoch 17/20
     - 3s - loss: 0.6981 - acc: 0.4878 - val_loss: 0.6958 - val_acc: 0.5000
    Epoch 18/20
     - 3s - loss: 0.6971 - acc: 0.4878 - val_loss: 0.6950 - val_acc: 0.5000
    Epoch 19/20
     - 3s - loss: 0.6962 - acc: 0.4878 - val_loss: 0.6945 - val_acc: 0.5000
    Epoch 20/20
     - 3s - loss: 0.6953 - acc: 0.4878 - val_loss: 0.6940 - val_acc: 0.5000
    50/50 [==============================] - 0s 439us/step
    acc 48.00000047683716
    The accuracy of Model 21 is :  [0.6956488680839539, 0.48000000476837157]


    The predicted values for the test dataset are:
    The predicted classes are :  [1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1]
    Actual classes are :  [1 0 1 0 1 0 1 0 0 0 1 0 1 1 1 1 0 1 1 1 1 0 1 1 0 1 0 1 1 0 0 1 0 0 0 1 1 0 0 0 0 0 0 0 1 1 0 1 0 0]
from keras.models import Model

model = model_21  # include here your original model

layer_name = 'global_max_pooling1d_1'
intermediate_layer_model = Model(inputs=model.input, outputs=model.get_layer(layer_name).output)
intermediate_output = intermediate_layer_model.predict(X[test_index:test_index+1])
print("intermediate_output shape : \n", intermediate_output.shape)
print("intermediate_output : \n",intermediate_output[0])
    intermediate_output shape : 
     (1, 20)

    intermediate_output : 
     [[0.83934546 0.4251969  0.6270695  1.3905973  0.5667287  0.76842016
       0.8666611  0.84354174 0.5817993  0.82427627 0.4142136  0.79649013
       0.61913747 1.2524168  1.1239575  0.5584184  0.35370556 1.0718826
       0.96888304 0.51134074]]
# WEIGHTS & BIASES of conv and dense layer
first_layer_weights = model.layers[0].get_weights()[0]
first_layer_biases  = model.layers[0].get_weights()[1]
third_layer_weights = model.layers[2].get_weights()[0]
third_layer_biases  = model.layers[2].get_weights()[1]
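# Sanity check (illustrative, not part of the original script): with kernel_size=2,
# 4 input channels and 20 filters, the Conv1D kernel should have shape (2, 4, 20)
# and its biases shape (20,); the Dense layer should have weights (20, 1) and biases (1,).
print("conv weights shape  :", first_layer_weights.shape, " conv biases shape  :", first_layer_biases.shape)
print("dense weights shape :", third_layer_weights.shape, " dense biases shape :", third_layer_biases.shape)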
# I try to recreate the output of the GMP layer for all 20 filters used in the conv layer
# so that I can map them back to positions in the test sample.
# conv_val holds the convolved (and ReLU-activated) values for each of the 20 filters.
conv_val = [[] for _ in range(FIRST_CONV_LYR_FILTERS)]
for i in range(len(intermediate_output[0])):
    print("Finding the position in the input which resulted in value {:d} of the GMP output".format(i + 1))
    # Each sample is 1803 positions long and each position is encoded as a 4-dimensional vector.
    # The Conv1D layer uses a kernel size of 2, so each filter has shape 2 x 4.
    for j in range(len(X[test_index]) - 1):
        conv_result = np.sum((X[test_index][j:j+2] * first_layer_weights[:, :, i]) + first_layer_biases[i])
        conv_val[i].append(max(0, conv_result))

    posn = np.argmax(conv_val[i])

    print("Position of max element obtained from convolution with the test input: posn :", posn)
    print("Actual o/p: value of the resulting convolution on the test data:", conv_val[i][posn])
    #print("np.max : ", np.max(conv_val[i]))

    print("Expected o/p: the max value obtained from the GMP layer on the test data:", intermediate_output[0][i])

    # note: this is an exact float comparison
    if conv_val[i][posn] == intermediate_output[0][i]:
        print("Convolved at index:", posn)
        print("\n")
    else:
        print("not found :( ")
        print("\n")
The recreated maxima do not match the GMP layer output for any of the 20 filters:

    Filter  1: position  612, recreated max 0.9715582579374313,  GMP output 0.83934546 -> not found :(
    Filter  2: position   63, recreated max 0.29808690398931503, GMP output 0.4251969  -> not found :(
    Filter  3: position  412, recreated max 0.7571525424718857,  GMP output 0.6270695  -> not found :(
    Filter  4: position  292, recreated max 1.5215912163257599,  GMP output 1.3905973  -> not found :(
    Filter  5: position   40, recreated max 0.6967360526323318,  GMP output 0.5667287  -> not found :(
    Filter  6: position  371, recreated max 0.6413831561803818,  GMP output 0.76842016 -> not found :(
    Filter  7: position  166, recreated max 0.9993664026260376,  GMP output 0.8666611  -> not found :(
    Filter  8: position  149, recreated max 0.7193388789892197,  GMP output 0.84354174 -> not found :(
    Filter  9: position   93, recreated max 0.7193349152803421,  GMP output 0.5817993  -> not found :(
    Filter 10: position   30, recreated max 0.6970454901456833,  GMP output 0.82427627 -> not found :(
    Filter 11: position   27, recreated max 0.2878176420927048,  GMP output 0.4142136  -> not found :(
    Filter 12: position   96, recreated max 0.9302686732262373,  GMP output 0.79649013 -> not found :(
    Filter 13: position    2, recreated max 0.7492209821939468,  GMP output 0.61913747 -> not found :(
    Filter 14: position   82, recreated max 1.3909660689532757,  GMP output 1.2524168  -> not found :(
    Filter 15: position   82, recreated max 1.0048598730936646,  GMP output 1.1239575  -> not found :(
    Filter 16: position  986, recreated max 0.43241868913173676, GMP output 0.5584184  -> not found :(
    Filter 17: position   52, recreated max 0.22693601995706558, GMP output 0.35370556 -> not found :(
    Filter 18: position   51, recreated max 0.9564715027809143,  GMP output 1.0718826  -> not found :(
    Filter 19: position  106, recreated max 0.8425523824989796,  GMP output 0.96888304 -> not found :(
    Filter 20: position  410, recreated max 0.641106590628624,   GMP output 0.51134074 -> not found :(
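
To narrow down whether the discrepancy comes from the hand-rolled convolution or from the GMP layer itself, one option (a sketch under the same assumptions as above, reusing model_21, FIRST_CONV_LYR, X, test_index and intermediate_output) is to let Keras compute the Conv layer's feature map and compare its per-filter maxima with the GMP output, which by definition should agree:

from keras.models import Model
import numpy as np

# Feature map of the Conv layer as Keras computes it, shape (1802, 20)
conv_model = Model(inputs=model_21.input,
                   outputs=model_21.get_layer(FIRST_CONV_LYR).output)
feature_map = conv_model.predict(X[test_index:test_index + 1])[0]

# Per-filter maximum over all sequence positions -- this is exactly what GlobalMaxPooling1D returns
gmp_from_conv = feature_map.max(axis=0)
print("max over Keras feature map :", gmp_from_conv)
print("GMP layer output           :", intermediate_output[0])
print("values agree :", np.allclose(gmp_from_conv, intermediate_output[0], atol=1e-5))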