How to add more layers to the convolutional neural network text classification TensorFlow example?


According to the documentation, the model provided in the example is similar to the one described in this paper: ""

I found that the original model (given in the paper) consists of 9 layers, 6 convolutional and 3 fully connected, but the implemented example contains only two convolutional layers:

with tf.variable_scope('CNN_Layer1'):
    # Apply Convolution filtering on input sequence.
    conv1 = tf.contrib.layers.convolution2d(
                 byte_list, N_FILTERS, FILTER_SHAPE1, padding='VALID')
    # Add a RELU for non linearity.
    conv1 = tf.nn.relu(conv1)
    # Max pooling across output of Convolution+Relu.
    pool1 = tf.nn.max_pool(
            conv1,
            ksize=[1, POOLING_WINDOW, 1, 1],
            strides=[1, POOLING_STRIDE, 1, 1],
            padding='SAME')
    # Transpose matrix so that n_filters from convolution becomes width.
    pool1 = tf.transpose(pool1, [0, 1, 3, 2])
with tf.variable_scope('CNN_Layer2'):
    # Second level of convolution filtering.
    conv2 = tf.contrib.layers.convolution2d(
                 pool1, N_FILTERS, FILTER_SHAPE2, padding='VALID')
    # Max across each filter to get useful features for classification.
    pool2 = tf.squeeze(tf.reduce_max(conv2, 1), squeeze_dims=[1])
It would be great if someone could help me extend this model to more layers.
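When stacking additional CNN_Layer blocks in the same pattern, the main thing to watch is that each 'VALID' convolution shortens the sequence axis and each 'SAME' pooling divides it, so the length must stay positive. Below is a minimal, framework-free sketch of that bookkeeping; the constants are hypothetical stand-ins for the example's MAX_DOCUMENT_LENGTH, filter shapes, and POOLING_STRIDE:

```python
import math

def valid_conv_len(length, filter_height):
    # A 'VALID' convolution along the sequence axis shrinks it.
    return length - filter_height + 1

def same_pool_len(length, stride):
    # 'SAME' max-pooling keeps ceil(length / stride) positions.
    return math.ceil(length / stride)

# Hypothetical values standing in for the example's constants.
MAX_DOCUMENT_LENGTH = 100
FILTER_HEIGHTS = [20, 10, 5]   # one entry per stacked conv block
POOLING_STRIDE = 2

length = MAX_DOCUMENT_LENGTH
for i, fh in enumerate(FILTER_HEIGHTS, start=1):
    length = valid_conv_len(length, fh)
    if i < len(FILTER_HEIGHTS):
        # Intermediate blocks pool; the final block uses reduce_max instead.
        length = same_pool_len(length, POOLING_STRIDE)
    print(f"after CNN_Layer{i}: sequence length = {length}")
```

With the example's actual FILTER_SHAPE values plugged in, this quickly shows whether a third or fourth convolution would drive the sequence length to zero or below, which is the usual failure mode when naively stacking more layers.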

  • Something similar to BVLC Caffenet:

    def bvlc_caffenet(imgs, weights, biases):
        # Mean subtraction
        mean = tf.constant([123.68, 116.779, 103.939], dtype=tf.float32,
                           shape=[1, 1, 1, 3], name='img_mean')
        images = imgs - mean

        # conv1
        conv1 = tf.nn.conv2d(images, weights['c1'], [1, 3, 3, 1], padding='VALID')
        out1 = tf.nn.relu(tf.nn.bias_add(conv1, biases['b1']))
        pool1 = tf.nn.max_pool(out1, ksize=[1, 3, 3, 1],
                               strides=[1, 2, 2, 1], padding='VALID')

        # conv2
        conv2 = tf.nn.conv2d(pool1, weights['c2'], [1, 1, 1, 1], padding='VALID')
        out2 = tf.nn.relu(tf.nn.bias_add(conv2, biases['b2']))
        pool2 = tf.nn.max_pool(out2, ksize=[1, 3, 3, 1],
                               strides=[1, 2, 2, 1], padding='VALID')

        # conv3
        conv3 = tf.nn.conv2d(pool2, weights['c3'], [1, 1, 1, 1], padding='VALID')
        out3 = tf.nn.relu(tf.nn.bias_add(conv3, biases['b3']))

        # conv4
        conv4 = tf.nn.conv2d(out3, weights['c4'], [1, 1, 1, 1], padding='VALID')
        out4 = tf.nn.relu(tf.nn.bias_add(conv4, biases['b4']))

        # conv5
        conv5 = tf.nn.conv2d(out4, weights['c5'], [1, 1, 1, 1], padding='VALID')
        out5 = tf.nn.relu(tf.nn.bias_add(conv5, biases['b5']))
        pool5 = tf.nn.max_pool(out5, ksize=[1, 3, 3, 1],
                               strides=[1, 2, 2, 1], padding='VALID')

        # flatten
        shape = int(np.prod(pool5.get_shape()[1:]))
        pool5_flat = tf.reshape(pool5, [-1, shape])

        # fc6
        fc6 = tf.matmul(pool5_flat, weights['f6'])
        out6 = tf.nn.relu(tf.nn.bias_add(fc6, biases['b6']))
        out6 = tf.nn.dropout(out6, 0.5)

        # fc7
        fc7 = tf.matmul(out6, weights['f7'])
        out7 = tf.nn.relu(tf.nn.bias_add(fc7, biases['b7']))
        out7 = tf.nn.dropout(out7, 0.5)

        # fc8
        fc8 = tf.matmul(out7, weights['f8'])
        out8 = tf.nn.relu(tf.nn.bias_add(fc8, biases['b8']))
        out8 = tf.nn.dropout(out8, 0.5)

        probs = tf.nn.softmax(out8)
        return probs

    # Initialize the network's weights and biases
    weights = {
        'c1': tf.Variable(tf.truncated_normal([7, 7, 3, 96], stddev=0.1)),
        'c2': tf.Variable(tf.truncated_normal([5, 5, 96, 256], stddev=0.1)),
        'c3': tf.Variable(tf.truncated_normal([3, 3, 256, 384], stddev=0.1)),
        'c4': tf.Variable(tf.truncated_normal([3, 3, 384, 384], stddev=0.1)),
        'c5': tf.Variable(tf.truncated_normal([3, 3, 384, 256], stddev=0.1)),
        'f6': tf.Variable(tf.truncated_normal([4096, 2048], stddev=0.1)),
        'f7': tf.Variable(tf.truncated_normal([2048, 2048], stddev=0.1)),
        'f8': tf.Variable(tf.truncated_normal([2048, 1000], stddev=0.1))
    }
    biases = {
        'b1': tf.Variable(tf.constant(0.1, shape=[96])),
        'b2': tf.Variable(tf.constant(0.1, shape=[256])),
        'b3': tf.Variable(tf.constant(0.1, shape=[384])),
        'b4': tf.Variable(tf.constant(0.1, shape=[384])),
        'b5': tf.Variable(tf.constant(0.1, shape=[256])),
        'b6': tf.Variable(tf.constant(0.1, shape=[2048])),
        'b7': tf.Variable(tf.constant(0.1, shape=[2048])),
        'b8': tf.Variable(tf.constant(0.1, shape=[1000]))
    }

  • Or follow this (in another format):
  • Do these help?
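As a quick sanity check on the Caffenet-style shapes above (a standalone sketch, independent of TensorFlow): each convolution's input-channel dimension must equal the previous convolution's output channels, and fc6's 4096-wide input must equal the flattened pool5 size, e.g. a 4 × 4 spatial map with c5's 256 channels for a suitably sized input image:

```python
# (height, width, in_channels, out_channels), copied from the weights dict above
conv_shapes = [
    ('c1', (7, 7, 3, 96)),
    ('c2', (5, 5, 96, 256)),
    ('c3', (3, 3, 256, 384)),
    ('c4', (3, 3, 384, 384)),
    ('c5', (3, 3, 384, 256)),
]

for (prev_name, prev), (name, cur) in zip(conv_shapes, conv_shapes[1:]):
    # This layer's input channels must equal the previous layer's output channels.
    assert cur[2] == prev[3], f"{name} expects {cur[2]}, {prev_name} gives {prev[3]}"

# fc6 is [4096, 2048]: pool5 must flatten to 4096 features,
# e.g. a 4x4 spatial map with c5's 256 output channels.
assert 4 * 4 * conv_shapes[-1][1][3] == 4096
print("layer shapes chain correctly")
```

Running a check like this before building the graph catches the most common mistake when adding layers: a weight tensor whose channel dimensions do not line up with its neighbours.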