Strange behavior of the dropout layer in Python TensorFlow

I built a CNN model with TensorFlow that includes dropout layers. I pass an is_training argument into the network function so that dropout is disabled during the test phase, and I noticed that the error increases significantly when it is disabled. If I test the model with dropout still enabled (which makes no sense), the average error is 0.01, whereas when I test it with is_training set to False (but still train with dropout), the average error is 0.8. I cannot figure out where my mistake is.
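
For context, tf.layers.dropout is only active when training=True: it then zeroes units at random and scales the survivors by 1 / (1 - rate) (inverted dropout), while training=False makes it a pure pass-through. Disabling dropout at test time should therefore not hurt a properly trained model, which is what makes the numbers above surprising. A minimal standalone check of this behavior (my own illustration, TF 1.x API, not part of the question's code):

import numpy as np
import tensorflow as tf

# Dropout acts only in training mode; survivors are scaled by 1 / (1 - rate)
x_in = tf.placeholder(tf.float32, shape=[1, 4])
drop_train = tf.layers.dropout(x_in, rate=0.5, training=True)   # random mask + rescaling
drop_test = tf.layers.dropout(x_in, rate=0.5, training=False)   # identity pass-through

with tf.Session() as sess:
    data = np.ones((1, 4), dtype=np.float32)
    print(sess.run(drop_train, {x_in: data}))  # e.g. [[2. 0. 2. 0.]] (mask is random)
    print(sess.run(drop_test, {x_in: data}))   # always [[1. 1. 1. 1.]]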

Here is the model function:

def conv_net(x, arch, is_training=False):

    # MNIST data input is a 1-D vector of 784 features (28*28 pixels)
    # Reshape to match picture format [Height x Width x Channel]
    # Tensor input becomes 4-D: [Batch Size, Height, Width, Channel]
    x = tf.reshape(x, shape=[-1, 28, 28, 1])

    ### YOUR CODE STARTS HERE ###

    # Convolution Layer with F1 filters, a kernel size of K1 and ReLU activations
    pad = 'same'  # NOTE: defined but never used below (see the padding TODO)

    conv1 = tf.layers.conv2d(x, arch['conv1'][0], arch['conv1'][1], activation=tf.nn.relu)
    conv2 = tf.layers.conv2d(conv1, arch['conv2'][0], arch['conv2'][1], activation=tf.nn.relu)
    pool1 = tf.layers.max_pooling2d(conv2, arch['pool1'][0], arch['pool1'][0])
    drop1 = tf.layers.dropout(pool1, arch['dropout1'], training=is_training)

    conv3 = tf.layers.conv2d(drop1, arch['conv3'][0], arch['conv3'][1], activation=tf.nn.relu)  # TODO: add padding (e.g. padding=pad)
    drop1_2 = tf.layers.dropout(conv3, arch['dropout1'], training=is_training)
    conv4 = tf.layers.conv2d(drop1_2, arch['conv4'][0], arch['conv4'][1], activation=tf.nn.relu)
    pool2 = tf.layers.max_pooling2d(conv4, arch['pool2'][0], arch['pool2'][0])

    drop2 = tf.layers.dropout(pool2, arch['dropout2'], training=is_training)

    flat = tf.contrib.layers.flatten(drop2)

    fc1 = tf.layers.dense(flat, arch['N'])  # NOTE: no activation given, so this layer is linear

    out = tf.layers.dense(fc1, n_classes)  # n_classes comes from the enclosing scope
    ### YOUR CODE ENDS HERE ###

    return out
And the training function:

def train_test_model(hypers, save_final_model=False):
    # Running the training session
    print("Starting training session...")
    with tf.Session() as sess:

        # Run the initializer
        sess.run(init)
        total_batch = int(mnist.train.num_examples / hypers.batch_size)
        # Training cycle
        try:
            for epoch in range(hypers.n_epochs):
                avg_cost = 0.

                # Loop over all batches
                for i in range(total_batch):
                    batch_x, batch_y = mnist.train.next_batch(hypers.batch_size)
                    # Run optimization op (backprop) and cost op (to get loss value)
                    _, c = sess.run([optimizer, cost], feed_dict={x: batch_x,
                                                                  y: batch_y})
                    # Compute average loss
                    avg_cost += c / total_batch
                # Display logs per epoch step
                if epoch % display_step == 0:

                    # Test model
                    # NOTE: these ops are rebuilt on every logged epoch, which keeps
                    # growing the graph; they could be defined once before the loop
                    correct_prediction = tf.equal(tf.argmax(pred, 1), tf.argmax(y, 1))

                    # Calculate accuracy
                    # ORIGINAL:
                    # accuracy = tf.reduce_mean(tf.cast(correct_prediction, "float"))
                    # train_err = 1-accuracy.eval({x: mnist.train.images, y: mnist.train.labels})
                    # valid_err = 1-accuracy.eval({x: mnist.validation.images, y: mnist.validation.labels})

                    # WITH BATCHES FOR LESS MEM ALLOC
                    accuracy = tf.reduce_mean(tf.cast(correct_prediction, 'float'))
                    train_acc = 0
                    for i in range(total_batch):
                        batch_x, batch_y = mnist.train.next_batch(hypers.batch_size)
                        train_acc += accuracy.eval(feed_dict={x:batch_x,
                                                              y:batch_y})
                    train_acc /= total_batch

                    train_err = 1 - train_acc
                    valid_err = 1 - accuracy.eval({x: mnist.validation.images, y: mnist.validation.labels})
                    # Display accuracy
                    print("Epoch:", '%05d' % (epoch + 1), ", cost=",
                          "{:.9f}".format(avg_cost), ", train_err=", "{:.4f}".format(train_err), ", valid_err=",
                          "{:.4f}".format(valid_err))

                if epoch % 5 == 0:
                    v = input('Do you want to stop the model? [Y/n]')
                    if 'y' in v.lower():
                        raise KeyboardInterrupt

        except KeyboardInterrupt:
            hypers.n_epochs = epoch
            print("SIGINT Received, interrupting the training")



        print("\nOptimization Finished!\n")

        # Test model
        # test_pred is presumably the conv_net output built with is_training=False
        correct_prediction = tf.equal(tf.argmax(test_pred, 1), tf.argmax(y, 1))
        # Calculate accuracy
        accuracy = tf.reduce_mean(tf.cast(correct_prediction, "float"))
        # modified to batches
        train_acc = 0
        for i in range(total_batch):
            batch_x, batch_y = mnist.train.next_batch(hypers.batch_size)
            train_acc += accuracy.eval(feed_dict={x: batch_x,
                                                  y: batch_y})
        train_acc /= total_batch
        train_err = 1 - train_acc
        #
        valid_err = 1 - accuracy.eval({x: mnist.validation.images, y: mnist.validation.labels})
        print("Optimized for ", '%05d' % (epoch + 1), "epochs, to obtain training error", "{:.4f}".format(train_err),
              ", and validation error", "{:.4f}".format(valid_err))
        # tf.confusion_matrix takes (labels, predictions), so the labels go first
        confusion = tf.confusion_matrix(tf.argmax(y, 1), tf.argmax(pred, 1))
        print("\nValidation Confusion matrix:\n",
              confusion.eval({x: mnist.validation.images, y: mnist.validation.labels}))

Correct me if I'm wrong, but I don't see a call to conv_net(). Could you provide the full code snippet? By the way, you can pass an is_training placeholder as the training argument of tf.layers.dropout. If you do that, you can feed the is_training value directly through the feed dict, which removes the need to redefine the graph between the training and testing loops.

I don't see anything obviously wrong. It would be very helpful if you could provide a minimal, complete example that reproduces the problem.

Here is the full code; sorry if it's a bit messy, I'm still learning. BTW @OlivierDehaene thanks for the tip, I hadn't thought of that.
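
To make the placeholder suggestion concrete, here is a minimal sketch of that approach (TF 1.x API). It reuses conv_net, x, y, arch, optimizer, cost, and accuracy from the question, so treat it as an illustration under those assumptions rather than the asker's actual code:

# Build the graph ONCE, with a boolean placeholder controlling dropout.
# tf.layers.dropout accepts a boolean scalar tensor as its `training` argument.
is_training = tf.placeholder_with_default(False, shape=(), name='is_training')
pred = conv_net(x, arch, is_training=is_training)

# Training step: switch dropout on through the feed dict.
_, c = sess.run([optimizer, cost], feed_dict={x: batch_x, y: batch_y, is_training: True})

# Evaluation: omit the flag (it defaults to False), so dropout becomes a no-op
# and the same trained weights are evaluated; no second graph is needed.
acc = accuracy.eval(feed_dict={x: batch_x, y: batch_y})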