Python: Unexpected behavior in model validation with tf.slim and inception_v1


I am trying to train a model on the CIFAR-10 dataset using the inception_v1 module written in tf.slim.

Below is the code that trains and evaluates the model on the dataset:

    import tensorflow as tf
    import tensorflow.contrib.slim as slim
    from tqdm import tqdm
    # inception_v1 / inception_v1_arg_scope are assumed to come from the
    # TF-slim models repo (tensorflow/models/research/slim) on the path
    from nets.inception_v1 import inception_v1, inception_v1_arg_scope

    # test_data = (data['images_test'], data['labels_test'])
    train_data = (train_x, train_y)
    val_data = (val_x, val_y)

    # create two datasets, one for training and one for validation
    train_dataset = (tf.data.Dataset.from_tensor_slices(train_data)
                     .shuffle(buffer_size=10000)
                     .batch(BATCH_SIZE)
                     .map(preprocess))

    # train_dataset = train_dataset.shuffle(buffer_size=10000).batch(BATCH_SIZE).map(preprocess)
    val_dataset = tf.data.Dataset.from_tensor_slices(val_data).batch(BATCH_SIZE).map(preprocess)
    # test_dataset = tf.data.Dataset.from_tensor_slices(test_data).batch(BATCH_SIZE).map(preprocess)

    # create an iterator of the correct shape and type
    _iter = tf.data.Iterator.from_structure(
            train_dataset.output_types,
            train_dataset.output_shapes
            )
    features, labels = _iter.get_next()

    # create the initialization operations
    train_init_op = _iter.make_initializer(train_dataset)
    val_init_op = _iter.make_initializer(val_dataset)
    # test_init_op = _iter.make_initializer(test_dataset)

    # Placeholders that are fed when the graph is evaluated in the session
    training_mode = tf.placeholder(shape=None, dtype=tf.bool)
    dropout_prob = tf.placeholder_with_default(1.0, shape=())
    reuse_bool = tf.placeholder_with_default(True, shape=())

    # Init the saver Object which handles saves and restores of
    # model weights
    # saver = tf.train.Saver()

    # Initialize the model inside the arg_scope to define the batch
    # normalization layer and the appropriate parameters
    with slim.arg_scope(inception_v1_arg_scope(use_batch_norm=True)) as scope:
        logits, end_points = inception_v1(features,
                                          reuse=None,
                                          dropout_keep_prob=dropout_prob,
                                          is_training=training_mode)

    # Create the cross entropy loss function
    cross_entropy = tf.reduce_mean(
        tf.losses.softmax_cross_entropy(tf.one_hot(labels, 10), logits))

    train_op = tf.train.AdamOptimizer(1e-2).minimize(loss=cross_entropy)
    # train_op = slim.learning.create_train_op(cross_entropy, optimizer, global_step=)

    # Define the accuracy metric
    preds = tf.argmax(logits, axis=-1, output_type=tf.int64)
    acc = tf.reduce_mean(tf.cast(tf.equal(preds, labels), tf.float32))

    # Count the iterations for each set
    n_train_batches = train_y.shape[0] // BATCH_SIZE
    n_val_batches = val_y.shape[0] // BATCH_SIZE

    with tf.Session() as sess:
        sess.run(tf.global_variables_initializer())
        # saver = tf.train.Saver([v for v in tf.all_variables()][:-1])
        # for v in tf.all_variables():
        #     print(v.name)
        # saver.restore(sess, tf.train.latest_checkpoint('./', latest_filename='inception_v1.ckpt'))
        for i in range(EPOCHS):
            total_loss = 0
            total_acc = 0

            # Init train session
            sess.run(train_init_op)
            with tqdm(total=n_train_batches * BATCH_SIZE) as pbar:
                for batch in range(n_train_batches):
                    _, loss, train_acc = sess.run([train_op, cross_entropy, acc], feed_dict={training_mode: True, dropout_prob: 0.2})
                    total_loss += loss
                    total_acc += train_acc
                    pbar.update(BATCH_SIZE)
            print("Epoch: {} || Loss: {:.5f} || Acc: {:.5f} %".\
                    format(i+1, total_loss / n_train_batches, (total_acc / n_train_batches)*100))

            # Switch to validation
            total_val_loss = 0
            total_val_acc = 0
            sess.run(val_init_op)
            for batch in range(n_val_batches):
                val_loss, val_acc = sess.run([cross_entropy, acc], feed_dict={training_mode: False})
                total_val_loss += val_loss
                total_val_acc += val_acc
            print("Epoch: {} || Validation Loss: {:.5f} || Val Acc: {:.5f} %".\
                    format(i+1, total_val_loss / n_val_batches, (total_val_acc / n_val_batches) * 100))
The inconsistency is that when I train the model and then evaluate it on the validation set, I get the following results:

    Epoch: 1 || Loss: 2.29436 || Acc: 23.61750 %
    Epoch: 1 || Validation Loss: 1158854431554614016.00000 || Val Acc: 10.03000 %
    100%|███████████████████████████████████████████████████| 40000/40000 [03:52

I found the solution to my problem. The issue involved two things.

The first was to set a smaller batch norm decay: since CIFAR-10 is smaller than the imagenet dataset, I had to lower it to 0.99:

    batch_norm_decay=0.99
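
In the code above, this means passing the lower decay into the arg scope. A minimal sketch, assuming inception_v1_arg_scope from the slim models repo, which exposes a batch_norm_decay argument (default 0.9997):

    # build the model with a faster-moving batch norm average, better
    # suited to a dataset much smaller than ImageNet
    with slim.arg_scope(inception_v1_arg_scope(use_batch_norm=True,
                                               batch_norm_decay=0.99)):
        logits, end_points = inception_v1(features,
                                          reuse=None,
                                          dropout_keep_prob=dropout_prob,
                                          is_training=training_mode)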

The other thing was to use the following line to create the train op, so that the batch normalization layers' update ops are run as part of training:

    train_op = slim.learning.create_train_op(cross_entropy, optimizer)
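
Put together, this train op replaces the plain AdamOptimizer(1e-2).minimize(...) call from the question. A minimal sketch using the same names as above:

    optimizer = tf.train.AdamOptimizer(1e-2)
    # create_train_op adds the ops registered in tf.GraphKeys.UPDATE_OPS
    # (the batch norm moving mean/variance updates) as a dependency of
    # every training step
    train_op = slim.learning.create_train_op(cross_entropy, optimizer)

This would also explain the exploding validation loss: plain minimize() never runs the update ops, so with is_training=False the network normalizes with moving statistics that were never fitted to the data.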