Unsatisfying accuracy from my ResNet implementation in TensorFlow

I am a beginner in deep learning, and I recently tried to implement a 34-layer residual neural network. I trained it on the CIFAR-10 images, but the test accuracy is not as high as expected: only about 65%, as shown in the screenshot below.

Basically, I implemented the residual blocks as follows.

"""
    Convolution Layers 3 starts here.
    Convolution Layers 3, Sub Unit 0
"""

conv_weights_3_0 = tf.Variable(tf.random_normal([3,3,128,256]),dtype=tf.float32)
conv_3_0 = tf.nn.conv2d(conv_2_out, conv_weights_3_0, strides=[1,2,2,1], padding="SAME")

axis = list(range(len(conv_3_0.get_shape()) - 1))
mean, variance = tf.nn.moments(conv_3_0, axis)

beta = tf.Variable(tf.zeros(conv_3_0.get_shape()[-1:]),dtype=tf.float32)
gamma = tf.Variable(tf.ones(conv_3_0.get_shape()[-1:]),dtype=tf.float32)

conv_3_0 = tf.nn.batch_normalization(conv_3_0, mean, variance, beta, gamma, 0.001)

conv_3_0 = tf.nn.relu(conv_3_0)

conv_weights_3_1 = tf.Variable(tf.random_normal([3,3,256,256]),dtype=tf.float32)
conv_3_1 = tf.nn.conv2d(conv_3_0, conv_weights_3_1, strides=[1,1,1,1], padding="SAME")

axis = list(range(len(conv_3_1.get_shape()) - 1))
mean, variance = tf.nn.moments(conv_3_1, axis)

beta = tf.Variable(tf.zeros(conv_3_1.get_shape()[-1:]),dtype=tf.float32)
gamma = tf.Variable(tf.ones(conv_3_1.get_shape()[-1:]),dtype=tf.float32)

conv_3_1 = tf.nn.batch_normalization(conv_3_1, mean, variance, beta, gamma, 0.001)

conv_weights_3_pre = tf.Variable(tf.ones([1,1,128,256]),dtype=tf.float32,trainable=False)
conv_3_pre = tf.nn.conv2d(conv_2_out, conv_weights_3_pre, strides=[1,2,2,1], padding="SAME")

axis = list(range(len(conv_3_pre.get_shape()) - 1))
mean, variance = tf.nn.moments(conv_3_pre, axis)

conv_3_pre = tf.nn.batch_normalization(conv_3_pre, mean, variance, None, None, 0.001)

conv_3_1 = conv_3_1 + conv_3_pre

conv_3_1 = tf.nn.relu(conv_3_1)
For residual blocks without a dimension increase, it looks like this:

"""
    Convolution Layers 1, Sub Unit 1
"""

conv_weights_1_2 = tf.Variable(tf.random_normal([3,3,64,64]),dtype=tf.float32)
conv_1_2 = tf.nn.conv2d(conv_1_1, conv_weights_1_2, strides=[1,1,1,1], padding="SAME")

axis = list(range(len(conv_1_2.get_shape()) - 1))
mean, variance = tf.nn.moments(conv_1_2, axis)

beta = tf.Variable(tf.zeros(conv_1_2.get_shape()[-1:]),dtype=tf.float32)
gamma = tf.Variable(tf.ones(conv_1_2.get_shape()[-1:]),dtype=tf.float32)

conv_1_2 = tf.nn.batch_normalization(conv_1_2, mean, variance, beta, gamma, 0.001)

conv_1_2 = tf.nn.relu(conv_1_2)

conv_weights_1_3 = tf.Variable(tf.random_normal([3,3,64,64]),dtype=tf.float32)
conv_1_3 = tf.nn.conv2d(conv_1_2, conv_weights_1_3, strides=[1,1,1,1], padding="SAME")

axis = list(range(len(conv_1_3.get_shape()) - 1))
mean, variance = tf.nn.moments(conv_1_3, axis)

beta = tf.Variable(tf.zeros(conv_1_3.get_shape()[-1:]),dtype=tf.float32)
gamma = tf.Variable(tf.ones(conv_1_3.get_shape()[-1:]),dtype=tf.float32)

conv_1_3 = tf.nn.batch_normalization(conv_1_3, mean, variance, beta, gamma, 0.001)

conv_1_3 = conv_1_3 + conv_1_1

conv_1_3 = tf.nn.relu(conv_1_3)

For blocks with a dimension increase, it looks like this:

"""
    Convolution Layers 3 starts here.
    Convolution Layers 3, Sub Unit 0
"""

conv_weights_3_0 = tf.Variable(tf.random_normal([3,3,128,256]),dtype=tf.float32)
conv_3_0 = tf.nn.conv2d(conv_2_out, conv_weights_3_0, strides=[1,2,2,1], padding="SAME")

axis = list(range(len(conv_3_0.get_shape()) - 1))
mean, variance = tf.nn.moments(conv_3_0, axis)

beta = tf.Variable(tf.zeros(conv_3_0.get_shape()[-1:]),dtype=tf.float32)
gamma = tf.Variable(tf.ones(conv_3_0.get_shape()[-1:]),dtype=tf.float32)

conv_3_0 = tf.nn.batch_normalization(conv_3_0, mean, variance, beta, gamma, 0.001)

conv_3_0 = tf.nn.relu(conv_3_0)

conv_weights_3_1 = tf.Variable(tf.random_normal([3,3,256,256]),dtype=tf.float32)
conv_3_1 = tf.nn.conv2d(conv_3_0, conv_weights_3_1, strides=[1,1,1,1], padding="SAME")

axis = list(range(len(conv_3_1.get_shape()) - 1))
mean, variance = tf.nn.moments(conv_3_1, axis)

beta = tf.Variable(tf.zeros(conv_3_1.get_shape()[-1:]),dtype=tf.float32)
gamma = tf.Variable(tf.ones(conv_3_1.get_shape()[-1:]),dtype=tf.float32)

conv_3_1 = tf.nn.batch_normalization(conv_3_1, mean, variance, beta, gamma, 0.001)

conv_weights_3_pre = tf.Variable(tf.ones([1,1,128,256]),dtype=tf.float32,trainable=False)
conv_3_pre = tf.nn.conv2d(conv_2_out, conv_weights_3_pre, strides=[1,2,2,1], padding="SAME")

axis = list(range(len(conv_3_pre.get_shape()) - 1))
mean, variance = tf.nn.moments(conv_3_pre, axis)

conv_3_pre = tf.nn.batch_normalization(conv_3_pre, mean, variance, None, None, 0.001)

conv_3_1 = conv_3_1 + conv_3_pre

conv_3_1 = tf.nn.relu(conv_3_1)
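
Both variants repeat the same convolution-plus-batch-norm steps, so for readability the pattern could be factored into a helper. The following is only a sketch using the same TF1-style calls as the code above; the helper name and signature are mine, not part of my actual code:

import tensorflow as tf

def conv_bn_relu(x, in_ch, out_ch, stride, relu=True):
    # 3x3 convolution, batch norm over the batch/height/width axes,
    # optional ReLU -- mirroring the steps written out above.
    w = tf.Variable(tf.random_normal([3, 3, in_ch, out_ch]), dtype=tf.float32)
    y = tf.nn.conv2d(x, w, strides=[1, stride, stride, 1], padding="SAME")
    axis = list(range(len(y.get_shape()) - 1))
    mean, variance = tf.nn.moments(y, axis)
    beta = tf.Variable(tf.zeros(y.get_shape()[-1:]), dtype=tf.float32)
    gamma = tf.Variable(tf.ones(y.get_shape()[-1:]), dtype=tf.float32)
    y = tf.nn.batch_normalization(y, mean, variance, beta, gamma, 0.001)
    return tf.nn.relu(y) if relu else y

# For example, the two convolutions of Sub Unit 0 above would become:
# conv_3_0 = conv_bn_relu(conv_2_out, 128, 256, stride=2)
# conv_3_1 = conv_bn_relu(conv_3_0, 256, 256, stride=1, relu=False)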

I train on all 50,000 CIFAR-10 training images with AdamOptimizer at a learning rate of 0.001, and test on the 10,000 test images. In the plot, training runs for almost 1,000 epochs, each epoch consisting of 500 batches of 100 images. Before each epoch I shuffle all 50,000 training images. Even so, the test accuracy stays at roughly 65% for a long stretch, right up until training finishes.
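
For context, the training loop I describe looks roughly like the following TF1-style sketch. The placeholders x and y_, the loss op, and the train_images/train_labels arrays are assumed names for illustration, not the exact names in my code:

import numpy as np
import tensorflow as tf

# Assumed to exist already: x, y_ (placeholders) and loss (scalar op).
train_step = tf.train.AdamOptimizer(learning_rate=0.001).minimize(loss)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    for epoch in range(1000):
        # Shuffle all 50,000 training images before each epoch.
        perm = np.random.permutation(len(train_images))
        images, labels = train_images[perm], train_labels[perm]
        # 500 batches of 100 images each.
        for i in range(500):
            sess.run(train_step, feed_dict={x: images[i*100:(i+1)*100],
                                            y_: labels[i*100:(i+1)*100]})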


The complete code can be found here. Is there anything wrong with my implementation? I would appreciate any suggestions for improving it.

Comments:

Why did you choose avg_pool instead of max_pool on this line: 663: c_pre = tf.nn.avg_pool(conv_4_out, ksize=[1,1,1,1], strides=[1,1,1,1], padding="SAME")? I also cannot find any regularization in your code. The TensorFlow team shows many examples that use dropout. I would suggest adding, on line 668 right after fc1 is computed,

keep_prob = tf.placeholder(tf.float32)
fc1_drop = tf.nn.dropout(fc1, keep_prob)

and then using fc1_drop from that point on. During training you must feed keep_prob as roughly 0.5, and during testing as 1.0. Does your training accuracy reach the accuracy you are hoping for on the test set? If it does, you want to think about adding regularization; if not, you may want to look at other parts of the network, such as the learning rate.

I am trying your suggestions and will report the new results. @ThomasPinetz, thank you very much. You are right, I should also look at the training accuracy. This time I added dropout, and I am watching the training accuracy as well.
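
To make the commenter's suggestion concrete, here is a sketch of how the dropout layer and the keep_prob feeds fit together in TF1-style code. The names fc1, x, y_, train_step, and accuracy are assumptions for illustration, not verified against the full source:

# After fc1 is computed (around line 668 of the full code):
keep_prob = tf.placeholder(tf.float32)
fc1_drop = tf.nn.dropout(fc1, keep_prob)  # zeroes random units, scales the rest by 1/keep_prob
# ...build the remaining layers on fc1_drop instead of fc1...

# Training step: keep roughly half of the units.
sess.run(train_step, feed_dict={x: batch_x, y_: batch_y, keep_prob: 0.5})

# Evaluation: disable dropout by keeping everything.
test_acc = sess.run(accuracy, feed_dict={x: test_x, y_: test_y, keep_prob: 1.0})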