
Some value errors when implementing a GAN with TensorFlow


There is a value error in my code: ValueError: Variable d_conv2d1/weights/Adam/ does not exist, or was not created with tf.get_variable(). Did you mean to set reuse=None in VarScope?

My TensorFlow version is 1.2 and the dataset is MNIST; the full code is at the end of this post.

The error message is as follows:

Traceback (most recent call last):
  File "/home/zhoupinbyi/DCGAN/test_gan.py", line 370, in <module>
    train()
  File "/home/zhoupinbyi/DCGAN/test_gan.py", line 297, in train
    d_optim = optimizer.minimize(d_loss, global_step = global_step, var_list = d_vars)
  File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/training/optimizer.py", line 325, in minimize
    name=name)
  File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/training/optimizer.py", line 446, in apply_gradients
    self._create_slots([_get_variable_for(v) for v in var_list])
  File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/training/adam.py", line 128, in _create_slots
    self._zeros_slot(v, "m", self._name)
  File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/training/optimizer.py", line 766, in _zeros_slot
    named_slots[_var_key(var)] = slot_creator.create_zeros_slot(var, op_name)
  File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/training/slot_creator.py", line 174, in create_zeros_slot
    colocate_with_primary=colocate_with_primary)
  File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/training/slot_creator.py", line 146, in create_slot_with_initializer
    dtype)
  File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/training/slot_creator.py", line 66, in _create_slot_var
    validate_shape=validate_shape)
  File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/ops/variable_scope.py", line 1065, in get_variable
    use_resource=use_resource, custom_getter=custom_getter)
  File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/ops/variable_scope.py", line 962, in get_variable
    use_resource=use_resource, custom_getter=custom_getter)
  File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/ops/variable_scope.py", line 367, in get_variable
    validate_shape=validate_shape, use_resource=use_resource)
  File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/ops/variable_scope.py", line 352, in _true_getter
    use_resource=use_resource)
  File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/ops/variable_scope.py", line 682, in _get_single_variable
    "VarScope?" % name)
ValueError: Variable d_conv2d1/weights/Adam/ does not exist, or was not created with tf.get_variable(). Did you mean to set reuse=None in VarScope?

This code can run under TensorFlow 0.11, but under 1.2 you have to add

with tf.variable_scope(tf.get_variable_scope(), reuse=None):

above the generator and discriminator functions you use.
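
Concretely, in train() that would mean wrapping the generator, sampler and discriminator calls roughly like this (a minimal sketch of the workaround against the code posted below; only the affected lines are shown):

    # Re-enter the current (root) variable scope so that the reuse flag set by
    # sampler()/discriminator(..., reuse=True) only affects the re-entered copy
    # and does not leak onto the real root scope.
    with tf.variable_scope(tf.get_variable_scope(), reuse=None):
        G = generator(z, y)
        D, D_logits = discriminator(images, y)
        samples = sampler(z, y)
        D_, D_logits_ = discriminator(G, y, reuse=True)

    # optimizer.minimize() can now create the Adam slot variables
    # (e.g. d_conv2d1/weights/Adam), because the root scope is not left in reuse mode.

The point is that tf.get_variable_scope().reuse_variables() inside sampler() and discriminator() then only flips the reuse flag of the re-entered scope, which is discarded when the with block exits.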

Is this really a minimal working example? Could you try to strip out as much code as possible while still reproducing the problem?
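
For what it's worth, the failure does not need any of the GAN layers. A much smaller sketch (hypothetical names, assuming TensorFlow 1.x graph mode) triggers the same ValueError:

import tensorflow as tf

def net(x):
    # One variable created through tf.get_variable(), like the d_* layers in the question
    with tf.variable_scope('d_conv2d1'):
        w = tf.get_variable('weights', [1, 1],
                            initializer=tf.random_normal_initializer(stddev=0.02))
    return tf.matmul(x, w)

x = tf.placeholder(tf.float32, [None, 1])
out_real = net(x)
tf.get_variable_scope().reuse_variables()  # switches reuse on for the root scope
out_fake = net(x)                          # reuses d_conv2d1/weights, as intended
loss = tf.reduce_mean(out_fake - out_real)

# Adam now tries to create its slot variable d_conv2d1/weights/Adam with
# tf.get_variable() while the root scope is still in reuse mode, which raises:
# ValueError: Variable d_conv2d1/weights/Adam/ does not exist ...
train_op = tf.train.AdamOptimizer(0.0002).minimize(loss)

The full code from the question follows.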
# -*- coding: utf-8 -*-
import os
import numpy as np
import scipy.misc
import tensorflow as tf
from tensorflow.contrib.layers.python.layers import batch_norm as batch_norm
from tensorflow.examples.tutorials.mnist import input_data
import string

BATCH_SIZE = 64



def read_data():

    # 
    mnist = input_data.read_data_sets("MNIST_data/", one_hot = True)

    (x_train, y_train), (x_test, y_test) = tf.contrib.keras.datasets.mnist.load_data()

    x_train = x_train.reshape((60000,784))
    x_test  = x_test.reshape((10000,784))

    y_train_vec = np.zeros((len(y_train), 10), dtype=np.float)
    for i, label in enumerate(y_train):
        y_train_vec[i, int(y_train[i])] = 1.0

    y_test_vec = np.zeros((len(y_test), 10), dtype=np.float)
    for i, label in enumerate(y_test):
        y_test_vec[i, int(y_test[i])] = 1.0



    # 
    # 
    X = np.concatenate((x_train, x_test), axis=0)
    y = np.concatenate((y_train_vec, y_test_vec), axis=0)

    # 
    seed = 547
    np.random.seed(seed)
    np.random.shuffle(X)
    np.random.seed(seed)
    np.random.shuffle(y)



    return X/255., y


# Constant bias
def bias(name, shape, bias_start = 0.0, trainable = True):

    dtype = tf.float32
    var = tf.get_variable(name, shape, tf.float32, trainable = trainable, 
                          initializer = tf.constant_initializer(
                                                  bias_start, dtype = dtype))
    return var

# Random weights
def weight(name, shape, stddev = 0.02, trainable = True):

    dtype = tf.float32
    var = tf.get_variable(name, shape, tf.float32, trainable = trainable, 
                          initializer = tf.random_normal_initializer(
                                              stddev = stddev, dtype = dtype))
    return var

# Fully connected layer
def fully_connected(value, output_shape, name = 'fully_connected', with_w = False):

    shape = value.get_shape().as_list()

    with tf.variable_scope(name):
        weights = weight('weights', [shape[1], output_shape], 0.02)
        biases = bias('biases', [output_shape], 0.0)

    if with_w:
        return tf.matmul(value, weights) + biases, weights, biases
    else:
        return tf.matmul(value, weights) + biases

# Leaky-ReLU layer
def lrelu(x, leak=0.2, name = 'lrelu'):

    with tf.variable_scope(name):
        return tf.maximum(x, leak*x, name = name)

# ReLU layer
def relu(value, name = 'relu'):
    with tf.variable_scope(name):
        return tf.nn.relu(value)

# Deconvolution (transposed convolution) layer
def deconv2d(value, output_shape, k_h = 5, k_w = 5, strides =[1, 2, 2, 1], 
             name = 'deconv2d', with_w = False):

    with tf.variable_scope(name):
        weights = weight('weights', 
                         [k_h, k_w, output_shape[-1], value.get_shape()[-1]])
        deconv = tf.nn.conv2d_transpose(value, weights, 
                                        output_shape, strides = strides)
        biases = bias('biases', [output_shape[-1]])
        deconv = tf.reshape(tf.nn.bias_add(deconv, biases), deconv.get_shape())
        if with_w:
            return deconv, weights, biases
        else:
            return deconv

# Convolution layer
def conv2d(value, output_dim, k_h = 5, k_w = 5, 
            strides =[1, 2, 2, 1], name = 'conv2d'):

    with tf.variable_scope(name):
        weights = weight('weights', 
                         [k_h, k_w, value.get_shape()[-1], output_dim])
        conv = tf.nn.conv2d(value, weights, strides = strides, padding = 'SAME')
        biases = bias('biases', [output_dim])
        conv = tf.reshape(tf.nn.bias_add(conv, biases), conv.get_shape())

        return conv

# Concatenate the condition onto the feature map
def conv_cond_concat(value, cond, name = 'concat'):

    # Convert the tensor shapes into Python lists
    value_shapes = value.get_shape().as_list()
    cond_shapes = cond.get_shape().as_list()

    # Concatenate the condition and the input along the third dimension (the feature-map dimension).
    # The condition is given as a 4-D tensor; e.g. if the input is a [64, 32, 32, 32] tensor
    # and the condition is a [64, 32, 32, 10] tensor, the output is a [64, 32, 32, 42] tensor.
    with tf.variable_scope(name):        
        return tf.concat( [value, cond * tf.ones(value_shapes[0:3] + cond_shapes[3:])],3)

# Batch Normalization layer
def batch_norm_layer(value, is_train = True, name = 'batch_norm'):

    with tf.variable_scope(name) as scope:
        if is_train:        
            return batch_norm(value, decay = 0.9, epsilon = 1e-5, scale = True, 
                              is_training = is_train, 
                              updates_collections = None, scope = scope)
        else:
            return batch_norm(value, decay = 0.9, epsilon = 1e-5, scale = True, 
                              is_training = is_train, reuse = True, 
                              updates_collections = None, scope = scope)


# Function for saving images
def save_images(images, size, path):

    """
    Save the samples images
    The best size number is
            int(max(sqrt(image.shape[0]),sqrt(image.shape[1]))) + 1
    example:
        The batch_size is 64, then the size is recommended [8, 8]
        The batch_size is 32, then the size is recommended [6, 6]
    """

    # Normalize the images (mainly for a generator whose output goes through tanh)
    img = (images + 1.0) / 2.0
    h, w = img.shape[1], img.shape[2]

    # Create one big canvas to hold the batch_size generated images
    merge_img = np.zeros((h * size[0], w * size[1], 3))

    # Copy each image into its own region of the canvas
    for idx, image in enumerate(images):
        i = idx % size[1]
        j = idx // size[1]
        merge_img[j*h:j*h+h, i*w:i*w+w, :] = image

    # Save the canvas
    return scipy.misc.imsave(path, merge_img)

# Define the generator
def generator(z, y, train = True):
    # y is a [BATCH_SIZE, 10] vector; reshape it into a 4-D tensor
    yb = tf.reshape(y, [BATCH_SIZE, 1, 1, 10], name = 'yb')
    # Concatenate z with the condition y
    z = tf.concat([z, y], 1, name = 'z_concat_y')
    # A fully connected layer followed by batch norm and ReLU
    h1 = tf.nn.relu(batch_norm_layer(fully_connected(z, 1024, 'g_fully_connected1'), 
                                     is_train = train, name = 'g_bn1'))
    # Concatenate the condition with the previous layer
    h1 = tf.concat([h1, y], 1, name = 'active1_concat_y')

    h2 = tf.nn.relu(batch_norm_layer(fully_connected(h1, 128 * 49, 'g_fully_connected2'), 
                                     is_train = train, name = 'g_bn2'))
    h2 = tf.reshape(h2, [64, 7, 7, 128], name = 'h2_reshape')
    # Concatenate the condition with the previous layer
    h2 = conv_cond_concat(h2, yb, name = 'active2_concat_y')

    h3 = tf.nn.relu(batch_norm_layer(deconv2d(h2, [64,14,14,128], 
                                              name = 'g_deconv2d3'), 
                                              is_train = train, name = 'g_bn3'))
    h3 = conv_cond_concat(h3, yb, name = 'active3_concat_y')

    # A sigmoid squashes the output into the range 0~1
    h4 = tf.nn.sigmoid(deconv2d(h3, [64, 28, 28, 1], 
                                name = 'g_deconv2d4'), name = 'generate_image')

    return h4

# Define the discriminator
def discriminator(image, y, reuse = False):

    # Both real and generated data pass through the discriminator, so a reuse flag is needed
    if reuse:
        tf.get_variable_scope().reuse_variables()

    # Like the generator, the discriminator also concatenates the condition
    yb = tf.reshape(y, [BATCH_SIZE, 1, 1, 10], name = 'yb')
    x = conv_cond_concat(image, yb, name = 'image_concat_y')

    # Convolution, activation, then concatenate the condition.
    h1 = lrelu(conv2d(x, 11, name = 'd_conv2d1'), name = 'lrelu1')
    h1 = conv_cond_concat(h1, yb, name = 'h1_concat_yb')

    h2 = lrelu(batch_norm_layer(conv2d(h1, 74, name = 'd_conv2d2'), 
                                name = 'd_bn2'), name = 'lrelu2')
    h2 = tf.reshape(h2, [BATCH_SIZE, -1], name = 'reshape_lrelu2_to_2d')
    h2 = tf.concat( [h2, y],1, name = 'lrelu2_concat_y')

    h3 = lrelu(batch_norm_layer(fully_connected(h2, 1024, name = 'd_fully_connected3'), 
                                name = 'd_bn3'), name = 'lrelu3')
    h3 = tf.concat([h3, y], 1,name = 'lrelu3_concat_y')

    # Fully connected layer whose output is used for the loss
    h4 = fully_connected(h3, 1, name = 'd_result_withouts_sigmoid')

    return tf.nn.sigmoid(h4, name = 'discriminator_result_with_sigmoid'), h4

# Sampling function used during training
def sampler(z, y, train = True):
    tf.get_variable_scope().reuse_variables()
    return generator(z, y, train = train)


def train():

    # global_step records the current training step
    global_step = tf.Variable(0, name = 'global_step', trainable = False)
    # Directory for the training logs
    train_dir = '/home/Zhoupinyi/DCGAN/logs'

    # Three placeholders: y is the condition, images are the images fed to the discriminator,
    # and z is the random noise
    y= tf.placeholder(tf.float32, [BATCH_SIZE, 10], name='y')
    images = tf.placeholder(tf.float32, [64, 28, 28, 1], name='real_images')
    z = tf.placeholder(tf.float32, [None, 100], name='z')

    # Images G produced by the generator
    G = generator(z, y)
    # Real images fed into the discriminator
    D, D_logits  = discriminator(images, y)
    # Images drawn by the sampler
    samples = sampler(z, y)
    # Generated images fed into the discriminator
    D_, D_logits_ = discriminator(G, y, reuse = True)

    # Loss computation
    d_loss_real = tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits( labels =  tf.ones_like(D),logits = D_logits))
    d_loss_fake = tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(labels = tf.zeros_like(D_), logits = D_logits_))
    d_loss = d_loss_real + d_loss_fake
    g_loss = tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(labels = tf.ones_like(D_), logits = D_logits_))

    # Summary ops
    z_sum = tf.summary.histogram("z", z)
    d_sum = tf.summary.histogram("d", D)
    d__sum = tf.summary.histogram("d_", D_)
    G_sum = tf.summary.image("G", G)

    d_loss_real_sum = tf.summary.scalar("d_loss_real", d_loss_real)
    d_loss_fake_sum = tf.summary.scalar("d_loss_fake", d_loss_fake)
    d_loss_sum = tf.summary.scalar("d_loss", d_loss)                                                
    g_loss_sum = tf.summary.scalar("g_loss", g_loss)

    # Merge the generator and discriminator summaries
    g_sum = tf.summary.merge([z_sum, d__sum, G_sum, d_loss_fake_sum, g_loss_sum])
    d_sum = tf.summary.merge([z_sum, d_sum, d_loss_real_sum, d_loss_sum])

    # Variables to be updated for the generator and the discriminator (var_list for tf.train.Optimizer)


    t_vars = tf.trainable_variables()
    d_vars = [var for var in t_vars if 'd_' in var.name]
    g_vars = [var for var in t_vars if 'g_' in var.name]

    saver = tf.train.Saver()

    # Use Adam as the optimizer
    optimizer = tf.train.AdamOptimizer(learning_rate = 0.0002, beta1 = 0.5)

    d_optim = optimizer.minimize(d_loss, global_step = global_step, var_list = d_vars)
    g_optim = optimizer.minimize(g_loss, global_step = global_step, var_list = g_vars)


    os.environ['CUDA_VISIBLE_DEVICES'] = str(0)
    config = tf.ConfigProto()
    config.gpu_options.per_process_gpu_memory_fraction = 0.2
    sess = tf.InteractiveSession(config=config)

    init = tf.initialize_all_variables()   
    writer = tf.train.SummaryWriter(train_dir, sess.graph)

    # Work this part out yourself
    data_x, data_y = read_data()
    sample_z = np.random.uniform(-1, 1, size=(BATCH_SIZE, 100))
#    sample_images = data_x[0: 64]
    sample_labels = data_y[0: 64]
    sess.run(init)    

    # Train the network for 25 epochs
    for epoch in range(25):
        batch_idxs = 1093
        for idx in range(batch_idxs):        
            batch_images = data_x[idx*64: (idx+1)*64]
            batch_labels = data_y[idx*64: (idx+1)*64]
            batch_z = np.random.uniform(-1, 1, size=(BATCH_SIZE, 100))            

            # Update D's parameters
            _, summary_str = sess.run([d_optim, d_sum], 
                                      feed_dict = {images: batch_images, 
                                                   z: batch_z, 
                                                   y: batch_labels})
            writer.add_summary(summary_str, idx+1)

            # Update G's parameters
            _, summary_str = sess.run([g_optim, g_sum], 
                                      feed_dict = {z: batch_z, 
                                                   y: batch_labels})
            writer.add_summary(summary_str, idx+1)

            # Update G a second time to keep training stable
            _, summary_str = sess.run([g_optim, g_sum], 
                                      feed_dict = {z: batch_z,
                                                   y: batch_labels})
            writer.add_summary(summary_str, idx+1)

            # Compute and print the training losses
            errD_fake = d_loss_fake.eval({z: batch_z, y: batch_labels})
            errD_real = d_loss_real.eval({images: batch_images, y: batch_labels})
            errG = g_loss.eval({z: batch_z, y: batch_labels})

            if idx % 20 == 0:
                print("Epoch: [%2d] [%4d/%4d] d_loss: %.8f, g_loss: %.8f" \
                        % (epoch, idx, batch_idxs, errD_fake+errD_real, errG))

            # During training, draw samples with the sampler and save them to
            # /home/your_name/TensorFlow/DCGAN/samples/
            if idx % 100 == 1:
                sample = sess.run(samples, feed_dict = {z: sample_z, y: sample_labels})
                samples_path = '/home/your_name/TensorFlow/DCGAN/samples/'
                save_images(sample, [8, 8], 
                            samples_path + 'test_%d_epoch_%d.png' % (epoch, idx))
                print 'save down'

            # Save the model every 500 iterations
            if idx % 500 == 2:
                checkpoint_path = os.path.join(train_dir, 'DCGAN_model.ckpt')
                saver.save(sess, checkpoint_path, global_step = idx+1)

    sess.close()


if __name__ == '__main__':
    train()