Python TensorFlow: "Must feed a value for placeholder tensor 'Placeholder_2' with dtype float"


My code fails with the infamous:

InvalidArgumentError: You must feed a value for placeholder tensor 'Placeholder_2' with dtype float
[[Node: Placeholder_2 = Placeholder[dtype=DT_FLOAT, shape=[], _device="/job:localhost/replica:0/task:0/cpu:0"]]]
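For context, 'Placeholder_2' is just the auto-generated name of the third unnamed placeholder created in the graph, and shape=[] marks it as a scalar. A minimal snippet (my own reproduction, not taken from the question) that triggers the same class of error:

import tensorflow as tf

# Any placeholder that a run() call depends on must be fed; shape=()
# here matches the shape=[] (scalar) in the node description above.
p = tf.placeholder(tf.float32, shape=())
y = p * 2.0

with tf.Session() as sess:
    print(sess.run(y))  # InvalidArgumentError: no value was fed for p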

Here is my code:

logits = LeNet(x)
cross_entropy = tf.nn.softmax_cross_entropy_with_logits(labels=one_hot_y, logits=logits)
loss_operation = tf.reduce_mean(cross_entropy)
optimizer = tf.train.AdamOptimizer(learning_rate = rate)
training_operation = optimizer.minimize(loss_operation)

def LeNet(x):    
    # Arguments used for tf.truncated_normal, randomly defines variables for the weights and biases for each layer
    mu = 0
    sigma = 0.1

    # SOLUTION: Layer 1: Convolutional. Input = 32x32x1 (grayscale). Output = 28x28x6.
    conv1_W = tf.Variable(tf.truncated_normal(shape=(5, 5, 1, 6), mean = mu, stddev = sigma))
    conv1_b = tf.Variable(tf.zeros(6))
    conv1   = tf.nn.conv2d(x, conv1_W, strides=[1, 1, 1, 1], padding='VALID') + conv1_b

    # SOLUTION: Activation.
    conv1 = tf.nn.relu(conv1)

    # Hardcoded dropout (keep probability fixed at 0.9, so it is applied
    # at inference time as well)
    conv1 = tf.nn.dropout(conv1, 0.9)

    # SOLUTION: Pooling. Input = 28x28x6. Output = 14x14x6.
    conv1 = tf.nn.max_pool(conv1, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1], padding='VALID')

    # SOLUTION: Layer 2: Convolutional. Output = 10x10x16.
    conv2_W = tf.Variable(tf.truncated_normal(shape=(5, 5, 6, 16), mean = mu, stddev = sigma))
    conv2_b = tf.Variable(tf.zeros(16))
    conv2   = tf.nn.conv2d(conv1, conv2_W, strides=[1, 1, 1, 1], padding='VALID') + conv2_b

    # SOLUTION: Activation.
    conv2 = tf.nn.relu(conv2)

    # SOLUTION: Pooling. Input = 10x10x16. Output = 5x5x16.
    conv2 = tf.nn.max_pool(conv2, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1], padding='VALID')

    # SOLUTION: Flatten. Input = 5x5x16. Output = 400.
    fc0   = flatten(conv2)  # requires: from tensorflow.contrib.layers import flatten

    # SOLUTION: Layer 3: Fully Connected. Input = 400. Output = 120.
    fc1_W = tf.Variable(tf.truncated_normal(shape=(400, 120), mean = mu, stddev = sigma))
    fc1_b = tf.Variable(tf.zeros(120))
    fc1   = tf.matmul(fc0, fc1_W) + fc1_b

    # SOLUTION: Activation.
    fc1    = tf.nn.relu(fc1)

    # SOLUTION: Layer 4: Fully Connected. Input = 120. Output = 84.
    fc2_W  = tf.Variable(tf.truncated_normal(shape=(120, 84), mean = mu, stddev = sigma))
    fc2_b  = tf.Variable(tf.zeros(84))
    fc2    = tf.matmul(fc1, fc2_W) + fc2_b

    # SOLUTION: Activation.
    fc2    = tf.nn.relu(fc2)

    # Dropout layer: keep_prob is a placeholder defined outside this
    # function and must be fed on every run
    fc2 = tf.nn.dropout(fc2, keep_prob)

    # SOLUTION: Layer 5: Fully Connected. Input = 84. Output = 43.
    fc3_W  = tf.Variable(tf.truncated_normal(shape=(84, 43), mean = mu, stddev = sigma))
    fc3_b  = tf.Variable(tf.zeros(43))
    logits = tf.matmul(fc2, fc3_W) + fc3_b

    return logits

x = tf.placeholder(tf.float32, (None, 32, 32, 1))
grayscaleimage = np.reshape(image2Gray(image), (1,32,32,1))
# It doesn't matter whether I use the two lines below or not.
# Ideally I should be able to feed the grayscaleimage ndarray straight into
# TensorFlow; if I try to feed anything else, it complains that the
# type should be ... or ... or ... etc., or ndarray.
own_images = np.empty([0, 32, 32, 1], dtype = np.float32)
own_images = np.append(own_images, grayscaleimage, axis = 0)

output = tf.argmax(logits, 1)

with tf.Session() as sess:
    saver.restore(sess, tf.train.latest_checkpoint('.'))
    output = sess.run(output, feed_dict={x: (own_images)})
    print(output)
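image2Gray is not shown above. For completeness, a plausible sketch of such a helper (the name matches the call in the question; the luminance weights and normalization are assumptions) that produces a float32 ndarray of the expected shape and dtype:

import numpy as np

def image2Gray(image):
    # Hypothetical helper: convert an RGB uint8 image (32x32x3) to
    # grayscale using standard luminance weights.
    gray = np.dot(image[..., :3], [0.299, 0.587, 0.114])
    # Normalize to roughly [-1, 1] floats, a common preprocessing choice.
    return ((gray - 128.0) / 128.0).astype(np.float32)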

I solved the problem.

logits = LeNet(x)

The definition of LeNet(x) uses a keep_prob placeholder that was never being fed. Changing the code to:

output = sess.run(output, feed_dict={x: own_images, keep_prob: 1.0})

solved the problem.


A word of caution, however: if you instead try to comment out the keep_prob dropout line inside the LeNet function definition, that alone may not fix the error, because you also have to re-run the cell containing the function definition and the cells that call it (such as logits = LeNet(x)).
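For reference, a minimal sketch of how the pieces fit together at inference time (LeNet, own_images, and the variable names follow the question; the checkpoint directory is whatever the model was trained with):

import tensorflow as tf

x = tf.placeholder(tf.float32, (None, 32, 32, 1))
keep_prob = tf.placeholder(tf.float32)   # scalar; must be fed on every run

logits = LeNet(x)                        # LeNet reads keep_prob internally
output = tf.argmax(logits, 1)
saver = tf.train.Saver()

with tf.Session() as sess:
    saver.restore(sess, tf.train.latest_checkpoint('.'))
    # keep_prob = 1.0 turns dropout off for inference
    prediction = sess.run(output, feed_dict={x: own_images, keep_prob: 1.0})

Alternatively, defining it as keep_prob = tf.placeholder_with_default(1.0, shape=()) makes dropout default to off whenever the feed is omitted.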

Comments:

"Your code isn't complete enough to see where the missing placeholder is; logits appears out of nowhere."

"@etarion added the logits part... is that enough?"

"Could you also include the full stack trace that is printed with the error message? It should indicate which run() call failed, and where the offending tf.placeholder() was defined."
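On that debugging point: giving placeholders explicit names (a general TF 1.x practice, not something done in the question's code) makes the error message identify the culprit directly instead of an auto-generated name:

import tensorflow as tf

# With an explicit name, the failure reads "...placeholder tensor
# 'keep_prob'..." rather than the anonymous 'Placeholder_2'.
x = tf.placeholder(tf.float32, (None, 32, 32, 1), name='input_images')
keep_prob = tf.placeholder(tf.float32, shape=(), name='keep_prob')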