Implementing a neural network model in TensorFlow (Python)
I'm trying to implement a neural network model in TensorFlow, but there seems to be a problem with the shape of a placeholder. I'm new to TF, so this may just be a simple misunderstanding. Here are my code and a sample of the data:
_data=[[0.4,0.5,0.6,1],[0.7,0.8,0.9,0],....]
The data consists of 4-column rows, where the last column of each row is the label. I want to classify each row as label 0, label 1, or label 2.
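For reference, splitting such rows into a feature matrix and a label vector can be done with NumPy; this is only a sketch with two example rows, not the actual datamatrix:

```python
import numpy as np

# Two example rows in the format described above (last column = label).
_data = [[0.4, 0.5, 0.6, 1],
         [0.7, 0.8, 0.9, 0]]

arr = np.array(_data, dtype=np.float32)
features = arr[:, :3]   # first three columns -> inputs, shape (N, 3)
labels = arr[:, -1]     # last column -> class labels, shape (N,)

print(features.shape)   # (2, 3)
print(labels.shape)     # (2,)
```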
import tensorflow as tf
import numpy as np
_data = datamatrix
X = tf.placeholder(tf.float32, [None, 3])
W = tf.Variable(tf.zeros([3, 1]))
b = tf.Variable(tf.zeros([3]))
init = tf.global_variables_initializer()
Y = tf.nn.softmax(tf.matmul(X, W) + b)
# placeholder for correct labels
Y_ = tf.placeholder(tf.float32, [None, 1])
import time
start = time.time()
# loss function
cross_entropy = -tf.reduce_sum(Y_ * tf.log(Y))
# % of correct answers found in batch
is_correct = tf.equal(tf.argmax(Y,1), tf.argmax(Y_,1))
accuracy = tf.reduce_mean(tf.cast(is_correct, tf.float32))
optimizer = tf.train.GradientDescentOptimizer(0.003)
train_step = optimizer.minimize(cross_entropy)
sess = tf.Session()
sess.run(init)
for i in range(1000):
    # load a batch of rows and their labels
    batch_X, batch_Y = [x[:3] for x in _data[:2000]], [x[-1] for x in _data[:2000]]
    train_data = {X: batch_X, Y_: batch_Y}
    # train
    sess.run(train_step, feed_dict=train_data)
    # success?
    a, c = sess.run([accuracy, cross_entropy], feed_dict=train_data)
After running the code, I get the following error message:
ValueError: Cannot feed value of shape (2000,) for Tensor 'Placeholder_1:0', which has shape '(?, 1)'
The expected output should be the model's performance using cross-entropy, i.e. the accuracy value from the line below:
a,c = sess.run([accuracy, cross_entropy], feed_dict=train_data)
I would appreciate any suggestions on how to improve the model, or on a model better suited to my data.

The shape of 'Placeholder_1:0' does not match the input data batch_Y, as the error message states. Note the difference between 1-D and 2-D arrays.
So you should either define a 1-D placeholder:
Y_ = tf.placeholder(tf.float32, [None])
or prepare 2-D data:
batch_X, batch_Y = [x[:3] for x in _data[:2000]],[x[-1:] for x in _data[:2000]]
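The difference between the two batch_Y versions comes down to how Python slicing behaves on each row; a small sketch of the shapes involved:

```python
import numpy as np

row = [0.4, 0.5, 0.6, 1]

# row[-1] yields a scalar, so the batch becomes a 1-D list of labels:
labels_1d = [row[-1]]    # [1]   -> fits a placeholder of shape [None]
# row[-1:] yields a one-element list, so the batch becomes 2-D:
labels_2d = [row[-1:]]   # [[1]] -> fits a placeholder of shape [None, 1]

print(np.array(labels_1d).shape)  # (1,)
print(np.array(labels_2d).shape)  # (1, 1)
```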
You're welcome. Could you mark this as the accepted answer?
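Beyond the shape mismatch, note that the posted graph has a single output unit (W is [3, 1]) while the task has three classes; the usual softmax setup uses three output units with one-hot labels. A sketch of that computation in plain NumPy (just the math the graph would perform, not TF code; the zero-initialized W and b mirror the original post):

```python
import numpy as np

def one_hot(labels, num_classes):
    # Turn integer class labels into one-hot rows.
    out = np.zeros((len(labels), num_classes), dtype=np.float32)
    out[np.arange(len(labels)), labels] = 1.0
    return out

def softmax(z):
    # Row-wise softmax with max-subtraction for numerical stability.
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

X = np.array([[0.4, 0.5, 0.6], [0.7, 0.8, 0.9]], dtype=np.float32)
labels = [1, 0]

W = np.zeros((3, 3), dtype=np.float32)   # 3 inputs -> 3 classes
b = np.zeros(3, dtype=np.float32)

Y = softmax(X @ W + b)                   # predictions, shape (2, 3)
Y_ = one_hot(labels, 3)                  # targets, shape (2, 3)
loss = -np.sum(Y_ * np.log(Y))           # cross-entropy, as in the post

print(Y.shape, Y_.shape)                 # (2, 3) (2, 3)
```

With all-zero weights every row of Y is uniform (1/3 per class), so the loss here is 2 * log(3); training would then move W and b away from zero.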