TensorFlow convolutional neural network for continuous output prediction
My problem is as follows: I implemented a simple feedforward network (FNN) that takes 90 inputs and produces a single continuous value as output. Everything in the FNN looks fine, but my task is to build a similar network using a CNN. As far as I can tell, I would feed my 90 features in as a 9x10 matrix, but from there everything becomes unclear. I don't know how to set up the CONV and POOL layers, or how many of them there should be. Another big question for me is how to make the last layer so that it gives me a continuous value as output rather than a class. Can you point me to a CNN that does this kind of thing? I am using the following template and modifying it:
# Training Parameters
learning_rate = 0.01
num_steps = 200
batch_size = 5000
display_step = 10
# Network Parameters
num_input = 90 # data input (feature grid shape: 9*10)
# num_classes = 10 # MNIST total classes (0-9 digits)
n_out = 1
dropout = 0.8 # Dropout, probability to keep units
total_len = X_train.shape[0]
# tf Graph input
X = tf.placeholder(tf.float32, [None, num_input])
Y = tf.placeholder(tf.float32, [None])
keep_prob = tf.placeholder(tf.float32) # dropout (keep probability)
# Create some wrappers for simplicity
def conv2d(x, W, b, strides=1):
    # Conv2D wrapper, with bias and relu activation
    x = tf.nn.conv2d(x, W, strides=[1, strides, strides, 1], padding='SAME')
    x = tf.nn.bias_add(x, b)
    return tf.nn.relu(x)

def maxpool2d(x, k=2):
    # MaxPool2D wrapper
    return tf.nn.max_pool(x, ksize=[1, k, k, 1], strides=[1, k, k, 1],
                          padding='SAME')
# Create model
def conv_net(x, weights, biases, dropout):
    # Input is a 1-D vector of 90 features
    # Reshape to match picture format [Height x Width x Channel]
    # Tensor input becomes 4-D: [Batch Size, Height, Width, Channel]
    x = tf.reshape(x, shape=[-1, 9, 10, 1])

    # Convolution Layer
    conv1 = conv2d(x, weights['wc1'], biases['bc1'])
    # Max Pooling (down-sampling)
    conv1 = maxpool2d(conv1, k=2)

    # Convolution Layer
    conv2 = conv2d(conv1, weights['wc2'], biases['bc2'])
    # Max Pooling (down-sampling)
    conv2 = maxpool2d(conv2, k=2)

    # Fully connected layer
    # Reshape conv2 output to fit fully connected layer input
    fc1 = tf.reshape(conv2, [-1, weights['wd1'].get_shape().as_list()[0]])
    fc1 = tf.add(tf.matmul(fc1, weights['wd1']), biases['bd1'])
    fc1 = tf.nn.relu(fc1)
    # Apply Dropout
    fc1 = tf.nn.dropout(fc1, dropout)

    # Output layer
    out_layer = tf.matmul(fc1, weights['wout']) + biases['bout']
    return out_layer
# Store layers weight & bias
weights = {
    # 5x5 conv, 1 input channel, 32 output channels
    'wc1': tf.Variable(tf.random_normal([5, 5, 1, 32])),
    # 5x5 conv, 32 input channels, 64 output channels
    'wc2': tf.Variable(tf.random_normal([5, 5, 32, 64])),
    # fully connected, 3*3*64 inputs, 1024 outputs
    # (9x10 pooled twice with SAME padding: 9x10 -> 5x5 -> 3x3)
    'wd1': tf.Variable(tf.random_normal([3 * 3 * 64, 1024])),
    # 1024 inputs, 1 output (continuous prediction)
    'wout': tf.Variable(tf.random_normal([1024, n_out]))
}
biases = {
    'bc1': tf.Variable(tf.random_normal([32])),
    'bc2': tf.Variable(tf.random_normal([64])),
    'bd1': tf.Variable(tf.random_normal([1024])),
    'bout': tf.Variable(tf.random_normal([n_out]))
}
# Construct model
logits = conv_net(X, weights, biases, keep_prob)
prediction = tf.nn.softmax(logits)
# Define loss and optimizer
loss_op = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(
    logits=logits, labels=Y))
optimizer = tf.train.AdamOptimizer(learning_rate=learning_rate)
train_op = optimizer.minimize(loss_op)
# Evaluate model
correct_pred = tf.equal(tf.argmax(prediction, 1), tf.argmax(Y, 1))
accuracy = tf.reduce_mean(tf.cast(correct_pred, tf.float32))
# Initialize the variables (i.e. assign their default value)
init = tf.global_variables_initializer()
# Start training
with tf.Session() as sess:
    # Run the initializer
    sess.run(init)
    for step in range(1, num_steps + 1):
        total_batch = int(total_len / batch_size)
        # Loop over all batches
        for i in range(total_batch):
            batch_x = X_train[i * batch_size:(i + 1) * batch_size]
            batch_y = Y_train[i * batch_size:(i + 1) * batch_size]
            # Run optimization op (backprop)
            sess.run(train_op, feed_dict={X: batch_x, Y: batch_y, keep_prob: 0.8})
        if step % display_step == 0 or step == 1:
            # Calculate loss and accuracy on the last minibatch
            loss, acc = sess.run([loss_op, accuracy], feed_dict={
                X: batch_x,
                Y: batch_y,
                keep_prob: 1.0
            })
            print("Step " + str(step) + ", Minibatch Loss= " +
                  "{:.4f}".format(loss) + ", Training Accuracy= " +
                  "{:.3f}".format(acc))
    print("Optimization Finished!")
    # Evaluate on the test set
    print("Testing Accuracy:",
          sess.run(accuracy, feed_dict={
              X: X_test,
              Y: Y_test,
              keep_prob: 1.0
          }))
Softmax and a cross-entropy loss make no sense here.
Consider using a mean squared error loss instead:
cost = tf.reduce_mean(tf.square(output-ys))
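Concretely, only the model head, the loss, and the evaluation metric need to change; the conv/pool stack can stay as it is. A minimal sketch of those lines, assuming logits keeps the shape [batch, 1] returned by conv_net in the question (prediction and mae are illustrative names, not part of the original template):

# Construct model: the raw linear output is already the continuous prediction
logits = conv_net(X, weights, biases, keep_prob)
prediction = tf.reshape(logits, [-1])  # [batch, 1] -> [batch], matches Y

# Mean squared error replaces softmax cross-entropy
loss_op = tf.reduce_mean(tf.square(prediction - Y))
optimizer = tf.train.AdamOptimizer(learning_rate=learning_rate)
train_op = optimizer.minimize(loss_op)

# Accuracy is meaningless for regression; report an error metric instead,
# e.g. mean absolute error
mae = tf.reduce_mean(tf.abs(prediction - Y))

In the training loop the accuracy fetches would then be swapped for mae (and the "Training Accuracy" label updated accordingly).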
See the tutorial linked in the question comments for details. Google "tensorflow tutorial regression neural network"; it is the first hit. — I have already done that; what I need to know is how to change the CNN's last layer so that it outputs a continuous number instead of a classification. Using a CNN for my problem is a hard requirement. The cost function is not what I consider the problem right now: even if mine is not the best, I will first get the network to run and then start improving it. At the moment it gives me an annoying error: "InvalidArgumentError (see above for traceback): logits and labels must be same size:"
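That InvalidArgumentError is the direct symptom of a shape mismatch: conv_net returns logits of shape [batch, 1] while Y is declared as [None], and tf.nn.softmax_cross_entropy_with_logits requires logits and labels to have the same size. The tf.reshape(logits, [-1]) in the sketch above reconciles the two; the alternative is to give the labels the same rank as the logits:

# Alternative fix: declare the labels with the same rank as the logits
Y = tf.placeholder(tf.float32, [None, 1])

Note that even with matching shapes, a softmax over a single output unit is constantly 1.0, so the cross-entropy version can never learn a continuous target; switching to the squared-error loss is what actually makes the network train.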