TensorFlow: how to visualize embeddings on TensorBoard? Non-MNIST data

I am trying to create an embedding visualization on TensorBoard. I am using CSV data rather than MNIST data; the data in the CSV looks like this:
0.266782506,"1,0"
0.361942522,"0,1"
0.862076491,"0,1"
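For reference, each row of this format can be parsed with the standard `csv` module: the quoted second field arrives as the single string `"1,0"`, which splits into the two one-hot components. A minimal sketch, simulating the three sample rows above with an in-memory string instead of the real file:

```python
import csv
import io

# Simulate the CSV contents shown above (in practice, open the real file).
sample = '0.266782506,"1,0"\n0.361942522,"0,1"\n0.862076491,"0,1"\n'

xs, ys = [], []
for row in csv.reader(io.StringIO(sample)):
    xs.append(float(row[0]))                        # sample input x
    ys.append([int(v) for v in row[1].split(',')])  # one-hot label y

print(xs)  # [0.266782506, 0.361942522, 0.862076491]
print(ys)  # [[1, 0], [0, 1], [0, 1]]
```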
The data in the first column (e.g. 0.266782506) is the sample input x, and "1,0" / "0,1" is the one-hot label y.
I tried to follow the guide on creating a visualization via the embedding projector on TensorBoard, but I could only find examples that use MNIST data, so I would appreciate any guidance on how to create an embedding visualization on TensorBoard with my own CSV data.
I can already visualize scalars, the graph, and histograms on TensorBoard with the following code:
# coding=utf-8
import tensorflow as tf
import numpy
import os
import csv
import shutil
from tensorflow.contrib.tensorboard.plugins import projector
# Reading data from csv:
csv_file = open(r'D:\Program Files (x86)\logistic\sample_1.csv', 'r')
reader = csv.reader(csv_file)
t_X, t_Y = [], []
for row in reader:
    t_X.append(float(row[0]))   # sample input x
    a = int(row[1][0])          # first digit of the one-hot label, e.g. "1,0"
    b = int(row[1][2])          # second digit of the one-hot label
    t_Y.append([a, b])
t_X = numpy.asarray(t_X, dtype=numpy.float32)
t_Y = numpy.asarray(t_Y, dtype=numpy.float32)
t_XT = numpy.transpose([t_X])   # shape (n_samples, 1)
csv_file.close()
# Parameters
learning_rate = 0.01
training_epochs = 5
batch_size = 50
display_step = 1
n_samples = t_X.shape[0]
sess = tf.InteractiveSession()
with tf.name_scope('Input'):
    with tf.name_scope('x_input'):
        x = tf.placeholder(tf.float32, [None, 1], name='x_input')
    with tf.name_scope('y_input'):
        y = tf.placeholder(tf.float32, [None, 2], name='y_input')
# Weight
with tf.name_scope('layer1'):
    with tf.name_scope('weight'):
        W = tf.Variable(tf.random_normal([1, 2], dtype=tf.float32), name='weight')
    with tf.name_scope('bias'):
        b = tf.Variable(tf.random_normal([2], dtype=tf.float32), name='bias')
# model
with tf.name_scope('Model'):
    with tf.name_scope('pred'):
        pred = tf.nn.softmax(tf.matmul(x, W) + b, name='pred')
    with tf.name_scope('cost'):
        cost = tf.reduce_mean(-tf.reduce_sum(y * tf.log(pred), reduction_indices=1), name='cost')
        tf.summary.scalar('cost', cost)
        tf.summary.histogram('cost', cost)
    with tf.name_scope('optimizer'):
        optimizer = tf.train.GradientDescentOptimizer(learning_rate).minimize(cost)
# Calculate accuracy
with tf.name_scope('accuracy_count'):
    with tf.name_scope('correct_prediction'):
        correct_prediction = tf.equal(tf.argmax(pred, 1), tf.argmax(y, 1))
    with tf.name_scope('accuracy'):
        accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
        tf.summary.scalar('accuracy', accuracy)
        tf.summary.histogram('accuracy', accuracy)
init = tf.global_variables_initializer()
merged = tf.summary.merge_all()
sess.run(init)
writer = tf.summary.FileWriter(r'D:\Tensorlogs\logs', sess.graph)
for epoch in range(training_epochs):
    avg_cost = 0
    total_batch = int(n_samples / batch_size)
    i = 0
    for anc in range(total_batch):
        m = t_X[i:i + batch_size]
        n = t_Y[i:i + batch_size]
        m = numpy.transpose([m])   # shape (batch_size, 1)
        summary, predr, o, c = sess.run([merged, pred, optimizer, cost],
                                        feed_dict={x: m, y: n})
        avg_cost += c / total_batch
        i = i + batch_size
    writer.add_summary(summary, epoch + 1)
    if (epoch + 1) % display_step == 0:
        wr, br = sess.run([W, b])
        accuracy_s = accuracy.eval(feed_dict={x: t_XT, y: t_Y})
        print("Epoch:", '%04d' % (epoch + 1), "cost=", avg_cost,
              "W=", wr, "b=", br, "accuracy_s=", accuracy_s)
print("Optimization Finished!")
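As a side note, the `pred` and `cost` ops above compute a row-wise softmax followed by the mean cross-entropy over the batch. A minimal NumPy sketch of the same math, using illustrative logit and label values (not taken from the CSV):

```python
import numpy as np

def softmax(z):
    # Row-wise softmax, shifted by the row max for numerical stability.
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

logits = np.array([[2.0, 0.5],
                   [0.2, 1.8]])           # plays the role of x @ W + b
labels = np.array([[1, 0],
                   [0, 1]], dtype=float)  # one-hot y

pred = softmax(logits)
cost = np.mean(-np.sum(labels * np.log(pred), axis=1))
print(round(cost, 4))  # 0.1927
```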
Thank you very much. — Have you looked at this tutorial? — Yes, I have, but I still don't know how to use tf.train.Saver, the projector, and the related code correctly. The tutorial's example uses MNIST data, while in my case it is CSV data.
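On the follow-up about the projector: beyond the summaries written above, TensorBoard's embedding projector reads three things from the log directory: a checkpoint containing a 2-D embedding variable (saved with `tf.train.Saver`), an optional metadata TSV with one row per embedding row, and a `projector_config.pbtxt` tying them together (normally generated by `projector.visualize_embeddings`). A minimal sketch of the two plain-text files only, using a hypothetical tensor name `embedding_var` and the one-hot labels from the CSV sample; the checkpoint itself would still have to be written with TensorFlow:

```python
import os
import tempfile

log_dir = tempfile.mkdtemp()  # stands in for the real logdir, e.g. D:\Tensorlogs\logs

# One metadata row per embedding row; here the one-hot labels are collapsed
# to a class index (0 for "1,0", 1 for "0,1").
labels = [[1, 0], [0, 1], [0, 1]]
with open(os.path.join(log_dir, 'metadata.tsv'), 'w') as f:
    for one_hot in labels:
        f.write('%d\n' % one_hot.index(1))

# projector_config.pbtxt tells the projector which checkpointed tensor to
# load and which metadata file labels its rows.  The checkpoint must be
# saved separately, e.g. tf.train.Saver().save(sess, os.path.join(log_dir, 'model.ckpt')).
config_text = (
    'embeddings {\n'
    '  tensor_name: "embedding_var"\n'
    '  metadata_path: "metadata.tsv"\n'
    '}\n'
)
with open(os.path.join(log_dir, 'projector_config.pbtxt'), 'w') as f:
    f.write(config_text)

print(sorted(os.listdir(log_dir)))  # ['metadata.tsv', 'projector_config.pbtxt']
```

With those files (plus a checkpoint) in place, the Projector tab appears when TensorBoard is pointed at the same log directory.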