
Batch processing in TensorFlow (Python)

Tags: python, tensorflow, batch-processing

I'm having a problem with batching in my code. I tried searching for how to do batching, but everything I found uses some helper method, such as next_batch from the MNIST example program. I would appreciate any advice on how to do batching in the program below.
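(For reference, the MNIST-style next_batch helper mentioned above can be written in a few lines. The DataSet class below is a hedged sketch for illustration, not TensorFlow's actual implementation:)

import numpy as np

class DataSet:
    """Minimal MNIST-style batch iterator (illustrative sketch only)."""
    def __init__(self, data, labels):
        self.data = data
        self.labels = labels
        self._index = 0

    def next_batch(self, batch_size):
        # Reshuffle and start over once the data is exhausted
        if self._index + batch_size > len(self.data):
            perm = np.random.permutation(len(self.data))
            self.data, self.labels = self.data[perm], self.labels[perm]
            self._index = 0
        start = self._index
        self._index += batch_size
        return self.data[start:self._index], self.labels[start:self._index]

# usage: train_set = DataSet(train_x, train_t); xs, ys = train_set.next_batch(100)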

import tensorflow as tf
import numpy as np
from sklearn import cross_validation  # deprecated; newer scikit-learn versions use sklearn.model_selection
import pandas as pd
np.random.seed(20160612)
tf.set_random_seed(20160612)

# input data: data is 86594x7 and label is 86594x5
data2 = pd.read_csv('rawdata.csv', sep=',', header=None) 
data = np.array(data2)
label2=pd.read_csv('class.csv', sep='\t', header=None)
label=np.array(label2)

train_x,test_x,train_t,test_t=cross_validation.train_test_split(data,label,test_size=0.1,random_state=None)

# number of units in the hidden layer
num_units = 15

x = tf.placeholder(tf.float32, [None, 7])
t = tf.placeholder(tf.float32, [None, 5])

w1 = tf.Variable(tf.truncated_normal([7, num_units], mean=0.0, stddev=0.05))
b1 = tf.Variable(tf.zeros([num_units]))
hidden1 = tf.nn.relu(tf.matmul(x, w1) + b1)

w0 = tf.Variable(tf.zeros([num_units, 5]))
b0 = tf.Variable(tf.zeros([5]))

p = tf.nn.softmax(tf.matmul(hidden1, w0) + b0)


loss = -tf.reduce_sum(t * tf.log(tf.clip_by_value(p, 1e-10, 1.0)))  # cross-entropy loss
train_step = tf.train.AdamOptimizer().minimize(loss)
correct_prediction = tf.equal(tf.argmax(p, 1), tf.argmax(t, 1))
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))


sess = tf.InteractiveSession()
sess.run(tf.initialize_all_variables())  # deprecated; later TF 1.x uses tf.global_variables_initializer()

# this is how I think batching works

batch_size = 100
for j in range(0, 86594, batch_size):
    xs,ys= train_x[j:j+batch_size],  train_t[j:j+batch_size]


i = 0

for _ in range(4000):
    i += 1

    sess.run(train_step, feed_dict={x: xs, t: ys})
    if i % 100 == 0:
        loss_val, acc_val = sess.run([loss, accuracy],feed_dict={x:test_x, t: test_t})
        print ('Step: %d, Loss: %f, Accuracy: %f'% (i, loss_val, acc_val))

Of course, the results of this program are not correct.

Keep extracting batches of data and feeding them to the network for training. In each epoch, every sample of the training dataset should be run through once. So you can rewrite your code like this:

Only the required part of the code:

epochs = 4000
batch_size = 100
for epoch_no in range(epochs):
    for index, offset in enumerate(range(0, len(train_x), batch_size)):  # iterate over the training set, not the full 86594 rows (some were split off for testing)
        xs, ys = train_x[offset: offset + batch_size], train_t[offset: offset + batch_size]
        sess.run(train_step, feed_dict={x: xs, t: ys})

        if index % 100 == 0:
            loss_val, acc_val = sess.run([loss, accuracy], feed_dict = {x: test_x, t: test_t})
            print ('Epoch %d, Step: %d, Loss: %f, Accuracy: %f'% (epoch_no, index, loss_val, acc_val))
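A common refinement the answer does not show is to reshuffle the training set at the start of every epoch, so the batches differ between epochs. A minimal sketch, reusing the names defined above (train_x, train_t, sess, train_step, x, t):

for epoch_no in range(epochs):
    # Draw a fresh random order of the training samples each epoch
    perm = np.random.permutation(len(train_x))
    shuffled_x, shuffled_t = train_x[perm], train_t[perm]
    for offset in range(0, len(shuffled_x), batch_size):
        xs = shuffled_x[offset: offset + batch_size]
        ys = shuffled_t[offset: offset + batch_size]
        sess.run(train_step, feed_dict={x: xs, t: ys})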

Sorry, but if I replace my code with yours, the program runs step 0 about 500 times. I also tried nested for loops like this, but the result is again that the program runs step 0 many times. Did I put the batching part of the code in the wrong place?

@mstfa23 I was printing it the wrong way. Now it will print for every epoch, at every 100th training step. Run it and check again. Also, replace everything from the batch_size line onward with the code above.

I tried running the edited code, but instead of printing once per epoch, it prints each epoch 9 times, once every 100 steps up to step 800: it goes through each epoch with steps 0 through 800 (9 prints each). I also put the edited code at the batch_size line, as you said.

@mstfa23 Yes, exactly: it prints at every 100th step of every epoch, as you mentioned. If you only want to print at particular epochs, say every 50th epoch (still only at every 100th step within it), then you need to change that condition to:

if index % 100 == 0 and epoch_no % 50 == 0:

Let me know exactly when you want it to print, i.e. at which epoch, and at which step within that epoch.
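For clarity, that modified condition would sit in the answer's training loop like this (a sketch; 50 is just the example interval from the comment above):

        # Print only at every 50th epoch, and at every 100th step within it
        if index % 100 == 0 and epoch_no % 50 == 0:
            loss_val, acc_val = sess.run([loss, accuracy], feed_dict={x: test_x, t: test_t})
            print('Epoch: %d, Step: %d, Loss: %f, Accuracy: %f' % (epoch_no, index, loss_val, acc_val))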