Python: Speed of logistic regression on MNIST with TensorFlow

I'm taking Stanford's course CS 20SI: TensorFlow for Deep Learning Research. I have a question about the following code:

import time
import numpy as np
import tensorflow as tf
from tensorflow.examples.tutorials.mnist import input_data
# Step 1: Read in data
# using TF Learn's built in function to load MNIST data to the folder data/mnist
MNIST = input_data.read_data_sets("/data/mnist", one_hot=True)

# Batched logistic regression
learning_rate = 0.01
batch_size = 128
n_epochs = 25

X = tf.placeholder(tf.float32, [batch_size, 784], name='image')
Y = tf.placeholder(tf.float32, [batch_size, 10], name='label')

#w = tf.Variable(tf.random_normal(shape=[int(X.shape[1]), int(Y.shape[1])], stddev=0.01), name='weights')
#b = tf.Variable(tf.zeros(shape=[1, int(Y.shape[1])]), name='bias')

w = tf.Variable(tf.random_normal(shape=[784, 10], stddev=0.01), name="weights")
b = tf.Variable(tf.zeros([1, 10]), name="bias")

logits = tf.matmul(X, w) + b

entropy = tf.nn.softmax_cross_entropy_with_logits(logits=logits, labels=Y)
loss = tf.reduce_mean(entropy)  # computes the mean over examples in the batch

optimizer = tf.train.GradientDescentOptimizer(learning_rate=learning_rate).minimize(loss)

init = tf.global_variables_initializer()

with tf.Session() as sess:
    sess.run(init)
    n_batches = int(MNIST.train.num_examples/batch_size)
    for i in range(n_epochs):
        start_time = time.time()
        for _ in range(n_batches):
            X_batch, Y_batch = MNIST.train.next_batch(batch_size)
            opt, loss_ = sess.run([optimizer, loss], feed_dict={X: X_batch, Y: Y_batch})
        end_time = time.time() 
        print('Epoch %d took %f'%(i, end_time - start_time))
This code performs logistic regression on the MNIST dataset. The author says:

Running on my Mac, the batch version of the model with batch size 128 runs in 0.5 seconds.


However, when I run it, each epoch takes about 2 seconds, for a total execution time of about one minute. Is it reasonable for this example to take that much time? Currently I have a Ryzen 1700 at stock 3.0 GHz (no overclock) and a GTX 1080 GPU (no overclock).

I tried this code on a GTX Titan X (Maxwell) and got about 0.5 seconds per epoch. I would expect a GTX 1080 to achieve similar results.
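
A preliminary check worth doing (my own suggestion, not something this answer spells out) is to confirm that TensorFlow actually placed the ops on the GPU at all. In TF 1.x, device-placement logging prints the assigned device for every op; a minimal sketch:

import tensorflow as tf

# Print the device each op is assigned to. If the matmul and gradient ops
# land on /cpu:0 instead of /gpu:0, the GPU is not being used.
config = tf.ConfigProto(log_device_placement=True)
with tf.Session(config=config) as sess:
    a = tf.random_normal([1024, 1024])
    b = tf.random_normal([1024, 1024])
    sess.run(tf.matmul(a, b))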

Try the latest tensorflow and cuda/cudnn versions. Make sure no environment variables are set that restrict things (which GPUs are visible, how much memory tensorflow can use, etc.). You can try running a micro-benchmark to check that you can reach the stated FLOPS of your card, e.g.
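
The example linked after that "e.g." is not preserved on this page; below is a minimal matmul micro-benchmark sketch in the same spirit (my own construction, not the answerer's code, and the size n = 8192 is an arbitrary choice). It estimates achieved throughput from the roughly 2*n**3 floating-point operations of an n x n matrix multiply:

import time
import tensorflow as tf

n = 8192  # hypothetical size; large enough to keep the GPU busy
with tf.device('/gpu:0'):
    a = tf.Variable(tf.random_normal([n, n]))
    b = tf.Variable(tf.random_normal([n, n]))
    # tf.group forces the matmul to execute without copying the
    # n x n result back to the host, so we time compute, not transfer
    matmul_op = tf.group(tf.matmul(a, b))

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    sess.run(matmul_op)  # warm-up run, excluded from timing
    iters = 10
    start = time.time()
    for _ in range(iters):
        sess.run(matmul_op)
    elapsed = time.time() - start
    # an n x n matmul performs about 2*n**3 floating-point operations
    print('achieved: %.1f GFLOPS' % (2 * n**3 * iters / elapsed / 1e9))

If the measured number comes out far below the card's spec (a GTX 1080 is rated at roughly 8-9 fp32 TFLOPS), the problem is in the setup (drivers, device visibility, memory limits) rather than in this particular model.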