
Python tf.TFRecordReader returns multiple copies of the same data within a single epoch


I am trying to evaluate a model as fast as possible. Fetching samples from a single TFRecords file seemed very slow, so I searched for an explanation and found example code from Yaroslav Bulatov.

I replaced the tf.train.shuffle_batch call with tf.train.batch, since I only need to read 1 epoch and do not mind whether the samples are shuffled. With enqueue_many=False the results are correct; however, when I try enqueue_many=True with 2 enqueued items, I get the same samples duplicated.

The key code is:

    reader = tf.TFRecordReader()
    queue_batch = []
    for i in range(enqueue_many_size):
        _, serialized_example = reader.read(filename_queue)
        queue_batch.append(serialized_example)
    batch_serialized_example = tf.train.batch(
        [queue_batch],
        batch_size=batch_size,
        num_threads=thread_number,
        capacity=capacity,
        enqueue_many=True)
The full proof of concept is:

import glob
import time
import numpy as np
import os
import tensorflow as tf

epoch_number = 1
thread_number = 1
batch_size = 4
capacity = thread_number * batch_size + 10
enqueue_many = True
enqueue_many_size = 2

# In case you want to generate my set of samples
def generateNumbersTFRecords(directory, num_elements):
    record_filename = os.path.join(directory, 'vectors.tfrecords')
    writer = tf.python_io.TFRecordWriter(record_filename)
    for i in range(num_elements):
        vector = np.arange(i*16,(i+1)*16, dtype=np.float32)
        feature = {'vector': tf.train.Feature(float_list=tf.train.FloatList(value=vector.tolist()))}
        example = tf.train.Example(features=tf.train.Features(feature=feature))
        writer.write(example.SerializeToString())
    writer.close()


filename_queue = tf.train.string_input_producer(
      ["vectors.tfrecords"],
      shuffle=False,
      seed = int(time.time()),
      num_epochs=epoch_number)

def read_and_decode(filename_queue):
    reader = tf.TFRecordReader()
    _, serialized_example = reader.read(filename_queue)
    return serialized_example

if enqueue_many:
    reader = tf.TFRecordReader()
    queue_batch = []
    for i in range(enqueue_many_size):
        _, serialized_example = reader.read(filename_queue)
        queue_batch.append(serialized_example)
    batch_serialized_example = tf.train.batch(
        [queue_batch],
        batch_size=batch_size,
        num_threads=thread_number,
        capacity=capacity,
        enqueue_many=True)

else:
    serialized_example = read_and_decode(filename_queue)
    batch_serialized_example = tf.train.batch(
        [serialized_example],
        batch_size=batch_size,
        num_threads=thread_number,
        capacity=capacity)

features = tf.parse_example(
    batch_serialized_example,
    features={
        "vector": tf.FixedLenFeature([16], tf.float32),
    })


batch_values = features["vector"]

init_op = tf.global_variables_initializer()

sess = tf.Session()

sess.run(init_op)
sess.run(tf.local_variables_initializer())

coord = tf.train.Coordinator()
threads = tf.train.start_queue_runners(coord=coord, sess=sess)

try:
    while not coord.should_stop():
        f1 = sess.run([batch_values])
        print(f1)


except tf.errors.OutOfRangeError:
    print("Done training after reading all data")
finally:
    coord.request_stop()
    print("coord stopped")

coord.join(threads)
I would like to prevent the two calls to the reader in the enqueue_many context from returning the same TFRecord. The expected behavior would be sequential vectors [[0,1,2,3…15],[16,17…],…], but instead I get [[0,1,2,3…15],[0,1,2,3…15],[16,17…],[16,17…]].
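For reference, a small NumPy-only sketch of the batches I expect, derived directly from the generator above (each record i holds np.arange(i*16, (i+1)*16)):

```python
import numpy as np

# With batch_size=4, the expected first batch is four consecutive
# 16-float vectors: rows starting at 0, 16, 32 and 48.
expected_first_batch = np.arange(0, 4 * 16, dtype=np.float32).reshape(4, 16)
print(expected_first_batch[:, 0])  # first element of each row: 0, 16, 32, 48
```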

My output is:

[array([[  0.,   1.,   2.,   3.,   4.,   5.,   6.,   7.,   8.,   9.,  10.,
     11.,  12.,  13.,  14.,  15.],
   [  0.,   1.,   2.,   3.,   4.,   5.,   6.,   7.,   8.,   9.,  10.,
     11.,  12.,  13.,  14.,  15.],
   [ 16.,  17.,  18.,  19.,  20.,  21.,  22.,  23.,  24.,  25.,  26.,
     27.,  28.,  29.,  30.,  31.],
   [ 16.,  17.,  18.,  19.,  20.,  21.,  22.,  23.,  24.,  25.,  26.,
     27.,  28.,  29.,  30.,  31.]], dtype=float32)]
[array([[ 32.,  33.,  34.,  35.,  36.,  37.,  38.,  39.,  40.,  41.,  42.,
     43.,  44.,  45.,  46.,  47.],
   [ 32.,  33.,  34.,  35.,  36.,  37.,  38.,  39.,  40.,  41.,  42.,
     43.,  44.,  45.,  46.,  47.],
   [ 48.,  49.,  50.,  51.,  52.,  53.,  54.,  55.,  56.,  57.,  58.,
     59.,  60.,  61.,  62.,  63.],
   [ 48.,  49.,  50.,  51.,  52.,  53.,  54.,  55.,  56.,  57.,  58.,
     59.,  60.,  61.,  62.,  63.]], dtype=float32)]

I filed this issue on TensorFlow's GitHub and, luckily, @yaroslavvb answered quickly and provided the solution. In case you get stuck at the same point as I did: the problem is related to the optimization options. It is a known bug in TF 1.0 and has already been fixed in the master branch.


You can find more information here:
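For completeness, a minimal sketch of the kind of optimizer-option tweak the answer refers to. This is my assumption of the workaround based on the "optimization options" hint, using the TF 1.x ConfigProto API; the authoritative fix is in the linked issue:

```python
import tensorflow as tf

# Assumption: at optimizer level L0, graph optimizations such as
# common-subexpression elimination are disabled, so the two identical
# reader.read() ops are not merged into one (which is what produced
# the duplicated samples).
config = tf.ConfigProto(
    graph_options=tf.GraphOptions(
        optimizer_options=tf.OptimizerOptions(
            opt_level=tf.OptimizerOptions.L0)))
sess = tf.Session(config=config)
```

On TF versions that already include the fix from master, this configuration fragment should not be necessary.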
