Python - Why doesn't a string-typed tf.placeholder work for tf.string_input_producer()?

In one scenario I want to use a placeholder to dynamically change the input filename fed to the filename queue, so that I can iterate over files. But I found that the code below doesn't work. Does anyone have an idea why?

import tensorflow as tf

def test(s):
    filename_queue = tf.train.string_input_producer([s])

    reader = tf.TextLineReader()
    key, value = reader.read(filename_queue)

    record_defaults = [[1.0], [1]]
    col1, col2 = tf.decode_csv(value, record_defaults = record_defaults)

    return col1, col2

s = tf.placeholder(tf.string, None, name = 's')
# s = tf.constant('file0.csv', tf.string)
ss = ["file0.csv", "file1.csv"]
inputs, labels = test(s)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())

    coord = tf.train.Coordinator()
    threads = tf.train.start_queue_runners(coord=coord)

    for e in ss:
        inputs_val, labels_val = sess.run([inputs, labels], feed_dict = {s: e})
        print("input {} - label {}".format(inputs_val, labels_val))

    coord.request_stop()
    coord.join(threads)
Thanks for looking into this.

(tensorflow) [yuming@atlas1 workfiles]$ python 36.py

2017-10-11 11:28:40.825044: I tensorflow/core/common_runtime/gpu/gpu_device.cc:965] Found device 0 with properties:
name: Quadro M4000 major: 5 minor: 2 memoryClockRate(GHz): 0.7725
pciBusID: 0000:83:00.0
totalMemory: 7.93GiB freeMemory: 7.87GiB
2017-10-11 11:28:40.931938: I tensorflow/core/common_runtime/gpu/gpu_device.cc:965] Found device 1 with properties:
name: Quadro K2200 major: 5 minor: 0 memoryClockRate(GHz): 1.124
pciBusID: 0000:03:00.0
totalMemory: 3.95GiB freeMemory: 3.47GiB
2017-10-11 11:28:40.931990: I tensorflow/core/common_runtime/gpu/gpu_device.cc:980] Device peer to peer matrix
2017-10-11 11:28:40.931998: I tensorflow/core/common_runtime/gpu/gpu_device.cc:986] DMA: 0 1
2017-10-11 11:28:40.932002: I tensorflow/core/common_runtime/gpu/gpu_device.cc:996] 0:   Y N
2017-10-11 11:28:40.932005: I tensorflow/core/common_runtime/gpu/gpu_device.cc:996] 1:   N Y
2017-10-11 11:28:40.932013: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1055] Creating TensorFlow device (/device:GPU:0) -> (device: 0, name: Quadro M4000, pci bus id: 0000:83:00.0, compute capability: 5.2)
2017-10-11 11:28:40.932018: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1042] Ignoring gpu device (device: 1, name: Quadro K2200, pci bus id: 0000:03:00.0, compute capability: 5.0) with Cuda multiprocessor count: 5. The minimum required count is 8. You can adjust this requirement with the env var TF_MIN_GPU_MULTIPROCESSOR_COUNT.
Traceback (most recent call last):
  File "36.py", line 26, in <module>
    inputs_val, labels_val = sess.run([inputs, labels], feed_dict = {s: 'file0.csv'})
  File "/home/yuming/tensorflow/lib/python2.7/site-packages/tensorflow/python/client/session.py", line 889, in run
    run_metadata_ptr)
  File "/home/yuming/tensorflow/lib/python2.7/site-packages/tensorflow/python/client/session.py", line 1118, in _run
    feed_dict_tensor, options, run_metadata)
  File "/home/yuming/tensorflow/lib/python2.7/site-packages/tensorflow/python/client/session.py", line 1315, in _do_run
    options, run_metadata)
  File "/home/yuming/tensorflow/lib/python2.7/site-packages/tensorflow/python/client/session.py", line 1334, in _do_call
    raise type(e)(node_def, op, message)
tensorflow.python.framework.errors_impl.OutOfRangeError: FIFOQueue '_0_input_producer' is closed and has insufficient elements (requested 1, current size 0)
         [[Node: ReaderReadV2 = ReaderReadV2[_device="/job:localhost/replica:0/task:0/cpu:0"](TextLineReaderV2, input_producer)]]

Caused by op u'ReaderReadV2', defined at:
  File "36.py", line 17, in <module>
    inputs, labels = test(s)
  File "36.py", line 7, in test
    key, value = reader.read(filename_queue)
  File "/home/yuming/tensorflow/lib/python2.7/site-packages/tensorflow/python/ops/io_ops.py", line 194, in read
    return gen_io_ops._reader_read_v2(self._reader_ref, queue_ref, name=name)
  File "/home/yuming/tensorflow/lib/python2.7/site-packages/tensorflow/python/ops/gen_io_ops.py", line 654, in _reader_read_v2
    queue_handle=queue_handle, name=name)
  File "/home/yuming/tensorflow/lib/python2.7/site-packages/tensorflow/python/framework/op_def_library.py", line 789, in _apply_op_helper
    op_def=op_def)
  File "/home/yuming/tensorflow/lib/python2.7/site-packages/tensorflow/python/framework/ops.py", line 3052, in create_op
    op_def=op_def)
  File "/home/yuming/tensorflow/lib/python2.7/site-packages/tensorflow/python/framework/ops.py", line 1610, in __init__
    self._traceback = self._graph._extract_stack()  # pylint: disable=protected-access

OutOfRangeError (see above for traceback): FIFOQueue '_0_input_producer' is closed and has insufficient elements (requested 1, current size 0)
         [[Node: ReaderReadV2 = ReaderReadV2[_device="/job:localhost/replica:0/task:0/cpu:0"](TextLineReaderV2, input_producer)]]

I don't think you can do this with tf.train.string_input_producer. One possible alternative is to use tf.FIFOQueue directly (which string_input_producer also uses under the hood) and enqueue the filenames into it manually:

filename_queue = tf.FIFOQueue(capacity=100, dtypes=[tf.string])
with tf.Session() as session:
  reader = tf.TextLineReader()
  key, value = reader.read(filename_queue)
  col1, col2, col3, col4, target = tf.decode_csv(value, record_defaults=[[1.], [1.], [1.], [1.], [1.]])
  coord = tf.train.Coordinator()
  threads = tf.train.start_queue_runners(coord=coord)

  session.run(filename_queue.enqueue("file0.csv"))
  for i in range(10):
    print(session.run(target))

  session.run(filename_queue.enqueue("file1.csv"))
  for i in range(10):
    print(session.run(target))

  # NOTE: if I call this one more time, it'll hang, because
  # the queue is empty and the last CSV is fully read
  #
  # print(session.run(target))

  coord.request_stop()
  coord.join(threads)
Both of my CSV files have 10 rows and 5 columns, so I use range(10), and it works exactly as I expected: first file0.csv, then file1.csv.

Be careful: the main thread will hang if you request more examples than the queue can supply.

I suggest always keeping the queue non-empty and continually adding files to it. This way you can feed the queue dynamically, in any order you like.
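The "keep the queue fed" pattern above can be sketched with Python's standard library alone (queue.Queue standing in for tf.FIFOQueue, a worker thread standing in for the reader; this is an illustrative analogy, not TensorFlow code): the main thread keeps enqueuing filenames while a worker drains the queue, and files are consumed in the order they were enqueued.

```python
import queue
import threading

# queue.Queue plays the role of tf.FIFOQueue; the worker plays the reader.
filename_queue = queue.Queue(maxsize=100)
results = []

def reader():
    # Drain the queue until a None sentinel arrives (the analogue of
    # coord.request_stop()); get() blocks while the queue is momentarily empty,
    # just like the reader op blocks on an empty FIFOQueue.
    while True:
        name = filename_queue.get()
        if name is None:
            break
        results.append("read " + name)
        filename_queue.task_done()

worker = threading.Thread(target=reader)
worker.start()

# The main thread keeps the queue fed, in whatever order it likes.
filename_queue.put("file0.csv")
filename_queue.put("file1.csv")

filename_queue.put(None)  # stop signal
worker.join()

print(results)  # filenames come back in FIFO order
```

As long as the producer keeps the queue non-empty, the consumer never blocks; if the producer stops feeding it without sending the sentinel, the consumer hangs on get(), which mirrors the hang described above.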


Please post the error you are getting. @TheMyth: Sorry for the incomplete information. I've added the output above; there doesn't seem to be anything directly wrong with the approach itself. However, it does work when using tf.constant instead of tf.placeholder. file0.csv and file1.csv are very simple CSV files with only two rows, like: 0.1,0.9,1