How do I apply `tf.map_fn` to a SparseTensor when decoding a CSV file with TensorFlow?

Tags: csv, tensorflow, neural-network

When I run the following code:

import tensorflow as tf

# def input_pipeline(filenames, batch_size):
#     # Define a `tf.contrib.data.Dataset` for iterating over one epoch of the data.
#     dataset = (tf.contrib.data.TextLineDataset(filenames)
#                .map(lambda line: tf.decode_csv(
#                     line, record_defaults=[['1'], ['1'], ['1']], field_delim='-'))
#                .shuffle(buffer_size=10)  # Equivalent to min_after_dequeue=10.
#                .batch(batch_size))

#     # Return an *initializable* iterator over the dataset, which will allow us to
#     # re-initialize it at the beginning of each epoch.
#     return dataset.make_initializable_iterator() 

def decode_func(line):
    record_defaults = [['1'],['1'],['1']]
    line = tf.decode_csv(line, record_defaults=record_defaults, field_delim='-')
    str_to_int = lambda r: tf.string_to_number(r, tf.int32)
    query = tf.string_split(line[:1], ",").values
    title = tf.string_split(line[1:2], ",").values
    query = tf.map_fn(str_to_int, query, dtype=tf.int32)
    title = tf.map_fn(str_to_int, title, dtype=tf.int32)
    label = line[2]
    return query, title, label

def input_pipeline(filenames, batch_size):
    # Define a `tf.contrib.data.Dataset` for iterating over one epoch of the data.
    dataset = tf.contrib.data.TextLineDataset(filenames)
    dataset = dataset.map(decode_func)
    dataset = dataset.shuffle(buffer_size=10)  # Equivalent to min_after_dequeue=10.
    dataset = dataset.batch(batch_size)

    # Return an *initializable* iterator over the dataset, which will allow us to
    # re-initialize it at the beginning of each epoch.
    return dataset.make_initializable_iterator() 


filenames=['2.txt']
batch_size = 3
num_epochs = 10
iterator = input_pipeline(filenames, batch_size)

# `a1`, `a2`, and `a3` represent the next element to be retrieved from the iterator.    
a1, a2, a3 = iterator.get_next()

with tf.Session() as sess:
    for _ in range(num_epochs):
        print(_)
        # Resets the iterator at the beginning of an epoch.
        sess.run(iterator.initializer)
        try:
            while True:
                a, b, c = sess.run([a1, a2, a3])
                print(type(a[0]), b, c)
        except tf.errors.OutOfRangeError:
            print('stop')
            # This will be raised when you reach the end of an epoch (i.e. the
            # iterator has no more elements).
            pass                 

        # Perform any end-of-epoch computation here.
        print('Done training, epoch reached')
the script crashes without returning any result; it stops when it reaches
`a, b, c = sess.run([a1, a2, a3])`. However, when I comment out the lines

query = tf.map_fn(str_to_int, query, dtype=tf.int32)
title = tf.map_fn(str_to_int, title, dtype=tf.int32)
it works and returns results.

In `2.txt`, the data is formatted as follows:

1,2,3-4,5-0
1-2,3,4-1
4,5,6,7,8-9-0
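For reference, each line holds three `-`-separated fields: the first two are comma-separated integer lists (query and title) and the third is a single label. A plain-Python sketch of that layout (independent of TensorFlow, with a hypothetical `parse_line` helper) looks like:

```python
def parse_line(line):
    """Split a '-'-delimited record into (query, title, label).

    The first two fields are comma-separated integer lists; the
    third is a single integer label.
    """
    query_str, title_str, label_str = line.strip().split('-')
    query = [int(tok) for tok in query_str.split(',')]
    title = [int(tok) for tok in title_str.split(',')]
    return query, title, int(label_str)

print(parse_line('1,2,3-4,5-0'))  # → ([1, 2, 3], [4, 5], 0)
```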

Also, why are the returned results bytes-like objects rather than `str`?

query = tf.map_fn(str_to_int, query, dtype=tf.int32)
title = tf.map_fn(str_to_int, title, dtype=tf.int32)
label = line[2]

it works well.

It seems that having two nested TensorFlow lambda functions (`tf.map_fn`
and `Dataset.map`) does not work. Fortunately, it was over-complicated.

Regarding your second question, I get this as output:

[(array([4, 5, 6, 7, 8], dtype=int32), array([9], dtype=int32), 0)]
<type 'numpy.ndarray'>
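On the bytes question: under Python 3, TensorFlow evaluates `tf.string` tensors to `bytes` objects, so they have to be decoded explicitly. A minimal illustration (the `b'0'` value stands in for what `sess.run` would return for the label):

```python
label = b'0'                       # what sess.run returns for a tf.string value
text = label.decode('utf-8')       # convert bytes -> str
print(text, type(text).__name__)   # → 0 str
```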

I will edit the answer to your previous question to reflect this.

Thanks a lot! It works, but it causes a shape error when calling
`dataset = dataset.batch(batch_size)`, and I have to set `batch_size = 1`.
So we need to pad the sequences in a previous step, so that the file is
well structured and can be decoded easily; the `tf.string_to_number` code
can then be removed. Sadly… @Nicolas
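The shape error arises because the decoded `query`/`title` sequences have different lengths per line, and `Dataset.batch` requires equal shapes (TF 1.x also offers `Dataset.padded_batch` to pad inside the pipeline). A plain-Python sketch of the up-front padding idea, using a hypothetical `pad` helper:

```python
def pad(seq, length, pad_value=0):
    """Right-pad (or truncate) a sequence to a fixed length."""
    return (seq + [pad_value] * length)[:length]

queries = [[1, 2, 3], [4, 5], [6]]
padded = [pad(q, 4) for q in queries]
print(padded)  # → [[1, 2, 3, 0], [4, 5, 0, 0], [6, 0, 0, 0]]
```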