Equivalent of tf.SparseFeature in TensorFlow's tf.data
The neural network I am currently working on accepts sparse tensors as input. I am reading my data from TFRecords as follows:
_, examples = tf.TFRecordReader(options=options).read_up_to(
    filename_queue, num_records=batch_size)
features = tf.parse_example(examples, features={
    'input_feat': tf.SparseFeature(index_key='input_feat_idx',
                                   value_key='input_feat_values',
                                   dtype=tf.int64,
                                   size=SIZE_FEATURE)})
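As background, here is a minimal pure-Python sketch (no TensorFlow required) of what tf.SparseFeature encodes: each serialized example stores parallel lists of indices ('input_feat_idx') and values ('input_feat_values'), which densify into a vector of length SIZE_FEATURE. The concrete numbers below are illustrative, not from the original question.

```python
SIZE_FEATURE = 8  # hypothetical feature size for illustration

def densify(indices, values, size):
    """Expand parallel (index, value) lists into a dense list of length `size`."""
    dense = [0] * size
    for i, v in zip(indices, values):
        dense[i] = v
    return dense

example = {'input_feat_idx': [1, 4, 6], 'input_feat_values': [10, 20, 30]}
print(densify(example['input_feat_idx'],
              example['input_feat_values'],
              SIZE_FEATURE))
# → [0, 10, 0, 0, 20, 0, 30, 0]
```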
It works like a charm, but I am looking into the tf.data API, which seems much more convenient for many tasks, and I am not sure how to read tf.SparseTensor objects the way I do with tf.TFRecordReader and tf.parse_example(). Any ideas?

TensorFlow 1.5 will add native support for tf.SparseTensor in the core transformations. (It is currently available if you pip install tf-nightly, or build from source on TensorFlow's master branch.) This means you can write your pipeline as follows:
# Create a dataset of string records from the input files.
dataset = tf.data.TFRecordDataset(filenames)

# Convert each string record into a `tf.SparseTensor` representing a single example.
dataset = dataset.map(lambda record: tf.parse_single_example(
    record, features={'input_feat': tf.SparseFeature(index_key='input_feat_idx',
                                                     value_key='input_feat_values',
                                                     dtype=tf.int64,
                                                     size=SIZE_FEATURE)}))

# Stack together up to `batch_size` consecutive elements into a `tf.SparseTensor`
# representing a batch of examples.
dataset = dataset.batch(batch_size)

# Create an iterator to access the elements of `dataset` sequentially.
iterator = dataset.make_one_shot_iterator()

# `next_element` is a `tf.SparseTensor`.
next_element = iterator.get_next()
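To make the batching step concrete, here is a pure-Python sketch of what `dataset.batch(batch_size)` does to sparse elements: the per-example (index, value) pairs are stacked into one batch-level sparse tensor whose indices gain a leading example dimension, mirroring the (indices, values, dense_shape) triple of a tf.SparseTensor. The helper name `batch_sparse` and the sample data are hypothetical, for illustration only.

```python
def batch_sparse(examples, size):
    """Stack per-example sparse (indices, values) pairs into batch-level
    2-D indices [[example_row, feature_index], ...], a flat values list,
    and a dense_shape of [num_examples, size]."""
    indices, values = [], []
    for row, (idx, vals) in enumerate(examples):
        for i, v in zip(idx, vals):
            indices.append([row, i])
            values.append(v)
    return indices, values, [len(examples), size]

# Two hypothetical examples, each given as (indices, values).
batch = [([1, 4], [10, 20]), ([0, 6], [5, 7])]
indices, values, dense_shape = batch_sparse(batch, 8)
print(indices)      # [[0, 1], [0, 4], [1, 0], [1, 6]]
print(values)       # [10, 20, 5, 7]
print(dense_shape)  # [2, 8]
```

The leading batch dimension in the indices is what lets the batched element remain a single `tf.SparseTensor` rather than a ragged collection of per-example tensors.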