Basic 1D convolution in TensorFlow (Python)

OK, I'd like to do a 1D convolution of time-series data in TensorFlow. This is apparently supported using tf.nn.conv2d, according to these references; the only requirement is to set strides=[1,1,1,1]. Sounds easy enough!

However, I can't work out how to do it even in a very minimal test case. What am I doing wrong?

Let's set this up.

import tensorflow as tf
import numpy as np
print(tf.__version__)
>>> 0.9.0
Now generate a basic convolution test on two small arrays. I'll make it easy by using a batch size of 1, and since the time series is 1-dimensional, the "image height" will be 1. And since it's a univariate time series, clearly the number of "channels" is also 1. So this should be simple, right?

g = tf.Graph()
with g.as_default():
    # data shape is "[batch, in_height, in_width, in_channels]",
    x = tf.Variable(np.array([0.0, 0.0, 0.0, 0.0, 1.0]).reshape(1,1,-1,1), name="x")
    # filter shape is "[filter_height, filter_width, in_channels, out_channels]"
    phi = tf.Variable(np.array([0.0, 0.5, 1.0]).reshape(1,-1,1,1), name="phi")
    conv = tf.nn.conv2d(
        phi,
        x,
        strides=[1, 1, 1, 1],
        padding="SAME",
        name="conv")
Boom. Error:

ValueError: Dimensions 1 and 5 are not compatible
OK, first of all, I don't understand why this should happen with any dimensions at all, since I've specified that I'm padding in the convolution op.

But fine, maybe there are limits to that. I must have gotten the documentation confused and set up the convolution on the wrong axes of the tensor. I'll try all possible permutations:

for i in range(4):
    for j in range(4):
        shape1 = [1,1,1,1]
        shape1[i] = -1
        shape2 = [1,1,1,1]
        shape2[j] = -1
        x_array = np.array([0.0, 0.0, 0.0, 0.0, 1.0]).reshape(*shape1)
        phi_array = np.array([0.0, 0.5, 1.0]).reshape(*shape2)
        try:
            g = tf.Graph()
            with g.as_default():
                x = tf.Variable(x_array, name="x")
                phi = tf.Variable(phi_array, name="phi")
                conv = tf.nn.conv2d(
                    x,
                    phi,
                    strides=[1, 1, 1, 1],
                    padding="SAME",
                    name="conv")
                init_op = tf.initialize_all_variables()
            sess = tf.Session(graph=g)
            sess.run(init_op)
            print("SUCCEEDED!", x_array.shape, phi_array.shape, conv.eval(session=sess))
            sess.close()
        except Exception as e:
            print("FAILED!", x_array.shape, phi_array.shape, type(e), e.args or e._message)
Results:

FAILED! (5, 1, 1, 1) (3, 1, 1, 1) <class 'ValueError'> ('Filter must not be larger than the input: Filter: (3, 1) Input: (1, 1)',)
FAILED! (5, 1, 1, 1) (1, 3, 1, 1) <class 'ValueError'> ('Filter must not be larger than the input: Filter: (1, 3) Input: (1, 1)',)
FAILED! (5, 1, 1, 1) (1, 1, 3, 1) <class 'ValueError'> ('Dimensions 1 and 3 are not compatible',)
FAILED! (5, 1, 1, 1) (1, 1, 1, 3) <class 'tensorflow.python.framework.errors.InvalidArgumentError'> No OpKernel was registered to support Op 'Conv2D' with these attrs
     [[Node: conv = Conv2D[T=DT_DOUBLE, data_format="NHWC", padding="SAME", strides=[1, 1, 1, 1], use_cudnn_on_gpu=true](x/read, phi/read)]]
FAILED! (1, 5, 1, 1) (3, 1, 1, 1) <class 'tensorflow.python.framework.errors.InvalidArgumentError'> No OpKernel was registered to support Op 'Conv2D' with these attrs
     [[Node: conv = Conv2D[T=DT_DOUBLE, data_format="NHWC", padding="SAME", strides=[1, 1, 1, 1], use_cudnn_on_gpu=true](x/read, phi/read)]]
FAILED! (1, 5, 1, 1) (1, 3, 1, 1) <class 'ValueError'> ('Filter must not be larger than the input: Filter: (1, 3) Input: (5, 1)',)
FAILED! (1, 5, 1, 1) (1, 1, 3, 1) <class 'ValueError'> ('Dimensions 1 and 3 are not compatible',)
FAILED! (1, 5, 1, 1) (1, 1, 1, 3) <class 'tensorflow.python.framework.errors.InvalidArgumentError'> No OpKernel was registered to support Op 'Conv2D' with these attrs
     [[Node: conv = Conv2D[T=DT_DOUBLE, data_format="NHWC", padding="SAME", strides=[1, 1, 1, 1], use_cudnn_on_gpu=true](x/read, phi/read)]]
FAILED! (1, 1, 5, 1) (3, 1, 1, 1) <class 'ValueError'> ('Filter must not be larger than the input: Filter: (3, 1) Input: (1, 5)',)
FAILED! (1, 1, 5, 1) (1, 3, 1, 1) <class 'tensorflow.python.framework.errors.InvalidArgumentError'> No OpKernel was registered to support Op 'Conv2D' with these attrs
     [[Node: conv = Conv2D[T=DT_DOUBLE, data_format="NHWC", padding="SAME", strides=[1, 1, 1, 1], use_cudnn_on_gpu=true](x/read, phi/read)]]
FAILED! (1, 1, 5, 1) (1, 1, 3, 1) <class 'ValueError'> ('Dimensions 1 and 3 are not compatible',)
FAILED! (1, 1, 5, 1) (1, 1, 1, 3) <class 'tensorflow.python.framework.errors.InvalidArgumentError'> No OpKernel was registered to support Op 'Conv2D' with these attrs
     [[Node: conv = Conv2D[T=DT_DOUBLE, data_format="NHWC", padding="SAME", strides=[1, 1, 1, 1], use_cudnn_on_gpu=true](x/read, phi/read)]]
FAILED! (1, 1, 1, 5) (3, 1, 1, 1) <class 'ValueError'> ('Dimensions 5 and 1 are not compatible',)
FAILED! (1, 1, 1, 5) (1, 3, 1, 1) <class 'ValueError'> ('Dimensions 5 and 1 are not compatible',)
FAILED! (1, 1, 1, 5) (1, 1, 3, 1) <class 'ValueError'> ('Dimensions 5 and 3 are not compatible',)
FAILED! (1, 1, 1, 5) (1, 1, 1, 3) <class 'ValueError'> ('Dimensions 5 and 1 are not compatible',)
Hmm. OK, so it looks like there are now two problems. Firstly, the ValueError is about applying the filter along the wrong axis, I guess, although there are two forms of it.

But the axes along which I can apply the filter are confusing too. Notice that it actually constructs the graph with input shape (5, 1, 1, 1) and filter shape (1, 1, 1, 3). From the docs, this should be a filter that looks at one example from the batch, one "pixel" and one "channel", and outputs 3 "channels". Why does that one work, then, when the others don't?

Anyway, sometimes it does not fail while constructing the graph. Sometimes it does construct the graph, and then we get a tensorflow.python.framework.errors.InvalidArgumentError. From what I can gather, this may be because I'm running on CPU instead of GPU (or vice versa), or because the convolution op is only defined for 32-bit floats and not 64-bit floats. If anyone can tell me which axes I should align what on in order to convolve the time series with the kernel, I'd be very grateful.

I'm sorry to say, but your first code was almost correct. You just inverted x and phi in tf.nn.conv2d:

g = tf.Graph()
with g.as_default():
    # data shape is "[batch, in_height, in_width, in_channels]",
    x = tf.Variable(np.array([0.0, 0.0, 0.0, 0.0, 1.0]).reshape(1, 1, 5, 1), name="x")
    # filter shape is "[filter_height, filter_width, in_channels, out_channels]"
    phi = tf.Variable(np.array([0.0, 0.5, 1.0]).reshape(1, 3, 1, 1), name="phi")
    conv = tf.nn.conv2d(
        x,
        phi,
        strides=[1, 1, 1, 1],
        padding="SAME",
        name="conv")
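To sanity-check the fix, here is a minimal sketch (not part of the original answer): it casts the arrays to float32, since the question's traces show that no DT_DOUBLE kernel was registered for Conv2D in this TensorFlow version, and it uses the same pre-1.0 session API as the rest of the thread:

import numpy as np
import tensorflow as tf

g = tf.Graph()
with g.as_default():
    # input shape [batch, in_height, in_width, in_channels] = [1, 1, 5, 1]
    x = tf.Variable(np.array([0.0, 0.0, 0.0, 0.0, 1.0],
                             dtype=np.float32).reshape(1, 1, 5, 1), name="x")
    # filter shape [filter_height, filter_width, in_channels, out_channels] = [1, 3, 1, 1]
    phi = tf.Variable(np.array([0.0, 0.5, 1.0],
                               dtype=np.float32).reshape(1, 3, 1, 1), name="phi")
    conv = tf.nn.conv2d(x, phi, strides=[1, 1, 1, 1], padding="SAME", name="conv")
    init_op = tf.initialize_all_variables()

with tf.Session(graph=g) as sess:
    sess.run(init_op)
    # conv2d performs a cross-correlation (no kernel flip), so with SAME
    # padding the result should be [0., 0., 0., 1., 0.5]
    print(sess.run(conv).squeeze())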

Update: TensorFlow now supports 1D convolution since version r0.11, using tf.nn.conv1d. A guide to using it, which I previously posted in the (now defunct) Stack Overflow Documentation, is pasted below:


Guide to 1D convolution

Consider a basic example with an input of length 10 and dimension 16. The batch size is 32. We therefore have a placeholder with input shape [batch_size, 10, 16]:

batch_size = 32
x = tf.placeholder(tf.float32, [batch_size, 10, 16])

We then create a filter of width 3 that takes the 16 channels as input and also outputs 16 channels:

filter = tf.zeros([3, 16, 16])  # these should be real values, not zeros

Finally, we apply tf.nn.conv1d with a stride and a padding:
- stride: an integer s
- padding: "VALID" or "SAME"

output = tf.nn.conv1d(x, filter, stride=2, padding="VALID")
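To see what the two padding modes do to the output length for these shapes, here is a small sketch (my addition, using the same TF 0.x-era API as above): with an input of length 10, a filter of width 3 and stride 2, "VALID" keeps only complete windows, while "SAME" zero-pads the input so that every second position produces an output:

import tensorflow as tf

batch_size = 32
x = tf.placeholder(tf.float32, [batch_size, 10, 16])
filters = tf.zeros([3, 16, 16])  # placeholder weights; real values in practice

valid = tf.nn.conv1d(x, filters, stride=2, padding="VALID")
same = tf.nn.conv1d(x, filters, stride=2, padding="SAME")

# "VALID": ceil((10 - 3 + 1) / 2) = 4 output positions
# "SAME":  ceil(10 / 2) = 5 output positions
print(valid.get_shape())  # (32, 4, 16)
print(same.get_shape())   # (32, 5, 16)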
# Treat each flattened MNIST image as a 1D signal of width D (height 1, one
# channel) and run a strided "1D" convolution with tf.nn.conv2d.
import numpy as np
import tensorflow as tf
from tensorflow.examples.tutorials.mnist import input_data

task_name = 'task_MNIST_flat_auto_encoder'
mnist = input_data.read_data_sets("MNIST_data/", one_hot=True)
X_train, Y_train = mnist.train.images, mnist.train.labels # N x D
X_cv, Y_cv = mnist.validation.images, mnist.validation.labels
X_test, Y_test = mnist.test.images, mnist.test.labels

# data shape is "[batch, in_height, in_width, in_channels]",
# X_train = N x D
N, D = X_train.shape
# think of it as N images with height 1 and width D.
X_train = X_train.reshape(N,1,D,1)
x = tf.placeholder(tf.float32, shape=[None,1,D,1], name='x-input')
#x = tf.Variable( X_train , name='x-input')
# filter shape is "[filter_height, filter_width, in_channels, out_channels]"
filter_size, nb_filters = 10, 12 # filter_size , number of hidden units/units
# think of it as having nb_filters number of filters, each of size filter_size
W = tf.Variable( tf.truncated_normal(shape=[1, filter_size, 1,nb_filters], stddev=0.1) )
stride_convd1 = 2 # controls the stride for 1D convolution
conv = tf.nn.conv2d(input=x, filter=W, strides=[1, 1, stride_convd1, 1], padding="SAME", name="conv")

with tf.Session() as sess:
    sess.run( tf.initialize_all_variables() )
    sess.run(fetches=conv, feed_dict={x:X_train})
# A smaller, hand-checkable example: convolve x = [0, 1, 2, 3] with a width-2,
# two-output-channel filter at stride 2, then compare against equivalent matmuls.
X_train_org = np.array([[0,1,2,3]])
N, D = X_train_org.shape
X_train_1d = X_train_org.reshape(N,1,D,1)
#X_train = tf.constant( X_train_org )
# think of it as N images with height 1 and width D.
xx = tf.placeholder(tf.float32, shape=[None,1,D,1], name='xx-input')
#x = tf.Variable( X_train , name='x-input')
# filter shape is "[filter_height, filter_width, in_channels, out_channels]"
filter_size, nb_filters = 2, 2 # filter_size , number of hidden units/units
# think of it as having nb_filters number of filters, each of size filter_size
filter_w = np.array([[1,3],[2,4]]).reshape(1,filter_size,1,nb_filters)
#W = tf.Variable( tf.truncated_normal(shape=[1,filter_size,1,nb_filters], stddev=0.1) )
W = tf.Variable( tf.constant(filter_w, dtype=tf.float32) )
stride_convd1 = 2 # controls the stride for 1D convolution
conv = tf.nn.conv2d(input=xx, filter=W, strides=[1, 1, stride_convd1, 1], padding="SAME", name="conv")

#C = tf.constant( (np.array([[4,3,2,1]]).T).reshape(1,1,1,4) , dtype=tf.float32 ) #
#tf.reshape( conv , [])
#y_tf = tf.matmul(conv, C)


##
x = tf.placeholder(tf.float32, shape=[None,D], name='x-input') # N x 4
W1 = tf.Variable( tf.constant( np.array([[1,2,0,0],[3,4,0,0]]).T, dtype=tf.float32 ) ) # 2 x 4
y1 = tf.matmul(x,W1) # N x 2 = N x 4 x 4 x 2
W2 = tf.Variable( tf.constant( np.array([[0,0,1,2],[0,0,3,4]]).T, dtype=tf.float32 ))
y2 = tf.matmul(x,W2) # N x 2 = N x 4 x 4 x 2
C1 = tf.constant( np.array([[4,3]]).T, dtype=tf.float32 ) # 1 x 2
C2 = tf.constant( np.array([[2,1]]).T, dtype=tf.float32 )

p1 = tf.matmul(y1,C1)
p2 = tf.matmul(y2,C2)
y = p1 + p2
with tf.Session() as sess:
    sess.run( tf.initialize_all_variables() )
    print('manual conv')
    print(sess.run(fetches=y1, feed_dict={x:X_train_org}))
    print(sess.run(fetches=y2, feed_dict={x:X_train_org}))
    #print(sess.run(fetches=y, feed_dict={x:X_train_org}))
    print('tf conv')
    print(sess.run(fetches=conv, feed_dict={xx:X_train_1d}))
    #print(sess.run(fetches=y_tf, feed_dict={xx:X_train_1d}))
manual conv
[[ 2.  4.]]
[[  8.  18.]]
tf conv
[[[[  2.   4.]
   [  8.  18.]]]]
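To see why the two results agree, a small numpy check (my addition, not part of the original code): with a width-2 filter and stride 2 over a width-4 input, the convolution simply multiplies the two windows x[0:2] and x[2:4] by the 2x2 filter matrix:

import numpy as np

x = np.array([0., 1., 2., 3.])
w = np.array([[1., 3.],   # filter tap 0 -> (out_channel_0, out_channel_1)
              [2., 4.]])  # filter tap 1 -> (out_channel_0, out_channel_1)

# stride 2 over width 4 with a width-2 filter gives two windows: x[0:2] and x[2:4]
print(x[0:2].dot(w))  # [ 2.  4.]
print(x[2:4].dot(w))  # [ 8. 18.]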
import tensorflow as tf

# A self-contained example using tf.nn.conv1d directly: data is
# [batch, in_width, in_channels], the kernel is [filter_width, in_channels, out_channels].
i = tf.constant([1, 0, 2, 3, 0, 1, 1], dtype=tf.float32, name='i')
k = tf.constant([2, 1, 3], dtype=tf.float32, name='k')

data   = tf.reshape(i, [1, int(i.shape[0]), 1], name='data')
kernel = tf.reshape(k, [int(k.shape[0]), 1, 1], name='kernel')

res = tf.squeeze(tf.nn.conv1d(data, kernel, stride=1, padding='VALID'))
with tf.Session() as sess:
    print(sess.run(res))
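As a side note (my addition, assuming only numpy): tf.nn.conv1d computes a cross-correlation rather than a flipped convolution, so np.convolve reproduces the same numbers only if the kernel is reversed first:

import numpy as np

i = np.array([1, 0, 2, 3, 0, 1, 1], dtype=np.float32)
k = np.array([2, 1, 3], dtype=np.float32)

# np.convolve flips its second argument, so reverse k to reproduce
# the cross-correlation that tf.nn.conv1d performs
print(np.convolve(i, k[::-1], mode='valid'))  # [ 8. 11.  7.  9.  4.]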