The scan function in Theano and TensorFlow


I have the following function in Theano:

def forward_prop_step(x_t, s_t1_prev, s_t2_prev):
        # This is how we calculated the hidden state in a simple RNN. No longer!
        # s_t = T.tanh(U[:,x_t] + W.dot(s_t1_prev))

        # Word embedding layer
        x_e = E[:,x_t]

        # GRU Layer 1
        z_t1 = T.nnet.hard_sigmoid(U[0].dot(x_e) + W[0].dot(s_t1_prev) + b[0])
        r_t1 = T.nnet.hard_sigmoid(U[1].dot(x_e) + W[1].dot(s_t1_prev) + b[1])
        c_t1 = T.tanh(U[2].dot(x_e) + W[2].dot(s_t1_prev * r_t1) + b[2])
        s_t1 = (T.ones_like(z_t1) - z_t1) * c_t1 + z_t1 * s_t1_prev

        # GRU Layer 2
        z_t2 = T.nnet.hard_sigmoid(U[3].dot(s_t1) + W[3].dot(s_t2_prev) + b[3])
        r_t2 = T.nnet.hard_sigmoid(U[4].dot(s_t1) + W[4].dot(s_t2_prev) + b[4])
        c_t2 = T.tanh(U[5].dot(s_t1) + W[5].dot(s_t2_prev * r_t2) + b[5])
        s_t2 = (T.ones_like(z_t2) - z_t2) * c_t2 + z_t2 * s_t2_prev

        # Final output calculation
        # Theano's softmax returns a matrix with one row, we only need the row
        o_t = T.nnet.softmax(V.dot(s_t2) + c)[0]

        return [o_t, s_t1, s_t2]
I call this function with scan:

[o, s, s2], updates = theano.scan(
            forward_prop_step,
            sequences=x,
            truncate_gradient=self.bptt_truncate,
            outputs_info=[None,
                          dict(initial=T.zeros(self.hidden_dim)),
                          dict(initial=T.zeros(self.hidden_dim))])
I tried to rewrite the same function in TensorFlow:

def forward_prop_step(x_t, s_t1_prev, s_t2_prev):
     # Word embedding layer
     x_e = E[:, x_t]

     # GRU Layer 1
     z_t1 = tf.sigmoid(tf.reduce_sum(U[0] * x_e, axis=1) + tf.reduce_sum(W[0] * s_t1_prev, axis=1) + b[0])
     r_t1 = tf.sigmoid(tf.reduce_sum(U[1] * x_e, axis=1) + tf.reduce_sum(W[1] * s_t1_prev, axis=1) + b[1])
     c_t1 = tf.tanh(tf.reduce_sum(U[2] * x_e, axis=1) + tf.reduce_sum(W[2] * (s_t1_prev * r_t1), axis=1) + b[2])
     s_t1 = (tf.ones_like(z_t1) - z_t1) * c_t1 + z_t1 * s_t1_prev

     # GRU Layer 2
     z_t2 = tf.sigmoid(tf.reduce_sum(U[3] * s_t1, axis=1) + tf.reduce_sum(W[3] * s_t2_prev, axis=1) + b[3])
     r_t2 = tf.sigmoid(tf.reduce_sum(U[4] * s_t1, axis=1) + tf.reduce_sum(W[4] * s_t2_prev, axis=1) + b[4])
     c_t2 = tf.tanh(tf.reduce_sum(U[5] * s_t1, axis=1) + tf.reduce_sum(W[5] * (s_t2_prev * r_t2), axis=1) + b[5])
     s_t2 = (tf.ones_like(z_t2) - z_t2) * c_t2 + z_t2 * s_t2_prev

     # Final output calculation
     o_t = tf.nn.softmax(tf.reduce_sum(V * s_t2, axis=1) + c)[0]

     return [o_t, s_t1, s_t2]
I call this function using scan:

s = tf.zeros([self.hidden_dim, 0])
s2 = tf.zeros([self.hidden_dim, 0])

[o, s, s2] = tf.scan(
     fn=forward_prop_step,
     elems=[x, s, s2])
Instead of using an initializer, I initialized the s and s2 variables before the scan. When I run the code in TensorFlow, I get the following error:

TypeError: forward_prop_step() takes exactly 3 arguments (2 given)


I'm sure the bug above is not the only problem. How can I rewrite the scan function in TensorFlow, using the Theano code as a reference?

If you want to pass multiple elements to tf.scan(), you need to wrap them in a list or tuple. Here is an example of how to do that:

import tensorflow as tf

def f(x, ys):
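  # x is the running accumulator (starting at initializer);
  # ys is a tuple holding the current element of each tensor in elems.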
  (y1, y2) = ys
  return x + y1 * y2

a = tf.constant([1, 2, 3, 4, 5])
b = tf.constant([2, 3, 2, 2, 1])
c = tf.scan(f, (a, b), initializer=0)
with tf.Session() as sess:
  print(sess.run(c))
which prints:

[ 2  8 14 22 27]
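
Note that f receives the running accumulator as its first argument, so the result is cumulative: starting from the initializer 0, the steps are 0 + 1*2 = 2, 2 + 2*3 = 8, 8 + 3*2 = 14, 14 + 4*2 = 22, and 22 + 5*1 = 27.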

I hope this helps.

The error is correct: as the documentation says, the function passed to tensorflow's scan takes 2 arguments. arkhy, can you please tell me how to write the scan correctly in tf? :) I would really appreciate it. You didn't pass x, and I'm confused; x is important to me.
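
Regarding the remaining question about x: with tf.scan, the input sequence x itself goes into elems, while the recurrent states are carried through initializer, whose structure the step function's return value must match. Below is a minimal sketch of that pattern, assuming a TF1-style session to match the answer above; the GRU arithmetic is replaced with placeholder ops, and step, init, hidden_dim, and seq_len are illustrative names, not from the original code.

import tensorflow as tf

hidden_dim, seq_len = 4, 7

# Toy input sequence; in the original code this would be the embedded words.
x = tf.random_normal([seq_len, hidden_dim])

def step(prev, x_t):
    # tf.scan always calls fn(previous_output, current_element):
    # the carried states arrive packed in prev, and the current
    # time slice of elems arrives in x_t.
    o_prev, s1_prev, s2_prev = prev
    s1 = tf.tanh(x_t + s1_prev)   # stands in for GRU layer 1
    s2 = tf.tanh(s1 + s2_prev)    # stands in for GRU layer 2
    o = tf.nn.softmax(s2)         # stands in for the output layer
    return (o, s1, s2)            # must match the structure of initializer

init = (tf.zeros([hidden_dim]),   # o_0 (only its shape/dtype matter)
        tf.zeros([hidden_dim]),   # s1_0
        tf.zeros([hidden_dim]))   # s2_0

o, s1, s2 = tf.scan(step, elems=x, initializer=init)

with tf.Session() as sess:
    print(sess.run(o).shape)      # (7, 4): one output row per time step

With this structure, x never needs to be packed into elems together with the states; the states live entirely in initializer, which resolves both the TypeError and the question about where x goes.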