TensorFlow: converting the objects returned by tf.trainable_variables() to tensors


tf.trainable_variables() returns a list of all trainable variable objects. When an object from that list is passed to an op such as tf.nn.l2_loss, TensorFlow converts the object to a tensor and performs the necessary computation. Passing the same object to a user-defined function, however, raises an error.

Consider the following two-layer network:

import numpy as np
import tensorflow as tf

# Generate random data
x_train = np.random.rand(64, 16, 16, 4)  # 4 channels, matching the input_x placeholder
y_train = np.random.randint(0, 5, 64)
one_hot = np.zeros((len(y_train), 5))
one_hot[list(np.indices((len(y_train),))) + [y_train]] = 1
y_train = one_hot

# Model definition
class FeedForward(object):
    def __init__(self, l2_lambda=0.01):
        self.x = tf.placeholder(tf.float32, shape=[None, 16, 16, 4], name="input_x")
        self.y = tf.placeholder(tf.float32, [None, 5], name="input_y")

        l2_loss = tf.constant(0.0)

        with tf.name_scope("conv1"):
            kernel_shape=[1, 1, 4, 4]
            w = tf.Variable(tf.truncated_normal(kernel_shape, stddev=0.1), name="weight")
            conv1 = tf.nn.conv2d(self.x, w, strides=[1, 1, 1, 1], padding="SAME", name="conv")

        with tf.name_scope("conv2"):
            kernel_shape=[1, 1, 4, 2]
            w = tf.Variable(tf.truncated_normal(kernel_shape, stddev=0.1), name="weight")
            conv2 = tf.nn.conv2d(conv1, w, strides=[1, 1, 1, 1], padding="SAME", name="conv")

        out = tf.contrib.layers.flatten(conv2)

        with tf.name_scope("output"):
            kernel_shape=[out.get_shape()[1].value, 5]
            w = tf.Variable(tf.truncated_normal(kernel_shape, stddev=0.1), name="weight")
            self.scores = tf.matmul(out, w, name="scores")
            predictions = tf.argmax(self.scores, axis=1, name="predictions")

        # L2 Regularizer
        if l2_lambda > 0.:
            l2_loss = tf.add_n([self.some_norm(var) for var in tf.trainable_variables() if ("weight" in var.name)])

        losses = tf.nn.softmax_cross_entropy_with_logits(logits=self.scores, labels=self.y)
        self.loss = tf.reduce_mean(losses) + (l2_lambda * l2_loss)

        correct_predictions = tf.equal(predictions, tf.argmax(self.y, axis=1))
        self.accuracy = tf.reduce_mean(tf.cast(correct_predictions, "float"), name="accuracy")

    def some_norm(w):
        # operate on w and return scalar
        # (only) for example
        return (1 / tf.nn.l2_loss(w))

with tf.Graph().as_default():
    sess = tf.Session()      

    with sess.as_default():
        ffn = FeedForward()

        global_step = tf.Variable(0, name="global_step", trainable=False)
        optimizer = tf.train.GradientDescentOptimizer(learning_rate=1e-2)
        grads_and_vars = optimizer.compute_gradients(ffn.loss)
        train_op = optimizer.apply_gradients(grads_and_vars, global_step=global_step)
        sess.run(tf.global_variables_initializer())

        def train_step(x_batch, y_batch):
            feed_dict = {
                ffn.x: x_batch, 
                ffn.y: y_batch,                
            }
            _, step, loss, accuracy = sess.run([train_op, global_step, ffn.loss, ffn.accuracy], feed_dict)
            print("step {}, loss {:g}, acc {:g}".format(step, loss, accuracy))

        batch_size = 32
        n_epochs = 4
        s_idx = - batch_size

        for batch_index in range(n_epochs):
            s_idx += batch_size
            e_idx = s_idx + batch_size
            x_batch = x_train[s_idx:e_idx]
            y_batch = y_train[s_idx:e_idx]

            train_step(x_batch, y_batch)
            current_step = tf.train.global_step(sess, global_step)
The problem here is that when a trainable variable is passed to some_norm(), it arrives as an object that cannot be operated on. The relevant error message, raised at the first line inside some_norm(), is:

Failed to convert object of type <class '__main__.FeedForward'> to Tensor.
Contents: <__main__.FeedForward object at 0x7fefde7e97b8>. 
Consider casting elements to a supported type.
Is there a way to convert the objects returned by tf.trainable_variables() to tensors, or is there a possible workaround, such as passing by reference?

How does using the above approach differ from using
l2_loss = tf.add_n([tf.nn.l2_loss(var) for var in tf.trainable_variables() ...])?

You forgot the self in def some_norm(w): in your some_norm implementation. Because some_norm is an instance method, Python binds the FeedForward instance to its first parameter w, so tf.nn.l2_loss tries to convert the class instance (self) into a tensor, which produces the error above. Define it as def some_norm(self, w): instead.
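The bound-method mechanics behind the error can be reproduced without TensorFlow. A minimal sketch (Model and FixedModel are hypothetical names used only for illustration):

```python
class Model:
    # Missing 'self': when called as instance.some_norm(), Python binds
    # the instance itself to 'w', just as the FeedForward instance was
    # bound to 'w' in the question.
    def some_norm(w):
        return type(w).__name__


class FixedModel:
    # Correct signature: 'self' first, then the actual argument.
    def some_norm(self, w):
        return sum(x * x for x in w)


m = Model()
print(m.some_norm())        # 'w' received the Model instance, not a value

f = FixedModel()
print(f.some_norm([3.0, 4.0]))  # 'w' is now the list that was passed in
```

Running this shows that the broken version's w is the instance (its type name is printed), which is exactly what TensorFlow then fails to convert to a tensor in the original code.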