
Python: variable scope error when mixing Keras and TensorFlow


I am trying to do text classification with a CNN in Keras. However, the Keras code seems to perform significantly worse than the equivalent TensorFlow code, so I have been swapping out parts of the Keras code to find the bottleneck. In doing so, I get the following error, which suggests I am reusing a variable_scope:

ValueError: Variable conv_maxpool_1/W already exists, disallowed.
Did you mean to set reuse=True in VarScope? Originally defined at:

File "foo/lib/python2.7/site-packages/keras/layers/core.py", line 651, in call
    return self.function(inputs, **arguments)
File "foo/lib/python2.7/site-packages/keras/engine/topology.py", line 603, in __call__
    output = self.call(inputs, **kwargs)
I am using the functional API and a
Lambda
layer to build the convolution layer:

import tensorflow as tf
from keras.layers import Lambda


def get_conv_pool_layer(embeddings, embedding_size, filter_size, num_filters, sequence_length):

    def _get_conv_pool(embeddings):
        with tf.variable_scope("conv_maxpool_%s" % filter_size) as scope:
            filter_shape = [filter_size, embedding_size, 1, num_filters]
            W = tf.get_variable("W",
                                shape=filter_shape,
                                initializer=tf.truncated_normal_initializer(0, 0.1))
            b = tf.get_variable("b", shape=[num_filters], initializer=tf.constant_initializer(0.1))
            conv = tf.nn.conv2d(
                embeddings,
                W,
                strides=[1, 1, 1, 1],
                padding='VALID')

            h = tf.nn.relu(tf.nn.bias_add(conv, b), name=scope.name)
            return tf.nn.max_pool(
                    h,
                    ksize=[1, sequence_length - filter_size + 1, 1, 1],
                    strides=[1, 1, 1, 1],
                    padding='VALID',
                    name="pool")

    return Lambda(_get_conv_pool)(embeddings)