
Python: loss and accuracy are 0 when using a neural network with a single output neuron in TensorFlow


I am writing a binary classifier for a certain task, and rather than using two neurons in the output layer, I want to use a single neuron with a sigmoid function and basically output class 0 if it is less than 0.5 and class 1 otherwise.
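In other words, the decision rule I have in mind is roughly the following sketch (illustrative only; logit here stands for the raw output of the last layer):

import tensorflow as tf

# logit: the single raw output of the last layer, shape [None, 1].
logit = tf.placeholder('float', [None, 1])
prob = tf.nn.sigmoid(logit)                                 # probability of class 1
predicted_class = tf.cast(tf.greater(prob, 0.5), tf.int32)  # 0 if prob < 0.5, else 1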

I load the images, resize them to 64x64, and flatten them (to create a facsimile of the problem). The data-loading code will appear at the end. I created the placeholders

x = tf.placeholder('float',[None, 64*64])
y = tf.placeholder('float',[None, 1])
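For context, the loading and flattening step mentioned above boils down to something like this hypothetical sketch (using PIL and NumPy; my actual loading code is at the end):

import numpy as np
from PIL import Image

def load_image(path):
    # Grayscale, resize to 64x64, flatten to a 4096-vector scaled to [0, 1].
    img = Image.open(path).convert('L').resize((64, 64))
    return np.asarray(img, dtype=np.float32).flatten() / 255.0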
Then I defined the model as follows:

def create_model_linear(data):

    # Two fully connected layers: 4096 (flattened 64x64 image) -> 128 -> 1.
    fcl1_desc = {'weights': weight_variable([4096,128]), 'biases': bias_variable([128])}
    fcl2_desc = {'weights': weight_variable([128,1]), 'biases': bias_variable([1])}

    fc1 = tf.nn.relu(tf.matmul(data, fcl1_desc['weights']) + fcl1_desc['biases'])
    # Single output neuron with a sigmoid activation.
    fc2 = tf.nn.sigmoid(tf.matmul(fc1, fcl2_desc['weights']) + fcl2_desc['biases'])

    return fc2
The functions weight_variable and bias_variable just return a tf.Variable of the given shape. (Their code is also at the end.)
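They follow the usual TensorFlow tutorial pattern, roughly like this (the exact initializers here are illustrative, not my exact code):

import tensorflow as tf

def weight_variable(shape):
    # Illustrative: truncated-normal initialization, as in the TensorFlow tutorials.
    return tf.Variable(tf.truncated_normal(shape, stddev=0.1))

def bias_variable(shape):
    # Illustrative: small constant initialization.
    return tf.Variable(tf.constant(0.1, shape=shape))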

Then I defined the training function as follows:

def train(x, hm_epochs):
    prediction = create_model_linear(x)
    cost = tf.reduce_mean( tf.nn.softmax_cross_entropy_with_logits(logits  = prediction, labels = y) )
    optimizer = tf.train.AdamOptimizer(learning_rate=0.001).minimize(cost)
    batch_size = 100
    with tf.Session() as sess:
        sess.run(tf.global_variables_initializer())

        for epoch in range(hm_epochs):
            epoch_loss = 0
            i = 0
            while i < len(train_x):
                start = i
                end = i + batch_size
                batch_x = train_x[start:end]
                batch_y = train_y[start:end]
                _, c = sess.run([optimizer, cost], feed_dict = {x:batch_x, y:batch_y})

                epoch_loss += c
                i+=batch_size

            print('Epoch', epoch+1, 'completed out of', hm_epochs,'loss:',epoch_loss)
        correct = tf.greater(prediction,[0.5])
        accuracy = tf.reduce_mean(tf.cast(correct, 'float'))
        i = 0
        acc = []
        while i < len(train_x):
            acc +=[accuracy.eval({x:train_x[i:i+1000], y:train_y[i:i + 1000]})]
            i+=1000
    print(sum(acc)/len(acc))
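For completeness, I then invoke it along these lines (the actual call is not in this excerpt, so the epoch count below is a placeholder):

train(x, 10)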

I think you should use tf.nn.sigmoid_cross_entropy_with_logits instead of tf.nn.softmax_cross_entropy_with_logits, since you are using a sigmoid and a single neuron in the output layer. With only one output, the softmax normalizes over a single class, so it always produces 1 and the cross-entropy loss is identically 0, which is why nothing trains.
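Concretely, the cost line becomes something like this (a sketch; it assumes prediction is now the raw logit, with the sigmoid removed from the model):

# prediction must be the raw logit, i.e. no sigmoid inside create_model_linear.
cost = tf.reduce_mean(
    tf.nn.sigmoid_cross_entropy_with_logits(logits=prediction, labels=y))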

You also need to remove the final sigmoid from create_model_linear so that it returns the raw logit. Moreover, you are not using the y labels in your accuracy; it must take the following form:

correct = tf.equal(tf.greater(tf.nn.sigmoid(prediction),[0.5]),tf.cast(y,'bool'))
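Putting both fixes together, the relevant pieces would look roughly like this (a sketch assuming create_model_linear now returns the raw logit):

def create_model_linear(data):
    fcl1_desc = {'weights': weight_variable([4096,128]), 'biases': bias_variable([128])}
    fcl2_desc = {'weights': weight_variable([128,1]), 'biases': bias_variable([1])}

    fc1 = tf.nn.relu(tf.matmul(data, fcl1_desc['weights']) + fcl1_desc['biases'])
    # Return the raw logit; the loss op applies the sigmoid internally.
    return tf.matmul(fc1, fcl2_desc['weights']) + fcl2_desc['biases']

prediction = create_model_linear(x)
cost = tf.reduce_mean(
    tf.nn.sigmoid_cross_entropy_with_logits(logits=prediction, labels=y))
correct = tf.equal(tf.greater(tf.nn.sigmoid(prediction), 0.5), tf.cast(y, 'bool'))
accuracy = tf.reduce_mean(tf.cast(correct, 'float'))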

I think you should use tf.nn.sigmoid_cross_entropy_with_logits instead of tf.nn.softmax_cross_entropy_with_logits, since you use a sigmoid and one neuron in the output layer.

That was actually it. I can't believe I missed that. I should also remove the sigmoid from the last layer.

Glad it helped solve the problem! I'll post my comment as a separate answer, then.