
Softmax implementation in Python does not produce a probability distribution


I have a simple softmax implementation:

softmax = np.exp(x) / np.sum(np.exp(x), axis=0)
with x set to the array provided here:

You can load it as:

 import numpy as np

 x = np.asarray(...)  # paste the content (starting from "array") in place of ...
I get:

softmax.mean(axis=0).shape
(100,) # all elements here must be 1.0, since it's a probability

softmax.mean(axis=0) # all elements are not 1

array([0.05263158, 0.05263158, 0.05263158, 0.05263158, 0.05263158,
       0.05263158, 0.05263158, 0.05263158, 0.05263158, 0.05263158,
       0.05263158, 0.05263158, 0.05263158, 0.05263158, 0.05263158,
       0.05263158, 0.05263158, 0.05263158, 0.05263158, 0.05263158,
       0.05263158, 0.05263158, 0.05263158, 0.05263158, 0.05263158,
       0.05263158, 0.05263158, 0.05263158, 0.05263158, 0.05263158,
       0.05263158, 0.05263158, 0.05263158, 0.05263158, 0.05263158,
       0.05263158, 0.05263158, 0.05263158, 0.05263158, 0.05263158,
       0.05263158, 0.05263158, 0.05263158, 0.05263158, 0.05263158,
       0.05263158, 0.05263158, 0.05263158, 0.05263158, 0.05263158,
       0.05263158, 0.05263158, 0.05263158, 0.05263158, 0.05263158,
       0.05263158, 0.05263158, 0.05263158, 0.05263158, 0.05263158,
       0.05263158, 0.05263158, 0.05263158, 0.05263158, 0.05263158,
       0.05263158, 0.05263158, 0.05263158, 0.05263158, 0.05263158,
       0.05263158, 0.05263158, 0.05263158, 0.05263158, 0.05263158,
       0.05263158, 0.05263158, 0.05263158, 0.05263158, 0.05263158,
       0.05263158, 0.05263158, 0.05263158, 0.05263158, 0.05263158,
       0.05263158, 0.05263158, 0.05263158, 0.05263158, 0.05263158,
       0.05263158, 0.05263158, 0.05263158, 0.05263158, 0.05263158,
       0.05263158, 0.05263158, 0.05263158, 0.05263158, 0.05263158])
Why is this implementation wrong, and how can I fix it?

Looks fine to me:

import numpy as np

def softmax(x):
    return np.exp(x) / np.sum(np.exp(x), axis=0)

logits = softmax(np.random.rand(4))  # four probabilities that sum to 1

print(logits)
The sum of all elements of a softmax activation should equal 1.
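A quick sanity check of that claim (a minimal sketch using a random stand-in vector, not the asker's data):

import numpy as np

def softmax(x):
    return np.exp(x) / np.sum(np.exp(x), axis=0)

s = softmax(np.random.rand(5))
print(s.sum())                     # ~1.0: the elements sum to one
print(np.allclose(s.sum(), 1.0))   # True, up to floating-point error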

For classification tasks, one usually takes the index with the highest value (np.argmax), or the indices of the n highest values, and selects those as the most likely classes (a top-n sketch follows the argmax example below):

class_index = np.argmax(logits)  # Assuming logits is the output of a trained model

print('Most likely class: %d' % class_index)
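For the top-n variant mentioned above, np.argsort can be used instead of np.argmax (a sketch; n is a hypothetical parameter and logits is the array from the snippet above):

n = 3
top_n = np.argsort(logits)[-n:][::-1]  # indices of the n largest values, descending
print('Top %d classes: %s' % (n, top_n))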

As JosepJoestar pointed out in the comments, the definition of the softmax function can be found here.
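For reference, that definition is:

$$\mathrm{softmax}(x)_i = \frac{e^{x_i}}{\sum_{j=1}^{n} e^{x_j}}, \qquad \sum_{i=1}^{n} \mathrm{softmax}(x)_i = 1.$$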

The probabilities must sum to 1, not average to 1. Let's make this clearer with a simple example. Imagine 3 softmax output values s = [0.5, 0.25, 0.25]. Obviously they must sum to 1 to be probabilities, but their mean is 0.333.
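In code, a direct check of that example:

import numpy as np

s = np.array([0.5, 0.25, 0.25])
print(s.sum())   # 1.0       -> a valid probability distribution
print(s.mean())  # 0.3333... -> the mean is 1/3, not 1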

>>> softmax.sum(axis=0)
array([1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1.,
       1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1.,
       1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1.,
       1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1.,
       1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1.,
       1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1.])
I hope this example makes it clear.
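This also explains the asker's output exactly: 0.05263158 ≈ 1/19, so x presumably has 19 rows, and the mean of each correctly normalized column of 19 probabilities is 1/19. A sketch assuming shape (19, 100), since the actual data was not shown:

import numpy as np

x = np.random.rand(19, 100)  # assumed shape: 19 classes per each of 100 columns
softmax = np.exp(x) / np.sum(np.exp(x), axis=0)

print(softmax.sum(axis=0))   # all 1.0        -- each column is a distribution
print(softmax.mean(axis=0))  # all 0.05263158 -- 1/19, matching the question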

Yes, I provided the input as well; it isn't working for my input. @Rafael I think you're confused about how softmax works: the sum of all elements should be 1, not the mean. Right, there is nothing wrong with the code, apart from the misunderstanding of how softmax works. The first paragraph gives the clear definition.