Machine learning: negative decision_function values

Tags: machine-learning, scikit-learn, classification, svm

I am using sklearn's support vector classifier on the Iris dataset. When I call
decision_function it returns negative values, yet every sample in the test set is classified with the correct class. I thought decision_function should return a positive value when the sample is an inlier and a negative value when the sample is an outlier. Where am I wrong?

from sklearn import datasets
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

iris = datasets.load_iris()
X = iris.data[:,:]
y = iris.target

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=.3, 
random_state=0)

clf = SVC(probability=True)
print(clf.fit(X_train,y_train).decision_function(X_test))
print(clf.predict(X_test))
print(y_test)
Here is the output:

[[-0.76231668 -1.03439531 -1.40331645]
 [-1.18273287 -0.64851109  1.50296097]
 [ 1.10803774  1.05572833  0.12956269]
 [-0.47070432 -1.08920859 -1.4647051 ]
 [ 1.18767563  1.12670665  0.21993744]
 [-0.48277866 -0.98796232 -1.83186272]
 [ 1.25020033  1.13721691  0.15514536]
 [-1.07351583 -0.84997114  0.82303659]
 [-1.04709616 -0.85739411  0.64601611]
 [-1.23148923 -0.69072989  1.67459938]
 [-0.77524787 -1.00939817 -1.08441968]
 [-1.12212245 -0.82394879  1.11615504]
 [-1.14646662 -0.91238712  0.80454974]
 [-1.13632316 -0.8812114   0.80171542]
 [-1.14881866 -0.95169643  0.61906248]
 [ 1.15821271  1.10902205  0.22195304]
 [-1.19311709 -0.93149873  0.78649126]
 [-1.21653084 -0.90953622  0.78904491]
 [ 1.16829526  1.12102515  0.20604678]
 [ 1.18446364  1.1080255   0.15199149]
 [-0.93911991 -1.08150089 -0.8026332 ]
 [-1.15462733 -0.95603159  0.5713605 ]
 [ 0.93278883  0.99763184  0.34033663]
 [ 1.10999556  1.04596018  0.14791409]
 [-1.07285663 -1.01864255 -0.10701465]
 [ 1.21200422  1.01284263  0.0416991 ]
 [ 0.9462457   1.01076579  0.36620915]
 [-1.2108146  -0.79124775  1.43264808]
 [-1.02747495 -0.25741977  1.13056021]
...
 [ 1.16066886  1.11212424  0.22506538]]
 [2 1 0 2 0 2 0 1 1 1 2 1 1 1 1 0 1 1 0 0 2 1 0 0 2 0 0 1 1 0 2 1 0 2 2 1 0
 2 1 1 2 0 2 0 0]

 [2 1 0 2 0 2 0 1 1 1 2 1 1 1 1 0 1 1 0 0 2 1 0 0 2 0 0 1 1 0 2 1 0 2 2 1 0
 1 1 1 2 0 2 0 0]

You need to consider the decision function and the prediction separately. The decision value is the distance from the hyperplane to the sample. That means you can tell from its sign which side of the hyperplane the sample lies on. So negative values are perfectly fine and simply indicate the negative class ("the other side of the hyperplane").
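
A minimal sketch of this point (my own illustration, not part of the original post, using only standard sklearn calls): train a binary SVC on just two of the iris classes and compare the sign of decision_function with the predicted class.

from sklearn import datasets
from sklearn.svm import SVC
import numpy as np

iris = datasets.load_iris()
mask = iris.target < 2               # keep classes 0 and 1 -> a binary problem
X, y = iris.data[mask], iris.target[mask]

clf = SVC().fit(X, y)
dec = clf.decision_function(X)        # one signed distance per sample
pred = clf.predict(X)

# negative distance -> class 0, positive distance -> class 1
print(np.all((dec > 0).astype(int) == pred))   # expected: True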

With the iris dataset you have a multi-class problem. Since the SVM is a binary classifier, there is no inherent multi-class classification. Two approaches are the "one-vs-rest" (OvR) and "one-vs-one" (OvO) methods, which construct a multi-class classifier out of the binary "units".
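
As a side note (my own sketch, assuming sklearn's generic meta-estimators OneVsOneClassifier and OneVsRestClassifier), both strategies can also be built explicitly from binary SVMs:

from sklearn import datasets
from sklearn.svm import SVC
from sklearn.multiclass import OneVsOneClassifier, OneVsRestClassifier

iris = datasets.load_iris()
X, y = iris.data, iris.target

ovo = OneVsOneClassifier(SVC()).fit(X, y)    # 3 classes -> 3 pairwise binary SVMs
ovr = OneVsRestClassifier(SVC()).fit(X, y)   # 3 classes -> 3 "class vs rest" SVMs

print(len(ovo.estimators_), len(ovr.estimators_))   # 3 3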

One-vs-One
Now that you know OvR, OvO is not much harder to grasp. You basically construct a classifier for every combination of class pairs (A, B). In your case: 0 vs 1, 0 vs 2, 1 vs 2.

Note: the values for (A, B) and (B, A) can be obtained from a single binary classifier. You only change what is considered the positive class, so you have to invert the sign.

Doing this gives you a matrix:

+-------+------+-------+-------+
| A / B |  #0  |   #1  |   #2  |
+-------+------+-------+-------+
|       |      |       |       |
| #0    |  --  | -1.18 | -0.64 |
|       |      |       |       |
| #1    | 1.18 |  --   |  1.50 |
|       |      |       |       |
| #2    | 0.64 | -1.50 |  --   |
+-------+------+-------+-------+
Read it as follows: the value of the decision function when class A (row) competes against class B (column).

To extract a result, a vote is performed. In its basic form you can imagine this as a single vote each classifier can give: yes or no. Since that can lead to ties, the full decision function values are used instead:

+-------+------+-------+-------+-------+
| A / B |  #0  |   #1  |   #2  |  SUM  |
+-------+------+-------+-------+-------+
|       |      |       |       |       |
| #0    | -    | -1.18 | -0.64 | -1.82 |
|       |      |       |       |       |
| #1    | 1.18 | -     | 1.50  | 2.68  |
|       |      |       |       |       |
| #2    | 0.64 | -1.50 | -     | -0.86 |
+-------+------+-------+-------+-------+
The resulting column again gives you a vector, [-1.82, 2.68, -0.86]. Now apply arg max to it, and it matches your prediction.
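
A minimal numeric sketch of this aggregation (my own addition, using the rounded values from the table above):

import numpy as np

# pairwise decision values from the table above; (B, A) = -(A, B)
pairwise = np.array([[ 0.00, -1.18, -0.64],
                     [ 1.18,  0.00,  1.50],
                     [ 0.64, -1.50,  0.00]])

scores = pairwise.sum(axis=1)          # [-1.82,  2.68, -0.86]
print(scores, np.argmax(scores))       # class 1 wins, matching the prediction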

One-vs-Rest
I keep this section to avoid further confusion. The scikit-learn SVC classifier (libsvm) has a decision_function_shape parameter, which deceived me into assuming it was OvR (I use liblinear most of the time).

For a real OvR response you would get one value from the decision function of each classifier, e.g.

 [-1.18273287 -0.64851109  1.50296097]
Now, to obtain a prediction from this you would simply apply argmax, which returns the last index with a value of 1.50296097. From here on, the value of the decision function is no longer needed (for this single prediction). That is why you noticed that your predictions are fine.
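
For completeness (my own addition): under this OvR-style reading, argmax on the raw vector picks index 2 for that sample, while sklearn actually predicts class 1 there; that mismatch is what the edit above and the comments at the bottom address.

import numpy as np

# OvR-style reading of the raw decision values for the second test sample
print(np.argmax([-1.18273287, -0.64851109, 1.50296097]))   # 2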

However, you also specified probability=True, which takes the decision-function values and maps them through Platt scaling (a fitted sigmoid). The principle is the same as above, but now you also get confidence values between 0 and 1 (I prefer this term over probability, since it only describes the distance to the hyperplane).
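
A small sketch of this behaviour (my own addition; predict_proba is the standard sklearn accessor for these calibrated values):

from sklearn import datasets
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

iris = datasets.load_iris()
X_train, X_test, y_train, y_test = train_test_split(
    iris.data, iris.target, test_size=.3, random_state=0)

# probability=True makes SVC fit an additional calibration (Platt scaling)
# on top of the decision values and exposes predict_proba
clf = SVC(probability=True).fit(X_train, y_train)
print(clf.predict_proba(X_test[:3]))   # one column per class, rows sum to 1

These estimates come from an internal cross-validated calibration fit, so they can occasionally disagree with the plain predict output.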

Edit:
Oh, Sascha is right. LibSVM uses one-vs-one (despite the shape of the decision function).

Christopher is correct, but he assumes OvR here.

What you are actually doing is the OvO scheme, without noticing it.

Here is some example code which:

  • shows how to make a prediction using OvO + decision_function

The theory behind the OvO prediction scheme is taken from the references cited in the code comments below.

Code:

from sklearn import datasets
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
import numpy as np

iris = datasets.load_iris()
X = iris.data[:, :]
y = iris.target

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=.3,
                                                     random_state=0)

clf = SVC(decision_function_shape='ovo')  # explicit ovo usage!
clf.fit(X, y)

def predict(dec):
    # OvO prediction scheme
    # hardcoded for 3 classes!
    # OvO order assumption: 0 vs 1; 0 vs 2; 1 vs 2 (lexicographic!)
    # theory: http://www.stat.ucdavis.edu/~choxieh/teaching/ECS289G_Fall2015/lecture9.pdf page 18
    #    and: http://www.mit.edu/~9.520/spring09/Classes/multiclass.pdf page 8
    class0 = dec[0] + dec[1]
    class1 = -dec[0] + dec[2]
    class2 = -dec[1] - dec[2]
    return np.argmax([class0, class1, class2])

dec_vals = clf.decision_function(X_test)
pred_vals = clf.predict(X_test)
pred_vals_own = np.array([predict(x) for x in dec_vals])

for i in range(len(X_test)):
    print('decision_function vals  : ', dec_vals[i])
    print('sklearns prediction     : ', pred_vals[i])
    print('own prediction using dec: ', pred_vals_own[i])
Output:

decision_function vals  :  [-0.76867027 -1.04536032 -1.60216452]
sklearns prediction     :  2
own prediction using dec:  2
decision_function vals  :  [-1.19939987 -0.64932285  1.6951256 ]
sklearns prediction     :  1
own prediction using dec:  1
decision_function vals  :  [ 1.11946664  1.05573131  0.06261988]
sklearns prediction     :  0
own prediction using dec:  0
decision_function vals  :  [-0.46107656 -1.09842529 -1.50671611]
sklearns prediction     :  2
own prediction using dec:  2
decision_function vals  :  [ 1.2094164   1.12827802  0.1415261 ]
sklearns prediction     :  0
own prediction using dec:  0
decision_function vals  :  [-0.47736819 -0.99988924 -2.15027278]
sklearns prediction     :  2
own prediction using dec:  2
decision_function vals  :  [ 1.25467104  1.13814461  0.07643985]
sklearns prediction     :  0
own prediction using dec:  0
decision_function vals  :  [-1.07557745 -0.87436887  0.93179222]
sklearns prediction     :  1
own prediction using dec:  1
decision_function vals  :  [-1.05047139 -0.88027404  0.80181305]
sklearns prediction     :  1
own prediction using dec:  1
decision_function vals  :  [-1.24310627 -0.70058067  1.906847  ]
sklearns prediction     :  1
own prediction using dec:  1
decision_function vals  :  [-0.78440125 -1.00630434 -0.99963088]
sklearns prediction     :  2
own prediction using dec:  2
decision_function vals  :  [-1.12586024 -0.84193093  1.25542752]
sklearns prediction     :  1
own prediction using dec:  1
decision_function vals  :  [-1.15639222 -0.91555677  1.07438865]
sklearns prediction     :  1
own prediction using dec:  1
decision_function vals  :  [-1.14345638 -0.90050709  0.95795276]
sklearns prediction     :  1
own prediction using dec:  1
decision_function vals  :  [-1.15790163 -0.95844647  0.83046875]
sklearns prediction     :  1
own prediction using dec:  1
decision_function vals  :  [ 1.17805731  1.11063472  0.1333462 ]
sklearns prediction     :  0
own prediction using dec:  0
decision_function vals  :  [-1.20283096 -0.93961585  0.98410451]
sklearns prediction     :  1
own prediction using dec:  1
decision_function vals  :  [-1.22782802 -0.90725712  1.05316513]
sklearns prediction     :  1
own prediction using dec:  1
decision_function vals  :  [ 1.16903803  1.12221984  0.11367107]
sklearns prediction     :  0
own prediction using dec:  0
decision_function vals  :  [ 1.17145967  1.10832227  0.08212776]
sklearns prediction     :  0
own prediction using dec:  0
decision_function vals  :  [-0.9506135  -1.08467062 -0.79851794]
sklearns prediction     :  2
own prediction using dec:  2
decision_function vals  :  [-1.16266048 -0.9573001   0.79179457]
sklearns prediction     :  1
own prediction using dec:  1
decision_function vals  :  [ 0.99991983  0.99976567  0.27258784]
sklearns prediction     :  0
own prediction using dec:  0
decision_function vals  :  [ 1.14009372  1.04646327  0.05173163]
sklearns prediction     :  0
own prediction using dec:  0
decision_function vals  :  [-1.08080806 -1.03404209 -0.06411027]
sklearns prediction     :  2
own prediction using dec:  2
decision_function vals  :  [ 1.23515997  1.01235174 -0.03884014]
sklearns prediction     :  0
own prediction using dec:  0
decision_function vals  :  [ 0.99958361  1.0123953   0.31647776]
sklearns prediction     :  0
own prediction using dec:  0
decision_function vals  :  [-1.21958703 -0.8018796   1.67844367]
sklearns prediction     :  1
own prediction using dec:  1
decision_function vals  :  [-1.03327108 -0.25946619  1.1567434 ]
sklearns prediction     :  1
own prediction using dec:  1
decision_function vals  :  [ 1.12368215  1.11169071  0.20956223]
sklearns prediction     :  0
own prediction using dec:  0
decision_function vals  :  [-0.82416303 -1.07792277 -1.1580516 ]
sklearns prediction     :  2
own prediction using dec:  2
decision_function vals  :  [-1.13071754 -0.96096255  0.65828256]
sklearns prediction     :  1
own prediction using dec:  1
decision_function vals  :  [ 1.194643    1.12966124  0.15746621]
sklearns prediction     :  0
own prediction using dec:  0
decision_function vals  :  [-1.04070512 -1.04532308 -0.20319486]
sklearns prediction     :  2
own prediction using dec:  2
decision_function vals  :  [-0.70170723 -1.09340841 -1.9323473 ]
sklearns prediction     :  2
own prediction using dec:  2
decision_function vals  :  [-1.24655214 -0.74489305  1.15450078]
sklearns prediction     :  1
own prediction using dec:  1
decision_function vals  :  [ 0.99984598  1.03781258  0.2790073 ]
sklearns prediction     :  0
own prediction using dec:  0
decision_function vals  :  [-0.99993896 -1.06846079 -0.44496083]
sklearns prediction     :  2
own prediction using dec:  2
decision_function vals  :  [-1.22495071 -0.83041964  1.41965874]
sklearns prediction     :  1
own prediction using dec:  1
decision_function vals  :  [-1.286798   -0.72689128  1.72244026]
sklearns prediction     :  1
own prediction using dec:  1
decision_function vals  :  [-0.75503345 -1.09561165 -1.44344022]
sklearns prediction     :  2
own prediction using dec:  2
decision_function vals  :  [ 1.24778268  1.11179415  0.05277115]
sklearns prediction     :  0
own prediction using dec:  0
decision_function vals  :  [-0.79577073 -1.00004599 -0.99974376]
sklearns prediction     :  2
own prediction using dec:  2
decision_function vals  :  [ 1.07018075  1.0831253   0.22181655]
sklearns prediction     :  0
own prediction using dec:  0
decision_function vals  :  [ 1.16705531  1.11326796  0.15604895]
sklearns prediction     :  0
own prediction using dec:  0

Thanks for your answer! I understand the explanation of the negative decision-function values. But when we take the arg max of [-1.18273287 -0.64851109 1.50296097] it returns 2, while the true class of that second row is '1'. Where am I wrong again?

Sascha is right, libsvm uses one-vs-one (OvO). I was misled by the shape of the decision_function output. I have edited my answer, sorry for confusing you. I still kept part of the original answer, because now you can see how two different multi-class strategies lead to different predictions.

Thanks! This is one of the most understandable explanations I have read.