Machine learning: is F1 micro the same as accuracy?

Tags: machine-learning, scikit-learn, svm

I have tried many examples with F1 micro and accuracy in scikit-learn, and in all of them F1 micro comes out equal to accuracy. Is this always true?

Script:

from sklearn import svm
from sklearn import metrics
from sklearn.model_selection import train_test_split
from sklearn.datasets import load_iris
from sklearn.metrics import f1_score, accuracy_score

# prepare dataset
iris = load_iris()
X = iris.data[:, :2]
y = iris.target
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2)

# svm classification
clf = svm.SVC(kernel='rbf', gamma=0.7, C = 1.0).fit(X_train, y_train)
y_predicted = clf.predict(X_test)

# performance
print "Classification report for %s" % clf
print metrics.classification_report(y_test, y_predicted)

print("F1 micro: %1.4f\n" % f1_score(y_test, y_predicted, average='micro'))
print("F1 macro: %1.4f\n" % f1_score(y_test, y_predicted, average='macro'))
print("F1 weighted: %1.4f\n" % f1_score(y_test, y_predicted, average='weighted'))
print("Accuracy: %1.4f" % (accuracy_score(y_test, y_predicted)))
Output:

Classification report for SVC(C=1.0, cache_size=200, class_weight=None, coef0=0.0,
  decision_function_shape=None, degree=3, gamma=0.7, kernel='rbf',
  max_iter=-1, probability=False, random_state=None, shrinking=True,
  tol=0.001, verbose=False)
             precision    recall  f1-score   support

          0       1.00      0.90      0.95        10
          1       0.50      0.88      0.64         8
          2       0.86      0.50      0.63        12

avg / total       0.81      0.73      0.74        30

F1 micro: 0.7333

F1 macro: 0.7384

F1 weighted: 0.7381

Accuracy: 0.7333

F1 micro = accuracy

In classification tasks where every test case is guaranteed to be assigned to exactly one class, micro-F1 is equivalent to accuracy. This is not the case in multi-label classification.
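As a small illustration of the multi-label case (my own sketch, not part of the original answer): for label-indicator arrays, scikit-learn's accuracy_score becomes exact-match (subset) accuracy, so it can diverge from micro-F1.

import numpy as np
from sklearn.metrics import f1_score, accuracy_score

# Multi-label indicator format: each row may have any number of positive labels.
y_true = np.array([[1, 0, 1],
                   [0, 1, 0],
                   [1, 1, 0]])
y_pred = np.array([[1, 0, 0],
                   [0, 1, 0],
                   [1, 0, 0]])

print(f1_score(y_true, y_pred, average='micro'))  # 0.75: micro-averaged over all label slots
print(accuracy_score(y_true, y_pred))             # 0.333...: only one row matches exactly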

I had the same question, so I looked into it and came up with the following:

Thinking about the theory alone, accuracy and the F1 score cannot be identical for every dataset. The reason is that the F1 score does not depend on the true negatives, while accuracy does.

By taking a dataset where f1 == acc and adding true negatives to it, you can get f1 != acc:

>>> from sklearn.metrics import accuracy_score as acc
>>> from sklearn.metrics import f1_score as f1
>>> y_pred = [0, 1, 1, 0, 1, 0]
>>> y_true = [0, 1, 1, 0, 0, 1]
>>> acc(y_true, y_pred)
0.6666666666666666
>>> f1(y_true,y_pred)
0.6666666666666666
>>> y_true = [0, 1, 1, 0, 1, 0, 0, 0, 0]
>>> y_pred = [0, 1, 1, 0, 0, 1, 0, 0, 0]
>>> acc(y_true, y_pred)
0.7777777777777778
>>> f1(y_true,y_pred)
0.6666666666666666

For the case where every instance must be assigned to one (and only one) class, micro-averaged precision, recall, F1 and accuracy are all equal. A simple way to see this is to look at the formulas precision = TP/(TP+FP) and recall = TP/(TP+FN). With micro averaging the per-class TP, FP and FN are summed, so the numerators are the same, and every FN for one class is an FP for another class, which makes the summed denominators the same as well. Since micro precision equals micro recall, their harmonic mean F1 is equal to them too.
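A quick numeric check of this argument (my own sketch, built from the confusion matrix rather than taken from the original answer):

import numpy as np
from sklearn.metrics import confusion_matrix, accuracy_score, f1_score

y_true = [0, 1, 2, 2, 1, 0, 2, 1]
y_pred = [0, 2, 2, 1, 1, 0, 2, 0]

cm = confusion_matrix(y_true, y_pred)
tp = np.diag(cm).sum()              # every correct prediction is a TP for its class
fp = cm.sum(axis=0) - np.diag(cm)   # per-class FP: column sum minus the diagonal
fn = cm.sum(axis=1) - np.diag(cm)   # per-class FN: row sum minus the diagonal

print(fp.sum() == fn.sum())                       # True: each FN of one class is an FP of another
print(tp / (tp + fp.sum()))                       # micro precision = 0.625
print(tp / (tp + fn.sum()))                       # micro recall    = 0.625
print(f1_score(y_true, y_pred, average='micro'))  # 0.625
print(accuracy_score(y_true, y_pred))             # 0.625 -- all four values coincide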

For any such (single-label) input, you should be able to show that:

from sklearn.metrics import accuracy_score as acc
from sklearn.metrics import f1_score as f1
f1(y_true, y_pred, average='micro') == acc(y_true, y_pred)
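A quick randomized check (my addition, assuming single-label multi-class inputs) confirms the equality up to floating point:

import numpy as np
from sklearn.metrics import accuracy_score as acc
from sklearn.metrics import f1_score as f1

rng = np.random.default_rng(0)
for _ in range(100):
    y_true = rng.integers(0, 5, size=50)   # random single-label targets over 5 classes
    y_pred = rng.integers(0, 5, size=50)   # random single-label predictions
    assert np.isclose(f1(y_true, y_pred, average='micro'), acc(y_true, y_pred))
print("micro-F1 equalled accuracy on every trial")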

So when classifying an imbalanced dataset, accuracy is meaningless, but then micro-F1 is meaningless as well (since they have the same value)?? I read somewhere that for imbalanced datasets one should use micro-F1 rather than macro-F1. What is going on here?

@bikashg You are right: micro-F1 is uninformative for the same reason accuracy is. Did you read that in a paper? Could you link it?
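To make the point in this exchange concrete, here is a small imbalanced example (my own sketch, not from the thread): a classifier that only ever predicts the majority class still gets high accuracy and micro-F1, while macro-F1 exposes the failure on the minority class.

from sklearn.metrics import f1_score, accuracy_score

y_true = [0] * 95 + [1] * 5     # 95% majority class, 5% minority class
y_pred = [0] * 100              # always predict the majority class

print(accuracy_score(y_true, y_pred))             # 0.95
print(f1_score(y_true, y_pred, average='micro'))  # 0.95, identical to accuracy
print(f1_score(y_true, y_pred, average='macro'))  # ~0.49, because the minority-class F1 is 0
# (scikit-learn warns that precision for class 1 is ill-defined and treats it as 0)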