
Python: Why does my custom streaming metric always give different results when run on the same inputs?


I am trying to learn how to create my own custom streaming metrics in Tensorflow.

I started by trying to write my own function to compute the f1 score.

Here is what I have so far:

import tensorflow as tf
import numpy as np
from sklearn.metrics import precision_recall_fscore_support, f1_score, precision_score

sess = tf.InteractiveSession()

# Custom streaming metric to compute f1 score.
# Code is from answer to https://stackoverflow.com/questions/44764688/custom-metric-based-on-tensorflows-streaming-metrics-returns-nan/44935895
def metric_fn(predictions=None, labels=None, weights=None):
    P, update_op1 = tf.contrib.metrics.streaming_precision(predictions, labels)
    R, update_op2 = tf.contrib.metrics.streaming_recall(predictions, labels)
    eps = 1e-5
    return (2*(P*R)/(P+R+eps), tf.group(update_op1, update_op2))


# True labels
labels = np.array([1, 0, 0, 1])
# Predicted labels
preds = np.array([1, 1, 0, 1])

f1 = metric_fn(preds, labels)

init1 = tf.global_variables_initializer()
init2 = tf.local_variables_initializer()
sess.run([init1, init2])

# Check result with output from sklearn
print(f1_score(labels, preds))

# Run a custom metric a few times
print(sess.run(f1))
print(sess.run(f1))
print(sess.run(f1))
Here is the output I get:

0.8
(0.0, None)
(0.99999624, None)
(0.79999518, None)
The first line is the f1 score computed with sklearn's f1_score function, which is correct. The rest comes from metric_fn.

I do not understand the output of metric_fn. Why does its result keep changing even though I give it the same input? Also, none of its results are correct, even though the formula I coded up seems right. What do I need to change to get the correct result?

The erratic numbers come from fetching the metric value and its update op in a single sess.run call: TensorFlow makes no guarantee about the order in which they execute, so the value you read may reflect zero, one, or both of the precision/recall updates. You can split the output of metric_fn in two:

f1_value, update_op = metric_fn(preds, labels)
where f1_value is the current value of the score, and update_op is the op that takes in new values of preds and labels and updates the f1 score.

So in this context, you should change your code in this way:

import tensorflow as tf
import numpy as np
from sklearn.metrics import precision_recall_fscore_support, f1_score, precision_score

sess = tf.InteractiveSession()

# Custom streaming metric to compute f1 score.
# Code is from answer to https://stackoverflow.com/questions/44764688/custom-metric-based-on-tensorflows-streaming-metrics-returns-nan/44935895
def metric_fn(predictions=None, labels=None, weights=None):
    P, update_op1 = tf.contrib.metrics.streaming_precision(predictions, labels)
    R, update_op2 = tf.contrib.metrics.streaming_recall(predictions, labels)
    eps = 1e-5
    return (2*(P*R)/(P+R+eps), tf.group(update_op1, update_op2))


# True labels
labels = np.array([1, 0, 0, 1])
# Predicted labels
preds = np.array([1, 1, 0, 1])

f1_value, update_op = metric_fn(preds, labels)

init1 = tf.global_variables_initializer()
init2 = tf.local_variables_initializer()
sess.run([init1, init2])

# Check result with output from sklearn
print(f1_score(labels, preds))

# Run a custom metric a few times
print(sess.run(f1_value))
print(sess.run(update_op))
print(sess.run(f1_value))
And you get, as expected:

0.8 # Obtained with sklearn
0.0 # Value of f1_value before calling update_op
None # update_op does not return anything
0.799995 # Value of f1_value after calling update_op
Note that update_op returns None simply because an op created with tf.group has no output. Run on their own, update_op1 and update_op2 would return the updated precision and recall, i.e. roughly 0.667 and 1.0, respectively.
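
If you want to see the streaming behaviour itself, here is a minimal sketch (not from the original answer; the placeholder names preds_ph and labels_ph are mine) that feeds batches in through placeholders. Running update_op1 and update_op2 on their own shows the values above, and a second batch is accumulated into the same running counts instead of being scored from scratch:

import tensorflow as tf

sess = tf.InteractiveSession()

# Placeholders let us stream new batches through the same metric ops.
preds_ph = tf.placeholder(tf.int64, [None])
labels_ph = tf.placeholder(tf.int64, [None])

P, update_op1 = tf.contrib.metrics.streaming_precision(preds_ph, labels_ph)
R, update_op2 = tf.contrib.metrics.streaming_recall(preds_ph, labels_ph)
eps = 1e-5
f1_value = 2 * (P * R) / (P + R + eps)

# Streaming metrics keep their running counts in local variables.
sess.run(tf.local_variables_initializer())

batch1 = {preds_ph: [1, 1, 0, 1], labels_ph: [1, 0, 0, 1]}
print(sess.run(update_op1, batch1))  # updated precision: ~0.667
print(sess.run(update_op2, batch1))  # updated recall: 1.0
print(sess.run(f1_value))            # ~0.8, matching sklearn

# A second batch is folded into the running counts, not scored in isolation.
sess.run([update_op1, update_op2], {preds_ph: [0, 1], labels_ph: [1, 1]})
print(sess.run(f1_value))            # ~0.75, the f1 over all six examples seen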