
Python: creating an evaluation metric from scratch in TensorFlow


I am using the TensorFlow Estimator API together with TensorBoard, and I am trying to create a custom evaluation metric that is not already in tf.compat.v1.metrics (the way "accuracy" is). My eval model_fn is:

def _model_fn_eval(self, mode, features, labels, endpoints, logits,
                   use_logits):
    """This is the EVAL part of model_fn."""
    if mode != tf.estimator.ModeKeys.EVAL:
        return None
    if use_logits:
        eval_predictions = logits
    else:
        eval_predictions = endpoints['Predictions']
    variant_type = features['variant_type']
    eval_metrics = (
        eval_metric_fn, [labels, eval_predictions, variant_type])
    if not self.use_tpu:
        for name, value in eval_metrics[0](*eval_metrics[1]).items():
            tf.compat.v1.summary.scalar(tensor=value, name=name)
    return eval_metrics
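
For context, the (metric_fn, tensors) tuple returned here is the shape that TPUEstimatorSpec's eval_metrics argument expects, while a non-TPU EstimatorSpec takes the already-evaluated dict in eval_metric_ops. A rough sketch of that wiring (make_eval_spec is just an illustrative name, not from my code):

# Hypothetical helper, only to illustrate where the (metric_fn, tensors)
# tuple returned by _model_fn_eval ends up; everything except the TF APIs
# is an assumption.
import tensorflow as tf

def make_eval_spec(mode, loss, eval_metrics, use_tpu):
    if use_tpu:
        # TPUEstimatorSpec accepts the (metric_fn, tensors) tuple as-is.
        return tf.compat.v1.estimator.tpu.TPUEstimatorSpec(
            mode=mode, loss=loss, eval_metrics=eval_metrics)
    # A plain EstimatorSpec instead takes the evaluated dict of
    # name -> (value_tensor, update_op) pairs.
    metric_fn, tensors = eval_metrics
    return tf.estimator.EstimatorSpec(
        mode=mode, loss=loss, eval_metric_ops=metric_fn(*tensors))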
This function uses eval_metric_fn:

def eval_metric_fn(labels, predictions, variant_types):
    """Calculate eval metrics from Tensors, on CPU host.

    Args:
      labels: the ground-truth labels for the examples.
      predictions: the predicted labels for the examples.
      variant_types: variant types (int64 of EncodedVariantType.value) as a tensor
        of (batch_size,) or None. The types of these variants. If None, no type
        specific evals will be performed.

    Returns:
      A dictionary of string name to metric.
    """
    predicted_classes = tf.argmax(input=predictions, axis=1)

    metrics = {}

    metrics['accuracy'] = tf.compat.v1.metrics.accuracy(
        labels=labels, predictions=predicted_classes)
    return metrics
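
As a sanity check on the shapes involved: tf.compat.v1.metrics.accuracy already returns the (value_tensor, update_op) pair that eval_metric_ops expects, which is why the 'accuracy' entry above works. A tiny standalone illustration (the toy tensors are made up):

import tensorflow as tf

# v1-style metrics need graph mode when run outside an Estimator's model_fn.
tf.compat.v1.disable_eager_execution()

# Toy tensors, only to show the return shape of the built-in metric.
labels = tf.constant([0, 1, 1, 2], dtype=tf.int64)
predicted_classes = tf.constant([0, 1, 0, 2], dtype=tf.int64)

# accuracy() returns a (value_tensor, update_op) tuple: update_op accumulates
# counts batch by batch, value_tensor reads the running result.
acc_value, acc_update_op = tf.compat.v1.metrics.accuracy(
    labels=labels, predictions=predicted_classes)
metrics = {'accuracy': (acc_value, acc_update_op)}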
The metric I want to implement records the percentage of samples that the estimator classifies as class XX.

What I have tried: naively, I tried adding this piece of code to eval_metric_fn:

# a function to measure the percentage of predictions with each label
per_label_predictions = lambda label: tf.math.reduce_mean(
    tf.cast(
        (tf.equal(tf.cast(predicted_classes, tf.int64), label)),
        tf.float32))
predictions_per_label = {f'percentage_{l}_predictions':
                             per_label_predictions(l)
                         for l in range(NUM_CLASSES)}
I expected that appending this dictionary to the metrics dict would simply add (key, value) entries whose values are plain floats. Instead, I got this error:

  File "/home/yonatan/anaconda3/envs/yonatan_env/lib/python3.8/site-packages/tensorflow_estimator/python/estimator/model_fn.py", line 475, in _validate_eval_metric_ops
    raise TypeError(
TypeError: Values of eval_metric_ops must be (metric_value, update_op) tuples, given: Tensor("Mean_10:0", shape=(), dtype=float32) for key: percentage_0_predictions

Process finished with exit code 1
I now understand that a metric is not just a simple float, and that there is also an update_op I should implement. However, I cannot seem to find a good example of this. TF's example uses a pre-built evaluation metric (here:), and a similar question does not make sense to me because it assumes some prior knowledge, I guess (here:).
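
From reading the error, my current (unverified) guess is that wrapping the per-label indicator in tf.compat.v1.metrics.mean would satisfy the (value, update_op) contract, since mean() itself returns such a pair and handles the streaming bookkeeping across batches. Is something like this the intended pattern?

import tensorflow as tf

# Unverified sketch: predicted_classes and NUM_CLASSES are the same names used
# in eval_metric_fn above; per_label_prediction_metrics is a name I made up.
def per_label_prediction_metrics(predicted_classes, num_classes):
    metrics = {}
    for label in range(num_classes):
        # 0/1 indicator for "this example was predicted as `label`".
        is_label = tf.cast(
            tf.equal(tf.cast(predicted_classes, tf.int64), label), tf.float32)
        # tf.compat.v1.metrics.mean streams the mean of the indicator across
        # eval batches and returns the (value_tensor, update_op) pair itself.
        metrics[f'percentage_{label}_predictions'] = tf.compat.v1.metrics.mean(
            is_label)
    return metrics

# Inside eval_metric_fn this would become:
# metrics.update(per_label_prediction_metrics(predicted_classes, NUM_CLASSES))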

Let me know if more information is needed.