
Python: Why doesn't the Trainer report evaluation metrics during training in the tutorial?

Tags: python, huggingface-transformers, transformer

I followed this tutorial to learn the Trainer API.

I copied the code as follows:

from datasets import load_dataset

import numpy as np
from datasets import load_metric

metric = load_metric("accuracy")

def compute_metrics(eval_pred):
    logits, labels = eval_pred
    predictions = np.argmax(logits, axis=-1)
    return metric.compute(predictions=predictions, references=labels)

print('Download dataset ...')
raw_datasets = load_dataset("imdb")
from transformers import AutoTokenizer

print('Tokenize text ...')
tokenizer = AutoTokenizer.from_pretrained("bert-base-cased")
def tokenize_function(examples):
    return tokenizer(examples["text"], padding="max_length", truncation=True)
tokenized_datasets = raw_datasets.map(tokenize_function, batched=True)

print('Prepare data ...')
small_train_dataset = tokenized_datasets["train"].shuffle(seed=42).select(range(500))
small_eval_dataset = tokenized_datasets["test"].shuffle(seed=42).select(range(500))
full_train_dataset = tokenized_datasets["train"]
full_eval_dataset = tokenized_datasets["test"]

print('Define model ...')
from transformers import AutoModelForSequenceClassification
model = AutoModelForSequenceClassification.from_pretrained("bert-base-cased", num_labels=2)

print('Define trainer ...')
from transformers import TrainingArguments, Trainer
training_args = TrainingArguments("test_trainer", evaluation_strategy="epoch")
trainer = Trainer(
    model=model,
    args=training_args,
    train_dataset=small_train_dataset,
    eval_dataset=small_eval_dataset,
    compute_metrics=compute_metrics,
)

print('Fine-tune train ...')
trainer.evaluate()
However, it does not report any training metrics; it only prints the following:

Download dataset ...
Reusing dataset imdb (/Users/congminmin/.cache/huggingface/datasets/imdb/plain_text/1.0.0/4ea52f2e58a08dbc12c2bd52d0d92b30b88c00230b4522801b3636782f625c5b)
Tokenize text ...
100%|██████████| 25/25 [00:06<00:00,  4.01ba/s]
100%|██████████| 25/25 [00:06<00:00,  3.99ba/s]
100%|██████████| 50/50 [00:13<00:00,  3.73ba/s]
Prepare data ...
Define model ...
Some weights of the model checkpoint at bert-base-cased were not used when initializing BertForSequenceClassification: ['cls.seq_relationship.weight', 'cls.predictions.transform.LayerNorm.weight', 'cls.seq_relationship.bias', 'cls.predictions.transform.dense.bias', 'cls.predictions.bias', 'cls.predictions.decoder.weight', 'cls.predictions.transform.LayerNorm.bias', 'cls.predictions.transform.dense.weight']
- This IS expected if you are initializing BertForSequenceClassification from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPreTraining model).
- This IS NOT expected if you are initializing BertForSequenceClassification from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model).
Some weights of BertForSequenceClassification were not initialized from the model checkpoint at bert-base-cased and are newly initialized: ['classifier.weight', 'classifier.bias']
You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference.
Define trainer ...
Fine-tune train ...
100%|██████████| 63/63 [08:35<00:00,  8.19s/it]

Process finished with exit code 0
The evaluate function returns the metrics but does not print them. Does

metrics = trainer.evaluate()
print(metrics)

work? Also, the warning messages say you are using the base BERT model, which is not pretrained for sentence classification; it is only the base language model. So its classification head has no task-specific initial weights and should be trained first.
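For intuition, here is a self-contained sketch of what the question's compute_metrics does, with a plain NumPy accuracy standing in for load_metric("accuracy") so nothing needs to be downloaded (the logits and labels below are made up for illustration):

```python
import numpy as np

def compute_metrics(eval_pred):
    # Trainer passes a (logits, labels) pair to this callback
    logits, labels = eval_pred
    predictions = np.argmax(logits, axis=-1)
    # stand-in for metric.compute(...): fraction of correct predictions
    return {"accuracy": float((predictions == labels).mean())}

# dummy logits for 4 examples, 2 classes
logits = np.array([[0.1, 0.9], [0.8, 0.2], [0.3, 0.7], [0.6, 0.4]])
labels = np.array([1, 0, 1, 1])
print(compute_metrics((logits, labels)))  # {'accuracy': 0.75}
```

During training, the Trainer only invokes this callback when it actually runs an evaluation loop; a bare evaluate() call runs it once and returns the resulting dict instead of printing it.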

Why are you calling

trainer.evaluate()

? That only runs evaluation on the validation set. If you want to fine-tune or train, you need to run:

trainer.train()
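To make the call order concrete, here is a minimal sketch assuming the trainer object from the question's script (run_finetune is a hypothetical helper for illustration, not part of the Trainer API):

```python
def run_finetune(trainer):
    # Fine-tune first; with evaluation_strategy="epoch" the Trainer
    # also logs eval metrics at the end of each epoch.
    trainer.train()
    # evaluate() returns a dict of metrics; print it to see the values.
    metrics = trainer.evaluate()
    print(metrics)
    return metrics
```

With only evaluate() and no train(), the newly initialized classification head is scored untrained, and the returned metrics are silently discarded unless you print or log them.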

Hi Samer, but the tutorial is entirely about fine-tuning a pretrained language model so that it can be used for sentence classification. Right? The code I pasted from the tutorial includes the fine-tuning for sentence classification.