
Tensorflow: How to freeze a TFBertForSequenceClassification pretrained model?


If I'm using the TensorFlow version of a Hugging Face transformer, how do I freeze the weights of the pretrained encoder so that only the weights of the head layers are optimized?

In the PyTorch implementation, this is done with:

for param in model.base_model.parameters():
    param.requires_grad = False

I'd like to do the same with the TensorFlow implementation.

Found a way: freeze the base model before compiling.

from transformers import TFBertForSequenceClassification

model = TFBertForSequenceClassification.from_pretrained("bert-base-uncased")
model.layers[0].trainable = False  # layers[0] is the BERT encoder
model.compile(...)
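The mechanism at work here is the standard Keras `trainable` flag, so it can be illustrated without downloading BERT. The sketch below uses two plain `Dense` layers as stand-ins for the encoder and the head (an illustrative analogy, not the actual `TFBertForSequenceClassification` architecture): setting `trainable = False` on a sub-layer moves its weights out of `trainable_weights`, and the change takes effect at the next `compile()`.

```python
import tensorflow as tf

# Illustrative stand-ins: "base" plays the role of model.layers[0] / model.bert,
# "head" plays the role of the classifier head.
base = tf.keras.layers.Dense(4, name="base")
head = tf.keras.layers.Dense(2, name="head")
model = tf.keras.Sequential([tf.keras.Input(shape=(3,)), base, head])

base.trainable = False  # freeze before compile, as in the answer above

# Only the head's kernel and bias remain trainable.
print(len(model.trainable_weights))      # 2
print(len(model.non_trainable_weights))  # 2
```

With the real model, the same counts can be inspected via `model.summary()`, which reports trainable and non-trainable parameter totals.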

Alternatively:

model.bert.trainable = False
After digging through this thread, I think the following code won't hurt anything on TF2, even if it may be redundant in some cases:

from transformers import TFBertModel

model = TFBertModel.from_pretrained('./bert-base-uncase')
for layer in model.layers:
    layer.trainable = False
    # _trainable is a private attribute; layer.trainable = False alone
    # is normally sufficient.
    for w in layer.weights:
        w._trainable = False
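Whichever variant is used, one way to sanity-check the freeze is to confirm the frozen weights are unchanged after a training step. This is a minimal sketch on a small plain-Keras model rather than `TFBertModel` (the layer names and shapes here are made up for illustration):

```python
import numpy as np
import tensorflow as tf

# Stand-in for the pretrained encoder; a real check would use the BERT layer.
base = tf.keras.layers.Dense(4, name="base")
model = tf.keras.Sequential([tf.keras.Input(shape=(3,)), base,
                             tf.keras.layers.Dense(1, name="head")])

base.trainable = False
model.compile(optimizer="sgd", loss="mse")

before = [w.numpy().copy() for w in base.weights]
x = np.random.rand(8, 3).astype("float32")
y = np.random.rand(8, 1).astype("float32")
model.fit(x, y, epochs=1, verbose=0)
after = [w.numpy() for w in base.weights]

# The frozen weights are bit-for-bit unchanged by the training step.
print(all(np.array_equal(b, a) for b, a in zip(before, after)))  # True
```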