Google Colaboratory: XLNetTokenizer requires the SentencePiece library, but it was not found in your environment

Tags: google-colaboratory, huggingface-transformers, transformer, huggingface-tokenizers

I am trying to implement XLNet on Google Colaboratory, but I get the following error:

ImportError: 
XLNetTokenizer requires the SentencePiece library but it was not found in your environment. Checkout the instructions on the
installation page of its repo: https://github.com/google/sentencepiece#installation and follow the ones
that match your environment.
I also tried the following steps:

!pip install -U transformers
!pip install sentencepiece

from transformers import XLNetTokenizer
tokenizer = XLNetTokenizer.from_pretrained('xlnet-base-cased-spiece.model')
Thanks in advance for your help.

After running !pip install -U transformers and !pip install sentencepiece, please restart the runtime, then execute the rest of the code.
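The restart is needed because transformers checks whether the sentencepiece package is importable when the tokenizer class is loaded; packages installed mid-session are not always visible until the runtime restarts. A minimal sketch of that availability check (the helper name here is ours, not part of the transformers API):

```python
import importlib.util

def sentencepiece_available():
    # True when the sentencepiece package can be imported, i.e. when
    # XLNetTokenizer's dependency check would pass in this environment.
    return importlib.util.find_spec("sentencepiece") is not None

if not sentencepiece_available():
    print("Install sentencepiece and restart the runtime first.")
```

Running this check before constructing the tokenizer makes it obvious whether the install actually took effect in the current session.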

Yes, it works, but I get the following message: "Calling XLNetTokenizer.from_pretrained() with the path to a single file or url is deprecated."

Try this instead:

from transformers import TFXLNetModel, XLNetTokenizer
xlnet_model = 'xlnet-large-cased'
xlnet_tokenizer = XLNetTokenizer.from_pretrained(xlnet_model)

Perfect, it works. Thanks @Anoop kottappuram.