Python TypeError: __init__() got an unexpected keyword argument 'do_farasa_tokenization'

I get the following error when I run this code:

from arabert.preprocess import ArabertPreprocessor, never_split_tokens
from farasa.stemmer import FarasaStemmer

stemmer = FarasaStemmer(interactive=True)
train_df['tweet'] = train_df['tweet'].apply(lambda x: ArabertPreprocessor(x, do_farasa_tokenization=True , farasa=stemmer, use_farasapy = True))

TypeError: __init__() got an unexpected keyword argument 'do_farasa_tokenization'

Where is the function ArabertPreprocessor defined? The import from arabert.preprocess import ArabertPreprocessor, never_split_tokens points at the arabert library, so you have to check its source on GitHub. It seems that do_farasa_tokenization is not an __init__ parameter of the ArabertPreprocessor class.
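For reference, in recent arabert releases the preprocessor is configured through the model name and applied with its preprocess method, rather than through Farasa-related keyword arguments. A minimal sketch, assuming a current arabert install and the aubmindlab/bert-base-arabertv2 model name:

from arabert.preprocess import ArabertPreprocessor

# Segmentation behaviour is selected via the model name,
# not via a do_farasa_tokenization flag.
arabert_prep = ArabertPreprocessor(model_name="aubmindlab/bert-base-arabertv2")
train_df['tweet'] = train_df['tweet'].apply(arabert_prep.preprocess)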
Thanks, but I have seen plenty of notebooks that use it just fine; it still does not work for me, though. In the end I used something a bit different that does what I need:
from farasa.stemmer import FarasaStemmer

stemmer = FarasaStemmer(interactive=True)
train_df['tweet'].apply(lambda x: stemmer.stem(x))
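Note that Series.apply returns a new Series rather than modifying train_df in place, so if you want to keep the stemmed text, assign the result back, e.g.:

train_df['tweet'] = train_df['tweet'].apply(stemmer.stem)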