Scikit-learn FastText: can't get cross-validation scores
I am struggling to implement FastText into a pipeline that iterates over different vectorizers. More specifically, I can't get cross-validation scores. I am using the following code:
%%time
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score, train_test_split
from sklearn.pipeline import Pipeline
from gensim.utils import simple_preprocess
from gensim.sklearn_api.ftmodel import FTTransformer

np.random.seed(0)

data = pd.read_csv('https://pastebin.com/raw/dqKFZ12m')
X_train, X_test, y_train, y_test = train_test_split(data.text, data.label, random_state=0)

w2v_texts = [simple_preprocess(doc) for doc in X_train]
models = [FTTransformer(size=10, min_count=0, seed=42)]
classifiers = [LogisticRegression(random_state=0)]

for model in models:
    for classifier in classifiers:
        model.fit(w2v_texts)
        classifier.fit(model.transform(X_train), y_train)
        pipeline = Pipeline([
            ('vec', model),
            ('clf', classifier)
        ])
        print(pipeline.score(X_train, y_train))
        # print(model.gensim_model.wv.most_similar('kirk'))

cross_val_score(pipeline, X_train, y_train, scoring='accuracy', cv=5)
The last line fails with an error like:

KeyError: "all ngrams for word ... absent from model"

How can I solve this?
Side note: my other pipelines with D2VTransformer or TfidfVectorizer work fine. With those, I can simply call pipeline.fit(X_train, y_train) after defining the pipeline, instead of the two separate fits shown above. FTTransformer just doesn't seem to integrate as well with the other vectorizers.

Answer: Yes, to be usable in the pipeline, FTTransformer needs to be modified so that it splits documents into words inside its fit method. We can do that like this:
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score, train_test_split
from sklearn.pipeline import Pipeline
from gensim.utils import simple_preprocess
from gensim.sklearn_api.ftmodel import FTTransformer

np.random.seed(0)

class FTTransformer2(FTTransformer):
    def fit(self, x, y=None):
        # Tokenize the raw documents before handing them to FTTransformer,
        # so the pipeline can be fit on plain strings.
        super().fit([simple_preprocess(doc) for doc in x])
        return self

data = pd.read_csv('https://pastebin.com/raw/dqKFZ12m')
X_train, X_test, y_train, y_test = train_test_split(data.text, data.label, random_state=0)

classifiers = [LogisticRegression(random_state=0)]

for classifier in classifiers:
    pipeline = Pipeline([
        ('ftt', FTTransformer2(size=10, min_count=0, seed=0)),
        ('clf', classifier)
    ])
    score = cross_val_score(pipeline, X_train, y_train, scoring='accuracy', cv=5)
    print(score)
Comment (asker): This seems to work, thank you very much. One last question: what if all the data is prepared like this:

X_train, X_test, y_train, y_test = train_test_split([simple_preprocess(doc) for doc in data.text], data.label, random_state=0)

Other transformers, such as D2VTransformer, require input in that form, so I'd like to know how to tell FTTransformer to use the unprocessed data instead (which seems odd, since the processed data is lemmatized and so on). If I prepare the data like that, the classifier no longer works.

Comment (answerer): That doesn't work because FTTransformer's transform method expects documents that are not split into words.

Comment (asker): I see... so basically I also can't lemmatize the text, because the fitted model would no longer correspond to the loaded data?

Comment (answerer): You could probably write a similar custom transform method that accepts tokenized documents; then you could pre-tokenize (and lemmatize, if needed) the text before feeding it into train_test_split. But FastText works at the subword level, so lemmatization should be redundant with FastText anyway.

Comment (asker): I think I'll open a separate post about that. I need to use D2V, FT and CV in one pipeline iteration.