Python: Is the same TF-IDF vocabulary used in k-fold cross-validation?

Tags: python, scikit-learn, cross-validation, tf-idf

I am doing text classification based on the TF-IDF vector space model, with no more than 3000 samples. For a fair evaluation, I am evaluating the classifier with 5-fold cross-validation. What confuses me is whether the TF-IDF vector space model must be rebuilt in each round of the cross-validation. That is, do I need to rebuild the vocabulary and recompute the IDF value of every term in the vocabulary in each fold?

Currently I am doing the TF-IDF transformation with the scikit-learn toolkit and training the classifier with an SVM. My approach is as follows: first I split the samples at a 3:1 ratio, where 75% are used to fit the parameters of the TF-IDF vector space model. The parameters here are the size of the vocabulary, the terms it contains, and the IDF value of each term in the vocabulary. Then I transform the remaining 25% into TF-IDF vectors and use those vectors for 5-fold cross-validation with an SVM (notably, the earlier 75% of the samples are not used in this transformation).

My code is as follows:

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.model_selection import train_test_split, cross_validate
from sklearn.svm import SVC

# train/test split; the train data is used only to fit TfidfVectorizer()
x_train, x_test, y_train, y_test = train_test_split(data_x, data_y, train_size=0.75, random_state=0)
tfidf = TfidfVectorizer()
tfidf.fit(x_train)

# vectorize the test data for 5-fold cross-validation
x_test = tfidf.transform(x_test)

scoring = ['accuracy']
clf = SVC(kernel='linear')
scores = cross_validate(clf, x_test, y_test, scoring=scoring, cv=5, return_train_score=False)
print(scores)
My confusion is whether my way of doing the TF-IDF transformation and the 5-fold cross-validation is correct, or whether it is necessary to rebuild the TF-IDF vector space model using the training data of each fold and then transform both the training and test data into TF-IDF vectors. Specifically:

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.model_selection import StratifiedKFold
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score

skf = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
for train_index, test_index in skf.split(data_x, data_y):
    x_train, x_test = data_x[train_index], data_x[test_index]
    y_train, y_test = data_y[train_index], data_y[test_index]

    # refit the vectorizer on this fold's training split only
    tfidf = TfidfVectorizer()
    x_train = tfidf.fit_transform(x_train)
    x_test = tfidf.transform(x_test)

    clf = SVC(kernel='linear')
    clf.fit(x_train, y_train)
    y_pred = clf.predict(x_test)
    score = accuracy_score(y_test, y_pred)
    print(score)

The StratifiedKFold approach you took when building the TfidfVectorizer() is correct; it ensures that features are generated only from the training dataset.

If you build the TfidfVectorizer() on the whole dataset, then even though we never explicitly feed the test dataset to the model, the test dataset leaks into it. Parameters such as the size of the vocabulary and the IDF value of each term in the vocabulary would be quite different when the test documents are included.
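To make the leakage concrete, here is a minimal sketch (with made-up toy documents; the variable names are illustrative only) showing that the fitted vocabulary differs as soon as test documents are included in the fit:

```python
from sklearn.feature_extraction.text import TfidfVectorizer

# Hypothetical toy corpus: two training documents, one test document
train_docs = ["the cat sat", "the dog ran"]
test_docs = ["a parrot talked"]

# Fit once on the training documents only, once on everything
vec_train = TfidfVectorizer().fit(train_docs)
vec_all = TfidfVectorizer().fit(train_docs + test_docs)

# Terms that occur only in the test documents enter the vocabulary
# (and shift the IDF values) only when the test set is part of the fit.
print(sorted(vec_train.vocabulary_))  # no 'parrot'
print(sorted(vec_all.vocabulary_))    # includes 'parrot'
```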

A simpler way is to use a pipeline together with cross-validation.

Use this:

from sklearn.pipeline import make_pipeline
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.model_selection import cross_validate
from sklearn.svm import SVC

clf = make_pipeline(TfidfVectorizer(), SVC(kernel='linear'))

scores = cross_validate(clf, data_x, data_y, scoring=['accuracy'], cv=5, return_train_score=False)
print(scores)

Note: running cross-validation only on the test data is not useful. We have to run it on the [train + validation] dataset.
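As a runnable sketch of the pipeline approach (with made-up toy data standing in for data_x and data_y), cross_validate refits the TfidfVectorizer inside each fold on that fold's training split, so it behaves like the manual StratifiedKFold loop above without any leakage:

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.model_selection import cross_validate, StratifiedKFold
from sklearn.svm import SVC

# Hypothetical toy dataset: 10 short documents, balanced binary labels
data_x = np.array(["good movie", "bad movie", "great film", "awful film",
                   "nice plot", "terrible plot", "good acting", "bad acting",
                   "great story", "awful story"])
data_y = np.array([1, 0, 1, 0, 1, 0, 1, 0, 1, 0])

# The vectorizer and classifier are fit together per fold by cross_validate
clf = make_pipeline(TfidfVectorizer(), SVC(kernel='linear'))
cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
scores = cross_validate(clf, data_x, data_y, scoring=['accuracy'], cv=cv)
print(scores['test_accuracy'])  # one accuracy per fold
```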

Right. Yes.