Python: "Number of features of the model must match the input" when trying to predict new unseen data

Tags: python, numpy, text-classification

I trained a model on some Wikipedia articles that fall into two categories (12 articles in each category).

Here is how I created the model, trained it, and pickled it:

import numpy as np
import re
import nltk
from sklearn.datasets import load_files
import pickle
from nltk.corpus import stopwords
data = load_files(r'[...]review_polarity')
X, y = data.data, data.target
documents = []
from nltk.stem import WordNetLemmatizer
stemmer = WordNetLemmatizer()
for sen in range(0, len(X)):  
    # Remove all the special characters
    document = re.sub(r'\W', ' ', str(X[sen]))

    # remove all single characters
    document = re.sub(r'\s+[a-zA-Z]\s+', ' ', document)

    # Remove single characters from the start
    document = re.sub(r'\^[a-zA-Z]\s+', ' ', document) 

    # Substituting multiple spaces with single space
    document = re.sub(r'\s+', ' ', document, flags=re.I)

    # Removing prefixed 'b'
    document = re.sub(r'^b\s+', '', document)

    # Converting to Lowercase
    document = document.lower()

    # Lemmatization
    document = document.split()

    document = [stemmer.lemmatize(word) for word in document]
    document = ' '.join(document)

    documents.append(document)

from sklearn.feature_extraction.text import TfidfTransformer
tfidfconverter = TfidfTransformer()
X = tfidfconverter.fit_transform(X).toarray()

from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

from sklearn.ensemble import RandomForestClassifier
classifier = RandomForestClassifier(n_estimators=1000,random_state=0)
classifier.fit(X_train, y_train)
y_pred = classifier.predict(X_test)

with open('text_classifier', 'wb') as picklefile:
    pickle.dump(classifier, picklefile)
Then I loaded the pickle file and tried to predict the class of a new, unseen article:

import pickle
import sys, os
import re
from nltk.corpus import stopwords
from nltk.stem import WordNetLemmatizer
from sklearn.feature_extraction.text import TfidfVectorizer

with open(os.path.join(sys.path[0], 'text_classifier'), 'rb') as training_model:
    model = pickle.load(training_model)

with open(os.path.join(sys.path[0], 'article.txt'), 'rb') as f:
    X = [f.read()]

documents = []
stemmer = WordNetLemmatizer()

for sen in range(0, len(X)):  
    # Remove all the special characters
    document = re.sub(r'\W', ' ', str(X[sen]))

    # remove all single characters
    document = re.sub(r'\s+[a-zA-Z]\s+', ' ', document)

    # Remove single characters from the start
    document = re.sub(r'\^[a-zA-Z]\s+', ' ', document) 

    # Substituting multiple spaces with single space
    document = re.sub(r'\s+', ' ', document, flags=re.I)

    # Removing prefixed 'b'
    document = re.sub(r'^b\s+', '', document)

    # Converting to Lowercase
    document = document.lower()

    # Lemmatization
    document = document.split()

    document = [stemmer.lemmatize(word) for word in document]
    document = ' '.join(document)

    documents.append(document)

tfidfconverter = TfidfVectorizer(max_features=1500, min_df=0, max_df=1.0, stop_words=stopwords.words('english'))
X = tfidfconverter.fit_transform(documents).toarray()

y_pred = model.predict(X)
print(y_pred)
Calling the predict function gives the following error:

Number of features of the model must match the input. Model n_features is 10 and input n_features is 47

It seems the new article ends up as a numpy array with 47 features, while the trained model expects arrays with 10 features. I'm not sure I've understood this correctly, and I'd be very glad of any help in understanding it better and getting it to work.


Thanks

The answer was that I should use the transform function instead of fit_transform on the new unseen data, so that the number of features stays the same.
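A minimal sketch of that idea, assuming the same preprocessing loop as in the question and a hypothetical second pickle file named vectorizer.pkl: fit a single TfidfVectorizer on the training documents only, then save the fitted vectorizer alongside the classifier so its vocabulary can be reused at prediction time.

import pickle
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_extraction.text import TfidfVectorizer

# 'documents' and 'y' come from the preprocessing loop shown in the question
vectorizer = TfidfVectorizer(max_features=1500, stop_words='english')
X = vectorizer.fit_transform(documents).toarray()  # fit only on the training corpus

classifier = RandomForestClassifier(n_estimators=1000, random_state=0)
classifier.fit(X, y)

# persist both the classifier and the fitted vectorizer,
# so the same feature set (vocabulary) is available later
with open('text_classifier', 'wb') as picklefile:
    pickle.dump(classifier, picklefile)
with open('vectorizer.pkl', 'wb') as picklefile:
    pickle.dump(vectorizer, picklefile)

The saved vectorizer's transform method then always produces vectors in exactly the feature space the classifier was trained on.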

Welcome to StackOverflow. Please read and follow the posting guidelines in the help documentation, as suggested when you created this account; they apply here as well. We can't help you effectively until you post MCVE code and specify the problem precisely: we should be able to paste your posted code into a text file and reproduce the problem you describe. StackOverflow is not a coding, review, or tutorial resource. Yes, you have interpreted the error message correctly. Where is your attempt to trace that condition through the code? At the very least, insert strategic print statements to check the control and data flow. See this lovely blog for help.

I doubt that tracing the error will help. I suspect a fundamental misunderstanding of how training and prediction work: the features used in training must be consistent with the features used at prediction time. I think some more study of the sklearn documentation is in order.

Thanks, I will definitely take a look at the sklearn documentation. I may not have phrased it correctly, so here is my actual question: the TfidfVectorizer returned the same number of features for the articles used as the training and test data sets, but it returned a different number of features for the new unseen article. How do I make sure the number of features matches? Or better, how do I run the trained model on new unseen data (articles)?
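To address that last question, a rough sketch of the prediction side, under the same assumptions as above (the vectorizer was pickled as the hypothetical vectorizer.pkl, and the new article is cleaned with the same preprocessing loop as in the question): load both objects and call transform, never fit_transform, on the new text.

import pickle

with open('text_classifier', 'rb') as f:
    model = pickle.load(f)
with open('vectorizer.pkl', 'rb') as f:
    vectorizer = pickle.load(f)

# 'documents' holds the cleaned text of the new article(s),
# preprocessed exactly as in the training script
X_new = vectorizer.transform(documents).toarray()  # reuse the training vocabulary
print(model.predict(X_new))

Because transform only looks words up in the vocabulary learned during fitting, the new article is always mapped to the same number of features the model was trained on.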