Python: how do I convert an sklearn Pipeline into plain step-by-step code?
I have some training code that uses a Pipeline, and I want to convert it into plain code like this:
X_train = predictors.fit_transform(X_train)
X_train = bow_vector.fit_transform(X_train)
classifier.fit(X_train)
But I keep getting errors, and a quick read of the docs didn't help.
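One likely cause of the errors: a transformer's `fit_transform` takes only `X`, but the final estimator's `fit` needs both `X` and `y`, so `classifier.fit(X_train)` is missing the labels. A minimal sketch of unrolling a pipeline into explicit steps (the step names and sample data here are illustrative, not from the original code):

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline

docs = ["good movie", "bad movie", "great film", "awful film"]
labels = [1, 0, 1, 0]

# Pipeline version
pipe = Pipeline([("tfidf", TfidfVectorizer()),
                 ("clf", LogisticRegression())])
pipe.fit(docs, labels)

# Unrolled version: call each step explicitly.
# Transformers: fit_transform on training data, transform on test data.
tfidf = TfidfVectorizer()
X_train = tfidf.fit_transform(docs)

# The final estimator's fit requires BOTH X and y.
clf = LogisticRegression()
clf.fit(X_train, labels)

# Both routes train the same model on the same features.
assert (pipe.predict(docs) == clf.predict(tfidf.transform(docs))).all()
```

The key point is that only the transformer steps get `fit_transform`; the last step is an estimator and is fitted with `fit(X, y)`.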
UPD
My exact code is:
import string
import pandas as pd
import spacy
from spacy.lang.en import English
from sklearn.base import TransformerMixin
from sklearn.feature_extraction.text import CountVectorizer, TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

df = pd.read_excel('data.xlsx')
X = df['X']
ylabels = df['y']
X_train, X_test, y_train, y_test = train_test_split(X, ylabels, test_size=0.3, random_state=42)
# Punctuation list
punctuations = string.punctuation

# NLP engine
nlp = spacy.load('en')

# Stop words list
stop_words = spacy.lang.en.stop_words.STOP_WORDS

# Load English tokenizer, tagger, parser, NER and word vectors
parser = English()

# Tokenizer
def spacy_tokenizer(sentence):
    # Create a token object
    mytokens = parser(sentence)
    # Lemmatize each token and convert it to lowercase
    mytokens = [ word.lemma_.lower().strip() if word.lemma_ != "-PRON-" else word.lower_ for word in mytokens ]
    # Remove stop words and punctuation
    mytokens = [ word for word in mytokens if word not in stop_words and word not in punctuations ]
    # Return the preprocessed list of tokens
    return mytokens
# First element of the pipeline
class predictors(TransformerMixin):
    def transform(self, X, **transform_params):
        # Clean the text
        return [clean_text(text) for text in X]

    def fit(self, X, y=None, **fit_params):
        return self

    def get_params(self, deep=True):
        return {}
# Basic function to clean the text
def clean_text(text):
    # Strip surrounding whitespace and convert the text to lowercase
    return text.strip().lower()
UPD: I solved my problem:
tfidf_vector = TfidfVectorizer(tokenizer = spacy_tokenizer)
cleaner = predictors()
X_train_cleaned = cleaner.transform(X_train)
X_train_transformed = tfidf_vector.fit_transform(X_train_cleaned)
classifier = LogisticRegression(solver='lbfgs')
classifier.fit(X_train_transformed, y_train)
cleaner = predictors()
X_test_cleaned = cleaner.transform(X_test)
X_test_transformed = tfidf_vector.transform(X_test_cleaned)
Comments: So what is the error, and what is your exact code? We need to know the data types of the variables. You should be able to call all of these methods on the pipeline, e.g. `X_train = pipe.fit_transform(X_train)`. If you just want to use the pipeline, there is no need to "disassemble" it. See the docs. Related: I know I can do everything at once in the pipeline, but I want to dig deeper and see step by step what happens to X_train. To do that, I want to run the first operation, look at X_train, then run the second, and so on. @sokolov0 The goal is to post a minimal reproducible example, not the whole code.