Invalid shape error when using a KNN classifier in Python
Tags: python, machine-learning, scikit-learn, classification, knn

Here are the shapes of the X and Y variables:
X_train, X_test, y_train, y_test = train_test_split(X, Y, test_size=0.33, random_state=42)
## Output for shapes
X_train.shape = (970, 298)
X_test.shape = (478, 298)
len(y_train) = 970
len(y_test) = 478
Now I wrap the KNN estimator in a multi-output classifier:
knn = KNeighborsClassifier(n_neighbors=3)
classifier = MultiOutputClassifier(knn, n_jobs=-1)
classifier.fit(X_train,y_train)
predictions = classifier.predict(X_test)
print classifier.score(y_test,predictions)
When I try to run this, I get the following error:

ValueError: Incompatible dimension for X and Y matrices: X.shape[1] == 3 while Y.shape[1] == 298

I can tell the error is related to the shapes of the variables; perhaps I mixed them up when splitting them into training and test sets. I tried searching but found nothing. What mistake am I making?
Sample: X = (0, 96) 0.24328157992528274
(0, 191) 0.4086854706249901
(0, 279) 0.3597892480519696
(0, 209) 0.6262243704015803
(0, 287) 0.15142673105175225
(0, 44) 0.2839334104854308
(0, 31) 0.27493029497336746
(0, 62) 0.2702778021025414
Y =[1252, 12607, 12596], [12480, 12544, 12547], [1252, 12607, 12547], [12480, 12607, 12547], [12480, 12607, 12596], [1252, 12607, 12547], [12480, 12544, 12547], [1252, 12607, 12596], [1252, 12607, 12596], [12480, 12544, 12547], [12480, 12607, 12596]
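For context, a minimal runnable sketch of this setup (using a small synthetic dataset as a stand-in for the real X and Y above, so the values here are assumptions) reproduces the same error when score is called with (y_test, predictions):

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.multioutput import MultiOutputClassifier

# Synthetic stand-ins: 100 samples, 298 features, 3 label columns
rng = np.random.RandomState(42)
X = rng.rand(100, 298)
Y = rng.randint(0, 3, size=(100, 3))

X_train, X_test, y_train, y_test = train_test_split(
    X, Y, test_size=0.33, random_state=42)

knn = KNeighborsClassifier(n_neighbors=3)
classifier = MultiOutputClassifier(knn, n_jobs=-1)
classifier.fit(X_train, y_train)
predictions = classifier.predict(X_test)

# score(X, y) predicts on its FIRST argument, so passing y_test (3 columns)
# makes KNN compare 3 features against the 298 it was trained on:
try:
    classifier.score(y_test, predictions)
except ValueError as e:
    print(e)  # "Incompatible dimension for X and Y matrices ..."
```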
Comments:

Why are you using a multi-output classifier when your training and test data have only one output in the y vector? Are you perhaps confusing multi-output with multi-class?

This is a question I posted earlier, please take a look. Could you provide a sample of your X and Y data?

Done, please check the edit.

I get this: AttributeError ('list' object has no attribute 'ndim', at if y.ndim == 1).

Answer:

You need to pass X and y to the score function, not y_true and y_pred. From the documentation:
Returns the mean accuracy on the given test data and labels.
In multi-label classification, this is the subset accuracy which is a harsh metric since you require for each sample that each label set be correctly predicted.
Parameters:
X : array-like, shape = (n_samples, n_features)
Test samples.
y : array-like, shape = (n_samples) or (n_samples, n_outputs)
True labels for X.
sample_weight : array-like, shape = [n_samples], optional
Sample weights.
Returns:
score : float
Mean accuracy of self.predict(X) wrt. y
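The "subset accuracy" mentioned above means a sample only counts as correct when every one of its labels is predicted correctly. A small hand-worked illustration (the values are made up):

```python
import numpy as np

y_true = np.array([[1, 0, 2],
                   [0, 1, 2],
                   [1, 1, 0]])
y_pred = np.array([[1, 0, 2],   # all three labels match -> correct
                   [0, 1, 1],   # last label wrong -> whole row counts as wrong
                   [1, 1, 0]])  # all three labels match -> correct

# Subset accuracy: fraction of rows where every label matches
subset_acc = np.mean(np.all(y_true == y_pred, axis=1))
print(subset_acc)  # 2 of 3 rows fully correct
```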
So try:

print classifier.score(X_test, np.array(y_test))
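Note that np assumes numpy is imported as np; wrapping y_test in np.array is what avoids the AttributeError from the comments ('list' object has no attribute 'ndim') when y_test is a plain list. A sketch of the corrected call, again on synthetic stand-in data:

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.multioutput import MultiOutputClassifier

rng = np.random.RandomState(0)
X = rng.rand(100, 298)
Y = rng.randint(0, 3, size=(100, 3))

X_train, X_test, y_train, y_test = train_test_split(
    X, Y, test_size=0.33, random_state=42)

classifier = MultiOutputClassifier(KNeighborsClassifier(n_neighbors=3),
                                   n_jobs=-1)
classifier.fit(X_train, y_train)

# Correct argument order: features first, true labels second.
# score() predicts on X_test internally and returns the subset accuracy.
accuracy = classifier.score(X_test, np.array(y_test))
print(accuracy)
```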