
Python scikit-learn kernel PCA explained variance


I have been using scikit-learn's regular PCA and getting the explained variance ratio for each principal component without any problem:

import sklearn.decomposition

pca = sklearn.decomposition.PCA(n_components=3)
pca_transform = pca.fit_transform(feature_vec)
var_values = pca.explained_variance_ratio_
I want to explore different kernels with kernel PCA and would also like to get the explained variance ratio, but I now see that it does not have this attribute. Does anyone know how to get these values?

kpca = sklearn.decomposition.KernelPCA(kernel=kernel, n_components=3)
kpca_transform = kpca.fit_transform(feature_vec)
var_values = kpca.explained_variance_ratio_

AttributeError: 'KernelPCA' object has no attribute 'explained_variance_ratio_'

I know this question is old, but I ran into the same "problem" and found an easy solution when I realized that pca.explained_variance_ is simply the variance of the components. You can compute the explained variance (and ratio) yourself by doing:

import numpy

kpca_transform = kpca.fit_transform(feature_vec)
explained_variance = numpy.var(kpca_transform, axis=0)
explained_variance_ratio = explained_variance / numpy.sum(explained_variance)
And as a bonus, to get the cumulative proportion of explained variance (often useful for choosing the number of components and estimating the dimensionality of your space):
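The code for that bonus step did not survive in this copy; a minimal sketch, assuming the explained_variance_ratio array computed just above:

# cumulative proportion of variance explained by the first k components
cumulative_explained_variance_ratio = numpy.cumsum(explained_variance_ratio)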


The main reason kernel PCA does not expose an explained variance ratio is that after the kernel transformation your data/vectors live in a different feature space. Hence kernel PCA is not meant to be interpreted the same way as PCA.
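If you still want a comparable number directly from the fitted estimator, a hedged alternative is to normalize the eigenvalues of the centered kernel matrix that KernelPCA stores (the attribute is eigenvalues_ in scikit-learn 1.0+ and lambdas_ in older releases). Note that this measures variance explained in the kernel feature space, and only over the components you kept; feature_vec comes from the question above and the rbf kernel here is just an example:

import sklearn.decomposition

kpca = sklearn.decomposition.KernelPCA(kernel='rbf', n_components=3)
kpca.fit(feature_vec)

# eigenvalues of the centered kernel matrix, one per retained component;
# the attribute name depends on the scikit-learn version
eigenvalues = getattr(kpca, 'eigenvalues_', None)
if eigenvalues is None:
    eigenvalues = kpca.lambdas_

kernel_space_variance_ratio = eigenvalues / eigenvalues.sum()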

I was also interested in this, so I did some testing. Below is my code.

The plots show that the first component of KernelPCA is a better discriminator of the dataset. However, when the explained variance ratios are computed following @EelkeSpaak's explanation, we only see a 50% explained variance ratio, which does not make sense. Hence I am inclined to agree with @Krishna Kalyan's explanation.

#get data
from sklearn.datasets import make_moons 
import numpy as np
import matplotlib.pyplot as plt

x, y = make_moons(n_samples=100, random_state=123)
plt.scatter(x[y==0, 0], x[y==0, 1], color='red', marker='^', alpha=0.5)
plt.scatter(x[y==1, 0], x[y==1, 1], color='blue', marker='o', alpha=0.5)
plt.show()

##seeing effect of linear-pca-------
from sklearn.decomposition import PCA
pca = PCA(n_components=2)
x_pca = pca.fit_transform(x)

x_tx = x_pca
fig, ax = plt.subplots(nrows=1, ncols=2, figsize=(7,3))
ax[0].scatter(x_tx[y==0, 0], x_tx[y==0, 1], color='red', marker='^', alpha=0.5)
ax[0].scatter(x_tx[y==1, 0], x_tx[y==1, 1], color='blue', marker='o', alpha=0.5)
ax[1].scatter(x_tx[y==0, 0], np.zeros((50,1))+0.02, color='red', marker='^', alpha=0.5)
ax[1].scatter(x_tx[y==1, 0], np.zeros((50,1))-0.02, color='blue', marker='o', alpha=0.5)
ax[0].set_xlabel('PC-1')
ax[0].set_ylabel('PC-2')
ax[0].set_ylim([-0.8,0.8])
ax[1].set_ylim([-0.8,0.8])
ax[1].set_yticks([])
ax[1].set_xlabel('PC-1')
plt.show()

##seeing effect of kernelized-pca------
from sklearn.decomposition import KernelPCA
kpca = KernelPCA(n_components=2, kernel='rbf', gamma=15)
x_kpca = kpca.fit_transform(x)


x_tx = x_kpca
fig, ax = plt.subplots(nrows=1, ncols=2, figsize=(7,3))
ax[0].scatter(x_tx[y==0, 0], x_tx[y==0, 1], color='red', marker='^', alpha=0.5)
ax[0].scatter(x_tx[y==1, 0], x_tx[y==1, 1], color='blue', marker='o', alpha=0.5)
ax[1].scatter(x_tx[y==0, 0], np.zeros((50,1))+0.02, color='red', marker='^', alpha=0.5)
ax[1].scatter(x_tx[y==1, 0], np.zeros((50,1))-0.02, color='blue', marker='o', alpha=0.5)
ax[0].set_xlabel('PC-1')
ax[0].set_ylabel('PC-2')
ax[0].set_ylim([-0.8,0.8])
ax[1].set_ylim([-0.8,0.8])
ax[1].set_yticks([])
ax[1].set_xlabel('PC-1')
plt.show()

##comparing the 2 pcas-------

#get the transformer
tx_pca = pca.fit(x)
tx_kpca = kpca.fit(x)

#transform the original data
x_pca = tx_pca.transform(x)
x_kpca = tx_kpca.transform(x)

#for the transformed data, get the explained variances
expl_var_pca = np.var(x_pca, axis=0)
expl_var_kpca = np.var(x_kpca, axis=0)
print('explained variance pca: ', expl_var_pca)
print('explained variance kpca: ', expl_var_kpca)

expl_var_ratio_pca = expl_var_pca / np.sum(expl_var_pca)
expl_var_ratio_kpca = expl_var_kpca / np.sum(expl_var_kpca)

print('explained variance ratio pca: ', expl_var_ratio_pca)
print('explained variance ratio kpca: ', expl_var_ratio_kpca)

Nice find! Just a quick note: this only works when you consider n-1 components, where n is the number of features in the dataset.
@YOLL01, why don't I get an error if I consider fewer than n-1 components? I don't think this is right; I believe @Krishna Kalyan is right (answer below).
Add the plots to the answer; this is the kind of evidence SO needs!
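A short sketch of the point raised in these comments: the manual ratio from np.var is normalized only over the components you keep, so it agrees with PCA's explained_variance_ratio_ only when the retained components account for all of the variance. The data below is purely illustrative (not from the post):

import numpy as np
from sklearn.decomposition import PCA

# illustrative data: 4 features with unequal scales
rng = np.random.RandomState(0)
x = rng.randn(100, 4) * np.array([3.0, 2.0, 1.0, 0.5])

# keeping all 4 components, the manual ratio matches explained_variance_ratio_
pca_full = PCA(n_components=4).fit(x)
ratio_full = np.var(pca_full.transform(x), axis=0)
ratio_full = ratio_full / ratio_full.sum()
print(np.allclose(ratio_full, pca_full.explained_variance_ratio_))  # True

# keeping only 2 components, the manual ratio is renormalized over those 2
# and sums to 1, while explained_variance_ratio_ still divides by the total
# variance of all 4 features and therefore sums to less than 1
pca_2 = PCA(n_components=2).fit(x)
ratio_2 = np.var(pca_2.transform(x), axis=0)
ratio_2 = ratio_2 / ratio_2.sum()
print(ratio_2.sum(), pca_2.explained_variance_ratio_.sum())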