Scikit-learn: SparsePCA not working correctly?
First let me clarify: by "sparse PCA" here I mean PCA with an L1 penalty and sparse loadings, not PCA on a sparse matrix.

I have read the Zou and Hastie paper on sparse PCA, I have read the documentation of sklearn.decomposition.SparsePCA, and I know how to use PCA, but I cannot seem to get the right results from SparsePCA. Namely, when the L1 penalty is 0, the results of SparsePCA should agree with PCA, but the loadings differ greatly. To make sure I had not messed up any hyperparameters, I used the same hyperparameters (convergence tolerance, maximum iterations, ridge penalty, lasso penalty, ...) with spca from the R package elasticnet, and R gave the correct results. I would rather not have to read the source code of SparsePCA, so if anyone has experience with this function and could point out any mistake I made, please let me know.

Below is how I generated the dataset. It is a bit convoluted because I wanted a specific Markov decision process to test some reinforcement learning algorithms. Just treat it as some non-sparse dataset.
import numpy as np
from sklearn.decomposition import PCA, SparsePCA
import numpy.random as nr
def transform(data, TranType=None):
    # Apply an elementwise nonlinearity, clipped to keep values bounded
    if TranType == 'quad':
        data = np.minimum(np.square(data), 3)
    if TranType == 'cubic':
        data = np.maximum(np.minimum(np.power(data, 3), 3), -3)
    if TranType == 'exp':
        data = np.minimum(np.exp(data), 3)
    if TranType == 'abslog':
        data = np.minimum(np.log(abs(data)), 3)
    return data
def NewStateGen(OldS, A, TranType, m=0, sd=0.5, nsd=0.1, dim=64):
    # dim needs to be a multiple of 4, and preferably a multiple of 16.
    assert (dim == len(OldS) and dim % 4 == 0)
    TrueDim = dim // 4
    NewS = np.zeros(dim)
    # Generate new state according to action
    if A == 0:
        NewS[range(0, dim, 4)] = transform(OldS[0:TrueDim], TranType) + \
            nr.normal(scale=nsd, size=TrueDim)
        NewS[range(1, dim, 4)] = transform(OldS[0:TrueDim], TranType) + \
            nr.normal(scale=nsd, size=TrueDim)
        NewS[range(2, dim, 4)] = nr.normal(m, sd, size=TrueDim)
        NewS[range(3, dim, 4)] = nr.normal(m, sd, size=TrueDim)
        R = 2 * np.sum(transform(OldS[0:int(np.ceil(dim / 32.0))], TranType)) - \
            np.sum(transform(OldS[int(np.ceil(dim / 32.0)):(dim // 16)], TranType)) + \
            nr.normal(scale=nsd)
    if A == 1:
        NewS[range(0, dim, 4)] = nr.normal(m, sd, size=TrueDim)
        NewS[range(1, dim, 4)] = nr.normal(m, sd, size=TrueDim)
        NewS[range(2, dim, 4)] = transform(OldS[0:TrueDim], TranType) + \
            nr.normal(scale=nsd, size=TrueDim)
        NewS[range(3, dim, 4)] = transform(OldS[0:TrueDim], TranType) + \
            nr.normal(scale=nsd, size=TrueDim)
        R = 2 * np.sum(transform(OldS[int(np.floor(dim / 32.0)):(dim // 16)], TranType)) - \
            np.sum(transform(OldS[0:int(np.floor(dim / 32.0))], TranType)) + \
            nr.normal(scale=nsd)
    return NewS, R
def MDPGen(dim=64, rep=1, n=30, T=100, m=0, sd=0.5, nsd=0.1, TranType=None):
    X_all = np.zeros(shape=(rep*n*T, dim))
    Y_all = np.zeros(shape=(rep*n*T, dim+1))
    A_all = np.zeros(rep*n*T)
    R_all = np.zeros(rep*n*T)
    for j in range(rep*n):
        # Data for a single subject
        X = np.zeros(shape=(T+1, dim))
        A = np.zeros(T)
        R = np.zeros(T)
        X[0] = nr.normal(m, sd, size=dim)
        for i in range(T):
            OldS = X[i]
            # Pick a random action
            A[i] = nr.randint(2)
            # Generate new state according to action
            X[i+1], R[i] = NewStateGen(OldS, A[i], TranType, m, sd, nsd, dim)
        Y = np.concatenate((X[1:(T+1)], R.reshape(T, 1)), axis=1)
        X = X[0:T]
        X_all[(j*T):((j+1)*T)] = X
        Y_all[(j*T):((j+1)*T)] = Y
        A_all[(j*T):((j+1)*T)] = A
        R_all[(j*T):((j+1)*T)] = R
    return {'X': X_all, 'Y': Y_all, 'A': A_all, 'R': R_all, 'rep': rep, 'n': n, 'T': T}
nr.seed(1)
MDP = MDPGen(dim=64, rep=1, n=30, T=90, sd=0.5, nsd=0.1, TranType=None)
X = MDP.get('X').astype(np.float32)
Now I run PCA and SparsePCA. When the lasso penalty alpha is 0, SparsePCA should give the same results as PCA, but it does not. The other hyperparameters here are set to the default values of elasticnet in R. If I use the defaults of SparsePCA, the results are still incorrect.
PCA_model = PCA(n_components=64)
PCA_model.fit(X)
Z = PCA_model.transform(X)
SPCA_model = SparsePCA(n_components=64, alpha=0, ridge_alpha=1e-6, max_iter=200, tol=1e-3)
SPCA_model.fit(X)
SZ = SPCA_model.transform(X)
# Check the first 2 loadings from PCA and SPCA. They are supposed to agree.
print(PCA_model.components_[0:2])
print(SPCA_model.components_[0:2])
# Check the first 2 observations of transformed data. They are supposed to agree.
print(Z[0:2])
print(SZ[0:2])
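One caveat when eyeballing such printouts (a side note, not from the original post): PCA loadings are only identified up to sign, so a sign-insensitive comparison such as the absolute cosine similarity is safer than comparing raw rows. A minimal helper, with a toy call rather than the fitted models above:

```python
import numpy as np

def abs_cosine(u, v):
    # Cosine similarity that ignores the arbitrary sign of a loading vector.
    return abs(np.dot(u, v)) / (np.linalg.norm(u) * np.linalg.norm(v))

# Two loading vectors that differ only in sign are a perfect match:
print(abs_cosine(np.array([1.0, 2.0]), np.array([-1.0, -2.0])))  # → 1.0
```

In the same way, `abs_cosine(PCA_model.components_[0], SPCA_model.components_[0])` would measure agreement of the first loadings regardless of sign flips.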
When the lasso penalty is greater than 0, the results from SparsePCA still differ substantially from what R gives, and based on manual inspection and my understanding of the original paper, R's results are correct. So, is SparsePCA broken? Or am I missing something?

As is often the case: there are many different formulations and implementations. sklearn uses a different implementation with different characteristics. Let's look at how they differ:
- sklearn:()
- Elasticnet: (Zou et al.)
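One concrete structural difference that is easy to check (a minimal sketch on synthetic data, not the MDP data above): PCA always returns orthonormal loadings, whereas sklearn's SparsePCA solves a dictionary-learning-style factorization whose components are in general not orthogonal, so its output cannot be expected to match PCA row for row:

```python
import numpy as np
from sklearn.decomposition import PCA, SparsePCA

rng = np.random.RandomState(0)
X = rng.normal(size=(200, 10))

pca = PCA(n_components=5).fit(X)
spca = SparsePCA(n_components=5, alpha=1, random_state=0).fit(X)

# PCA loadings are orthonormal: V V^T is the identity matrix.
G_pca = pca.components_ @ pca.components_.T
print(np.allclose(G_pca, np.eye(5), atol=1e-6))  # → True

# SparsePCA loadings need not be orthogonal: the off-diagonal
# entries of V V^T are generally nonzero.
G_spca = spca.components_ @ spca.components_.T
print(np.abs(G_spca - np.diag(np.diag(G_spca))).max())
```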
The answer is: not that different. At first I thought it might be the solver, but checking different solvers I get almost identical loadings. See:
nr.seed(1)
MDP = MDPGen(dim=16, rep=1, n=30, T=90, sd=0.5, nsd=0.1, TranType=None)
X = MDP.get('X').astype(np.float32)
PCA_model = PCA(n_components=10, svd_solver='auto', tol=1e-6)
PCA_model.fit(X)
SPCA_model = SparsePCA(n_components=10, alpha=0, ridge_alpha=0)
SPCA_model.fit(X)
PC1 = PCA_model.components_[0]/np.linalg.norm(PCA_model.components_[0])
SPC1 = SPCA_model.components_[0].T/np.linalg.norm(SPCA_model.components_[0])
print(np.dot(PC1, SPC1))
import pylab
pylab.plot(PC1)
pylab.plot(SPC1)
pylab.show()
Thanks for your reply. Interestingly, sklearn's SparsePCA does have `ridge_alpha` as an argument, which I can only imagine is a penalty on the L2 norm. But if p…

`ridge_alpha` is not used in `fit`. It is only used in `transform`, and only helps numerical stability/performance. From the docstring: ridge_alpha : float, amount of ridge shrinkage to apply in order to improve conditioning when calling the transform method.

This also relates to your reasoning about p…
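The point about `ridge_alpha` can be checked directly (a minimal sketch on random data, assuming a reasonably recent scikit-learn): two models that differ only in `ridge_alpha` produce identical loadings, while the transformed scores differ, consistent with the docstring saying it is applied only in `transform`:

```python
import numpy as np
from sklearn.decomposition import SparsePCA

rng = np.random.RandomState(0)
X = rng.normal(size=(100, 10))
X[:, :3] *= 5  # give the first few features most of the variance

# Same data, same random_state; only ridge_alpha differs.
m1 = SparsePCA(n_components=5, alpha=1, ridge_alpha=1e-6, random_state=0).fit(X)
m2 = SparsePCA(n_components=5, alpha=1, ridge_alpha=100.0, random_state=0).fit(X)

# ridge_alpha is ignored during fit, so the loadings agree exactly...
print(np.allclose(m1.components_, m2.components_))  # → True

# ...but it acts as an L2 penalty in the ridge regression used by
# transform, so the scores are shrunk differently.
print(np.allclose(m1.transform(X), m2.transform(X)))  # → False
```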