Implementation of the EM algorithm for Gaussian mixture models
Using the EM algorithm, I want to train a Gaussian mixture model with four components on a given data set. The set is three-dimensional and contains 300 samples.

The problem is that after about 6 rounds of the EM algorithm, the covariance matrices sigma become close to singular according to MATLAB (rank(sigma) = 2 instead of 3). This in turn leads to undesired results, like complex values when evaluating the Gaussian distribution gm(k,i).

Furthermore, I used the log of the Gaussian to account for underflow troubles (see the E-step). I am not sure if this is correct and if I have to take the exp of the responsibilities p(w_k | x^(i), theta) somewhere else.

Can you tell me if my implementation of the EM algorithm is correct so far? And how can I account for the problem with the close-to-singular covariance sigma?
Here is my implementation of the EM algorithm:

First, I initialized the means and the covariances of the components using kmeans:
load('data1.mat');
X = Data'; % 300x3 data set
D = size(X,2); % dimension
N = size(X,1); % number of samples
K = 4; % number of Gaussian Mixture components

% Initialization
p = [0.2, 0.3, 0.2, 0.3]; % arbitrary pi
[idx,mu] = kmeans(X,K); % initial means of the components

% compute the covariance of the components
sigma = zeros(D,D,K);
for k = 1:K
    sigma(:,:,k) = cov(X(idx==k,:));
end
For the E-step I use the following formula to compute the responsibilities:

p(w_k | x^(i), theta) = p(k) * N(x^(i) | mu_k, sigma_k) / sum_j p(j) * N(x^(i) | mu_j, sigma_j)

where w_k is the k-th Gaussian component, x^(i) is a single data point (sample), and theta stands for the parameters of the Gaussian mixture model: mu, sigma, pi.
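For comparison, the responsibility formula can be evaluated stably by normalizing in log space (the log-sum-exp trick). This is a NumPy sketch with made-up names, not the MATLAB code from the question:

```python
import numpy as np

def responsibilities(X, mu, sigma, p):
    """E-step in log space: column i holds p(w_k | x_i, theta) for each component k."""
    N, D = X.shape
    K = len(p)
    log_gm = np.empty((K, N))
    for k in range(K):
        diff = X - mu[k]                                  # rows are x_i - mu_k
        _, logdet = np.linalg.slogdet(sigma[k])           # log det without under/overflow
        maha = np.sum(diff * np.linalg.solve(sigma[k], diff.T).T, axis=1)
        log_gm[k] = np.log(p[k]) - 0.5 * (D * np.log(2 * np.pi) + logdet + maha)
    # Normalize in log space: subtracting the per-sample maximum guarantees that
    # the largest term exponentiates to exactly 1, never to 0.
    m = log_gm.max(axis=0)
    log_norm = m + np.log(np.sum(np.exp(log_gm - m), axis=0))
    return np.exp(log_gm - log_norm)                      # each column sums to 1
```

Note that the exp is taken only after the per-sample normalizer has been subtracted, so the ratio itself never underflows for the dominant component.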
Here is the corresponding code:
% variables for convergence
converged = 0;
prevLoglikelihood = Inf;
prevMu = mu;
prevSigma = sigma;
prevPi = p;
round = 0;

while (converged ~= 1)
    round = round + 1
    gm = zeros(K,N); % gaussian component in the nominator
    sumGM = zeros(N,1); % denominator of responsibilities

    % E-step: Evaluate the responsibilities using the current parameters
    % compute the nominator and denominator of the responsibilities
    for k = 1:K
        for i = 1:N
            Xmu = X(i,:) - mu(k,:);
            % I am using log to prevent underflow of the gaussian distribution (exp("small value"))
            logPdf = log(1/sqrt(det(sigma(:,:,k))*(2*pi)^D)) + (-0.5*Xmu*(sigma(:,:,k)\Xmu'));
            gm(k,i) = log(p(k)) * logPdf;
            sumGM(i) = sumGM(i) + gm(k,i);
        end
    end
    % calculate responsibilities
    res = zeros(K,N); % responsibilities
    Nk = zeros(4,1);
    for k = 1:K
        for i = 1:N
            % I tried to use the exp(gm(k,i)/sumGM(i)) to compute res but this leads to sum(pi) > 1.
            res(k,i) = gm(k,i)/sumGM(i);
        end
        Nk(k) = sum(res(k,:));
    end
Without the log, the E-step looked like this; here the exponential underflows to zero:

    gm = zeros(K,N); % gaussian component in the nominator -
                     % some values evaluate to zero
    sumGM = zeros(N,1); % denominator of responsibilities

    % E-step: Evaluate the responsibilities using the current parameters
    % compute the nominator and denominator of the responsibilities
    for k = 1:K
        for i = 1:N
            % HERE values evaluate to zero, e.g. exp(-746.6228) = 0
            gm(k,i) = p(k)/sqrt(det(sigma(:,:,k))*(2*pi)^D)*exp(-0.5*(X(i,:)-mu(k,:))*inv(sigma(:,:,k))*(X(i,:)-mu(k,:))');
            sumGM(i) = sumGM(i) + gm(k,i);
        end
    end
Nk(k) is computed using the formula given in the M-step and is used in the M-step to compute the new probabilities p(k).

M-step:

    mu_k    = (1/Nk(k)) * sum_i res(k,i) * x^(i)
    sigma_k = (1/Nk(k)) * sum_i res(k,i) * (x^(i) - mu_k) * (x^(i) - mu_k)'
    p(k)    = Nk(k) / N

Now, to check for convergence, the log-likelihood is computed using this formula:

    logL = sum_i log( sum_k p(k) * N(x^(i) | mu_k, sigma_k) )
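The M-step updates and the log-likelihood can be sketched in NumPy as follows (an illustration with made-up function names, assuming a responsibility matrix res of shape K x N):

```python
import numpy as np

def m_step(X, res):
    """M-step: re-estimate pi, mu, sigma from responsibilities res of shape (K, N)."""
    N, D = X.shape
    Nk = res.sum(axis=1)                        # effective sample count per component
    p = Nk / N                                  # new mixing weights pi_k (sum to 1)
    mu = (res @ X) / Nk[:, None]                # responsibility-weighted means
    sigma = np.empty((res.shape[0], D, D))
    for k in range(res.shape[0]):
        diff = X - mu[k]
        sigma[k] = (res[k, :, None] * diff).T @ diff / Nk[k]
    return p, mu, sigma

def log_likelihood(X, mu, sigma, p):
    """logL = sum_i log(sum_k pi_k N(x_i | mu_k, sigma_k)), via log-sum-exp."""
    N, D = X.shape
    log_gm = np.empty((len(p), N))
    for k in range(len(p)):
        diff = X - mu[k]
        _, logdet = np.linalg.slogdet(sigma[k])
        maha = np.sum(diff * np.linalg.solve(sigma[k], diff.T).T, axis=1)
        log_gm[k] = np.log(p[k]) - 0.5 * (D * np.log(2 * np.pi) + logdet + maha)
    m = log_gm.max(axis=0)
    return float(np.sum(m + np.log(np.sum(np.exp(log_gm - m), axis=0))))
```

The convergence check then compares successive log-likelihood values rather than the raw likelihood, which would underflow for 300 samples.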
Furthermore, I noticed that the means from the kmeans initialization are completely different from the means computed in the M-step in the next round.

kmeans:
mu = 13.500000000000000 0.026602138870044 0.062415945993735
88.500000000000000 -0.009869960132085 -0.075177888210981
39.000000000000000 -0.042569305020309 0.043402772876513
64.000000000000000 -0.024519281362918 -0.012586980924762
After the M-step:
round = 2
mu = 1.000000000000000 0.077230046948357 0.024498886414254
2.000000000000000 0.074260118474053 0.026484346404660
3.000000000000002 0.070944016105476 0.029043085983168
4.000000000000000 0.067613431480832 0.031641849205021
In the following rounds, mu does not change at all; it stays the same as in round 2. I guess this is caused by the underflow in gm(k,i)? Either my scaling implementation is incorrect, or the whole implementation of the algorithm is wrong somewhere :(
EDIT 2:

After four rounds I got NaN values and looked at gm in more detail. Considering only one sample (and without the 0.5 factor), gm becomes zero in all components. Put into MATLAB: gm(:,1) = [0 0 0 0]. This in turn leads to sumGM being equal to 0 -> NaN, because I divided by 0. Here are the values of mu and gm(:,1) over the first four rounds:
round = 1
mu = 62.0000 -0.0298 -0.0078
37.0000 -0.0396 0.0481
87.5000 -0.0083 -0.0728
12.5000 0.0303 0.0614
gm(:,1) = [11.7488, 0.0000, 0.0000, 0.0000]
round = 2
mu = 1.0000 0.0772 0.0245
2.0000 0.0743 0.0265
3.0000 0.0709 0.0290
4.0000 0.0676 0.0316
gm(:,1) = [0.0000, 0.0000, 0.0000, 0.3128]
round = 3
mu = 1.0000 0.0772 0.0245
2.0000 0.0743 0.0265
3.0000 0.0709 0.0290
4.0000 0.0676 0.0316
gm(:,1) = [0, 0, 0.0000, 0.2867]
round = 4
mu = 1.0000 0.0772 0.0245
NaN NaN NaN
3.0000 0.0709 0.0290
4.0000 0.0676 0.0316
gm(:,1) = 1.0e-105 * [0, NaN, 0, 0.5375]
First, the means don't seem to change, and they are completely different from the kmeans initialization. And according to the output of gm(:,1), every sample (not only the first one as shown here) corresponds to only one Gaussian component. Shouldn't each sample be "partially distributed" among all Gaussian components?
EDIT 3:

So I guess the problem with mu not changing was the first line in the M-step: mu = zeros(K,3);

To account for the underflow problem, I am currently trying to use the log of the Gaussian:
function logPdf = logmvnpdf(X, mu, sigma, D)
    Xmu = X-mu;
    logPdf = log(1/sqrt(det(sigma)*(2*pi)^D)) + (-0.5*Xmu*inv(sigma)*Xmu');
end
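Side note: the same log-density can be computed without forming det() or inv() explicitly, which is numerically safer when sigma is badly conditioned. A NumPy sketch (the function name mirrors the MATLAB one only for comparison):

```python
import numpy as np

def logmvnpdf(x, mu, sigma):
    """Log of the multivariate normal density, without explicit det() or inv().

    slogdet returns log(det(sigma)) directly, avoiding under/overflow in the
    determinant, and solve() replaces multiplication by an explicit inverse.
    """
    D = len(x)
    diff = x - mu
    _, logdet = np.linalg.slogdet(sigma)          # log(det(sigma))
    maha = diff @ np.linalg.solve(sigma, diff)    # diff' * inv(sigma) * diff
    return -0.5 * (D * np.log(2 * np.pi) + logdet + maha)
```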
The new problem is the covariance matrix sigma. MATLAB claims:

Warning: Matrix is close to singular or badly scaled. Results may be inaccurate.

After 6 rounds I get imaginary values for gm (the Gaussian distribution).
The updated E-step now looks like this:
gm = zeros(K,N); % gaussian component in the nominator
sumGM = zeros(N,1); % denominator of responsibilities

for k = 1:K
    for i = 1:N
        %gm(k,i) = p(k)/sqrt(det(sigma(:,:,k))*(2*pi)^D)*exp(-0.5*Xmu*inv(sigma(:,:,k))*Xmu');
        %gm(k,i) = p(k)*mvnpdf(X(i,:),mu(k,:),sigma(:,:,k));
        gm(k,i) = log(p(k)) + logmvnpdf(X(i,:), mu(k,:), sigma(:,:,k), D);
        sumGM(i) = sumGM(i) + gm(k,i);
    end
end
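On the near-singular sigma: the complex values most likely appear because roundoff pushes det(sigma) of a rank-deficient covariance slightly negative, and sqrt of a negative number is imaginary. A small numeric illustration of the rank problem and one common mitigation, adding a small ridge to each covariance after the M-step (the epsilon value here is arbitrary, not from the question):

```python
import numpy as np

# A rank-2 covariance in 3 dimensions: its third eigenvalue is exactly 0,
# so det(sigma) is 0 up to roundoff (and may come out slightly negative).
A = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
sigma = A @ A.T
assert np.linalg.matrix_rank(sigma) == 2

# Common mitigation: regularize with a small multiple of the identity,
# which keeps sigma symmetric positive definite (eps chosen arbitrarily).
eps = 1e-6
sigma_reg = sigma + eps * np.eye(3)
assert np.all(np.linalg.eigvalsh(sigma_reg) > 0)
```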
Answer:

It looks like you should be able to use a scale factor scale(i) to bring gm(k,i) into a representable range, because if you multiply gm(k,i) by scale(i), this ends up multiplying sumGM(i) as well, and it cancels out when you compute res(k,i) = gm(k,i)/sumGM(i).

In theory, I would let scale(i) = 1/max_k(exp(-0.5*(X(i,:)-mu(k,:))*inv(sigma(:,:,k))*(X(i,:)-mu(k,:))')), and actually compute it without doing the exponentiation: you end up working with its log, max_k(-0.5*(X(i,:)-mu(k,:))*inv(sigma(:,:,k))*(X(i,:)-mu(k,:))'). This gives you a common term that you can add to each -0.5*(...) exponent before calling exp(), and it keeps at least the maximum in a representable range. Any value that still underflows to zero after this correction you don't care about, because it is vanishingly small compared to the other contributions.

Comments:

- Thanks for your help! Unfortunately, the algorithm only runs for a few more rounds (about 4 to 8) before the underflow occurs again.

- I claim that at least one value will not underflow, because that value will be equal to the scale factor and the subtraction will leave a value of 0. I would debug this by trying to find out why this argument fails. Looking at your code, I don't understand why you have Xmu - 0.5*scale(i,:). I also don't understand the loop computing scale(i,:), but I have little experience with MATLAB. If you are at all unsure about MATLAB's vector facilities, you can always replace them or check them against completely serial code, as an experiment.

- Thanks for your effort! I have to admit I am not entirely sure how/why this responsibility in the E-step works. Correct me if I am wrong, but I understand that every data point gets partially assigned to all Gaussian components (with different weights). Now, considering only one sample, the problem is that all values in the nominator of the responsibilities become zero. Put into MATLAB code for the first sample: gm(:,1) = [0 0 0 0]. This in turn leads to sumGM = 0 and therefore to NaN values, because I divide by zero. I have given more details in EDIT 2.

- I suspect what you have is a MATLAB debugging problem, but there is a p…
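The scaling described in the answer above is exactly the log-sum-exp trick. A minimal numeric illustration (the log values below are made up, chosen so far below zero that a naive exp underflows to exactly 0):

```python
import numpy as np

# Hypothetical per-component log numerators log(p_k * N(x | mu_k, sigma_k))
# for one sample; all are so negative that exp() underflows directly.
log_gm = np.array([-746.6, -750.1, -760.3, -748.2])
assert np.all(np.exp(log_gm) == 0.0)            # naive exp: every component is 0

# Subtract the maximum (the answer's scale factor, applied in log space):
shifted = log_gm - log_gm.max()                 # the largest entry becomes exactly 0
res = np.exp(shifted) / np.exp(shifted).sum()   # responsibilities for this sample
assert np.isclose(res.sum(), 1.0)               # the scale factor cancels in the ratio
```

At least one shifted value is exactly 0, so its exp is exactly 1 and sumGM can never be zero, which removes the 0/0 -> NaN failure described in EDIT 2.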