Python port of MATLAB's matrix power algorithm


I want to port an algorithm from MATLAB to Python. One step in said algorithm involves taking A^(-1/2), where A is a 9x9 square complex matrix. As far as I understand, square roots of matrices (and their inverses) are not unique.

I have been experimenting with approximations using scipy.linalg.fractional_matrix_power, and using the identity A^(-1/2) = exp(-1/2 * log(A)) with the expm and logm matrix functions. The former is very poor, providing only 3 decimal places of precision, while the latter is very accurate for the upper-left elements but degrades progressively as you move down and to the right. This may or may not be a perfectly valid mathematical solution to the expression, but it is insufficient for this application.
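For reference, the two approximations described above can be checked against each other. The following is a minimal sketch using a random 9x9 stand-in matrix (not the actual data); it relies on the fact that in exact arithmetic any inverse square root B of A satisfies B·B·A = I, regardless of which branch of the square root is taken:

```python
import numpy as np
from scipy.linalg import fractional_matrix_power, expm, logm

# random complex 9x9 stand-in for the real data
rng = np.random.default_rng(0)
A = rng.standard_normal((9, 9)) + 1j * rng.standard_normal((9, 9))

B1 = fractional_matrix_power(A, -0.5)  # scipy's direct fractional power
B2 = expm(-0.5 * logm(A))              # the exp/log identity A^(-1/2) = exp(-1/2 * log(A))

# residual of B @ B @ A - I measures how good each inverse square root is
I = np.eye(9)
print(np.linalg.norm(B1 @ B1 @ A - I))
print(np.linalg.norm(B2 @ B2 @ A - I))
```

On a well-conditioned random matrix both residuals should be near machine precision; the differences the question describes only show up on the specific data.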

As a result, I would like to directly implement MATLAB's matrix power algorithm in Python so that I can confirm 100% identical results every time. Does anyone have any insight or documentation on how this works? The more parallelizable the algorithm the better, since the eventual goal is to rewrite it in OpenCL for GPU acceleration.
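For what it's worth, MATLAB's documentation indicates that `^` with a non-integer exponent goes through an eigenvalue decomposition (for diagonalizable matrices), so a first sketch of such a port might look like the following. The function name is my own, and treating this as "MATLAB's algorithm" is an assumption about its internals, not a verified reimplementation:

```python
import numpy as np

def mpower_eig(A, p):
    """Eigendecomposition-based matrix power, A^p = V diag(w**p) V^(-1).
    Assumes A is diagonalizable; sketch of what MATLAB's ^ is believed to do
    for non-integer exponents."""
    w, V = np.linalg.eig(np.asarray(A, dtype=complex))
    # scale column j of V by w[j]**p, then undo the basis change
    return (V * w**p) @ np.linalg.inv(V)

# sanity check: the inverse square root squared equals the inverse
A = np.array([[4.0, 1.0], [2.0, 3.0]], dtype=complex)
B = mpower_eig(A, -0.5)
print(np.allclose(B @ B, np.linalg.inv(A)))  # True
```

Even so, this only matches MATLAB up to the non-uniqueness of eigenvectors and their ordering; bit-identical output would additionally require matching LAPACK routines and floating-point settings.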

Edit: MCVE added as requested:

[[(0.591557294607941+4.33680868994202e-19j), (-0.219707725574605-0.35810724986609j), (-0.121305654177909+0.244558388829046j), (0.155552026648172-0.0180264818714123j), (-0.0537690384136066-0.0630740244116577j), (-0.0107526931263697+0.0397896274845627j), (0.0182892503609312-0.00653264433724856j), (-0.00710188853532244-0.0050445035279044j), (-2.20414002823034e-05+0.00373184532662288j)], [(-0.219707725574605+0.35810724986609j), (0.312038814492119+2.16840434497101e-19j), (-0.109433401402399-0.174379997015402j), (-0.0503362231078033+0.108510948023091j), (0.0631826956936223-0.00992931123813742j), (-0.0219902325360141-0.0233215237172002j), (-0.00314837555001163+0.0148621558916679j), (0.00630295247506065-0.00266790359447072j), (-0.00249343102520442-0.00156160619280611j)], [(-0.121305654177909-0.244558388829046j), (-0.109433401402399+0.174379997015402j), (0.136649392858215-1.76182853028894e-19j), (-0.0434623984527311-0.0669251299161109j), (-0.0168737559719828+0.0393768358149159j), (0.0211288536117387-0.00417146769324491j), (-0.00734306979471257-0.00712443264825166j), (-0.000742681625102133+0.00455752452374196j), (0.00179068247786595-0.000862706240042082j)], [(0.155552026648172+0.0180264818714123j), (-0.0503362231078033-0.108510948023091j), (-0.0434623984527311+0.0669251299161109j), (0.0467980890488569+5.14996031930615e-19j), (-0.0140208255975664-0.0209483313237692j), (-0.00472995448413803+0.0117916398375124j), (0.00589653974090387-0.00134198920550751j), (-0.00202109265416585-0.00184021636458858j), (-0.000150793859056431+0.00116822322464066j)], [(-0.0537690384136066+0.0630740244116577j), (0.0631826956936223+0.00992931123813742j), (-0.0168737559719828-0.0393768358149159j), (-0.0140208255975664+0.0209483313237692j), (0.0136137125669776-2.03287907341032e-20j), (-0.00387854073283377-0.0056769786724813j), (-0.0011741038702424+0.00306007798625676j), (0.00144000687517355-0.000355251914809693j), (-0.000481433965262789-0.00042129815655098j)], 
[(-0.0107526931263697-0.0397896274845627j), (-0.0219902325360141+0.0233215237172002j), (0.0211288536117387+0.00417146769324491j), (-0.00472995448413803-0.0117916398375124j), (-0.00387854073283377+0.0056769786724813j), (0.00347771689075251+8.21621958836671e-20j), (-0.000944046302699304-0.00136521328407881j), (-0.00026318475762475+0.000704212317211994j), (0.00031422288569727-8.10033316327328e-05j)], [(0.0182892503609312+0.00653264433724856j), (-0.00314837555001163-0.0148621558916679j), (-0.00734306979471257+0.00712443264825166j), (0.00589653974090387+0.00134198920550751j), (-0.0011741038702424-0.00306007798625676j), (-0.000944046302699304+0.00136521328407881j), (0.000792908166233942-7.41153828847513e-21j), (-0.00020531962049495-0.000294952695922854j), (-5.36226164765808e-05+0.000145645628243286j)], [(-0.00710188853532244+0.00504450352790439j), (0.00630295247506065+0.00266790359447072j), (-0.000742681625102133-0.00455752452374196j), (-0.00202109265416585+0.00184021636458858j), (0.00144000687517355+0.000355251914809693j), (-0.00026318475762475-0.000704212317211994j), (-0.00020531962049495+0.000294952695922854j), (0.000162971629601464-5.39321759384574e-22j), (-4.03304806590714e-05-5.77159110863666e-05j)], [(-2.20414002823034e-05-0.00373184532662288j), (-0.00249343102520442+0.00156160619280611j), (0.00179068247786595+0.000862706240042082j), (-0.000150793859056431-0.00116822322464066j), (-0.000481433965262789+0.00042129815655098j), (0.00031422288569727+8.10033316327328e-05j), (-5.36226164765808e-05-0.000145645628243286j), (-4.03304806590714e-05+5.77159110863666e-05j), (3.04302590501313e-05-4.10281583826302e-22j)]]

I can think of two explanations, and in both cases I blame user error. In chronological order:

Theory #1 (the subtle one): I suspect you're copying the printed values of the input matrix from one code to the other as input. That is, you're throwing away double precision when you switch codes, and this error gets amplified during the inverse square root calculation.

As proof, I compared MATLAB's inverse square root to the one used in python. For size reasons I'll show a 3x3 example, but, spoiler warning, I did the same with a 9x9 random matrix and got two results with condition numbers 11.245754109790719 (MATLAB) and 11.245754109790818 (numpy). This should tell you something about the similarity of the results without having to save and load the actual matrices between the two codes. I suggest you do that, though: the keywords are scipy.io.loadmat and scipy.io.savemat.

What I did was generate random data in python (because that's what I'm comfortable with):

>>> import numpy as np
>>> print((np.random.rand(3,3) + 1j*np.random.rand(3,3)).tolist())
[[(0.8404782758300281+0.29389006737780765j), (0.741574080512219+0.7944606900644321j), (0.12788250870304718+0.37304665786925073j)], [(0.8583402784463595+0.13952117266781894j), (0.2138809231406249+0.6233427148017449j), (0.7276466404131303+0.6480559739625379j)], [(0.1784816129006297+0.72452362541158j), (0.2870462766764591+0.8891190037142521j), (0.0980355896905617+0.03022344706473823j)]]

By copying this same truncated output into both codes, I guaranteed that the inputs correspond.

An example in MATLAB:

>> M = [(0.8404782758300281+0.29389006737780765j), (0.741574080512219+0.7944606900644321j), (0.12788250870304718+0.37304665786925073j); (0.8583402784463595+0.13952117266781894j), (0.2138809231406249+0.6233427148017449j), (0.7276466404131303+0.6480559739625379j); (0.1784816129006297+0.72452362541158j), (0.2870462766764591+0.8891190037142521j), (0.0980355896905617+0.03022344706473823j)];
>> A = M^(-0.5);
>> format long
>> disp(A)
   0.922112307438377 + 0.919346397931976i   0.108620882045523 - 0.64985043489795i   -0.778737740194425 - 0.320654127149988i
  -0.423384022626231 - 0.842737730824859i   0.592015668030645 + 0.661682656423866i   0.529361991464903 - 0.388343838121371i
  -0.550789874427422 + 0.02112951921025i    0.472026152514446 - 0.502143106675176i   0.942976466768961 + 0.141839849623673i
>> cond(A)

ans =

   3.429368520364765
An example in python:

>>> import numpy as np
>>> from scipy.linalg import fractional_matrix_power
>>> M = [[(0.8404782758300281+0.29389006737780765j), (0.741574080512219+0.7944606900644321j), (0.12788250870304718+0.37304665786925073j)], [(0.8583402784463595+0.13952117266781894j), (0.2138809231406249+0.6233427148017449j), (0.7276466404131303+0.6480559739625379j)], [(0.1784816129006297+0.72452362541158j), (0.2870462766764591+0.8891190037142521j), (0.0980355896905617+0.03022344706473823j)]]
>>> A = fractional_matrix_power(M, -0.5)
>>> print(A)
[[ 0.92211231+0.9193464j   0.10862088-0.64985043j -0.77873774-0.32065413j]
 [-0.42338402-0.84273773j  0.59201567+0.66168266j  0.52936199-0.38834384j]
 [-0.55078987+0.02112952j  0.47202615-0.50214311j  0.94297647+0.14183985j]]
>>> np.linalg.cond(A)
3.4293685203647408
My suspicion is that if you scipy.io.loadmat your matrix into python, do the calculation there, scipy.io.savemat your result and load it back in MATLAB, you'll see less than 1e-12 absolute error between the results (hopefully even smaller).
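A minimal sketch of the python side of that round trip (the file name and temp directory are arbitrary choices of mine):

```python
import os
import tempfile
import numpy as np
from scipy.io import savemat, loadmat

# ship the exact binary doubles between the two codes
# instead of copy-pasting printed (truncated) values
A = np.random.rand(3, 3) + 1j * np.random.rand(3, 3)

path = os.path.join(tempfile.mkdtemp(), 'A.mat')
savemat(path, {'A': A})      # in MATLAB: load('A.mat'), compute, save back
B = loadmat(path)['A']
print(np.array_equal(A, B))  # True: the doubles survive the round trip exactly
```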


Theory #2 (the facepalm one): I suspect you're using python 2, and your -1/2 power is a simple integer division that evaluates to -1:

Here's python 3:

>>> # python 3's // is python 2's /, i.e. integer division
>>> 1/2
0.5
>>> 1//2
0
>>> -1/2
-0.5
>>> -1//2
-1

So, if you're using python 2, then

fractional_matrix_power(M, -1/2)

is actually the inverse of M. The obvious solution is to switch to python 3. The less obvious solution is to keep using python 2 (which, as the example above shows, you shouldn't) and use

from __future__ import division

which overrides the behaviour of the plain / division operator so that it matches the python 3 version, and you'll have one less headache.
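To see the difference concretely, here is a small diagonal example of my own where both powers are easy to check by hand:

```python
import numpy as np
from scipy.linalg import fractional_matrix_power

M = np.diag([4.0, 9.0])
inv_sqrt = fractional_matrix_power(M, -0.5)  # what -1/2 means on python 3
inv      = fractional_matrix_power(M, -1)    # what -1/2 silently becomes on python 2

print(np.allclose(inv_sqrt, np.diag([1/2, 1/3])))  # True: M^(-1/2)
print(np.allclose(inv,      np.diag([1/4, 1/9])))  # True: M^(-1), a very different matrix
```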
