Python: speeding up Kronecker products with NumPy

So I am trying to compute the Kronecker product of two matrices of arbitrary dimension. (In the examples I only use square matrices of equal size.) First I tried kron:
import numpy as np
import time

a = np.random.random((60,60))
b = np.random.random((60,60))
start = time.time()
a = np.kron(a,b)
end = time.time()
print(end - start)
Output: 0.160096406936645
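As an aside, a single time.time() difference is noisy for operations in this range; the timeit module (also used in the answers below) averages several runs. A minimal sketch:

```python
from timeit import timeit
import numpy as np

a = np.random.random((60, 60))
b = np.random.random((60, 60))

# timeit runs the call `number` times and returns the total time, which is
# steadier than a single time.time() delta for operations in the ~0.1 s range
t = timeit(lambda: np.kron(a, b), number=10) / 10
print(t)
```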
To improve the speed, I used tensordot:
a = np.random.random((60,60))
b = np.random.random((60,60))
start = time.time()
a = np.tensordot(a,b,axes=0)
a = np.transpose(a,(0,2,1,3))
a = np.reshape(a,(3600,3600))
end = time.time()
Output: 0.11808371543884277
After searching the web a bit, I found (or at least as I understand it) that NumPy makes an extra copy whenever it has to reshape a tensor that has been transposed.
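This extra copy can be checked directly with np.shares_memory (a small sketch; the 6x6x6x6 shape is just for illustration):

```python
import numpy as np

a = np.random.random((6, 6, 6, 6))
t = np.transpose(a, (0, 2, 1, 3))   # transpose returns a view, no data moved
print(np.shares_memory(a, t))       # True
r = np.reshape(t, (36, 36))         # reshaping the non-contiguous view forces a copy
print(np.shares_memory(a, r))       # False
```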
So I tried the following (this code obviously does not give the Kronecker product of a and b, it was just a test):
My question is: how can I compute the Kronecker product without running into this transpose-related problem? I am just looking for a fast speed-up, so the solution does not have to use tensordot.
Edit
I just found in this post that there is yet another way to do it:
a = np.random.random((60,60))
b = np.random.random((60,60))
c = a
start = time.time()
a = a[:,np.newaxis,:,np.newaxis]*b[np.newaxis,:,np.newaxis,:]
a.shape = (3600,3600)
end = time.time()
test = np.kron(c,b)
print(np.array_equal(a,test))
print(end-start)
Output: True
0.05503702163696289
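For reference, the same broadcasting trick works for arbitrary (non-square) shapes; a small sketch, with kron_broadcast being an illustrative name of my choosing:

```python
import numpy as np

def kron_broadcast(a, b):
    # a[:, None, :, None] * b[None, :, None, :] has shape (m, p, n, q);
    # collapsing the paired axes gives the (m*p, n*q) Kronecker product
    m, n = a.shape
    p, q = b.shape
    return (a[:, None, :, None] * b[None, :, None, :]).reshape(m * p, n * q)

a = np.random.random((3, 5))
b = np.random.random((4, 2))
print(np.allclose(kron_broadcast(a, b), np.kron(a, b)))  # True
```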
The question I am still interested in is whether this computation can be sped up even further.
einsum seems to work:
>>> a = np.random.random((60,60))
>>> b = np.random.random((60,60))
>>> ab = np.kron(a,b)
>>> abe = np.einsum('ik,jl', a, b).reshape(3600,3600)
>>> (abe==ab).all()
True
>>> timeit(lambda: np.kron(a, b), number=10)
1.0697475590277463
>>> timeit(lambda: np.einsum('ik,jl', a, b).reshape(3600,3600), number=10)
0.42500176999601535
Simple broadcasting is even faster:
>>> abb = (a[:, None, :, None]*b[None, :, None, :]).reshape(3600,3600)
>>> (abb==ab).all()
True
>>> timeit(lambda: (a[:, None, :, None]*b[None, :, None, :]).reshape(3600,3600), number=10)
0.28011218502069823
Update: using BLAS and Cython we can get another modest (30%) speedup. Decide for yourself whether it is worth the hassle.
[setup.py]
from distutils.core import setup
from Cython.Build import cythonize

setup(name='kronecker',
      ext_modules=cythonize("cythkrn.pyx"))
[cythkrn.pyx]
import cython
cimport scipy.linalg.cython_blas as blas
import numpy as np

@cython.boundscheck(False)
@cython.wraparound(False)
def kron(double[:, ::1] a, double[:, ::1] b):
    cdef int i = a.shape[0]
    cdef int j = a.shape[1]
    cdef int k = b.shape[0]
    cdef int l = b.shape[1]
    cdef int onei = 1
    cdef double oned = 1
    cdef int m, n
    result = np.zeros((i*k, j*l), float)
    cdef double[:, ::1] result_v = result
    for n in range(i):
        for m in range(k):
            # dger accumulates a rank-1 update (alpha*x*y^T); each call
            # writes one row of the result
            blas.dger(&l, &j, &oned, &b[m, 0], &onei, &a[n, 0], &onei, &result_v[m+k*n, 0], &l)
    return result
To build, run cython cythkrn.pyx and then python3 setup.py build.
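For readers unfamiliar with dger: it accumulates an outer product, so every inner iteration fills exactly one row of the result. The loop structure can be sketched in pure NumPy (kron_outer is an illustrative name, not part of the extension module):

```python
import numpy as np

def kron_outer(a, b):
    i, j = a.shape
    k, l = b.shape
    result = np.zeros((i * k, j * l))
    for n in range(i):
        for m in range(k):
            # row m + k*n of the result is the flattened outer product of
            # a[n, :] and b[m, :], which is exactly what dger writes above
            result[m + k * n, :] = np.outer(a[n, :], b[m, :]).ravel()
    return result

a = np.random.random((6, 6))
b = np.random.random((5, 5))
print(np.allclose(kron_outer(a, b), np.kron(a, b)))  # True
```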
>>> from timeit import timeit
>>> import cythkrn
>>> import numpy as np
>>>
>>> a = np.random.random((60,60))
>>> b = np.random.random((60,60))
>>>
>>> np.all(cythkrn.kron(a, b)==np.kron(a, b))
True
>>>
>>> timeit(lambda: cythkrn.kron(a, b), number=10)
0.18925874299020506
Speeding up memory-bound computations
- Avoid them entirely where possible, by combining the Kronecker product with other computations (e.g. the kron_and_sum example)
- Maybe float32 instead of float64 is already enough
- If this computation sits in a loop, allocate the memory only once
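The "allocate only once" tip can also be sketched in plain NumPy, by passing a preallocated buffer through the out= argument of np.multiply (the loop below is illustrative):

```python
import numpy as np

a = np.random.random((60, 60))
b = np.random.random((60, 60))

# Allocate the 4-D buffer once, outside the loop ...
out = np.empty((60, 60, 60, 60))

for _ in range(3):  # stands in for an iterative algorithm updating a and b
    # ... and reuse it: np.multiply writes into `out` instead of allocating
    np.multiply(a[:, None, :, None], b[None, :, None, :], out=out)
    kron_ab = out.reshape(3600, 3600)

print(np.allclose(kron_ab, np.kron(a, b)))  # True
```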
import numba as nb
import numpy as np

@nb.njit(fastmath=True, parallel=True)
def kron(A, B):
    out = np.empty((A.shape[0], B.shape[0], A.shape[1], B.shape[1]), dtype=A.dtype)
    for i in nb.prange(A.shape[0]):
        for j in range(B.shape[0]):
            for k in range(A.shape[1]):
                for l in range(B.shape[1]):
                    out[i, j, k, l] = A[i, k] * B[j, l]
    return out

@nb.njit(fastmath=True, parallel=False)
def kron_preallocated(A, B, out):
    for i in nb.prange(A.shape[0]):
        for j in range(B.shape[0]):
            for k in range(A.shape[1]):
                for l in range(B.shape[1]):
                    out[i, j, k, l] = A[i, k] * B[j, l]
    return out

@nb.njit(fastmath=True, parallel=True)
def kron_and_sum(A, B):
    # Combining the product with the reduction avoids ever
    # materializing the (i*k, j*l) result
    out = 0.
    for i in nb.prange(A.shape[0]):
        for j in range(B.shape[0]):
            for k in range(A.shape[1]):
                for l in range(B.shape[1]):
                    out += A[i, k] * B[j, l]
    return out
Timings
#Create some data
a_float64 = np.random.random((60,60))
b_float64 = np.random.random((60,60))
a_float32 = a_float64.astype(np.float32)
b_float32 = b_float64.astype(np.float32)
out_float64 = np.empty((a_float64.shape[0],b_float64.shape[0],a_float64.shape[1],b_float64.shape[1]),dtype=np.float64)
out_float32 = np.empty((a_float32.shape[0],b_float32.shape[0],a_float32.shape[1],b_float32.shape[1]),dtype=np.float32)
#Reference
%timeit np.kron(a_float64,b_float64)
147 ms ± 1.22 ms per loop (mean ± std. dev. of 7 runs, 10 loops each)
#If you have to allocate memory for every calculation (float64)
%timeit B=kron(a_float64,b_float64).reshape(3600,3600)
17.6 ms ± 244 µs per loop (mean ± std. dev. of 7 runs, 1 loop each)
#If you don't have to allocate memory for every calculation (float64)
%timeit B=kron_preallocated(a_float64,b_float64,out_float64).reshape(3600,3600)
8.08 ms ± 269 µs per loop (mean ± std. dev. of 7 runs, 1 loop each)
#If you have to allocate memory for every calculation (float32)
%timeit B=kron(a_float32,b_float32).reshape(3600,3600)
9.27 ms ± 185 µs per loop (mean ± std. dev. of 7 runs, 1 loop each)
#If you don't have to allocate memory for every calculation (float32)
%timeit B=kron_preallocated(a_float32,b_float32,out_float32).reshape(3600,3600)
3.95 ms ± 155 µs per loop (mean ± std. dev. of 7 runs, 1 loop each)
#Example for a joined operation (sum of kroncker product)
#which isn't memory bottlenecked
%timeit B=kron_and_sum(a_float64,b_float64)
881 µs ± 104 µs per loop (mean ± std. dev. of 7 runs, 1 loop each)
Comments:

"Thanks for the solution. For high-dimensional tensors I also see the same problem with the transpose of the contracted indices of tensordot, but I see no improvement of einsum and broadcasting over tensordot. Is this expected, or is there still an advantage?"

"@user1058860 I don't know the inner workings of tensordot, so I can't answer that. But I was curious how far one could push this, see the updated post. I suppose the BLAS/Cython approach is about the fastest you can get without investing a lot of development time."

"On which platform did you get these timings? On Windows (Core i5, 8th gen) with the Cython implementation I get 8 ms single-threaded / 8 ms multi-threaded with preallocated memory, and without preallocation about 40 ms single-threaded / 20 ms multi-threaded (almost exactly the same as my Numba implementation)." – max9111

"@max9111 This is a fairly old x86_64 Linux machine; I don't think it has a super-fast BLAS. Your observation is puzzling, e.g. I see no speedup from allocating ahead of time. That said, this could be a flaw in my timing, since I have no control over how the OS recycles just-freed memory."