Python: adding multiple matrices without building a new matrix


Suppose I have two matrices B and M, and I want to execute the following statement:

B += 3*M

I execute this instruction repeatedly, so I would rather not build the matrix 3*M each time (the scalar 3 may change; the point is just that I am taking a scalar-matrix product). Is there a numpy function that performs this computation "in place"?

More precisely, I have a list of scalars As and a list of matrices Ms, and I want to compute the "dot product" of the two (not a dot product in the strict sense, since the two operands have different types), that is:

sum(a*M for a, M in zip(As, Ms))

The np.dot function does something else...
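One way to avoid allocating a fresh 3*M on every iteration is to preallocate a scratch buffer once and have the ufunc write into it via its out= argument. This is a minimal sketch, assuming hypothetical names B, M, and tmp and an arbitrary sequence of changing scalars:

```python
import numpy as np

M = np.random.rand(2, 3)
B = np.zeros_like(M)
tmp = np.empty_like(B)  # scratch buffer, allocated once and reused

for a in (3.0, 2.5, 7.0):     # the scalar may change between iterations
    np.multiply(a, M, out=tmp)  # writes a*M into tmp; no new allocation
    B += tmp                    # in-place add
```

The loop performs the same arithmetic as B += a*M, but the intermediate product always lands in the same preallocated array.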

You can use np.tensordot or np.einsum.

Sample run:

In [41]: As = [2,5,6]

In [42]: Ms = [np.random.rand(2,3),np.random.rand(2,3),np.random.rand(2,3)]

In [43]: sum(a*M for a, M in zip(As, Ms))
Out[43]: 
array([[  6.79630284,   5.04212877,  10.76217631],
       [  4.91927651,   1.98115548,   6.13705742]])

In [44]: np.tensordot(As,Ms,axes=(0,0))
Out[44]: 
array([[  6.79630284,   5.04212877,  10.76217631],
       [  4.91927651,   1.98115548,   6.13705742]])

In [45]: np.einsum('i,ijk->jk',As,Ms)
Out[45]: 
array([[  6.79630284,   5.04212877,  10.76217631],
       [  4.91927651,   1.98115548,   6.13705742]])
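Of the two, np.einsum also accepts an out= argument, so the weighted sum can be written into a preallocated array rather than allocating a new result each call. A sketch, assuming the matrices have been stacked into one 3D array:

```python
import numpy as np

As = [2, 5, 6]
Ms = np.random.rand(3, 2, 3)  # three 2x3 matrices stacked along axis 0
out = np.empty((2, 3))        # preallocated result buffer

np.einsum('i,ijk->jk', As, Ms, out=out)  # writes the result directly into out
```

Repeated calls with different scalars reuse the same buffer instead of producing fresh arrays.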

Another approach is to use broadcasting: you can build a 3D array from the 1D and 2D arrays and then sum over the appropriate axis:

>>> Ms = np.random.randn(4, 2, 3)   # 4 arrays of size 2x3
>>> As = np.random.randn(4)
>>> np.sum(As[:, np.newaxis, np.newaxis] * Ms, axis=0)
array([[-1.40199248, -0.40337845, -0.69986566],
       [ 3.52724279,  0.19547118,  2.1485559 ]])
>>> sum(a*M for a, M in zip(As, Ms))
array([[-1.40199248, -0.40337845, -0.69986566],
       [ 3.52724279,  0.19547118,  2.1485559 ]])
However, it is worth noting that np.einsum and np.tensordot are usually more efficient:

>>> %timeit np.sum(As[:, np.newaxis, np.newaxis] * Ms, axis=0)
The slowest run took 7.38 times longer than the fastest. This could mean that an intermediate result is being cached.
100000 loops, best of 3: 8.58 µs per loop
>>> %timeit np.einsum('i,ijk->jk', As, Ms)
The slowest run took 19.16 times longer than the fastest. This could mean that an intermediate result is being cached.
100000 loops, best of 3: 2.44 µs per loop
The same holds for larger arrays:

>>> Ms = np.random.randn(100, 200, 300)
>>> As = np.random.randn(100)
>>> %timeit np.einsum('i,ijk->jk', As, Ms)
100 loops, best of 3: 5.03 ms per loop
>>> %timeit np.sum(As[:, np.newaxis, np.newaxis] * Ms, axis=0)
100 loops, best of 3: 14.8 ms per loop
>>> %timeit np.tensordot(As,Ms,axes=(0,0))
100 loops, best of 3: 2.79 ms per loop
So np.tensordot works best in this case.

The only good reason to use np.sum with broadcasting is that it makes the code more readable (which helps when the matrices are small).
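Whichever variant you pick, a quick sanity check that the three approaches compute the same weighted sum is straightforward; a small sketch with arbitrary random inputs:

```python
import numpy as np

As = np.random.randn(4)
Ms = np.random.randn(4, 2, 3)  # four 2x3 matrices

r1 = np.sum(As[:, np.newaxis, np.newaxis] * Ms, axis=0)  # broadcasting
r2 = np.einsum('i,ijk->jk', As, Ms)                      # einsum
r3 = np.tensordot(As, Ms, axes=(0, 0))                   # tensordot

assert np.allclose(r1, r2) and np.allclose(r2, r3)
```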
