Python NumPy broadcasting: computing the sum of squared differences between two arrays


I have the following code. It takes a very long time to run in Python. There must be a way to convert this calculation to broadcasting.

import numpy as np

def euclidean_square(a, b):
    # Pairwise squared Euclidean distances via an explicit double loop
    squares = np.zeros((a.shape[0], b.shape[0]))
    for i in range(squares.shape[0]):
        for j in range(squares.shape[1]):
            diff = a[i, :] - b[j, :]
            sqr = diff**2.0
            squares[i, j] = np.sum(sqr)
    return squares
You can use np.einsum after calculating the differences in a broadcasted way, like so -

ab = a[:, None, :] - b
out = np.einsum('ijk,ijk->ij', ab, ab)

Or use scipy.spatial.distance.cdist with its optional metric argument set to 'sqeuclidean', which gives the squared Euclidean distances the question asks for, like so -

from scipy.spatial.distance import cdist
out = cdist(a, b, 'sqeuclidean')
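A quick sanity check of both vectorized versions against the question's double loop, using small random inputs (the shapes here are chosen just for illustration):

```python
import numpy as np
from scipy.spatial.distance import cdist

rng = np.random.default_rng(0)
a = rng.standard_normal((5, 3))
b = rng.standard_normal((7, 3))

# Broadcasted differences: shape (5, 7, 3)
ab = a[:, None, :] - b
out_einsum = np.einsum('ijk,ijk->ij', ab, ab)
out_cdist = cdist(a, b, 'sqeuclidean')

# Reference: the explicit double loop from the question
ref = np.array([[np.sum((a[i] - b[j])**2) for j in range(7)]
                for i in range(5)])

print(np.allclose(out_einsum, ref), np.allclose(out_cdist, ref))
```

All three produce a (5, 7) matrix of pairwise squared distances.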

Another solution, besides using cdist, is the following:

difference_squared = np.zeros((a.shape[0], b.shape[0]))
for dimension_iterator in range(a.shape[1]):
    difference_squared = difference_squared + np.subtract.outer(a[:, dimension_iterator], b[:, dimension_iterator])**2.
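As a sketch of the same per-dimension accumulation, np.subtract.outer can be checked against cdist (random data, shapes chosen here only for illustration):

```python
import numpy as np
from scipy.spatial.distance import cdist

rng = np.random.default_rng(1)
a = rng.standard_normal((4, 2))
b = rng.standard_normal((6, 2))

difference_squared = np.zeros((a.shape[0], b.shape[0]))
for dimension_iterator in range(a.shape[1]):
    # Outer difference over a single coordinate: shape (4, 6)
    difference_squared += np.subtract.outer(a[:, dimension_iterator],
                                            b[:, dimension_iterator])**2

print(np.allclose(difference_squared, cdist(a, b, 'sqeuclidean')))
```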

I collected the different methods proposed here and measured their speed:

import numpy as np
import scipy.spatial
import sklearn.metrics

def dist_direct(x, y):
    d = np.expand_dims(x, -2) - y
    return np.sum(np.square(d), axis=-1)

def dist_einsum(x, y):
    d = np.expand_dims(x, -2) - y
    return np.einsum('ijk,ijk->ij', d, d)

def dist_scipy(x, y):
    return scipy.spatial.distance.cdist(x, y, "sqeuclidean")

def dist_sklearn(x, y):
    return sklearn.metrics.pairwise.pairwise_distances(x, y, "sqeuclidean")

def dist_layers(x, y):
    res = np.zeros((x.shape[0], y.shape[0]))
    for i in range(x.shape[1]):
        res += np.subtract.outer(x[:, i], y[:, i])**2
    return res

# inspired by the excellent https://github.com/droyed/eucl_dist
def dist_ext1(x, y):
    nx, p = x.shape
    x_ext = np.empty((nx, 3*p))
    x_ext[:, :p] = 1
    x_ext[:, p:2*p] = x
    x_ext[:, 2*p:] = np.square(x)

    ny = y.shape[0]
    y_ext = np.empty((3*p, ny))
    y_ext[:p] = np.square(y).T
    y_ext[p:2*p] = -2*y.T
    y_ext[2*p:] = 1

    return x_ext.dot(y_ext)

# https://stackoverflow.com/a/47877630/648741
def dist_ext2(x, y):
    return np.einsum('ij,ij->i', x, x)[:,None] + np.einsum('ij,ij->i', y, y) - 2 * x.dot(y.T)
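dist_ext2 relies on the algebraic expansion ||x_i - y_j||^2 = ||x_i||^2 + ||y_j||^2 - 2 x_i . y_j, which trades the large (n, m, p) difference tensor for a single matrix product. A small check of that identity against the direct broadcasted computation (random data, shapes chosen for illustration):

```python
import numpy as np

rng = np.random.default_rng(2)
x = rng.standard_normal((3, 4))
y = rng.standard_normal((5, 4))

# Expansion: ||x_i||^2 + ||y_j||^2 - 2 x_i . y_j
sq = (np.einsum('ij,ij->i', x, x)[:, None]
      + np.einsum('ij,ij->i', y, y)
      - 2 * x.dot(y.T))

# Direct broadcasted computation for comparison
d = x[:, None, :] - y
direct = np.sum(d**2, axis=-1)

print(np.allclose(sq, direct))
```

Note that the expanded form can produce tiny negative values for near-identical points due to floating-point cancellation, which the direct form cannot.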
I used timeit to compare the speed of the different methods. For the comparison I used vectors of length 10, with 100 vectors in the first set and 1000 vectors in the second set.

import timeit

p = 10
x = np.random.standard_normal((100, p))
y = np.random.standard_normal((1000, p))

for method in dir():
    if not method.startswith("dist_"):
        continue
    t = timeit.timeit(f"{method}(x, y)", number=1000, globals=globals())
    print(f"{method:12} {t:5.2f}ms")
On my laptop I get the following results:

dist_direct   5.07ms
dist_einsum   3.43ms
dist_ext1     0.20ms  <-- fastest
dist_ext2     0.35ms
dist_layers   2.82ms
dist_scipy    0.60ms
dist_sklearn  0.67ms
Wow, I need to practice this einsum stuff. A lot of my code could be broadcasted... thanks. @bordeo It's pure magic, as you saw in the answer to your previous question!