Python Gaussian kernel performance


The following method computes the Gaussian kernel:

import numpy as np
def gaussian_kernel(X, X2, sigma):
    """
    Calculate the Gaussian kernel matrix

        k_ij = exp(-||x_i - x_j||^2 / (2 * sigma^2))

    :param X: array-like, shape=(n_samples_1, n_features), feature-matrix
    :param X2: array-like, shape=(n_samples_2, n_features), feature-matrix
    :param sigma: scalar, bandwidth parameter

    :return: array-like, shape=(n_samples_1, n_samples_2), kernel matrix
    """

    norm = np.square(np.linalg.norm(X[None, :, :] - X2[:, None, :], axis=2).T)
    return np.exp(-norm/(2*np.square(sigma)))

# Usage example
%timeit gaussian_kernel(np.random.rand(5000, 10), np.random.rand(5000, 10), 1)
1.43 s ± 39.3 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
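As a quick sanity check (a small sketch of my own, not part of the original post), the kernel of a matrix with itself should have ones on the diagonal, since k(x, x) = exp(0) = 1, and should be symmetric:

```python
import numpy as np

def gaussian_kernel(X, X2, sigma):
    # broadcast to an (n2, n1, d) difference tensor, reduce over features,
    # then transpose so the result is (n1, n2)
    norm = np.square(np.linalg.norm(X[None, :, :] - X2[:, None, :], axis=2).T)
    return np.exp(-norm / (2 * np.square(sigma)))

X = np.random.rand(4, 3)
K = gaussian_kernel(X, X, 1.0)
# ones on the diagonal and K == K.T
print(np.allclose(np.diag(K), 1.0), np.allclose(K, K.T))  # True True
```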

My question is: is there any way to improve performance using NumPy?

This post gave the answer:

In short, to replicate the NumPy part, do the following:

import numpy as np
def gaussian_kernel(X, X2, sigma):
    """
    Calculate the Gaussian kernel matrix

        k_ij = exp(-||x_i - x_j||^2 / (2 * sigma^2))

    :param X: array-like, shape=(n_samples_1, n_features), feature-matrix
    :param X2: array-like, shape=(n_samples_2, n_features), feature-matrix
    :param sigma: scalar, bandwidth parameter

    :return: array-like, shape=(n_samples_1, n_samples_2), kernel matrix
    """
    X_norm = np.sum(X ** 2, axis = -1)
    X2_norm = np.sum(X2 ** 2, axis = -1)
    norm = X_norm[:,None] + X2_norm[None,:] - 2 * np.dot(X, X2.T)
    return np.exp(-norm/(2*np.square(sigma)))

# Timing
%timeit gaussian_kernel(np.random.rand(5000, 10), np.random.rand(5000, 10), 1)
976 ms ± 73.5 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
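The speed-up rests on the identity ||x − y||² = ||x||² + ||y||² − 2·x·y, which replaces the large broadcast with a single matrix multiply. A small verification (my own sketch, not from the original answer) that the two implementations agree; note that cancellation in the reformulated version can leave tiny negative squared norms, which clipping to zero guards against:

```python
import numpy as np

def gaussian_kernel_naive(X, X2, sigma):
    # direct broadcast of pairwise differences
    norm = np.square(np.linalg.norm(X[None, :, :] - X2[:, None, :], axis=2).T)
    return np.exp(-norm / (2 * np.square(sigma)))

def gaussian_kernel_dot(X, X2, sigma):
    # ||x - y||^2 = ||x||^2 + ||y||^2 - 2 x.y, via one matmul
    X_norm = np.sum(X ** 2, axis=-1)
    X2_norm = np.sum(X2 ** 2, axis=-1)
    norm = X_norm[:, None] + X2_norm[None, :] - 2 * np.dot(X, X2.T)
    # floating-point cancellation can produce tiny negatives; clip to zero
    norm = np.maximum(norm, 0)
    return np.exp(-norm / (2 * np.square(sigma)))

rng = np.random.default_rng(0)
X1, Xb = rng.random((50, 10)), rng.random((60, 10))
same = np.allclose(gaussian_kernel_naive(X1, Xb, 1.0),
                   gaussian_kernel_dot(X1, Xb, 1.0))
print(same)  # True
```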


For very small arrays you can write a simple loop implementation and compile it with Numba. For larger arrays, the algebraic reformulation using np.dot() will be faster.

Example

#from version 0.43 until 0.47 this has to be set before importing numba
#Bug: https://github.com/numba/numba/issues/4689
from llvmlite import binding
binding.set_option('SVML', '-vector-library=SVML')
import numba as nb
import numpy as np

@nb.njit(fastmath=True, error_model="numpy", parallel=True)
def gaussian_kernel_2(X, X2, sigma):
    res = np.empty((X.shape[0], X2.shape[0]), dtype=X.dtype)
    for i in nb.prange(X.shape[0]):
        for j in range(X2.shape[0]):
            acc = 0.
            for k in range(X.shape[1]):
                acc += (X[i, k] - X2[j, k]) ** 2 / (2 * sigma ** 2)
            res[i, j] = np.exp(-1 * acc)
    return res
Timing

X1=np.random.rand(5000, 10)
X2=np.random.rand(5000, 10)

#Your solution
%timeit gaussian_kernel(X1,X2, 1)
#511 ms ± 10.7 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
%timeit gaussian_kernel_2(X1,X2, 1)
#90.1 ms ± 9.14 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)

Your code is fully vectorized and looks lightning-fast to me. I believe scikit-learn has a Gaussian kernel implementation, so you don't have to write it yourself, but I doubt it can be much faster than this. Don't forget that generating the random numbers also takes time. The best solution depends on the size of the arrays' second dimension, so it is important to know whether the second dimension is always as small as 10, or can also be larger, say 100 or 1000. In my case the second dimension is always < 30.
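To make the dependence on the second dimension concrete, a rough back-of-envelope calculation (my own addition, not from the original answer): the broadcast version materializes an (n1, n2, d) difference tensor whose memory grows linearly with the feature dimension d, while the np.dot() version only ever holds the (n1, n2) result:

```python
# rough memory of the broadcast intermediate vs. the dot-product result
# for the 5000 x 5000 x 10 example above, in float64 (8 bytes)
n1 = n2 = 5000
d = 10
broadcast_bytes = n1 * n2 * d * 8  # (n1, n2, d) difference tensor
dot_bytes = n1 * n2 * 8            # (n1, n2) kernel matrix only
print(broadcast_bytes // 2**20, dot_bytes // 2**20)  # MiB: 1907 190
```

At d = 10 the intermediate is already ~1.9 GiB, ten times the output; at d = 100 it would be ~19 GiB, which is why the matmul reformulation wins as the feature dimension grows.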