
C++ matrix-vector and matrix-matrix multiplication using SSE


I need to write matrix-vector and matrix-matrix multiplication functions, but I cannot wrap my head around the SSE instructions.

The dimensions of the matrices and vectors are always multiples of 4.

I managed to write the vector-vector multiplication function, which looks like this:

#include <xmmintrin.h> // SSE intrinsics
#include <iostream>
using std::cout;
using std::endl;

void vector_multiplication_SSE(float* m, float* n, float* result, unsigned const int size)
{
    int i;

    __declspec(align(16))__m128 *p_m = (__m128*)m;
    __declspec(align(16))__m128 *p_n = (__m128*)n;
    __declspec(align(16))__m128 *p_result = (__m128*)result;

    for (i = 0; i < size / 4; ++i)
        p_result[i] = _mm_mul_ps(p_m[i], p_n[i]);

    // print the result
    for (int i = 0; i < size; ++i)
    {
        if (i % 4 == 0) cout << endl;
        cout << result[i] << '\t';
    }
}
Update 2

My further attempts at multiply_matrix_by_vector_SSE are shown in the code blocks later on.

Answer:

Without any tricks or anything, a matrix-vector multiplication is just a bunch of dot products between the vector and the rows of the matrix. Your code does not really have that structure. Writing it out as actual dot products gives the `for (int row = 0; row < nrows; ++row)` loop shown in the code blocks below (not tested).

There are some obvious tricks here, such as handling several rows at once, reusing the loads from the vector, and creating several independent dependency chains to make better use of the throughput (see below). Another really simple trick is using FMA for the mul/add combination, but FMA support is not universal (it was not widespread in 2015, but by 2020 it is fairly common).

You can build a matrix-matrix multiplication from this (by changing where the result goes), but that is not optimal (see further below).


Taking four rows at a time gives the `row += 4` loop further below (not tested).

Now there are only 5 loads per 4 FMAs, whereas the version without row-unrolling needed 2 loads per FMA. There are also 4 independent FMAs (or add/mul pairs, if FMA contraction is not used); either way this increases the potential for pipelining and simultaneous execution. In practice you may want to unroll even more: Skylake, for example, can start 2 independent FMAs per cycle, and each takes 4 cycles to complete, so keeping both FMA units fully busy requires 8 independent FMAs in flight. As a bonus, the 3 horizontal adds at the end work out relatively nicely for the horizontal summation.


For matrix-matrix multiplication, the different data layout seems like a disadvantage at first: it is no longer possible to simply load vectors from both matrices and multiply them together (that would multiply a tiny row vector of the first matrix by a tiny row vector of the second matrix, which is wrong). But a full matrix-matrix multiplication can exploit the fact that it is essentially multiplying one matrix by many independent vectors; it is full of independent work, and the horizontal sums can easily be avoided too. So it is actually even more convenient than matrix-vector multiplication.

The key is to take a small column vector from matrix A and a small row vector from matrix B, and multiply them out into a small matrix. That may sound reversed compared to what you are used to, but done this way it works out better with SIMD, because the computations stay independent and free of horizontal operations throughout.

For example, the blocked kernel at the very end (not tested; assumes the matrix dimensions are divisible by the unroll factors, and requires x64, otherwise it runs out of registers).

The attempts at multiply_matrix_by_vector_SSE from the question's updates:

void multiply_matrix_by_vector_SSE(float* m, float* v, float* result, unsigned const int vector_dims)
{
    int i, j;

    __declspec(align(16))__m128 *p_m = (__m128*)m;
    __declspec(align(16))__m128 *p_v = (__m128*)v;
    __declspec(align(16))__m128 *p_result = (__m128*)result;

    for (i = 0; i < vector_dims; i += 4)
    {
        __m128 tmp = _mm_load_ps(&result[i]);
        __m128 p_m_tmp = _mm_load_ps(&m[i]);
        tmp = _mm_add_ps(tmp, _mm_mul_ps(tmp, p_m_tmp));
        _mm_store_ps(&result[i], tmp);
        // another for loop here?
    }

    // print the result
    for (int i = 0; i < vector_dims; ++i)
    {
        if (i % 4 == 0) cout << endl;
        cout << result[i] << '\t';
    }
}
void multiply_matrix_by_vector_SSE(float* m, float* v, float* result, unsigned const int vector_dims)
{
    int i, j;

    __declspec(align(16))__m128 *p_m = (__m128*)m;
    __declspec(align(16))__m128 *p_v = (__m128*)v;
    __declspec(align(16))__m128 *p_result = (__m128*)result;

    for (i = 0; i < vector_dims; ++i)
    {
        p_result[i] = _mm_mul_ps(_mm_load_ps(&m[i]), _mm_load_ps1(&v[i]));
    }

    // print the result
    for (int i = 0; i < vector_dims; ++i)
    {
        if (i % 4 == 0) cout << endl;
        cout << result[i] << '\t';
    }
}
void multiply_matrix_by_vector_SSE(float* m, float* v, float* result, unsigned const int vector_dims)
{
    int i, j;
    __declspec(align(16))__m128 *p_m = (__m128*)m;
    __declspec(align(16))__m128 *p_v = (__m128*)v;
    __declspec(align(16))__m128 *p_result = (__m128*)result;

    for (i = 0; i < vector_dims; ++i)
    {
        for (j = 0; j < vector_dims * vector_dims / 4; ++j)
        {
            p_result[i] = _mm_mul_ps(p_v[i], p_m[j]);
        }
    }

    for (int i = 0; i < vector_dims; ++i)
    {
        if (i % 4 == 0) cout << endl;
        cout << result[i] << '\t';
    }
    cout << endl;
}
for (int row = 0; row < nrows; ++row) {
    __m128 acc = _mm_setzero_ps();
    // I'm just going to assume the number of columns is a multiple of 4
    for (int col = 0; col < ncols; col += 4) {
        __m128 vec = _mm_load_ps(&v[col]);
        // don't forget it's a matrix, do 2d addressing
        __m128 mat = _mm_load_ps(&m[col + ncols * row]);
        acc = _mm_add_ps(acc, _mm_mul_ps(mat, vec));
    }
    // now we have 4 floats in acc and they have to be summed
    // can use two horizontal adds for this, they kind of suck but this
    // isn't the inner loop anyway.
    acc = _mm_hadd_ps(acc, acc);
    acc = _mm_hadd_ps(acc, acc);
    // store result, which is a single float
    _mm_store_ss(&result[row], acc);
}
for (int row = 0; row < nrows; row += 4) {
    __m128 acc0 = _mm_setzero_ps();
    __m128 acc1 = _mm_setzero_ps();
    __m128 acc2 = _mm_setzero_ps();
    __m128 acc3 = _mm_setzero_ps();
    for (int col = 0; col < ncols; col += 4) {
        __m128 vec = _mm_load_ps(&v[col]);
        __m128 mat0 = _mm_load_ps(&m[col + ncols * row]);
        __m128 mat1 = _mm_load_ps(&m[col + ncols * (row + 1)]);
        __m128 mat2 = _mm_load_ps(&m[col + ncols * (row + 2)]);
        __m128 mat3 = _mm_load_ps(&m[col + ncols * (row + 3)]);
        acc0 = _mm_add_ps(acc0, _mm_mul_ps(mat0, vec));
        acc1 = _mm_add_ps(acc1, _mm_mul_ps(mat1, vec));
        acc2 = _mm_add_ps(acc2, _mm_mul_ps(mat2, vec));
        acc3 = _mm_add_ps(acc3, _mm_mul_ps(mat3, vec));
    }
    acc0 = _mm_hadd_ps(acc0, acc1);
    acc2 = _mm_hadd_ps(acc2, acc3);
    acc0 = _mm_hadd_ps(acc0, acc2);
    _mm_store_ps(&result[row], acc0);
}
for (size_t i = 0; i < mat1rows; i += 4) {
    for (size_t j = 0; j < mat2cols; j += 8) {
        float* mat1ptr = &mat1[i * mat1cols];
        // m2idx indexes mat2(k, j); it advances by one row of mat2 per iteration of k
        // (it was left undeclared in the original snippet)
        size_t m2idx = j;
        // the accumulators must be __m128: all the arithmetic below is 4-wide SSE
        __m128 sumA_1, sumB_1, sumA_2, sumB_2, sumA_3, sumB_3, sumA_4, sumB_4;
        sumA_1 = _mm_setzero_ps();
        sumB_1 = _mm_setzero_ps();
        sumA_2 = _mm_setzero_ps();
        sumB_2 = _mm_setzero_ps();
        sumA_3 = _mm_setzero_ps();
        sumB_3 = _mm_setzero_ps();
        sumA_4 = _mm_setzero_ps();
        sumB_4 = _mm_setzero_ps();

        for (size_t k = 0; k < mat2rows; ++k) {
            // broadcast one element from each of 4 consecutive rows of mat1
            // (row stride mat1cols) and multiply it into an 8-wide chunk of row k of mat2
            auto bc_mat1_1 = _mm_set1_ps(mat1ptr[0]);
            auto vecA_mat2 = _mm_load_ps(mat2 + m2idx);
            auto vecB_mat2 = _mm_load_ps(mat2 + m2idx + 4);
            sumA_1 = _mm_add_ps(_mm_mul_ps(bc_mat1_1, vecA_mat2), sumA_1);
            sumB_1 = _mm_add_ps(_mm_mul_ps(bc_mat1_1, vecB_mat2), sumB_1);
            auto bc_mat1_2 = _mm_set1_ps(mat1ptr[mat1cols]);
            sumA_2 = _mm_add_ps(_mm_mul_ps(bc_mat1_2, vecA_mat2), sumA_2);
            sumB_2 = _mm_add_ps(_mm_mul_ps(bc_mat1_2, vecB_mat2), sumB_2);
            auto bc_mat1_3 = _mm_set1_ps(mat1ptr[mat1cols * 2]);
            sumA_3 = _mm_add_ps(_mm_mul_ps(bc_mat1_3, vecA_mat2), sumA_3);
            sumB_3 = _mm_add_ps(_mm_mul_ps(bc_mat1_3, vecB_mat2), sumB_3);
            auto bc_mat1_4 = _mm_set1_ps(mat1ptr[mat1cols * 3]);
            sumA_4 = _mm_add_ps(_mm_mul_ps(bc_mat1_4, vecA_mat2), sumA_4);
            sumB_4 = _mm_add_ps(_mm_mul_ps(bc_mat1_4, vecB_mat2), sumB_4);
            m2idx += mat2cols;  // next row of mat2
            mat1ptr++;          // next column of mat1
        }
        _mm_store_ps(&result[i * mat2cols + j], sumA_1);
        _mm_store_ps(&result[i * mat2cols + j + 4], sumB_1);
        _mm_store_ps(&result[(i + 1) * mat2cols + j], sumA_2);
        _mm_store_ps(&result[(i + 1) * mat2cols + j + 4], sumB_2);
        _mm_store_ps(&result[(i + 2) * mat2cols + j], sumA_3);
        _mm_store_ps(&result[(i + 2) * mat2cols + j + 4], sumB_3);
        _mm_store_ps(&result[(i + 3) * mat2cols + j], sumA_4);
        _mm_store_ps(&result[(i + 3) * mat2cols + j + 4], sumB_4);
    }
}