MPI LU decomposition

This is my MPI code for LU decomposition.

I have used the following strategy:

There is one master (rank 0) and the rest are slaves. The master sends rows to each slave. Since each slave may receive more than one row, I store all the received rows in a buffer and then perform the LU decomposition on them. Once that is done, I send the buffer back to the master. The master does no computation; it only sends and receives.

/* Round-robin mapping: row i belongs to slave i%(numProcs-1) + 1 */
for(i=0; i<n; i++)
    map[i] = i%(numProcs-1) + 1;

for(i=0; i<n-1; i++)
{
    if(rank == 0)
    {
        /* Pivot on column i; a status of -1 signals that no valid
           pivot was found */
        status = pivot(LU,i,n);

        /* Copy the pivot row so it can be broadcast to the slaves */
        for(j=0; j<n; j++)
            row1[j] = LU[n*i+j];
    }

    /* Everyone learns whether pivoting succeeded before proceeding */
    MPI_Bcast(&status, 1, MPI_INT, 0, MPI_COMM_WORLD);

    if(status == -1)
        return -1;

    /* Broadcast the pivot row to all processes */
    MPI_Bcast(row1, n, MPI_DOUBLE, 0, MPI_COMM_WORLD);

    int tag1 = 1, tag2 = 2, tag3 = 3, tag4 = 4;

    if(rank == 0)
    {
        int pno, start, index, l, rowsReceived = 0;
        MPI_Request req;
        MPI_Status stat;

        /* Distribute every row below the pivot to its owner (non-blocking) */
        for(j=i+1; j<n; j++)
            MPI_Isend(&LU[n*j], n, MPI_DOUBLE, map[j], map[j], MPI_COMM_WORLD, &req);

        /* Once fewer rows remain than slaves, one more slave per
           iteration has nothing to send back */
        if(i>=n-(numProcs-1))
            cnt++;

        /* Collect updated rows from every slave that had work this round */
        for(j=0; j<numProcs-1-cnt; j++)
        {
            MPI_Recv(&pno, 1, MPI_INT, MPI_ANY_SOURCE, tag2, MPI_COMM_WORLD, &stat);
            //printf("1. Recv from %d and j : %d and i : %d\n",pno,j,i);
            MPI_Recv(&rowsReceived, 1, MPI_INT, pno, tag3, MPI_COMM_WORLD, &stat);
            MPI_Recv(rowFinal, n*rowsReceived, MPI_DOUBLE, pno, tag4, MPI_COMM_WORLD, &stat);

            /* Will not go more than numProcs anyways */
            for(k=i+1; k<n; k++)
            {
                if(map[k] == pno)
                {
                    start = k;
                    break; 
                }
            }

            /* Rows owned by the same slave are strided numProcs-1 apart */
            for(k=0; k<rowsReceived; k++)
            {
                index = start + k*(numProcs-1);

                for(l=0; l<n; l++)
                    LU[n*index+l] = rowFinal[n*k+l];
            }
        }
    }

    else
    {
        int rowsReceived = 0;
        MPI_Status stat, stats[3];
        MPI_Request reqs[3];

        /* Count how many of the remaining rows this slave owns */
        for(j=i+1; j<n; j++)
            if(map[j] == rank)
                rowsReceived += 1;


        /* Receive this slave's share of the rows below the pivot */
        for(j=0; j<rowsReceived; j++)
        {
            MPI_Recv(&rowFinal[n*j], n, MPI_DOUBLE, 0, rank, MPI_COMM_WORLD, &stat);
        }

        /* Eliminate column i from each local row, storing the multiplier
           in place as the corresponding entry of L */
        for(j=0; j<rowsReceived; j++)
        {
            double factor = rowFinal[n*j+i]/row1[i];

            for(k=i+1; k<n; k++)
                rowFinal[n*j+k] -= (row1[k]*factor);

            rowFinal[n*j+i] = factor;
        }

        if(rowsReceived != 0)
        {
            //printf("Data sent from %d iteration : %d\n",rank,i);
            MPI_Isend(&rank, 1, MPI_INT, 0, tag2, MPI_COMM_WORLD, &reqs[0]);
            MPI_Isend(&rowsReceived, 1, MPI_INT, 0, tag3, MPI_COMM_WORLD, &reqs[1]);
            MPI_Isend(rowFinal, n*rowsReceived, MPI_DOUBLE, 0, tag4, MPI_COMM_WORLD, &reqs[2]);
        }
        //MPI_Waitall(3,reqs,stats);
    }
}

It looks like you never complete your non-blocking operations. Throughout the code you have a series of calls to MPI_Isend and MPI_Irecv, but you never make a completion call such as MPI_Wait or MPI_Test (or one of the similar routines). Without a completion call, those non-blocking operations will never complete.
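
For reference, a minimal sketch of one way to add the completion calls, reusing the variables from the code above (reqs, stats, req, and so on). It illustrates where the completions belong, not the poster's actual fix:

/* Slave side: complete all three non-blocking sends before the
   buffers are reused in the next iteration */
if(rowsReceived != 0)
{
    MPI_Isend(&rank, 1, MPI_INT, 0, tag2, MPI_COMM_WORLD, &reqs[0]);
    MPI_Isend(&rowsReceived, 1, MPI_INT, 0, tag3, MPI_COMM_WORLD, &reqs[1]);
    MPI_Isend(rowFinal, n*rowsReceived, MPI_DOUBLE, 0, tag4, MPI_COMM_WORLD, &reqs[2]);

    /* The missing completion call */
    MPI_Waitall(3, reqs, stats);   /* or pass MPI_STATUSES_IGNORE */
}

/* Master side: the loop overwrites a single request handle, so each
   send must be completed before the handle is reused */
for(j=i+1; j<n; j++)
{
    MPI_Isend(&LU[n*j], n, MPI_DOUBLE, map[j], map[j], MPI_COMM_WORLD, &req);
    MPI_Wait(&req, MPI_STATUS_IGNORE);
}

Alternatively, the master could keep an array of requests for the whole loop and issue a single MPI_Waitall after it.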

Yes. Thank you. I was able to fix it.