MPI runtime errors: Scatterv count error, segmentation fault, or the program hangs


Old: I get three kinds of errors. I can get a Scatterv count error, a segmentation fault 11, or the processes simply hang. Which error I get seems random. I always run the code with 2 processes. When it hangs, it hangs right before printf("print2: %d", myrank);. When my friend runs the code on his own computer, also with two processes, he does not get past the first MPI_Bcast; nothing is printed when he runs it. Here are the links to the errors I got:


Updated question: Now I only get a segmentation fault after printf("print2: %d", myrank); and before the Scatterv call. Even with all the code after that printf statement removed, I still get the segmentation fault, but only when I run the code with more than two processes.

I'm having a bit of a hard time following what you're trying to do. I think you are making this Scatterv call more complicated than it needs to be. Here is a snippet from a similar assignment I did this year (the "Scatter A to All Processes" block further down); hopefully it is a clearer example of how Scatterv works.

/*

Matrix file names:
  small_matrix_A.bin of dimension 100 × 50
  small_matrix_B.bin of dimension 50 × 100
  large_matrix_A.bin of dimension 1000 × 500
  large_matrix_B.bin of dimension 500 × 1000

An MPI program should be implemented such that it can
• accept two file names at run-time,
• let process 0 read the A and B matrices from the two data files,
• let process 0 distribute the pieces of A and B to all the other processes,
• involve all the processes to carry out the chosen parallel algorithm
for matrix multiplication C = A * B ,
• let process 0 gather, from all the other processes, the different pieces
of C ,
• let process 0 write out the entire C matrix to a data file.
*/




#include <stdio.h>
#include <stdlib.h>
#include <mpi.h>

/* Assumed prototype for the helper used below; its definition (reading an
   m x n matrix of doubles from a binary file into a newly allocated array
   of row pointers) is not shown in the post. */
void read_matrix_binaryformat(const char *filename, double ***matrix, int *num_rows, int *num_cols);

int main(int argc, char *argv[]) {

  printf("Oblig 2 \n");
  double **matrixa;
  double **matrixb;
  int ma,na,my_ma,my_na;
  int mb,nb,my_mb,my_nb;
  int i,j,k;
  int myrank,numprocs;
  int konstanta,konstantb;

  MPI_Init(&argc,&argv);
  MPI_Comm_rank(MPI_COMM_WORLD,&myrank);
  MPI_Comm_size(MPI_COMM_WORLD,&numprocs);





  if(myrank==0) {
    read_matrix_binaryformat ("small_matrix_A.bin", &matrixa, &ma, &na);
    read_matrix_binaryformat ("small_matrix_B.bin", &matrixb, &mb, &nb);
  }

  //mpi broadcast

  MPI_Bcast(&ma,1,MPI_INT,0,MPI_COMM_WORLD);
  MPI_Bcast(&mb,1,MPI_INT,0,MPI_COMM_WORLD);
  MPI_Bcast(&na,1,MPI_INT,0,MPI_COMM_WORLD);
  MPI_Bcast(&nb,1,MPI_INT,0,MPI_COMM_WORLD);

  fflush(stdout);

  int resta = ma % numprocs;//remainder: this many processes get the larger row count
  //int restb = mb % numprocs;
  if (myrank == 0) {
    printf("ma : %d",ma);
    fflush(stdout);
    printf("mb : %d",mb);
    fflush(stdout); 

  } 

  MPI_Barrier(MPI_COMM_WORLD);
  if (resta == 0) {
    my_ma = ma / numprocs;
    printf("null rest\n ");
    fflush(stdout);
  } else {
    if (myrank < resta) {
      my_ma = ma / numprocs + 1;//remember the + 1
    } else {
      my_ma = ma / numprocs;    //integer division rounds down!
    }
  }




  my_na = na;
  my_nb = nb;

  double **myblock = malloc(my_ma*sizeof(double*));
  for(i=0;i<na;i++) {   /* note: the loop bound is na, although myblock has only my_ma row pointers */
    myblock[i] = malloc(my_na*sizeof(double));
  }

  //send_cnt for scatterv
  //________________________________________________________________________________________________________________________________________________
  int* send_cnta = (int*)malloc(numprocs*sizeof(int));//number of elements sent to each process: send_cnta[i] = element count for process i
  int tot_elemsa = my_ma*my_na;
  MPI_Allgather(&tot_elemsa,1,MPI_INT,&send_cnta[0],1,MPI_INT,MPI_COMM_WORLD);//in C, pass arrays as &array[0]




  //send_disp for scatterv
  //__________________________________________________________________________________

    int* send_dispa = (int*)malloc(numprocs*sizeof(int)); //why do we need disp?
    // int* send_dispb = (int*)malloc(numprocs*sizeof(int));
    //disp: where in imagechars the first element for each process should go


    fflush(stdout);
    if(resta==0) {
      send_dispa[myrank]=myrank*my_ma*my_na;
    } else if(myrank<=resta) {
      if(myrank<resta) {
        send_dispa[myrank]=myrank*my_ma*my_na;
      } else {//myrank == resta
        send_dispa[myrank]=myrank*(my_ma+1)*my_na;
        konstanta=myrank*(my_ma+1)*my_na;
      }
    }

    /* note: when resta == 0, konstanta is never assigned on any rank before this
       broadcast, so the displacements derived from it below are undefined */
    MPI_Bcast(&konstanta,1,MPI_INT,resta,MPI_COMM_WORLD);

    if (myrank>resta){
      send_dispa[myrank]=((myrank-resta)*(my_ma*my_na))+konstanta;
    }


    MPI_Allgather(&send_dispa[myrank],1,MPI_INT,&send_dispa[0],1,MPI_INT,MPI_COMM_WORLD);


    //___________________________________________________________________________________

     printf("print2: %d" , myrank);
     fflush(stdout);

    //recv_buffer for scatterv
    double *recv_buffera=malloc((my_ma*my_na)*sizeof(double));

    /* note: matrixa is an array of row pointers rather than one contiguous block,
       and the count my_ma*my_na is a number of doubles while the datatype says MPI_UNSIGNED_CHAR */
    MPI_Scatterv(&matrixa[0], &send_cnta[0], &send_dispa[0], MPI_UNSIGNED_CHAR, &recv_buffera[0], my_ma*my_na, MPI_UNSIGNED_CHAR, 0, MPI_COMM_WORLD);


    for(i=0; i<my_ma; i++) {
      for(j=0; j<my_na; j++) {
        myblock[i][j]=recv_buffera[i*my_na + j];
      }
    }

    MPI_Finalize();
    return 0;
}

It looks like the matrices are arrays of pointers to each row. That won't work with MPI. You should allocate each matrix as one big chunk of data. Honestly, I don't see why you repeat the count computation on every process and then Allgather it all together... If Open MPI hangs after a single communication operation, that can be a sign of a network problem.

myblock is allocated over itself again and again on each process. Where matrixa happens to start inside a memory page is random. If it starts near the beginning of a page, MPI_Scatterv can read enough data to send garbage to the first process, which then continues to the part that uses myblock and segfaults there. In the other case the master process reads off the end of the page inside MPI_Scatterv and segfaults itself.

Yep, allocating the matrix as one big block will solve a lot of problems.

You are completely right, I found the error. I wrote double **myblock=malloc(my_ma*sizeof(double*)); for(i=0;...

I mentioned that in your post. Glad it works for you now. I still think you are making it harder than it needs to be. Good luck.

Thanks for your help. Yes, I see now that you mentioned how I should allocate my block, but I didn't notice it. It just works when I run two processes, but I still get errors when I run more than two processes. I'm trying to understand the other things you wrote to help me.
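A minimal sketch, of my own, of the "one big chunk" allocation suggested above; the helper name alloc_contiguous_matrix and its parameters are illustrative, not from the thread. The data lives in a single contiguous buffer, with an array of row pointers set up on top of it, so the rest of the code can keep using m[i][j] indexing:

#include <stdlib.h>

/* Allocate a rows x cols matrix of doubles as one contiguous block, plus an
   array of row pointers into that block. Returns NULL if allocation fails. */
double **alloc_contiguous_matrix(int rows, int cols)
{
    double *data = malloc((size_t)rows * cols * sizeof(double));
    double **m   = malloc((size_t)rows * sizeof(double *));
    int i;
    if (data == NULL || m == NULL) {
        free(data);
        free(m);
        return NULL;
    }
    for (i = 0; i < rows; i++)
        m[i] = data + (size_t)i * cols;   /* row i starts i*cols doubles into the block */
    return m;
}

With this layout, &matrixa[0][0] (equivalently matrixa[0]) is the contiguous start of the data, so rank 0 can use it directly as the Scatterv send buffer, and each process can receive straight into &myblock[0][0] with MPI_DOUBLE and counts given in numbers of doubles.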
 /*********************************************************************
  * Scatter A to All Processes
  *   - Using Scatterv for versatility.
  *********************************************************************/        


  int *send_counts;                // Send Counts
  int *displacements;              // Send Offsets
  int chunk;                       // Number of Rows per Process (excluding Root)
  int chunk_size;                  // Number of Doubles per Chunk
  int remainder;                   // Number of Rows for Root Process
  double * rbuffer;                // Receive Buffer

  // Do Some Math
  chunk = m / (p - 1);
  remainder = m % (p - 1);
  chunk_size = chunk * n;

  // Setup Send Counts
  send_counts = malloc(p * sizeof(int));
  send_counts[0] = remainder * n;
  for (i = 1; i < p; i++)
    send_counts[i] = chunk_size;

  // Setup Displacements
  displacements = malloc(p * sizeof(int));
  displacements[0] = 0;
  for (i = 1; i < p; i++)
    displacements[i] = (remainder * n) + ((i - 1) * chunk_size);  

  // Allocate Receive Buffer
  rbuffer = malloc(send_counts[my_rank] * sizeof(double));

  // Scatter A Over All Processes!
  MPI_Scatterv(A,                      // A
               send_counts,            // Array of counts [int]
               displacements,          // Array of displacements [int]
               MPI_DOUBLE,             // Sent Data Type
               rbuffer,                // Receive Buffer
               send_counts[my_rank],   // Receive Count - Per Process
               MPI_DOUBLE,             // Received Data Type
               root,                   // Root
               comm);                  // Comm World

  MPI_Barrier(comm);
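To make the counts concrete with example numbers of my own (not from the original answer): for small_matrix_A.bin with m = 100 rows, n = 50 columns and p = 4 processes, chunk = 100 / 3 = 33 rows per non-root process, remainder = 100 % 3 = 1 row kept by the root, and chunk_size = 33 * 50 = 1650. That gives send_counts = {50, 1650, 1650, 1650} and displacements = {0, 50, 1700, 3350}, which together cover exactly 100 * 50 = 5000 doubles. Note that the scheme divides by p - 1, so it assumes at least two processes.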
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    int na = 5;
    int my_ma = 5;
    int my_na = 5;
    int i;
    int j;

    double **myblock = malloc(my_ma*sizeof(double*));
    for(i=0;i<na;i++) {
        myblock = malloc(my_na*sizeof(double));   /* BUG: overwrites the myblock pointer itself instead of setting myblock[i] */
    }

    unsigned char *recv_buffera=malloc((my_ma*my_na)*sizeof(unsigned char));

    for(i=0; i<my_ma; i++) {
        for(j=0; j<my_na; j++) {
            myblock[i][j]=(float)recv_buffera[i*my_na + j];   /* myblock[i] is a garbage pointer here, hence the segfault */
        }
    }

    return 0;
}
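For comparison, a minimal sketch of the corrected per-row allocation described in the comments (assigning myblock[i] inside the loop and iterating over my_ma rows), although the advice above is that a single contiguous block is the better fit for MPI:

#include <stdlib.h>

int main(void)
{
    int my_ma = 5;
    int my_na = 5;
    int i, j;

    /* One pointer per row, then one separate allocation per row. */
    double **myblock = malloc(my_ma * sizeof(double *));
    for (i = 0; i < my_ma; i++) {
        myblock[i] = malloc(my_na * sizeof(double));
    }

    /* Zero-initialized stand-in for the real receive buffer. */
    unsigned char *recv_buffera = calloc((size_t)(my_ma * my_na), sizeof(unsigned char));

    for (i = 0; i < my_ma; i++) {
        for (j = 0; j < my_na; j++) {
            myblock[i][j] = (double)recv_buffera[i * my_na + j];
        }
    }

    for (i = 0; i < my_ma; i++) free(myblock[i]);
    free(myblock);
    free(recv_buffera);
    return 0;
}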
  // Allocate A, b, and y. Generate random A and b
  double *buff=0;
  if (my_rank==0)
    {
      int A_size = m*n, b_size = n, y_size = m;
      int size = (A_size+b_size+y_size)*sizeof(double);
      buff = (double*)malloc(size);
      if (buff==NULL)
        {
          printf("Process %d failed to allocate %d bytes\n", my_rank, size);
          MPI_Abort(comm,-1);
          return 1;
        }
      // Set pointers
      A = buff; b = A+m*n; y = b+n;
      // Generate matrix and vector
      genMatrix(m, n, A);
      genVector(n, b);
    }
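To tie the two ideas together, here is a small self-contained sketch of my own (not from the thread): the root allocates A as a single contiguous block and fills it with placeholder values instead of reading small_matrix_A.bin, every rank computes the same counts and displacements as in the "Scatter A to All Processes" snippet, and the contiguous buffer is passed straight to MPI_Scatterv. On non-root ranks the send arguments are ignored, so A can stay NULL there.

#include <stdio.h>
#include <stdlib.h>
#include <mpi.h>

int main(int argc, char *argv[])
{
    int m = 100, n = 50;               /* dimensions of the small A matrix */
    int my_rank, p, i, root = 0;
    double *A = NULL;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &my_rank);
    MPI_Comm_size(MPI_COMM_WORLD, &p);

    if (p < 2) {                       /* the chunk/remainder scheme divides by p - 1 */
        if (my_rank == root) printf("Run with at least 2 processes.\n");
        MPI_Finalize();
        return 1;
    }

    if (my_rank == root) {
        /* One contiguous block; placeholder values instead of reading the file. */
        A = malloc((size_t)m * n * sizeof(double));
        for (i = 0; i < m * n; i++)
            A[i] = (double)i;
    }

    /* Same counts/displacements as in the snippet above. */
    int chunk = m / (p - 1);
    int remainder = m % (p - 1);
    int chunk_size = chunk * n;

    int *send_counts = malloc(p * sizeof(int));
    int *displacements = malloc(p * sizeof(int));
    send_counts[0] = remainder * n;
    displacements[0] = 0;
    for (i = 1; i < p; i++) {
        send_counts[i] = chunk_size;
        displacements[i] = (remainder * n) + ((i - 1) * chunk_size);
    }

    double *local_A = malloc((send_counts[my_rank] > 0 ? send_counts[my_rank] : 1) * sizeof(double));

    /* The send arguments are only read on the root, so A may stay NULL elsewhere. */
    MPI_Scatterv(A, send_counts, displacements, MPI_DOUBLE,
                 local_A, send_counts[my_rank], MPI_DOUBLE,
                 root, MPI_COMM_WORLD);

    printf("rank %d received %d doubles\n", my_rank, send_counts[my_rank]);

    free(local_A); free(send_counts); free(displacements); free(A);
    MPI_Finalize();
    return 0;
}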