Fortran: unexpected output from MPI_Gather


I am not getting the expected output after gathering a matrix (a 4x4 global matrix). I generate the global matrix:

global =  [0           1           2           3
           1           2           3           4
           2           3           4           5
           3           4           5           6 ]
and try to split it into 2x2 submatrices and gather them back in a 2D topology created with mpi_cart_create. I fetch and print the local matrices, and I expect to get the same matrix back in global_recv. But after the gather, the matrix is collected like this:

global_recv = [ 0           2           2           4
                1           3           3           5
                1           3           3           5
                2           4           4           6]
What changes do I have to make; what am I missing? I first tried mpi_gatherv, without success. I would also like to achieve the same thing with mpi_gatherv. The code is below:

PROGRAM MAIN
    implicit none
    include "mpif.h"
    integer, parameter:: nx = 4         ! global number of rows 
    integer, parameter:: ny = 4         ! global number of columns                           
    integer, parameter:: Root = 0
    integer global(nx,ny),global_recv(nx,ny)
    integer,allocatable::loc(:,:)       ! local matrix 
    integer rows,cols                   ! rows and columns in local matrix 
    integer,allocatable::counts(:),displs(:)
    integer myid,numprocs
    integer comm2d,ierr,req
    integer sx, ex, sy, ey
    integer dims(2),coord(2)
    logical periods(2)
    integer status(MPI_STATUS_SIZE)
    data periods/2*.false./
    integer i,j
! Initialize mpi
    CALL MPI_INIT( ierr )
    CALL MPI_COMM_RANK(MPI_COMM_WORLD,myid,ierr)
    CALL MPI_COMM_SIZE(MPI_COMM_WORLD,numprocs,ierr)
! Get a new communicator for a decomposition of the domain.  
! Let MPI find a "good" decomposition
    dims(1) = 0
    dims(2) = 0
    CALL MPI_DIMS_CREATE(numprocs,2,dims,ierr)
    CALL MPI_CART_CREATE(MPI_COMM_WORLD,2,dims,periods,.true.,   &
                         comm2d,ierr)
! Get my position in this communicator
    CALL MPI_COMM_RANK(comm2d,myid,ierr)
    if (myid==Root) then
        print *,'dimensions:',dims(1),dims(2)
    endif
! Compute the decomposition
    CALL fnd2ddecomp(comm2d,nx,ny,sx,ex,sy,ey)
    rows = ex-sx+1 
    cols = ey-sy+1  
! Initialize the global matrix
   do  i=1,nx
    do j=1,ny
       global(i,j) = (i-1)+(j-1)
     enddo
   enddo
! print global matrix 
   if (myid.EQ.Root) then
   print *, 'Global matrix :'
      do i=1,nx
         print *, global(i,:)
      enddo
    endif  
! get local matrix 
   allocate(loc(rows,cols))     
   loc = global(sx:ex,sy:ey)
   print *, myid,loc
! Build counts and displs for mpi_gatherv
   allocate(counts(numprocs),displs(numprocs))
   !dx = rows + (cols-1)*size(M,1);
   displs(1) = 0
   do j=1,numprocs
     counts(j) = rows*cols
    if((j-1).ne.0) displs(j) = displs(j-1) + counts(j-1)
  enddo

! Receive the results using MPI_Gather
!   CALL MPI_GATHERV(loc,cols,MPI_INT,     &   
!                   b,counts,displs,MPI_INT,root,   &
!                   MPI_COMM_WORLD,ierr)         

    CALL MPI_GATHER(loc,rows*cols,MPI_INT, &  
                     global_recv,rows*cols,MPI_INT,   &  
                     Root, comm2d, ierr)
!      print the results
    if (myid.EQ.Root) then
    print *, 'Global received matrix:'
      do i=1,nx
         print *, global_recv(i,:)
      enddo
    endif
!      Cleanup goes here.
      CALL MPI_COMM_FREE( comm2d, ierr )
      CALL MPI_FINALIZE(ierr)

      STOP
      END
!******************************************************* 
      subroutine fnd2ddecomp(comm2d,nx,ny,sx,ex,sy,ey)
      integer   comm2d
      integer   nx,ny,sx,ex,sy,ey
      integer   dims(2),coords(2),ierr
      logical   periods(2)
! Get (i,j) position of a processor from Cartesian topology.
      CALL MPI_Cart_get(comm2d,2,dims,periods,coords,ierr)
! Decomposition in first (ie. X) direction
      CALL MPE_DECOMP1D(nx,dims(1),coords(1),sx,ex)
! Decomposition in second (ie. Y) direction
      CALL MPE_DECOMP1D(ny,dims(2),coords(2),sy,ey)
      return
      end
!*********************************************************************
      SUBROUTINE MPE_DECOMP1D(n,numprocs,myid,s,e)
      integer n,numprocs,myid,s,e,nlocal,deficit
      nlocal  = n / numprocs
      s       = myid * nlocal + 1
      deficit = mod(n,numprocs)
      s       = s + min(myid,deficit)
! Give one more slice to processors
      if (myid .lt. deficit) then
          nlocal = nlocal + 1
      endif
      e = s + nlocal - 1
      if (e .gt. n .or. myid .eq. numprocs-1) e = n
      return
      end

Note that the columns of the received matrix correspond to the local matrices of each process. Indeed, MPI_Gather() collects the chunks from the different processes into one big 1D array. The same goes for MPI_Gatherv(), except that the chunks may have different lengths. Using a dedicated datatype could be interesting, for instance one produced by MPI_Type_create_subarray().
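A minimal, self-contained sketch of that suggestion, not the question's code: it assumes the program is launched with exactly 4 ranks and uses a fixed, illustrative rank-to-block mapping (rank 0 top-left, 1 bottom-left, 2 top-right, 3 bottom-right). It combines MPI_Type_create_subarray() with MPI_Type_create_resized() so that MPI_Gatherv() can drop each 2x2 block at the right position in the 4x4 receive buffer; the program name and the variables blocktype/resizedtype are illustrative.

PROGRAM GATHER_BLOCKS
    implicit none
    include "mpif.h"
    integer, parameter:: nx = 4, ny = 4        ! global size
    integer, parameter:: rows = 2, cols = 2    ! size of one local block
    integer, parameter:: Root = 0
    integer global_recv(nx,ny), loc(rows,cols)
    integer myid, numprocs, ierr, i, isize
    integer blocktype, resizedtype
    integer (kind=MPI_ADDRESS_KIND) lb, extent
    integer counts(4), displs(4)                ! assumes exactly 4 ranks

    CALL MPI_INIT(ierr)
    CALL MPI_COMM_RANK(MPI_COMM_WORLD, myid, ierr)
    CALL MPI_COMM_SIZE(MPI_COMM_WORLD, numprocs, ierr)
! Each rank fills its 2x2 block with its own id, just to see where it lands
    loc = myid
! Describe one 2x2 block inside the 4x4 global array (column-major)
    CALL MPI_TYPE_CREATE_SUBARRAY(2, (/nx,ny/), (/rows,cols/), (/0,0/),   &
             MPI_ORDER_FORTRAN, MPI_INTEGER, blocktype, ierr)
! Shrink the extent to one integer so displacements can address the
! top-left corner of any block
    CALL MPI_TYPE_SIZE(MPI_INTEGER, isize, ierr)
    lb = 0
    extent = isize
    CALL MPI_TYPE_CREATE_RESIZED(blocktype, lb, extent, resizedtype, ierr)
    CALL MPI_TYPE_COMMIT(resizedtype, ierr)
! One block per rank; displacements are the column-major linear index of
! each block's top-left corner: rank 0 -> (1,1), 1 -> (3,1), 2 -> (1,3), 3 -> (3,3)
    counts = 1
    displs = (/ 0, rows, nx*cols, nx*cols+rows /)

    CALL MPI_GATHERV(loc, rows*cols, MPI_INTEGER,                         &
                     global_recv, counts, displs, resizedtype,            &
                     Root, MPI_COMM_WORLD, ierr)
    if (myid.EQ.Root) then
       do i=1,nx
          print *, global_recv(i,:)
       enddo
    endif
    CALL MPI_TYPE_FREE(resizedtype, ierr)
    CALL MPI_FINALIZE(ierr)
END PROGRAM GATHER_BLOCKS

Note that the sketch uses MPI_INTEGER (the Fortran integer datatype) in the MPI calls, and that the receive side counts one resized block per rank while the send side stays a plain contiguous buffer of rows*cols integers.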
@francis I already went through that link. Is there no way around MPI_Type_create_subarray()? The code written from that link works for 2x2 but not for 3x2.

You can do something without MPI_Type_create_subarray(), but you will most likely end up doing something very similar. MPI_Type_create_subarray() is designed for communicating subarrays: to me it looks like a help, not an obstacle! If the processes handle chunks of different sizes (e.g. a 3x2 process grid on a 4x4 array), it is better to call MPI_Type_create_subarray() for each rank and to use MPI_Send()/MPI_Irecv() for each rank, then call MPI_Waitall() on the root process. If I find the time, I will give it a try!

I tried mpi_isend and recv() earlier. I ran into a problem with that and switched to gather and scatter. I will try it and get back to you.
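A rough sketch of that point-to-point variant, written as a replacement for the MPI_GATHER call in the question's program: it reuses comm2d, dims, nx, ny, loc, rows, cols, global_recv, Root and the MPE_DECOMP1D routine from above, so each rank can have a different block size. The extra variables (blocktype, reqs, stats, p, coords2, psx/pex/psy/pey) are illustrative additions that would need to be declared as indicated.

! Extra declarations needed in the main program (names are illustrative):
!   integer blocktype, p, psx, pex, psy, pey, coords2(2)
!   integer, allocatable :: reqs(:), stats(:,:)

    if (myid.EQ.Root) then
       allocate(reqs(numprocs), stats(MPI_STATUS_SIZE,numprocs))
! Root posts one non-blocking receive per rank; each receive uses a
! subarray type that places that rank's block directly into global_recv
       do p = 0, numprocs-1
          CALL MPI_CART_COORDS(comm2d, p, 2, coords2, ierr)
          CALL MPE_DECOMP1D(nx, dims(1), coords2(1), psx, pex)
          CALL MPE_DECOMP1D(ny, dims(2), coords2(2), psy, pey)
          CALL MPI_TYPE_CREATE_SUBARRAY(2, (/nx,ny/),                     &
                   (/pex-psx+1,pey-psy+1/), (/psx-1,psy-1/),              &
                   MPI_ORDER_FORTRAN, MPI_INTEGER, blocktype, ierr)
          CALL MPI_TYPE_COMMIT(blocktype, ierr)
          CALL MPI_IRECV(global_recv, 1, blocktype, p, 0, comm2d,         &
                         reqs(p+1), ierr)
          CALL MPI_TYPE_FREE(blocktype, ierr)
       enddo
    endif
! Every rank (root included) sends its contiguous local block; the matching
! receive is already posted, so the send to self can complete
    CALL MPI_SEND(loc, rows*cols, MPI_INTEGER, Root, 0, comm2d, ierr)
    if (myid.EQ.Root) CALL MPI_WAITALL(numprocs, reqs, stats, ierr)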