
MPI address not mapped in C


I am running into a problem with MPI_Recv when using malloc. Is it advisable to receive a two-dimensional array that was created with malloc?

Thanks

#include <stdio.h>
#include <stdlib.h>
#include <time.h>
#include <mpi.h>

#define SIZE 2000
/* Tags define the message direction (from_to) */
#define TO_SLAVE_TAG 1
#define TO_MASTER_TAG 5

void createMatrices(int mSize);
/* Matrices */
int** first;
/* MPI_WORLD rank and size */
int rank, size;

MPI_Status status;
/*
 * matrixSize: current matrix size
 * lower_bound: lower bound of the number of rows of [first matrix] allocated to a slave
 * upper_bound: upper bound of the number of rows of [first matrix] allocated to a slave
 * portion: number of the rows of [first matrix] allocated to a slave according to the number of processors
 * count: number of elements that will be passed in the MPI calls
 */
int matrixSize, lower_bound, upper_bound, portion, count;
int sum = 0;
clock_t t, start_time, end_time;

int main( int argc, char **argv ) {

  /* Initialize the MPI execution environment */
  MPI_Init( &argc, &argv );
  /* Determines the size of the group */
  MPI_Comm_size( MPI_COMM_WORLD, &size );
  /* Determines the rank of the calling process */
  MPI_Comm_rank( MPI_COMM_WORLD, &rank );

  if (rank == 0)
    {
      for (matrixSize = 500; matrixSize <= SIZE; matrixSize += 500) {
        createMatrices(matrixSize);
        /*
         * Master processor divides [first matrix] elements
         * and sends them to the proper slave processors.
         * We can start timing at this point.
         */
        start_time = clock();

        /* Define bounds for each processor except the master */
        for (int i = 1; i < size; ++i)
          {
            /* Calculate portion for each slave */
            portion = (matrixSize / (size - 1));
            lower_bound = (i - 1) * portion;
            if (((i + 1) == size) && (matrixSize % (size - 1) != 0)) {
              upper_bound = matrixSize;
            } else {
              upper_bound = lower_bound + portion;
            }
            /* send matrix size to ith slave */
            MPI_Send(&matrixSize, 1, MPI_INT, i, TO_SLAVE_TAG, MPI_COMM_WORLD);
            /* send lower bound to ith slave */
            MPI_Send(&lower_bound, 1, MPI_INT, i, TO_SLAVE_TAG + 1, MPI_COMM_WORLD);
            /* send upper bound to ith slave */
            MPI_Send(&upper_bound, 1, MPI_INT, i, TO_SLAVE_TAG + 2, MPI_COMM_WORLD);
            /* send the allocated rows of [first matrix] to ith slave */
            count = (upper_bound - lower_bound) * matrixSize;
            printf("Count: %d\n", count);
            MPI_Send(&(first[lower_bound][0]), count, MPI_DOUBLE, i, TO_SLAVE_TAG + 3, MPI_COMM_WORLD);
          }
      }
    }
  if (rank > 0)
    {
      //receive matrix size from the master
      MPI_Recv(&matrixSize, 1, MPI_INT, 0, TO_SLAVE_TAG, MPI_COMM_WORLD, &status);
      printf("Matrix size: %d\n", matrixSize);
      //receive low bound from the master
      MPI_Recv(&lower_bound, 1, MPI_INT, 0, TO_SLAVE_TAG + 1, MPI_COMM_WORLD, &status);
      printf("Lower bound: %d\n", lower_bound);
      //next receive upper bound from the master
      MPI_Recv(&upper_bound, 1, MPI_INT, 0, TO_SLAVE_TAG + 2, MPI_COMM_WORLD, &status);
      printf("Upper bound: %d\n", upper_bound);
      //finally receive row portion of [A] to be processed from the master
      count = (upper_bound - lower_bound) * matrixSize;
      printf("Count: %d\n", count);

      MPI_Recv(&first[lower_bound][0], count, MPI_INT, 0, TO_SLAVE_TAG + 3, MPI_COMM_WORLD, &status);
      printf("first[0][0]: %d\n", first[0][0]);
    }
  MPI_Finalize();
  return 0;
}

void createMatrices(int mSize) {
  /* allocate the array of row pointers */
  first = malloc(mSize * sizeof(int*));
  /* allocate each row separately */
  for (int i = 0; i < mSize; ++i)
    first[i] = malloc(mSize * sizeof(int));

  srand(time(NULL));
  for (int i = 0; i < mSize; ++i)
    for (int j = 0; j < mSize; ++j)
      first[i][j] = rand()%2;
}
To avoid the (potentially high) latency cost of sending each row separately, you need to create the matrix in linear memory. This is done by allocating enough memory for the whole matrix and then setting up pointers to each row. Here is the modified function:

void createMatrices(int mSize) {
  /* allocate enough linear memory to store the whole matrix */
  int *raw_data = malloc(mSize * mSize * sizeof(int));

  /* matrix row pointers, i.e. they point to each consecutive row */
  first = malloc(mSize * sizeof(int*));

  /* set the row pointers to the appropriate addresses */
  for (int i = 0; i < mSize; ++i)
    first[i] = raw_data + mSize*i;

  /* initialize with random values */
  srand(time(NULL));
  for (int i = 0; i < mSize; ++i)
    for (int j = 0; j < mSize; ++j)
      first[i][j] = rand()%2;
}
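With this layout, first[lower_bound] points at (upper_bound - lower_bound) * matrixSize consecutive ints, so a single MPI_Send of count elements is valid. With the original per-row malloc, every row is a separate allocation, so a send of count elements starting at first[lower_bound][0] runs past the end of that row into memory that may not even be mapped, which matches the "address not mapped" failure in the title.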
Another major problem you are facing is correct memory handling. You should free the previous matrix before allocating a new one on the root rank.
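With the linear layout above, releasing the matrix takes two calls: free the single data block (reachable through first[0]) and then the array of row pointers. A minimal sketch of such a helper follows; the name freeMatrices is mine and not part of the original code:

void freeMatrices(void) {
  /* only valid for the linear-memory version of createMatrices:
     first[0] is the start of the single block holding all elements */
  free(first[0]);
  /* then release the array of row pointers itself */
  free(first);
  first = NULL;
}

The root rank would call this at the end of each matrixSize iteration, before the next createMatrices(matrixSize).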


You also need to allocate memory for the matrix on the slave ranks before trying to receive the data into it. That allocation also needs to be linear memory, just like in the function above.
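A minimal sketch of what that could look like on a slave rank, assuming the bounds have already been received; the names recv_data and rows are illustrative, not taken from the original code. Note also that the matrix elements are ints, so both the send and the receive should use MPI_INT (the posted code sends MPI_DOUBLE but receives MPI_INT):

  /* slave side, after matrixSize, lower_bound and upper_bound have arrived */
  count = (upper_bound - lower_bound) * matrixSize;

  /* allocate one contiguous block for the received rows, plus row pointers */
  int *recv_data = malloc(count * sizeof(int));
  int **rows = malloc((upper_bound - lower_bound) * sizeof(int*));
  for (int i = 0; i < upper_bound - lower_bound; ++i)
    rows[i] = recv_data + i * matrixSize;

  /* the receive now has valid memory to land in */
  MPI_Recv(recv_data, count, MPI_INT, 0, TO_SLAVE_TAG + 3, MPI_COMM_WORLD, &status);
  printf("rows[0][0]: %d\n", rows[0][0]);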

malloc() is not giving you contiguous memory for the matrix here. When you pass 2-D matrices around, make sure they are contiguous in memory; I always map a 1-D linear array onto the 2-D matrix. The first step is to check that. I haven't read the code closely, but your createMatrices(int mSize) function does not look right.

Thanks Christian, I will try it tomorrow and report back. May I also ask whether an MPI datatype can be used for dynamically created arrays?

Sorry, I did not answer that question.

Sarofen, I declared raw_data as int *raw_data and used it as you said. I still get: *** Process received signal *** Signal: Segmentation fault: 11 (11), Signal code: Address not mapped (1), Failing at address: 0x0, [0] libsystem_platform.dylib 0x00007fff8832df1a _sigtramp+26, [1] libsystem_c.dylib 0x00007fff71ebc070 _stack_chk_guard+0, [2] libdyld.dylib 0x00007fff8f5b85c9 start+1. What I am actually trying to do is multiply two dynamically created matrices with MPI. I create the first, second and result matrices with malloc(). I broadcast the second matrix to all processors, split the first matrix, and try to pass its pieces to the processors with MPI_Send. The problem occurs at the MPI_Recv step, which is the segmentation fault described above.
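For reference, the 1-D mapping Christian mentions above could look roughly like this; dest stands for the receiving rank and the names are illustrative, not taken from the original code. Element (i, j) lives at index i*mSize + j, so the storage is contiguous by construction and a block of rows can be sent with one call:

/* one flat array for an mSize x mSize matrix */
int *flat = malloc(mSize * mSize * sizeof(int));

/* element (i, j) is flat[i * mSize + j] */
for (int i = 0; i < mSize; ++i)
  for (int j = 0; j < mSize; ++j)
    flat[i * mSize + j] = rand() % 2;

/* rows [lower_bound, upper_bound) form one contiguous block */
MPI_Send(flat + lower_bound * mSize,
         (upper_bound - lower_bound) * mSize,
         MPI_INT, dest, TO_SLAVE_TAG + 3, MPI_COMM_WORLD);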