
How do I find the sum of given numbers using MPI in C?


I am trying to find the sum of all the given numbers in an array. I have to split the array into equal-sized chunks, send one chunk to each process, and compute the partial sums; later, each process sends its result back to the root process, which produces the final answer. I know I can use MPI_Scatter for this. But my problem is what to do when the list does not divide evenly. For example, say I have an array of 13 elements and 3 processes. By default, MPI_Scatter divides the array among the 3 processes and leaves the last element out, so it essentially only computes the sum of 12 elements (with the array holding the values 0..12, that gives 66 instead of the full sum of 78). Output when using only MPI_Scatter:

myid = 0 total = 6
myid = 1 total = 22
myid = 2 total = 38
results from all processors_= 66 
size= 13 
So I plan to use MPI_Scatter together with MPI_Send: I can take the last element, send it with MPI_Send, compute it on the receiving side, and account for it in the root process. But I am running into problems. My code:

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <mpi.h>

/*  globals */
int numnodes, myid, mpi_err;
int last_core;
int n;
int last_elements[];

#define mpi_root 0
/* end globals  */

void init_it(int  *argc, char ***argv);

void init_it(int  *argc, char ***argv) {
    mpi_err = MPI_Init(argc, argv);
    mpi_err = MPI_Comm_size( MPI_COMM_WORLD, &numnodes );
    mpi_err = MPI_Comm_rank(MPI_COMM_WORLD, &myid);
}

int main(int argc, char *argv[]) {
    int *myray, *send_ray, *back_ray;
    int count;
    int size, mysize, i, k, j, total;

    MPI_Status status;

    init_it(&argc, &argv);

    /* each processor will get count elements from the root */
    count = 4;
    myray = (int*)malloc(count * sizeof(int));
    size = (count * numnodes) + 1;
    send_ray = (int*)malloc(size * sizeof(int));
    back_ray = (int*)malloc(numnodes * sizeof(int));
    last_core = numnodes - 1;

    /* create the data to be sent on the root */
    if(myid == mpi_root){
        for(i = 0; i < size; i++)
        {
            send_ray[i] = i;
        }
    }

    /* send different data to each processor */
    mpi_err = MPI_Scatter( send_ray, count, MPI_INT,
                           myray, count, MPI_INT,
                           mpi_root, MPI_COMM_WORLD);

    if(myid == mpi_root) {
        n = 1;
        memcpy(last_elements, &send_ray[size-n], n * sizeof(int));

        //Send the last numbers to the last core through send command
        MPI_Send(last_elements, n, MPI_INT, last_core, 99, MPI_COMM_WORLD);
    }

    /* each processor does a local sum */
    total = 0;
    for(i = 0; i < count; i++)
        total = total + myray[i];
        //total = total + send_ray[size-1];
    printf("myid= %d total= %d\n", myid, total);

    if(myid == last_core)
    {
        printf("Last core\n");
        MPI_Recv(last_elements, n, MPI_INT, 0, 99, MPI_COMM_WORLD, &status);
    }

    /* send the local sums back to the root */
    mpi_err = MPI_Gather(&total, 1, MPI_INT,
                        back_ray, 1, MPI_INT,
                        mpi_root, MPI_COMM_WORLD);

    /* the root prints the global sum */
    if(myid == mpi_root){
        total=0;
        for(i = 0; i < numnodes; i++)
            total = total + back_ray[i];
        printf("results from all processors_= %d \n", total);
        printf("size= %d \n ", size);
    }

    mpi_err = MPI_Finalize();
}

I know I am doing something wrong. I would appreciate it if you could tell me what it is.

Your last_elements array has no size specified. MPI_Recv fails because there is no space to put the items being sent. Your code is missing a malloc for the last elements.
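As a minimal sketch of that fix (my reading of the answer above; it assumes the leftover is always a single element, that the global int last_elements[]; declaration is dropped, and that the job runs with more than one process so the root never blocks sending to itself):

int n = 1;                                        /* number of leftover elements */
int *last_elements = malloc(n * sizeof(int));     /* the missing allocation, on every rank */

if (myid == mpi_root) {
    memcpy(last_elements, &send_ray[size - n], n * sizeof(int));
    MPI_Send(last_elements, n, MPI_INT, last_core, 99, MPI_COMM_WORLD);
}

total = 0;
for (i = 0; i < count; i++)
    total = total + myray[i];

if (myid == last_core) {
    MPI_Recv(last_elements, n, MPI_INT, mpi_root, 99, MPI_COMM_WORLD, &status);
    for (i = 0; i < n; i++)
        total = total + last_elements[i];         /* fold the leftover into this rank's sum */
}

/* ...then MPI_Gather the totals exactly as before. */
free(last_elements);

The important change besides the allocation is that MPI_Recv now happens before the gather, so the leftover element is included in the gathered total.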

I may be answering very late, but perhaps it will help someone else.

Please check the following code:

# include <cstdlib>
# include <iostream>
# include <iomanip>
# include <ctime>
# include <mpi.h>

using namespace std;

int main ( int argc, char *argv[] );
void timestamp ( );

//****************************************************************************80

int main ( int argc, char *argv[] )

//****************************************************************************80

{
  int *a;
  int dest;
  float factor;
  int global;
  int i;
  int id;
  int ierr;
  int n;
  int npart;
  int p;
  int source;
  int start;
  MPI_Status status;
  int tag;
  int tag_target = 1;
  int tag_size = 2;
  int tag_data = 3;
  int tag_found = 4;
  int tag_done = 5;
  int target;
  int workers_done;
  int x;
//
//  Initialize MPI.
//
  ierr = MPI_Init ( &argc, &argv );
//
//  Get this process's rank.
//
  ierr = MPI_Comm_rank ( MPI_COMM_WORLD, &id );
//
//  Find out how many processes are available.
//
  ierr = MPI_Comm_size ( MPI_COMM_WORLD, &p );

  if ( id == 0 )
  {
    timestamp ( );
    cout << "\n";
    cout << "SEARCH - Master process:\n";
    cout << "  C++ version\n";
    cout << "  An example MPI program to search an array.\n";
    cout << "\n";
    cout << "  Compiled on " << __DATE__ << " at " << __TIME__ << ".\n";
    cout << "\n";
    cout << "  The number of processes is " << p << "\n";
  }

  cout << "\n";
  cout << "Process " << id << " is active.\n";
//
//  Have the master process generate the target and data.  In a more 
//  realistic application, the data might be in a file which the master 
//  process would read.  Here, the master process decides.
//
  if ( id == 0 )
  {
//
//  Pick the number of data items per process, and set the total.
//
    factor = ( float ) rand ( ) / ( float ) RAND_MAX;
    npart = 50 + ( int ) ( factor * 100.0E+00 );
    n = npart * p;

    cout << "\n";
    cout << "SEARCH - Master process:\n";
    cout << "  The number of data items per process is " << npart << "\n";
    cout << "  The total number of data items is       " << n << ".\n";
//
//  Now allocate the master copy of A, fill it with values, and pick 
//  a value for the target.
//
    a = new int[n];

    factor = ( float ) n / 10.0E+00 ;

    for ( i = 0; i < n; i++ ) 
    {
      a[i] = ( int ) ( factor * ( float ) rand ( ) / ( float ) RAND_MAX );
    }
    target = a[n/2];

    cout << "  The target value is " << target << ".\n";
//
//  The worker processes need to have the target value, the number of data items,
//  and their individual chunk of the data vector.
//
    for ( i = 1; i <= p-1; i++ )
    {
      dest = i;
      tag = tag_target;

      ierr = MPI_Send ( &target, 1, MPI_INT, dest, tag, MPI_COMM_WORLD );

      tag = tag_size;

      ierr = MPI_Send ( &npart, 1, MPI_INT, dest, tag, MPI_COMM_WORLD );

      start = ( i - 1 ) * npart;
      tag = tag_data;

      ierr = MPI_Send ( a+start, npart, MPI_INT, dest, tag,
                        MPI_COMM_WORLD );
    }
//
//  Now the master process simply waits for each worker process to report that 
//  it is done.
//
    workers_done = 0;

    while ( workers_done < p-1 )
    {
      ierr = MPI_Recv ( &x, 1, MPI_INT, MPI_ANY_SOURCE, MPI_ANY_TAG,
                        MPI_COMM_WORLD, &status );

      source = status.MPI_SOURCE;
      tag = status.MPI_TAG;

      if ( tag == tag_done )
      {
        workers_done = workers_done + 1;
      }
      else if ( tag == tag_found )
      {
        cout << "P" << source << "  " << x << "  " << a[x] << "\n";
      }
      else
      {
        cout << "  Master process received message with unknown tag = "
             << tag << ".\n";
      }

    }
//
//  The master process can throw away A now.
//
    delete [] a;
  }
//
//  Each worker process expects to receive the target value, the number of data
//  items, and the data vector.
//
  else 
  {
    source = 0;
    tag = tag_target;

    ierr = MPI_Recv ( &target, 1, MPI_INT, source, tag, MPI_COMM_WORLD,
      &status );

    source = 0;
    tag = tag_size;

    ierr = MPI_Recv ( &npart, 1, MPI_INT, source, tag, MPI_COMM_WORLD, 
      &status );

    a = new int[npart];

    source = 0;
    tag = tag_data;

    ierr = MPI_Recv ( a, npart, MPI_INT, source, tag, MPI_COMM_WORLD,
      &status );
//
//  The worker simply checks each entry to see if it is equal to the target
//  value.
//
    for ( i = 0; i < npart; i++ )
    {
      if ( a[i] == target )
      {
        global = ( id - 1 ) * npart + i;
        dest = 0;
        tag = tag_found;

        ierr = MPI_Send ( &global, 1, MPI_INT, dest, tag, MPI_COMM_WORLD );
      }
    }
//
//  When the worker is finished with the loop, it sends a dummy data value with
//  the tag "TAG_DONE" indicating that it is done.
//
    dest = 0;
    tag = tag_done;

    ierr = MPI_Send ( &target, 1, MPI_INT, dest, tag, MPI_COMM_WORLD );

    delete [] ( a );
  }
//
//  Terminate MPI.
//
  MPI_Finalize ( );
//
//  Terminate.
//
  if ( id == 0 )
  {
    cout << "\n";
    cout << "SEARCH - Master process:\n";
    cout << "  Normal end of execution.\n";
    cout << "\n";
    timestamp ( );
  } 
  return 0;
}
//****************************************************************************80

void timestamp ( )

//****************************************************************************80

{
# define TIME_SIZE 40

  static char time_buffer[TIME_SIZE];
  const struct std::tm *tm_ptr;
  size_t len;
  std::time_t now;

  now = std::time ( NULL );
  tm_ptr = std::localtime ( &now );

  len = std::strftime ( time_buffer, TIME_SIZE, "%d %B %Y %I:%M:%S %p", tm_ptr );

  std::cout << time_buffer << "\n";

  return;
# undef TIME_SIZE
}
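If you want to try the program above (assuming an MPI toolchain such as Open MPI is installed; the file name search_mpi.cpp here is just an example), compiling with mpic++ search_mpi.cpp -o search and launching with mpirun -np 4 ./search should produce output like the sample run at the end of this page.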

Look into using MPI_Scatterv to distribute the data the way you want, and MPI_Reduce to perform the sum.

@student, making very minor cosmetic updates to old questions and answers does not really improve the quality of SO; that activity just bumps those old questions to the top.
Sample output from the SEARCH program in the answer above, run with 4 processes:
SEARCH - Master process:
A program using MPI, to search an array.
Compiled on jan  14 2018 at 11:21:45.

The number of processes is 4

Process 0 is active.

SEARCH - Master process:
The number of data items per process is 101
The total number of data items is       404.
The target value is 14.
P3  202  14
P2  145  14
P2  178  14
P2  180  14
P3  211  14
P3  240  14
P3  266  14
P3  295  14
P1  12  14
P1  23  14
P1  36  14
P1  71  14

SEARCH - Master process:
  Normal end of execution.

Process 1 is active.

Process 2 is active.

Process 3 is active.