
C++ OpenMP: missing diminishing returns with higher thread counts


My code currently has a loop that calls a Monte Carlo function to compute a simple integral (y = x, from 0 to 1) for a series of sample counts, writing the total time and the integration result to a text file. The loop then increases the thread count and continues. Right now the time peaks at roughly 2.6 seconds at around 8 threads. The loop iterates up to 64 threads, and I see no slowdown of more than about 0.2 seconds, and sometimes even a speedup.

The for loop that calls the Monte Carlo method and increments the thread count:

//this loop will iterate the main loop for thread counts from 1 up to 64
    for (int j = 1; j <= 17; j++)
    {
        //tell user how many threads are running monte-carlo currently
        cout << "Program is running " << number_threads << " thread(s) currently." << endl;

        //reset values for new run
        num_of_samples = 1;
        integration_result = 0;

        //this for loop will run throughout number of circulations running through monte-carlo
        //and entering the data into the text folder
        for (int i = 1; i <= iteration_num; i++)
        {
            //call monte carlo function to perform integration and write values to text
            monteCarlo(num_of_samples, starting_x, end_x, number_threads);

            //increase num of samples for next test round
            num_of_samples = 2 * num_of_samples;
        } //end of second for loop

        //iterate num_threads
        if (number_threads == 1)
            number_threads = 2;
        else if (number_threads >= 32)
            number_threads += 8;
        else if (number_threads >= 16)
            number_threads += 4;
        else
            number_threads += 2;
    } //end of for loop
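The loop above relies on a few variables and helpers that are declared elsewhere in the program. For reference, here is a minimal sketch of how they might be set up; only the names are taken from the posted code, while the integrand, the fRand implementation, and the initial values are assumptions:

#include <cstdlib>

//integrand for y = x on [0, 1], as described in the question (assumed definition)
double function(double x)
{
    return x;
}

//uniform random double in [fMin, fMax] built on rand() (assumed implementation)
double fRand(double fMin, double fMax)
{
    double f = (double)rand() / RAND_MAX;
    return fMin + f * (fMax - fMin);
}

//assumed starting values for the driver loop
int number_threads = 1;               //first thread count tested
int iteration_num = 20;               //how many times the sample count is doubled
int num_of_samples = 1;
double starting_x = 0.0, end_x = 1.0;
double integration_result = 0;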

After implementing this same kind of parallelization on a simple Monte Carlo walk with light scattering, I was able to see the diminishing returns quite clearly. I believe the diminishing returns are missing here because the integration calculation is so simple that the threads have very little to do individually, and so their overhead is relatively small.
If anyone else has information that would be useful for this question, feel free to post it. Otherwise I will accept this as my answer.
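One way to look at this in isolation (not part of the original post) is to time just the parallel reduction at each thread count with the same trivial integrand. A self-contained sketch; the sample count, the deterministic sampling, and main are assumptions rather than the original code:

#include <cstdio>
#include <omp.h>

//trivial integrand from the question: y = x on [0, 1]
static double function(double x)
{
    return x;
}

int main()
{
    const int num_of_samples = 1 << 20;   //assumed sample count, large enough to time
    omp_set_dynamic(0);

    for (int threads = 1; threads <= 64; threads *= 2)
    {
        omp_set_num_threads(threads);
        double fs = 0.0;
        double start = omp_get_wtime();

        //same shape of work as the monteCarlo function, with deterministic
        //samples so only the threading cost varies between runs
#pragma omp parallel for reduction(+:fs)
        for (int i = 0; i < num_of_samples; ++i)
        {
            double u = (double)i / num_of_samples;
            fs += function(u);
        }

        double elapsed = omp_get_wtime() - start;
        std::printf("%2d thread(s): %.6f s, estimate = %.6f\n",
                    threads, elapsed, fs / num_of_samples);
    }
    return 0;
}

With work this light, the absolute region times should stay in the millisecond range, so even heavy oversubscription should change them by far less than the 0.2 second spread mentioned in the question, which is consistent with the answer's explanation.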

Do you have a 64-core system? Maybe OpenMP is ignoring your parameter.

Interestingly enough, I have a 4-core, 8-logical-processor CPU. With that limitation, would it simply ignore the thread requests? Perhaps nthrds should be output. Even after outputting nthrds, I still see the number of threads increasing. My best guess is that, regardless of the thread count and the thread overhead, the problem at hand is simple enough that it produces little to no diminishing returns.
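To check whether the OpenMP runtime silently caps a request for more threads than the machine has, a small check along these lines can be used (a generic sketch, not code from the post): print the number of logical processors and the size of the team that is actually created.

#include <cstdio>
#include <omp.h>

int main()
{
    omp_set_dynamic(0);                   //ask the runtime not to shrink the team
    std::printf("logical processors: %d\n", omp_get_num_procs());

    omp_set_num_threads(64);              //request far more threads than cores

#pragma omp parallel
    {
#pragma omp single
        std::printf("requested 64, team size actually created: %d\n",
                    omp_get_num_threads());
    }
    return 0;
}

With dynamic adjustment disabled, most runtimes will create all 64 threads even on an 8-logical-processor machine, which matches the increasing nthrds values observed above.

For reference, the body of the monteCarlo function as posted: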
int num_threads;
    double x, u, error_difference, fs = 0, integration_result = 0; //fs is a placeholder to hold added values of f(x)
    vector< vector<double>> dataHolder(number_threads, vector<double>(1)); //this vector will hold temp values of each thread

    //get start time for parallel block of code
    double start_time = omp_get_wtime();

    omp_set_dynamic(0);     // Explicitly disable dynamic teams
    omp_set_num_threads(number_threads); // Use number_threads threads for all consecutive parallel regions

#pragma omp parallel default(none) private(x, u) shared(std::cout, end_x, starting_x, num_of_samples, fs, number_threads, num_threads, dataHolder)
    {
        int i, id, nthrds;
        double temp = fs;

        //define thread id and num of threads
        id = omp_get_thread_num();
        nthrds = omp_get_num_threads();

        //initialize random seed
        srand(id * time(NULL) * 1000);

        //have thread 0 record the actual number of threads in the team
        if(id == 0)
            num_threads = nthrds;

        //this for loop will calculate a temp value for fs for each thread
        for (int i = id; i < num_of_samples; i = i + nthrds)
        {
            //assign random number under integration from 0 to 1
            u = fRand(0, 1); //random number between 0 and 1
            x = starting_x + (end_x - starting_x) * u;

            //this line of code is from Monte_Carlo Method by Alex Godunov (February 2007)
            //calculate y for the reciprocal value of x and add it to the thread's local fs
            temp += function(x);
        }

        //place temp inside vector dataHolder
        dataHolder[id][0] = temp;

        //no thread will go beyond this barrier until task is complete
#pragma omp barrier

        //one thread will do this task
#pragma omp single
        {
            //add summations to calc fs
            for(i = 0, fs = 0.0; i < num_threads; i ++)
                fs += dataHolder[i][0];
        } //implicit barrier here, wait for all tasks to be done
    }//end of parallel block of code
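The per-thread partial sums, the explicit barrier, and the single-threaded accumulation in the parallel block above can also be expressed with OpenMP's reduction clause. A sketch of that alternative; the function signature, the thread-local random engine, and the final scaling step are assumptions filled in around the posted fragment (the original uses fRand/rand() and does not show how the result is scaled):

#include <random>
#include <omp.h>

double function(double x);   //integrand defined elsewhere in the posted code

//assumed signature based on the call site in the driver loop
double monteCarloReduction(int num_of_samples, double starting_x, double end_x,
                           int number_threads)
{
    double fs = 0.0;

    omp_set_dynamic(0);
    omp_set_num_threads(number_threads);

#pragma omp parallel reduction(+:fs)
    {
        //thread-local generator; avoids sharing rand()'s hidden state across threads
        std::mt19937 gen(omp_get_thread_num() + 1);
        std::uniform_real_distribution<double> dist(0.0, 1.0);

        //the reduction clause gives every thread a private copy of fs and sums
        //the copies at the end of the region, replacing the manual dataHolder /
        //barrier / single pattern
#pragma omp for
        for (int i = 0; i < num_of_samples; ++i)
        {
            double x = starting_x + (end_x - starting_x) * dist(gen);
            fs += function(x);
        }
    }

    //average of f over [starting_x, end_x] times the interval length
    return fs / num_of_samples * (end_x - starting_x);
}

Besides being shorter, this removes the shared num_threads bookkeeping and the per-thread vector, and the thread-local std::mt19937 sidesteps the data race on rand()'s internal state that srand/rand have inside a parallel region.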