
Multithreading: getting Julia SharedArrays to play nicely with Sun Grid Engine


I have been trying to get a Julia program to run correctly in an SGE environment with SharedArrays. I have read several posts on Julia and SGE, but most of them are about MPI. The function bind_pe_procs from a Gist seems to bind the processes to the local environment correctly. A script like this

### define bind_pe_procs() as in Gist
### ...
println("Started julia")
bind_pe_procs()
println("do SharedArrays initialize correctly?")
x = SharedArray(Float64, 3, pids = procs(), init = S -> S[localindexes(S)] = 1.0)
pids = procs(x)
println("number of workers: ", length(procs()))
println("SharedArrays map to ", length(pids), " workers")
produces the following output:

starting qsub script file
Mon Oct 12 15:13:38 PDT 2015
calling mpirun now 
exception on 2: exception on exception on 4: exception on exception on 53: : exception on exception on exception on Started julia
parsing PE_HOSTFILE
[{"name"=>"compute-0-8.local","n"=>"5"}]compute-0-8.local
ASCIIString["compute-0-8.local","compute-0-8.local","compute-0-8.local","compute-0-8.local"]adding machines to current system
done
do SharedArrays initialize correctly?
number of workers: 5
SharedArrays map to 5 workers
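The bind_pe_procs helper from the Gist is not reproduced in the question. A minimal sketch of what such a function might look like, assuming Julia 0.4 syntax and the standard SGE PE_HOSTFILE format of one `host nslots queue range` line per host (the function body below is an illustrative reconstruction, not the actual Gist):

```julia
# Hypothetical sketch of bind_pe_procs() for Julia 0.4: read the SGE
# PE_HOSTFILE and attach one worker per granted slot.
function bind_pe_procs()
    hostfile = get(ENV, "PE_HOSTFILE", "")
    isempty(hostfile) && return       # not running under SGE
    println("parsing PE_HOSTFILE")
    machines = ASCIIString[]
    for line in eachline(open(hostfile))
        # each line looks like: "compute-0-8.local 5 all.q@... <range>"
        fields = split(strip(line))
        host   = ascii(fields[1])
        nslots = parse(Int, fields[2])
        append!(machines, fill(host, nslots))
    end
    # the master process already occupies one slot
    println("adding machines to current system")
    addprocs(machines[2:end])
    println("done")
end
```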
Strangely, this does not seem to work if I need to load the arrays from a file and convert them to SharedArray format with the command `convert(SharedArray, vec(readdlm(filepath)))`. If the script is

println("Started julia")
bind_pe_procs()

### script reads arrays from file and converts to SharedArrays
println("running script...")
my_script()
then the result is garbage:

starting qsub script file
Mon Oct 19 09:18:29 PDT 2015
calling mpirun now Started julia
parsing PE_HOSTFILE
[{"name"=>"compute-0-5.local","n"=>"11"}]compute-0-5.local
ASCIIString["compute-0-5.local","compute-0-5.local","compute-0-5.local","compute-0-5.local","compute-0-5.local","compute-0-5.local","compute-0-5.local","compute-0-5.local","compute-0-5.local","compute-0-5.local"]adding machines to current system
done
running script...
Current number of processes: [1,2,3,4,5,6,7,8,9,10,11]
SharedArray y is seen by [1] processes
### tons of errors here
### important one is "SharedArray cannot be used on a non-participating process"

So the SharedArray does not map correctly to all cores. Does anyone have suggestions or insight into this issue?
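One direction worth checking, as a hedged sketch rather than a confirmed fix: the error "seen by [1] processes" suggests the conversion is pinning the array to whatever pids are visible when it runs, so constructing the SharedArray explicitly with `pids = procs()` and filling it afterwards avoids relying on the conversion's defaults. The name `load_shared` and the file path are illustrative assumptions, in Julia 0.4 syntax:

```julia
# Hedged sketch (Julia 0.4), not a confirmed fix: build the SharedArray
# with an explicit pid set instead of relying on convert()'s defaults.
function load_shared(filepath)
    a = vec(readdlm(filepath))          # ordinary Array on the master
    S = SharedArray(Float64, length(a), pids = procs())
    S[:] = a                            # fill from the master process
    return S
end
```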

A workaround I use at work is simply to force SGE to submit the job to a specific node, and then to restrict the parallel environment to the number of cores I want to use.

Below is an SGE qsub script for a 24-core node of which I only want to use 6 cores:

#!/bin/bash
# lots of available SGE script options, only relevant ones included below

# request processes in parallel environment 
#$ -pe orte 6 

# use this command to dump job on a particular queue/node
#$ -q all.q@compute-0-13

/share/apps/julia-0.4.0/bin/julia -p 5 MY_SCRIPT.jl
Pro: this works well with SharedArrays.
Con: the job will wait in the queue until the node has enough free cores.
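A small variant of the script above (my own suggestion, not part of the original answer) derives the worker count from SGE's NSLOTS variable, so that `-p` always matches the slots granted by `-pe orte N`; the master process takes one slot, hence N - 1 workers:

```shell
#!/bin/bash
#$ -pe orte 6
#$ -q all.q@compute-0-13

# NSLOTS is set by SGE to the number of slots granted; the Julia
# master takes one slot, so start NSLOTS - 1 workers.
NWORKERS=$((NSLOTS - 1))
echo "launching julia with $NWORKERS workers"
/share/apps/julia-0.4.0/bin/julia -p $NWORKERS MY_SCRIPT.jl
```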

Note that processes are not the same as workers: the former may include non-worker processes, in particular because the addprocs call used in sge.jl adds remote machines.

Apologies for the delayed response, @FelipeLema! You point out an important detail. So far, though, I am still stuck at: . Interested readers can also see what I filed in the repo.
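The distinction raised in the comments can be seen directly in a fresh Julia session; a minimal illustration, in Julia 0.4 syntax:

```julia
# procs() always includes the master process (pid 1);
# workers() returns only the worker pids.
addprocs(2)                 # start two local workers
println(procs())            # [1, 2, 3]
println(workers())          # [2, 3]
@assert length(procs()) == length(workers()) + 1
```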