Shell script on Linux

Tags: linux, shell, cron

This is my shell script:

#!/bin/bash

crawlers(){
    nohup scrapy crawl a &
    nohup scrapy crawl b &
    wait $!
    nohup scrapy crawl f &
    nohup scrapy crawl g &
    wait $!
    nohup scrapy crawl h &
    nohup scrapy crawl i &
    wait $!
    nohup scrapy crawl i &
    nohup scrapy crawl j &
    nohup scrapy crawl k &
    wait $!
    nohup scrapy crawl l &
    nohup scrapy crawl m &
}

PATH=$PATH:/usr/local/bin
export PATH

python add_columns.py &
wait $!
crawlers &
wait $!
python final_script.py &
I want to run the add_columns.py script first,

then the crawlers script (all of the scrapy calls inside crawlers are asynchronous),

and finally run final_script.py.

But with the shell script above, final_script.py is executed before

nohup scrapy crawl l &
nohup scrapy crawl m &
have finished, even though I wait on crawlers:

crawlers &
wait $!
Finally: how can I arrange for final_script.py to be called only after all of the jobs inside the crawlers() function have finished?


First off: why bother backgrounding something that you immediately wait for?

Second, inside the crawlers function you only wait on half of the calls; the other half may still be running when the function returns.

Use wait with no arguments to wait for all currently active child processes to exit. This would be a better version:

#!/bin/bash

crawlers(){
    nohup scrapy crawl a &
    nohup scrapy crawl b &
    nohup scrapy crawl f &
    nohup scrapy crawl g &
    nohup scrapy crawl h &
    nohup scrapy crawl i &
    nohup scrapy crawl i &
    nohup scrapy crawl j &
    nohup scrapy crawl k &
    nohup scrapy crawl l &
    nohup scrapy crawl m &

    wait    # no arguments: wait for ALL active child processes to exit
}

PATH=$PATH:/usr/local/bin
export PATH

python add_columns.py

crawlers

python final_script.py
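
As an aside on why the original wait $! calls were not enough: $! expands to the PID of the most recently started background job only, so wait $! blocks on just that one job. A minimal demo of the difference, with sleep standing in for the scrapy calls (illustrative, not from the original post):

#!/bin/bash

sleep 3 &    # first background job, runs for 3 seconds
sleep 1 &    # second background job; $! now holds this job's PID

wait $!      # returns after about 1 second; the 3-second job is still running
jobs         # the first sleep still shows as Running

wait         # no arguments: blocks until ALL remaining children have exited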

Remove the "&" from all of the calls except the "scrapy crawl" ones, drop the calls to "wait", and add for pid in $(jobs -p); do wait $pid || exit $?; done at the end of the crawlers function.

Thanks for the contribution. I need each half of the processes to finish before the next batch starts, so yes, I also need the wait at the end of crawlers. But why does it only work inside the crawlers function? Shouldn't a wait after the crawlers call wait for all of the child processes started with & inside crawlers?
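
For reference, a minimal sketch of the batched layout described above, assuming each group of crawls really must finish before the next group starts. A bare wait between the groups replaces the wait $! calls, and nohup is dropped on the assumption that the script runs from cron rather than a closing terminal:

#!/bin/bash

crawlers(){
    scrapy crawl a &
    scrapy crawl b &
    wait                 # blocks until both a and b have exited

    scrapy crawl f &
    scrapy crawl g &
    wait                 # blocks until both f and g have exited

    # ...each remaining group follows the same pattern, ending in a bare wait...
}

PATH=$PATH:/usr/local/bin
export PATH

python add_columns.py
crawlers
python final_script.py

Note that backgrounding the function (crawlers &) and then running wait $! does not achieve this: the parent shell waits only for the subshell running crawlers, and it cannot wait for the scrapy processes that the subshell itself spawned.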