How to reuse Popen's intermediate results in Python?


The code looks like this:

from subprocess import Popen, PIPE

p1 = Popen("command1", stdout=PIPE)
p2 = Popen("command2", stdin=p1.stdout, stdout=PIPE)
result_a = p2.communicate()[0]

p1_again = Popen("command1", stdout=PIPE)
p3 = Popen("command3", stdin=p1_again.stdout, stdout=PIPE)
result_b = p3.communicate()[0]

with open("test") as tf:
    p1_again_again = Popen("command1", stdout=tf)
    p1_again_again.communicate()
The bad part is that command1 is executed three times, because once I call communicate(), the stdout of that Popen object cannot be used again. I am just wondering whether there is a way to reuse the intermediate result of the PIPE.

Does anyone have ideas on how to make this code better (better performance as well as fewer lines of code)? Thanks!

Here is a working solution. I have provided sample commands for cmd1, cmd2, and cmd3 so that you can run it. It simply takes the output of the first command, uppercases it in one command and lowercases it in another.

Code

from subprocess import Popen, PIPE
from tempfile import TemporaryFile

cmd1 = ['echo', 'Hi']
cmd2 = ['tr', '[:lower:]', '[:upper:]']
cmd3 = ['tr', '[:upper:]', '[:lower:]']

with TemporaryFile() as f:
    p = Popen(cmd1, stdout=f)   # run cmd1 only once, writing into the temp file
    ret_code = p.wait()         # make sure cmd1 has finished
    f.flush()
    f.seek(0)                   # rewind so cmd2 reads the file from the start
    out2 = Popen(cmd2, stdin=f, stdout=PIPE).stdout.read()
    f.seek(0)                   # rewind again for cmd3
    out3 = Popen(cmd3, stdin=f, stdout=PIPE).stdout.read()
    print out2, out3
Output

HI
hi

Some things to note about the solution. The tempfile module is always a good way to go when you need to work with temporary files: once the with statement exits it automatically deletes the temporary file as cleanup, even if an io exception is thrown within the with block. cmd1 is run once, with its output written to the temporary file; wait() is called to make sure all of its execution has finished, and then seek(0) is done each time so that when the read() method is called on f it is back at the beginning of the file. As a reference, this question helped me get the first part of the solution.
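As an aside, on Python 3.5+ the same idea can be expressed a little more compactly with subprocess.run. A minimal sketch of that variant, assuming the same sample commands:

#!/usr/bin/env python3
from subprocess import run, PIPE
from tempfile import TemporaryFile

cmd1 = ['echo', 'Hi']
cmd2 = ['tr', '[:lower:]', '[:upper:]']
cmd3 = ['tr', '[:upper:]', '[:lower:]']

with TemporaryFile() as f:
    run(cmd1, stdout=f, check=True)   # run cmd1 once, into the temp file
    f.seek(0)                         # rewind before each reader
    out2 = run(cmd2, stdin=f, stdout=PIPE, check=True).stdout
    f.seek(0)
    out3 = run(cmd3, stdin=f, stdout=PIPE, check=True).stdout
    print(out2.decode(), out3.decode())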

If you can read all of command1's output into memory, you could then run command2 and command3 one after another:

#!/usr/bin/env python
from subprocess import Popen, PIPE, check_output as qx

cmd1_output = qx(['ls']) # get all output

# run commands in sequence
results = [Popen(cmd, stdin=PIPE, stdout=PIPE).communicate(cmd1_output)[0]
           for cmd in [['cat'], ['tr', 'a-z', 'A-Z']]]
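Here cmd1_output is read into memory only once; results[0] is the output of cat (the bytes unchanged) and results[1] is the uppercased version.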
Or, if command1 generates huge output that cannot fit in memory, you could write it to a temporary file first, as sketched below:
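A minimal sketch of that temp-file variant, using ls, cat and tr as stand-ins for the real commands (the same placeholders as the snippet above):

#!/usr/bin/env python
from subprocess import Popen, PIPE, check_call
from tempfile import TemporaryFile

with TemporaryFile() as f:
    check_call(['ls'], stdout=f)       # run command1 only once, spilling its output to disk
    results = []
    for cmd in [['cat'], ['tr', 'a-z', 'A-Z']]:
        f.seek(0)                      # rewind so each command reads the file from the start
        results.append(Popen(cmd, stdin=f, stdout=PIPE).communicate()[0])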

To process the subprocesses' input/output in parallel instead, you could use threading:
#!/usr/bin/env python3
from contextlib import ExitStack  # pip install contextlib2 (stdlib since 3.3)
from subprocess import Popen, PIPE
from threading import Thread

def tee(fin, *files):
    try:
        for chunk in iter(lambda: fin.read(1 << 10), b''):
            for f in files:  # fan out
                f.write(chunk)
    finally:
        for f in (fin,) + files:
            try:
                f.close()
            except OSError:
                pass

with ExitStack() as stack:
    # run commands asynchronously
    source_proc = Popen(["command1", "arg1"], stdout=PIPE)
    stack.callback(source_proc.wait)
    stack.callback(source_proc.stdout.close)

    processes = []
    for command in [["tr", "a-z", "A-Z"], ["cat"]]:
        processes.append(Popen(command, stdin=PIPE, stdout=PIPE))
        stack.callback(processes[-1].wait)
        stack.callback(processes[-1].stdout.close) # use .terminate()
        stack.callback(processes[-1].stdin.close)  # if it doesn't kill it

    fout = open("test.txt", "wb")
    stack.callback(fout.close)

    # fan out source_proc's output
    Thread(target=tee, args=([source_proc.stdout, fout] +
                             [p.stdin for p in processes])).start()

    # collect results in parallel
    results = [[] for _ in range(len(processes))]
    threads = [Thread(target=r.extend, args=[iter(p.stdout.readline, b'')])
               for p, r in zip(processes, results)]
    for t in threads: t.start()
    for t in threads: t.join() # wait for completion
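Once the with block exits, each entry of results is a list of byte-string lines collected from the corresponding command's stdout. A minimal usage sketch to assemble and print them:

for result in results:
    print(b''.join(result).decode())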
Comments:

You could read p1's output and write it into p2's input stream yourself. Please check:

I think you should do result_p1 = p1.communicate()[0] before using another Popen for p2, and pass result_p1 to p2 as its stdin; that way you will always have p1's stdout in result_p1.

f.flush() in the main process may not affect the file buffers in the child process in any way. You could also call .wait() for cmd2 and cmd3 to avoid zombies.
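A minimal sketch of that communicate()-based suggestion, reusing the placeholder commands from the question:

from subprocess import Popen, PIPE

p1 = Popen("command1", stdout=PIPE)
result_p1 = p1.communicate()[0]          # read command1's output once, into memory

p2 = Popen("command2", stdin=PIPE, stdout=PIPE)
result_a = p2.communicate(result_p1)[0]  # feed the saved bytes to command2

p3 = Popen("command3", stdin=PIPE, stdout=PIPE)
result_b = p3.communicate(result_p1)[0]  # ...and reuse them for command3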