Multiprocessing loops and stop conditions
loops, queue, pipe, python-multiprocessing

I just ported a script from the threading library to the multiprocessing library, and I have run into a problem related to the way memory is shared between processes.

Quick overview: my workers consume a wordlist; when a worker finds a hit, it should broadcast a signal (a global variable, or any other mechanism) ordering the other running processes to terminate.
Here is my worker's main method:
def run(self):
    while not self.queue.empty():
        entry = self.queue.get()
        try:
            payload = jwt.decode(self.token, entry, algorithms=['HS256'])
        # DecodeError subclasses InvalidTokenError, so it must be caught first
        except jwt.DecodeError:
            print(WARNING + "[{}] ".format(self.name) + "DecodingError: " + Style.BRIGHT + entry + RESET)
            continue
        except jwt.InvalidTokenError:
            if self.verbose:
                print(DEBUG + "[{}] ".format(self.name) + "InvalidTokenError: " + Style.BRIGHT + entry + RESET)
            continue
        except Exception as ex:
            print(ERROR + "[{}] ".format(self.name) + "Exception: " + Style.BRIGHT + "{}".format(ex) + RESET)
            continue
        # Save the holy secret into a file in case sys.stdout is not responding
        with open("jwtpot.pot", "a+") as file:
            file.write("{0}:{1}:{2}\n".format(self.token, payload, entry))
        print(RESULT + "[{}] ".format(self.name) + "Secret key saved to location: " + Style.BRIGHT + "{}".format(file.name) + RESET)
        print(RESULT + "[{}] ".format(self.name) + "Secret key: " + Style.BRIGHT + entry + RESET)
        print(RESULT + "[{}] ".format(self.name) + "Payload: " + Style.BRIGHT + "{}".format(payload) + RESET)
        break
        self.queue.task_done()
And here is how I instantiate and start the processes in main:
# Load and segment the wordlist into the queue
print(INFO + "Processing the wordlist..." + RESET)
queue = populate_queue(queue, wordlist, verbose)
print(INFO + "Total retrieved words: " + Style.BRIGHT + "{}".format(queue.qsize()) + RESET)

for i in range(process_count):
    process = Process(queue, token, verbose)
    process.daemon = True
    print(INFO + "Starting {}".format(process.name) + RESET)
    process.start()
    processes.append(process)

print(WARNING + "Pour yourself some coffee, this might take a while..." + RESET)

# Block the parent process until all child processes finish processing the queue
for process in processes:
    process.join()
I would create a (shared) pipe leading from all of the child processes back to the parent. A process that finds the key can then write something to the pipe to indicate what was found. If a process does not find the key and the queue is empty, it simply exits.

The parent just waits until it gets something from the pipe, which happens either when a child writes to it or when all the children have exited. It then kills any children that are still running.

Here is a quick proof-of-concept demo:
from multiprocessing import Process, Pipe
from time import sleep

def process(pid, rpipe, wpipe):
    # The children only write; close the inherited read end right away
    rpipe.close()
    sleep(1 + pid * 0.1)
    if pid == 5:
        print("I found it!")
        wpipe.send((pid, "GOT IT"))
    print("Process %d exiting" % pid)
    wpipe.close()

def one_try(findit):
    processes = []
    rpipe, wpipe = Pipe()
    for i in range(15):
        # Start every worker (optionally skipping the one that would find it)
        if i != 5 or findit:
            p = Process(target=process, args=(i, rpipe, wpipe))
            p.start()
            processes.append((i, p))
    # Close the write end in the parent so we get EOF when all children are gone
    wpipe.close()
    try:
        pid, result = rpipe.recv()
        print("%s was found by %s" % (result, pid))
        print("Will kill other processes")
    except EOFError:
        print("Nobody found it!")
    rpipe.close()
    for i, p in processes:
        p.terminate()
        p.join()

one_try(True)   # Should have one process that finds it
one_try(False)  # Nobody found it
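If the parent should also give up after a deadline instead of blocking in `recv` forever, `Connection.poll(timeout)` can be layered onto the same scheme. This is a small variation of my demo, not something the original code needs; the single-process send here just stands in for a child writing to the pipe:

```python
from multiprocessing import Pipe

rpipe, wpipe = Pipe()
wpipe.send((5, "GOT IT"))  # simulates a child reporting a hit

# poll() returns True as soon as data (or EOF) is available,
# or False once the timeout expires with nothing to read.
if rpipe.poll(timeout=2):
    pid, result = rpipe.recv()
    print("%s was found by %s" % (result, pid))
else:
    print("Timed out waiting for the children")
```

In the full program, the `False` branch is where the parent would terminate the remaining children and report failure.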