Comparing Python's threading module with its multiprocessing module
I am trying to compare whether threading or multiprocessing is faster. In theory, because of the GIL, multiprocessing should be faster than multithreading, since only one thread can run at a time. But I am getting the opposite result: the threaded version takes less time than the multiprocessing one. What am I missing? Please help. Below is the threading code.
import threading
from queue import Queue
import time

print_lock = threading.Lock()

def exampleJob(worker):
    time.sleep(10)
    with print_lock:
        print(threading.current_thread().name, worker)

def threader():
    while True:
        worker = q.get()
        exampleJob(worker)
        q.task_done()

q = Queue()

for x in range(4):
    t = threading.Thread(target=threader)
    print(x)
    t.daemon = True
    t.start()

start = time.time()

for worker in range(8):
    q.put(worker)

q.join()

print('Entire job took:', time.time() - start)
Below is the multiprocessing code.
import multiprocessing as mp
import time

def exampleJob(print_lock, worker):  # function simulating some computation
    time.sleep(10)
    with print_lock:
        print(mp.current_process().name, worker)

def processor(print_lock, q):  # worker loop where each process picks up jobs
    while True:
        worker = q.get()
        if worker is None:  # sentinel flag to exit the process
            break
        exampleJob(print_lock, worker)

if __name__ == '__main__':
    print_lock = mp.Lock()
    q = mp.Queue()
    processes = [mp.Process(target=processor, args=(print_lock, q)) for _ in range(4)]

    for process in processes:
        process.start()

    start = time.time()

    for worker in range(8):
        q.put(worker)

    for process in processes:
        q.put(None)  # quit indicator

    for process in processes:
        process.join()

    print('Entire job took:', time.time() - start)
This is not a proper test. time.sleep does not hold the GIL, so your threads genuinely run concurrently, just like the processes do. Threading comes out faster simply because threads do not pay the startup cost that processes do.
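A quick way to see this point: in the minimal sketch below (assuming CPython), four threads each sleep for one second. Because time.sleep releases the GIL, the sleeps overlap, and the whole run finishes in roughly one second rather than four.

```python
import threading
import time

def napper():
    time.sleep(1)  # releases the GIL while sleeping, so sleeps overlap

start = time.time()
threads = [threading.Thread(target=napper) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
elapsed = time.time() - start

print(f"4 overlapping 1-second sleeps took {elapsed:.2f}s")  # roughly 1s, not 4s
```

If the GIL were held during sleep, the four threads would serialize and the run would take about four seconds; since it isn't, the benchmark in the question is really measuring startup overhead, not GIL contention.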
You should perform some actual computation in the threads; then you will see the difference. To add to @zmbq's answer: threads are slower only for compute-intensive tasks, where the GIL becomes the bottleneck. If your operations are I/O-bound (or, like these, just sleeping), threads will definitely be faster because they involve less overhead. See the blog below for a better understanding of this.
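To make that concrete, here is a minimal sketch (using concurrent.futures rather than the question's hand-rolled queues; cpu_bound and timed are illustrative helpers, not part of the original code) that replaces time.sleep with a pure-Python arithmetic loop. With CPU-bound work, the process pool should beat the thread pool, reversing the result seen in the question.

```python
import time
from concurrent.futures import ThreadPoolExecutor, ProcessPoolExecutor

def cpu_bound(n):
    # Pure-Python arithmetic loop: holds the GIL for its entire duration.
    total = 0
    for i in range(n):
        total += i * i
    return total

def timed(executor_cls, jobs, n):
    # Run `jobs` copies of cpu_bound(n) on 4 workers; return (elapsed, results).
    start = time.time()
    with executor_cls(max_workers=4) as ex:
        results = list(ex.map(cpu_bound, [n] * jobs))
    return time.time() - start, results

if __name__ == '__main__':
    N = 500_000
    t_threads, _ = timed(ThreadPoolExecutor, 8, N)
    t_procs, _ = timed(ProcessPoolExecutor, 8, N)
    print(f"threads:   {t_threads:.2f}s")   # GIL serializes the workers
    print(f"processes: {t_procs:.2f}s")     # typically faster once N is large
```

For very small N the process pool can still lose, because process startup and pickling overhead dominate; the crossover point depends on the machine and the start method.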
Hope this helps. Thanks @zmbq, can you suggest what changes I should make? Does this answer your question? No, actually I already knew the theory but was stuck on the implementation; the answer I accepted resolved it.