Linux: why does performance drop when the number of processes exceeds half the number of cores?
Hi everyone, I have the following Python program, which I use to test the performance of a multiprocessing application:
#
# Date : 09/May/2018
# Platform : Linux
#
import os
import sys
import ctypes
import signal
import multiprocessing as mp

ncpu = 4
counter = 0
child_index = 0
process_list = []
shared_array = None

def HandleSignal(signum, frame) :
    total = 0
    print("Parent timeout hence terminate child")
    [hProc.terminate() for hProc in process_list]
    [hProc.join() for hProc in process_list]
    for each_count in shared_array :
        total += each_count
    print("{:,}".format(total))

def ChildHandleSignal(signum, frame) :
    # print("{} - {} : {:,}".format(child_index, os.getpid(), counter))
    shared_array[child_index] = counter
    sys.exit(0)

def entry_point(index, sarr) :
    global counter
    global child_index
    global shared_array
    child_index = index
    shared_array = sarr
    signal.signal(signal.SIGTERM, ChildHandleSignal)
    while True : counter += 1
    return

ncpu = int(sys.argv[1])
maxcpu = os.cpu_count()
if ncpu > maxcpu :
    print("Number of CPU greater than maximum CPU")
    print("Setting number of CPU to maximum")
    ncpu = maxcpu

shared_array = mp.Array(ctypes.c_int64, range(ncpu))
signal.signal(signal.SIGALRM, HandleSignal)
signal.alarm(5)

for I in range(ncpu) :
    p1 = mp.Process(target=entry_point, args=(I, shared_array, ))
    process_list.append(p1)
    p1.start()
    # I tried both with and with-out the below
    # statement. The outputs are much similar
    os.sched_setaffinity(p1.pid, {I})
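Note that `os.cpu_count()` reports the number of *logical* CPUs, which on a hyper-threaded machine is typically twice the number of physical cores, so "half the cores" in the question likely corresponds to one busy process per physical core. A quick sketch for seeing both the logical count and which CPUs the current process is actually allowed to run on (`sched_getaffinity` is Linux-only):

```python
import os

logical_cpus = os.cpu_count()          # logical CPUs, including hyper-thread siblings
allowed = os.sched_getaffinity(0)      # set of CPUs this process may be scheduled on
print("logical CPUs :", logical_cpus)
print("allowed CPUs :", sorted(allowed))
```

On an otherwise unrestricted machine the two numbers agree; inside a container or after `taskset`, the allowed set can be smaller than the logical count.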
I ran this program on two different machines.
As @IlyaBursov said, the "problem" here is hyper-threading. Hyper-threading is not just magic: its real purpose is to let the core execute another process or thread during the latency of another process's memory accesses. In your case, the code is too simple to gain any performance from hyper-threading. It is just a counter incrementing in an infinite loop; all of the code fits in the L1 cache, so there are essentially no cache misses to hide.
However, if you start too many processes, the cost of context switching between them is no longer negligible.

Wild guess: hyper-threading. But hyper-threading is supposed to improve performance, yet here performance drops.
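One way to see the hyper-threading effect directly is to check which logical CPUs are siblings on the same physical core: once more busy processes run than there are physical cores, two of them must share a core. This is a minimal sketch that reads the CPU topology Linux exposes under sysfs (Linux-only; the paths are standard on modern kernels, and the function returns an empty list where the files are absent):

```python
import glob

def hyperthread_sibling_groups():
    """Return distinct sibling groups, e.g. ['0,4', '1,5', ...].

    Each group lists the logical CPUs sharing one physical core;
    groups with more than one CPU indicate hyper-threading.
    """
    groups = set()
    pattern = "/sys/devices/system/cpu/cpu[0-9]*/topology/thread_siblings_list"
    for path in glob.glob(pattern):
        try:
            with open(path) as f:
                groups.add(f.read().strip())
        except OSError:
            pass  # CPU may have gone offline between glob and open
    return sorted(groups)

if __name__ == "__main__":
    print(hyperthread_sibling_groups())
```

With this list you could pin one process per *group* (e.g. `os.sched_setaffinity(pid, {first_cpu_of_group})`) instead of one per logical CPU, which avoids placing two compute-bound loops on sibling threads of the same core.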