python - multiprocessing problem with class fields and methods
In a data-analysis Python project I need to use classes and multiprocessing together, and I haven't found a good example of the combination on Google. My basic idea (which may be wrong) is to create a class holding one large variable (in my case a DataFrame) and then define a method that performs a computation on it (in this case a sum). The code does not work as expected: I get the following output
sum(list(range(0, 10**7))) 49999995000000
49999995000000
n_procs 1 total time: 0.45133500000000026
49999995000000
n_procs 2 total time: 0.8055279999999954
49999995000000
n_procs 3 total time: 1.1330870000000033
That is, the computation time increases instead of decreasing. So what is wrong in this code?
I am also worried about RAM usage, because when the chunks variable is created, the data held in self.__data is effectively duplicated in RAM. Is it possible to avoid this waste of memory when writing multiprocessing code, and in this code in particular? (I promise I will move everything to Spark in the future :)
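For reference, the setup described above looks roughly like this (a hypothetical reconstruction, since the original code is not posted; the class name, column name, chunking and pool usage below are assumptions):

import multiprocessing

import pandas as pd


class DataSummer:
    # hypothetical reconstruction of the setup described in the question
    def __init__(self, df):
        self.__data = df  # one large DataFrame held by the instance

    def func(self, chunk):
        # the computation performed on each chunk (here: a plain sum)
        return chunk["value"].sum()

    def start_multi(self, n_procs):
        # building `chunks`, plus the pickling done by Pool, is where the
        # extra RAM and most of the time go
        chunk_size = len(self.__data) // n_procs + 1
        chunks = [self.__data.iloc[i:i + chunk_size]
                  for i in range(0, len(self.__data), chunk_size)]
        with multiprocessing.Pool(processes=n_procs) as pool:
            return sum(pool.map(self.func, chunks))


if __name__ == "__main__":
    df = pd.DataFrame({"value": range(10**7)})
    # with 10**7 rows this prints 49999995000000, matching the output above
    print(DataSummer(df).start_multi(2))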
There seem to be a few things at play here:

1. Generating the chunks takes roughly 16% of the time. The single-process, non-pool version does not have that overhead.
2. The chunks array holds all of the raw data for the ranges, and all of it has to be pickled and sent to the new processes. It is much cheaper to send only a start and end index instead of the raw data itself.
3. If you put timers inside func, you will find that most of the time is not spent there. That is why you do not see a speedup: most of the time goes into chunking, pickling, forking and other overhead.

The code below chunks only the start and end indices (and uses a prime-counting function as the per-chunk work):

import multiprocessing
import time
from math import sqrt; from itertools import count, islice

# credit to https://stackoverflow.com/a/27946768
def isPrime(n):
    return n > 1 and all(n % i for i in islice(count(2), int(sqrt(n) - 1)))

limit = 6

class C:
    def __init__(self):
        pass

    def func(self, start_end_tuple):
        # count the primes in [start, end)
        start, end = start_end_tuple
        primes = []
        for x in range(start, end):
            if isPrime(x):
                primes.append(x)
        return len(primes)

    def get_chunks(self, total_size, n_procs):
        # start and end value tuples
        chunks = []
        # Example: (10, 5) -> (2, 0) so 2 numbers per process
        # (10, 3) -> (3, 1) or here the first process does 4 and the others do 3
        quotient, remainder = divmod(total_size, n_procs)
        current_start = 0
        for i in range(0, n_procs):
            my_amount = quotient
            if i == 0:
                # somebody needs to do extra
                my_amount += remainder
            chunks.append((current_start, current_start + my_amount))
            current_start += my_amount
        return chunks

    def start_multi(self):
        for n_procs in range(1, 4):
            # time.clock() was removed in Python 3.8; use perf_counter() instead
            time_start = time.perf_counter()
            # chunk the start and end indices instead
            chunks = self.get_chunks(10**limit, n_procs)
            pool = multiprocessing.Pool(processes=n_procs)
            results = pool.map_async(self.func, chunks)
            results.wait()
            results = results.get()
            pool.close()
            pool.join()
            print(sum(results))
            time_delta = time.perf_counter() - time_start
            print("n_procs {} time {}".format(n_procs, time_delta))

if __name__ == "__main__":
    # the guard keeps spawned worker processes from re-running this code on import
    c = C()
    time_start = time.perf_counter()
    print("serial func(...) = {}".format(c.func((1, 10**limit))))
    print("total time {}".format(time.perf_counter() - time_start))
    c.start_multi()
This gives a speedup with multiple processes, assuming you have the cores for it.

As you seem to understand very well, you have to create copies of the data and send them to the different processes, and that can never beat a plain sum done in one process. By the way, is there a specific reason your __data attribute uses a double underscore? That, however, has nothing to do with your use of a class. You should read the section about shared state in the multiprocessing documentation; it is not trivial.
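As a concrete illustration of the memory concern, one option is the multiprocessing.shared_memory module (Python 3.8+), which lets every worker read the same buffer instead of receiving a pickled copy. A minimal sketch, assuming the data fits in a NumPy array and the workers only read it (the array contents, worker function and chunking are illustrative, not the asker's code):

import multiprocessing
from multiprocessing import shared_memory

import numpy as np

def worker(args):
    # re-attach to the shared block by name and sum a slice of it
    shm_name, shape, dtype, start, end = args
    shm = shared_memory.SharedMemory(name=shm_name)
    try:
        data = np.ndarray(shape, dtype=dtype, buffer=shm.buf)
        return int(data[start:end].sum())
    finally:
        shm.close()

if __name__ == "__main__":
    values = np.arange(10**7, dtype=np.int64)

    # copy the data once into a shared block; workers attach to it by name
    shm = shared_memory.SharedMemory(create=True, size=values.nbytes)
    shared = np.ndarray(values.shape, dtype=values.dtype, buffer=shm.buf)
    shared[:] = values[:]

    n_procs = 4
    step = len(values) // n_procs + 1
    chunks = [(shm.name, values.shape, values.dtype, i, i + step)
              for i in range(0, len(values), step)]

    with multiprocessing.Pool(processes=n_procs) as pool:
        print(sum(pool.map(worker, chunks)))  # 49999995000000

    shm.close()
    shm.unlink()  # free the shared block

Each worker attaches to the block by name, so only the small (name, shape, dtype, start, end) tuple is pickled and sent, not the data itself.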