Parallelizing a hand-coded OneVsRest classifier in Python


Can I get some help parallelizing this code? I'm converting a multi-label classification problem into a OneVsRest (binary relevance) problem. Because of the memory issues mentioned above, I'm doing it by hand:

import numpy as np
from sklearn.linear_model import SGDClassifier

clf_label = {}

for i, label in enumerate(label_index.keys()):
    print 'Fitting', i, 'label out of', len(label_index)
    # One binary classifier per label (binary relevance)
    clf = SGDClassifier(loss='hinge', shuffle=True, alpha=0.000001, verbose=0, n_iter=5, n_jobs=4)
    temp_y = np.zeros(trainY.shape)
    temp_y[label_index[label]] = 1  # samples carrying this label become the positive class

    clf.fit(trainX, temp_y)
    clf_label[label] = clf
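For reference, the loop above hand-codes essentially what scikit-learn's built-in OneVsRestClassifier does (the question avoids it for memory reasons). A minimal sketch on invented synthetic data, using Python 3 and a newer scikit-learn where n_iter is spelled max_iter:

```python
import numpy as np
from sklearn.linear_model import SGDClassifier
from sklearn.multiclass import OneVsRestClassifier

rng = np.random.RandomState(0)
X = rng.randn(100, 10)
# Multi-label indicator matrix: each row may carry several labels.
Y = (rng.rand(100, 3) > 0.5).astype(int)

# n_jobs parallelizes the per-label fits; one binary SGDClassifier each.
ovr = OneVsRestClassifier(
    SGDClassifier(loss='hinge', alpha=1e-6, max_iter=5, tol=None),
    n_jobs=2,
)
ovr.fit(X, Y)
print(len(ovr.estimators_))  # one fitted binary classifier per label
```
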
I'm iterating over label_index and building one classifier per label. After fitting each classifier, I save it into another dict, where the key is again the label and the value is the fitted classifier. Because of the long running time, I'd like to parallelize this code. Below is my attempt with multiprocessing's Pool.map:

from functools import partial
from multiprocessing import Pool

def fit_label(label, trainX, trainY, label_index):
    clf = SGDClassifier(loss='hinge', shuffle=True, alpha=0.000001, verbose=0, n_iter=5)
    temp_y = np.zeros(trainY.shape)
    temp_y[label_index[label]] = 1

    clf.fit(trainX, temp_y)
    return clf

def linear_svm():
    p = Pool(2)
    func = partial(fit_label, trainX=trainX, trainY=trainY, label_index=label_index)
    res = p.map(func, label_index.keys()[1:6])
    clf_label = dict(zip(label_index.keys()[1:6], res))
    return clf_label
I got this error:

Exception in thread Thread-3:
Traceback (most recent call last):
  File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/threading.py", line 808, in __bootstrap_inner
    self.run()
  File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/threading.py", line 761, in run
    self.__target(*self.__args, **self.__kwargs)
  File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/multiprocessing/pool.py", line 342, in _handle_tasks
    put(task)
SystemError: NULL result without error in PyObject_Call

For anyone who knows parallel programming in Python this is probably a very simple task, so rather than patching my (unreliable) code, I'd be grateful if someone could just rewrite it in parallel. Thanks.
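Some background not stated in the post: multiprocessing ships each task to the workers by pickling it, and a failed or oversized pickle in Python 2 can surface as exactly the SystemError above. Only module-level functions can be pickled, and a functools.partial of a module-level function is picklable too. A self-contained toy sketch of that pattern (Python 3 syntax, with a trivial sum standing in for the classifier fit):

```python
from functools import partial
from multiprocessing import Pool

def fit_label(label, data):
    """Toy stand-in for the per-label fit. It must live at module
    level so the Pool can pickle a reference to it for the workers."""
    return label, sum(v for v in data if v % 2 == label % 2)

if __name__ == '__main__':
    data = list(range(10))
    # Partials of module-level functions pickle fine.
    func = partial(fit_label, data=data)
    with Pool(2) as p:
        res = p.map(func, [0, 1])
    clf_label = dict(res)
    print(clf_label)  # {0: 20, 1: 25}
```
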

Try defining the function to be parallelized outside of linear_svm(), at module level, like this:

import multiprocessing
from multiprocessing import Pool

# Defined at module level (not inside linear_svm) so the Pool can pickle it.
def func(label):
    return fit_label(label, trainX, trainY, label_index)


def linear_svm():
    numProcessors = multiprocessing.cpu_count()
    p = Pool(processes=numProcessors)
    res = p.map_async(func, label_index.keys()[1:6])
    poolres = res.get()
    clf_label = dict(zip(label_index.keys()[1:6], poolres))
    return clf_label
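Beyond that fix, a commonly used alternative (not from the original answer) is joblib, which scikit-learn uses internally: joblib.Parallel can hand large NumPy arrays to worker processes via memory-mapping instead of pushing everything through a pickled task, sidestepping the failure mode in the question. A sketch on invented synthetic data, in Python 3 with a newer scikit-learn where n_iter is spelled max_iter:

```python
import numpy as np
from joblib import Parallel, delayed
from sklearn.linear_model import SGDClassifier

def fit_one(label, X, y_indicator):
    """Fit one binary classifier for a single label (binary relevance)."""
    clf = SGDClassifier(loss='hinge', alpha=1e-6, max_iter=5, tol=None)
    clf.fit(X, y_indicator)
    return label, clf

def indicator(idx, n):
    """0/1 target vector marking the samples that carry this label."""
    y = np.zeros(n)
    y[idx] = 1
    return y

# Synthetic stand-ins for trainX / label_index from the question.
rng = np.random.RandomState(0)
X = rng.randn(200, 20)
label_index = {lab: rng.choice(200, size=50, replace=False)
               for lab in ['a', 'b', 'c']}

results = Parallel(n_jobs=2)(
    delayed(fit_one)(lab, X, indicator(idx, X.shape[0]))
    for lab, idx in label_index.items()
)
clf_label = dict(results)
print(sorted(clf_label))  # one fitted classifier per label
```
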