Python parallel jobs don't complete with scikit-learn's GridSearchCV


In the script below, I'm finding that the jobs launched by GridSearchCV seem to hang:

import json
import pandas as pd
import numpy as np
import unicodedata
import re
from sklearn.pipeline import Pipeline
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.feature_extraction.text import TfidfTransformer
from sklearn.decomposition import TruncatedSVD
from sklearn.linear_model import SGDClassifier
import sklearn.cross_validation as CV
from sklearn.grid_search import GridSearchCV
from nltk.stem import WordNetLemmatizer

# Seed for randomization. Set to some definite integer for debugging and set to None for production
seed = None


### Text processing functions ###

def normalize(string):#Remove diacritics and whatevs
    return "".join(ch.lower() for ch in unicodedata.normalize('NFD', string) if not unicodedata.combining(ch))

wnl = WordNetLemmatizer()
def tokenize(string):#Ignores special characters and punct
    return [wnl.lemmatize(token) for token in re.compile(r'\w\w+').findall(string)]

def ngrammer(tokens):#Gets all grams in each ingredient
    max_n = 2
    return [":".join(tokens[idx:idx+n]) for n in np.arange(1,1 + min(max_n,len(tokens))) for idx in range(len(tokens) + 1 - n)]

print("Importing training data...")
with open('/Users/josh/dev/kaggle/whats-cooking/data/train.json','rt') as file:
    recipes_train_json = json.load(file)

# Build the grams for the training data
print('\nBuilding n-grams from input data...')
for recipe in recipes_train_json:
    recipe['grams'] = [term for ingredient in recipe['ingredients'] for term in ngrammer(tokenize(normalize(ingredient)))]

# Build vocabulary from training data grams. 
vocabulary = list({gram for recipe in recipes_train_json for gram in recipe['grams']})

# Stuff everything into a dataframe. 
ids_index = pd.Index([recipe['id'] for recipe in recipes_train_json],name='id')
recipes_train = pd.DataFrame([{'cuisine': recipe['cuisine'], 'ingredients': " ".join(recipe['grams'])} for recipe in recipes_train_json],columns=['cuisine','ingredients'], index=ids_index)


# Extract data for fitting
fit_data = recipes_train['ingredients'].values
fit_target = recipes_train['cuisine'].values

# extracting numerical features from the ingredient text
feature_ext = Pipeline([('vect', CountVectorizer(vocabulary=vocabulary)),
                        ('tfidf', TfidfTransformer(use_idf=True)),
                        ('svd', TruncatedSVD(n_components=1000))
])
lsa_fit_data = feature_ext.fit_transform(fit_data)

# Build SGD Classifier
clf =  SGDClassifier(random_state=seed)
# Hyperparameter grid for GridSearchCV. 
parameters = {
    'alpha': np.logspace(-6,-2,5),
}

# Init GridSearchCV with k-fold CV object
cv = CV.KFold(lsa_fit_data.shape[0], n_folds=3, shuffle=True, random_state=seed)
gs_clf = GridSearchCV(
    estimator=clf,
    param_grid=parameters,
    n_jobs=-1,
    cv=cv,
    scoring='accuracy',
    verbose=2    
)
# Fit on training data
print("\nPerforming grid search over hyperparameters...")
gs_clf.fit(lsa_fit_data, fit_target)
The console output is:

Importing training data...

Building n-grams from input data...

Performing grid search over hyperparameters...
Fitting 3 folds for each of 5 candidates, totalling 15 fits
[CV] alpha=1e-06 .....................................................
[CV] alpha=1e-06 .....................................................
[CV] alpha=1e-06 .....................................................
[CV] alpha=1e-05 .....................................................
[CV] alpha=1e-05 .....................................................
[CV] alpha=1e-05 .....................................................
[CV] alpha=0.0001 ....................................................
[CV] alpha=0.0001 .................................................... 
Then it just hangs. If I set n_jobs=1 in GridSearchCV, the script completes as expected, with output:

Importing training data...

Building n-grams from input data...

Performing grid search over hyperparameters...
Fitting 3 folds for each of 5 candidates, totalling 15 fits
[CV] alpha=1e-06 .....................................................
[CV] ............................................ alpha=1e-06 -   6.5s
[Parallel(n_jobs=1)]: Done   1 jobs       | elapsed:    6.6s
[CV] alpha=1e-06 .....................................................
[CV] ............................................ alpha=1e-06 -   6.6s
[CV] alpha=1e-06 .....................................................
[CV] ............................................ alpha=1e-06 -   6.7s
[CV] alpha=1e-05 .....................................................
[CV] ............................................ alpha=1e-05 -   6.7s
[CV] alpha=1e-05 .....................................................
[CV] ............................................ alpha=1e-05 -   6.7s
[CV] alpha=1e-05 .....................................................
[CV] ............................................ alpha=1e-05 -   6.6s
[CV] alpha=0.0001 ....................................................
[CV] ........................................... alpha=0.0001 -   6.6s
[CV] alpha=0.0001 ....................................................
[CV] ........................................... alpha=0.0001 -   6.7s
[CV] alpha=0.0001 ....................................................
[CV] ........................................... alpha=0.0001 -   6.7s
[CV] alpha=0.001 .....................................................
[CV] ............................................ alpha=0.001 -   7.0s
[CV] alpha=0.001 .....................................................
[CV] ............................................ alpha=0.001 -   6.8s
[CV] alpha=0.001 .....................................................
[CV] ............................................ alpha=0.001 -   6.6s
[CV] alpha=0.01 ......................................................
[CV] ............................................. alpha=0.01 -   6.7s
[CV] alpha=0.01 ......................................................
[CV] ............................................. alpha=0.01 -   7.3s
[CV] alpha=0.01 ......................................................
[CV] ............................................. alpha=0.01 -   7.1s
[Parallel(n_jobs=1)]: Done  15 out of  15 | elapsed:  1.7min finished
The single-threaded run finishes quite quickly, so I'm confident I gave the parallel-jobs case more than enough time to do the same computation on its own.

Environment specs: MacBook Pro (15-inch, mid-2010), 2.4 GHz Intel Core i5, 8 GB 1067 MHz DDR3, OS X 10.10.5, Python 3.4.3, IPython 3.2.0, numpy 1.9.3, scipy 0.16.0, scikit-learn 0.16.1 (Python and all packages from the Anaconda distribution).

Some additional observations:

I use n_jobs=-1 with GridSearchCV on this machine all the time, so my platform does support the feature. It normally has 4 jobs going at once, since I have 4 cores on this machine (2 physical, but 4 "virtual cores" due to hyper-threading). But unless I'm misreading the console output, in this case it dispatches 8 jobs without any of them returning (though see the note on pre_dispatch below). Watching CPU usage in Activity Monitor in real time, 4 jobs spin up, do a little work, then finish (or die?), followed by 4 more that spin up, do a little work, and then sit completely idle while the processes stick around.
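A note on those 8 dispatch lines: GridSearchCV pre-dispatches more tasks than there are workers; its pre_dispatch parameter defaults to '2*n_jobs', so seeing 8 [CV] lines at once on 4 cores is expected and not by itself a sign of trouble. A minimal sketch constraining it to one task per worker (same objects as in the question):

gs_clf = GridSearchCV(
    estimator=clf,
    param_grid=parameters,
    n_jobs=-1,
    pre_dispatch='n_jobs',  # dispatch only as many tasks as there are workers
    cv=cv,
    scoring='accuracy',
    verbose=2
)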

At no point do I see any notable memory pressure. The main process tops out around 1 GB of real memory and the child processes around 600 MB. By the time they hang, their real memory usage is negligible.

If I remove the TruncatedSVD step from the feature-extraction pipeline, the script works fine with multiple jobs (see the sketch below). Note, though, that this pipeline does its work before the grid search and is not part of the GridSearchCV jobs.
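For concreteness, a minimal sketch of that working variant, identical to the pipeline in the question minus the 'svd' step (the TF-IDF output then stays a sparse matrix):

feature_ext = Pipeline([('vect', CountVectorizer(vocabulary=vocabulary)),
                        ('tfidf', TfidfTransformer(use_idf=True))
])
fit_features = feature_ext.fit_transform(fit_data)  # sparse; no dense SVD output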

This script is for a Kaggle competition, so if you want to run it on the same data I'm using, you can grab it from there. The data comes as an array of JSON objects (roughly the shape sketched below). Each object represents a recipe and contains a list of text snippets, which are the ingredients. Since each sample is a collection of documents rather than a single document, I had to write some of my own n-gramming and tokenization logic, because I couldn't figure out how to get scikit-learn's built-in transformers to do exactly what I wanted. I doubt any of this matters, but just FYI.
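For reference, each record in train.json has roughly this shape (values are illustrative, written here as a Python dict):

record = {
    'id': 10259,
    'cuisine': 'greek',
    'ingredients': ['romaine lettuce', 'black olives', 'feta cheese crumbles'],
}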


I usually run scripts from the IPython CLI with %run, but running them directly with python (3.4.3) from an OS X bash terminal produces the same behavior.

I believe I ran into a similar problem, and the culprit was a sudden spike in memory usage. The process would try to allocate memory and immediately stall because there wasn't enough available.

If you have access to a machine with much more available memory (say 128-256 GB), it's worth checking with the same or a smaller number of jobs (n_jobs=4).
That's how I solved this problem: I just moved my script to a huge server.

I was able to work around a similar problem by explicitly setting the random seed:

np.random.seed(0)


My problem was caused by running GridSearchCV multiple times, so this may not apply directly to your use case.

The multiprocessing that GridSearchCV uses can be problematic when n_jobs > 1. So instead of multiprocessing, you can try multithreading to see whether it works fine:

from sklearn.externals.joblib import parallel_backend

clf = GridSearchCV(...)
with parallel_backend('threading'):
    clf.fit(x_train, y_train)
I hit the same issue using GridSearchCV with an estimator and n_jobs > 1, and with this approach it worked great across n_jobs values.

PS: I'm not sure whether 'threading' has the same advantages as 'multiprocessing' for all estimators. In theory, 'threading' is not a great option if your estimator is limited by the GIL, but if the estimator is cython/numpy-based it can outperform 'multiprocessing'.

Tested on:

macOS: 10.12.6
Python: 3.6
numpy==1.13.3
pandas==0.21.0
scikit-learn==0.19.1
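Note that on scikit-learn 0.21 and later the sklearn.externals.joblib import above no longer exists; the same pattern uses the standalone joblib package and sklearn.model_selection. A minimal sketch (x_train, y_train, and the grid values are placeholders, assuming the same setup as above):

from joblib import parallel_backend
from sklearn.model_selection import GridSearchCV
from sklearn.linear_model import SGDClassifier

clf = GridSearchCV(estimator=SGDClassifier(), param_grid={'alpha': [1e-4, 1e-3]}, n_jobs=2)
with parallel_backend('threading'):  # thread-based instead of process-based workers
    clf.fit(x_train, y_train)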

Take a look at this.
@olologin Thanks for the link. There's a lot of information there; it's the most recent discussion of scikit-learn's multiprocessing problems I've been able to find, far more than I'd turned up before. I'm still not clear on how best to proceed, though. Is there anything in particular I should be aware of?
Honestly, I don't know, but I'm almost certain it's all because of Mac OS.
I've observed this problem in both Ubuntu 14 and Mac OS X. Did you ever find a solution?
sklearn.externals.joblib is no longer supported after June 2019.
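If it really is an OS X multiprocessing issue, one workaround discussed in that era was switching Python's multiprocessing start method to 'forkserver' (Python 3.4+), so workers are not fork()ed from a parent that has already executed Accelerate/BLAS code. A hedged sketch; whether the installed joblib honors the global start method is an assumption:

import multiprocessing

if __name__ == '__main__':
    # Must run once, before any worker pools exist; 'forkserver' spawns
    # workers from a clean server process instead of forking the parent.
    multiprocessing.set_start_method('forkserver')
    # ... then run the GridSearchCV code from the question ...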