Python sklearn random forest: prediction takes far longer than training


I am trying to train a random forest on 130,000 rows. As the code shows, reaching the print of "44" takes only a few minutes, but the prediction then takes hours, and the script consumes all 16 GB of memory. How can prediction possibly take longer than training, and how can I make it faster?

The training data has 130,000 rows.

import time
import pickle

import pandas as pd
from sklearn.preprocessing import OneHotEncoder

datasets = pd.read_csv('PPM Clean Data.csv', nrows=129330)

X = datasets.iloc[:, [3,4]].values
Y = datasets.iloc[:, 6].values
print(X)

enc = OneHotEncoder(handle_unknown='ignore')
ans = enc.fit_transform(X)

X = ans.toarray()
print('11')
print(time.time())
# Splitting the dataset into the Training set and Test set
from sklearn.model_selection import train_test_split
X_Train, X_Test, Y_Train, Y_Test = train_test_split(X, Y, test_size=0.25, random_state=0)

# Feature Scaling
print('22')
print(time.time())

# Note: this discards the held-out split from train_test_split and
# replaces X_Test with a single hand-built sample.
X_Test = [['SERLAB01', '1023']]
ans2 = enc.transform(X_Test)
X_Test = ans2.toarray()
from sklearn.preprocessing import StandardScaler
sc_X = StandardScaler()
X_Train = sc_X.fit_transform(X_Train)
X_Test = sc_X.transform(X_Test)

print('33')
print(time.time())

# Fitting the classifier into the Training set

from sklearn.ensemble import RandomForestClassifier
classifier = RandomForestClassifier(n_estimators=100, warm_start=True, criterion='entropy', random_state=0)
classifier.fit(X_Train,Y_Train)
print('44')
print(time.time())

pickle.dump(classifier, open("classifier.pkl", "wb"))
pickle.dump(sc_X, open("sc_X.pkl", "wb"))

Y_Pred = classifier.predict(X_Test)
print(Y_Pred)
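For comparison, here is a minimal, self-contained sketch (using synthetic categorical data, not the real 'PPM Clean Data.csv') of the same pipeline that keeps the one-hot output as a scipy sparse matrix instead of calling `.toarray()`. Densifying a high-cardinality one-hot matrix is one common cause of this kind of memory blowup; scikit-learn's tree ensembles accept sparse input directly, and `time.perf_counter()` gives a cleaner per-stage timing than printing `time.time()` markers:

```python
import time

import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.preprocessing import OneHotEncoder

rng = np.random.default_rng(0)
# Synthetic stand-in for the two categorical columns in the question.
X_raw = rng.integers(0, 50, size=(5000, 2)).astype(str)
y = rng.integers(0, 2, size=5000)

enc = OneHotEncoder(handle_unknown='ignore')
X_sparse = enc.fit_transform(X_raw)  # stays a scipy sparse matrix, never densified

clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X_sparse, y)  # tree ensembles accept sparse input for fit and predict

t0 = time.perf_counter()
pred = clf.predict(enc.transform([['3', '7']]))
print(pred.shape, time.perf_counter() - t0)
```

Whether this resolves the hours-long prediction depends on the cardinality of the real columns, but measuring each stage this way will at least show where the time actually goes.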