Different accuracy in Python (scikit-learn) and R (e1071)
For the same dataset (here, Bupa) and the same parameters, I get different accuracies. What am I overlooking? R implementation:
library(e1071)

data_file = "bupa.data"
dataset = read.csv(data_file, header = FALSE)
nobs <- nrow(dataset)                          # 303 observations
train <- sample(nrow(dataset), 0.95 * nobs)    # 227 observations
# validate <- sample(setdiff(seq_len(nrow(dataset)), train), 0.1 * nobs)  # 30 observations
test <- setdiff(seq_len(nrow(dataset)), train) # 76 observations
svmfit <- svm(V7 ~ ., data = dataset[train, ],
              type = "C-classification",
              kernel = "linear",
              cost = 1,
              cross = 10)
testpr <- predict(svmfit, newdata = na.omit(dataset[test, ]))
accuracy <- sum(testpr == na.omit(dataset[test, ])$V7) / length(na.omit(dataset[test, ])$V7)
I get an accuracy of 0.67.
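For reference, the scikit-learn side of the comparison is not included in this excerpt; it presumably looked roughly like the sketch below. This is a hypothetical reconstruction: since bupa.data is not available here, a synthetic stand-in of the same shape is generated, and the split fractions mirror the R code above. With the real file you would load it with pandas instead.

```python
# Hypothetical sketch of the scikit-learn side of the comparison.
# bupa.data is not available here, so a synthetic stand-in is generated;
# with the real file: pandas.read_csv("bupa.data", header=None).
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

rng = np.random.RandomState(0)
X = rng.normal(size=(345, 6))        # stand-in for columns V1..V6
y = rng.choice([1, 2], size=345)     # stand-in for the class column V7

X_train, X_test, y_train, y_test = train_test_split(
    X, y, train_size=0.95, random_state=0)

clf = SVC(kernel='linear', C=1.0)    # matches kernel="linear", cost=1 in e1071
clf.fit(X_train, y_train)
accuracy = (clf.predict(X_test) == y_test).mean()
```

Note that, unlike e1071's svm(), SVC does no scaling of the inputs by default, which is exactly the discrepancy discussed in the answers below.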
Please help me.

I ran into the same problem as in this post – wildly different accuracy between scikit-learn and e1071's libSVM bindings. I think the issue is that e1071 scales the training data and then keeps the scaling parameters around for predicting new observations. scikit-learn does not do this and leaves it up to the user to realize that the same scaling approach needs to be applied to both training and test data. I only thought to check this after coming across and reading the guide from the good people behind libSVM.

While I don't have your data, str(svmfit) should give you the scaling parameters (the means and standard deviations of the Bupa columns). You can use these to scale your data appropriately in Python (see the idea below). Alternatively, you can scale the entire dataset together in Python and then do your test/train split; either way should now give you identical predictions.
def manual_scale(a, means, sds):
    a1 = a - means
    a1 = a1 / sds
    return a1
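For instance, applied to a toy training matrix (note that e1071 scales with R's sd(), which uses the N-1 denominator, hence ddof=1 below):

```python
import numpy as np

def manual_scale(a, means, sds):
    return (a - means) / sds

X_train = np.array([[1.0, 10.0],
                    [2.0, 20.0],
                    [3.0, 30.0]])
means = X_train.mean(axis=0)
sds = X_train.std(axis=0, ddof=1)   # ddof=1 matches R's sd() used by e1071

X_scaled = manual_scale(X_train, means, sds)
# each column now has mean 0 and (N-1)-denominator standard deviation 1
```

The same means and sds (taken from the training data, e.g. via str(svmfit)) would then be applied unchanged to the test data.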
I can confirm these statements. You do indeed need to apply the same scaling to the train and test sets. In particular, I did this:
from sklearn.preprocessing import StandardScaler
sc_X = StandardScaler()
X = sc_X.fit_transform(X)
where X is my training set. Then, when preparing the test set, I simply used the StandardScaler instance obtained from scaling the training set. It is important to use it only to transform, not to fit and transform (as above), i.e.:
This brought the R and scikit-learn results into substantial agreement. When using support vector regression in Python/sklearn and R/e1071, both the x and y variables need to be scaled/unscaled. Below is a self-contained example using rpy2 that shows the equivalence of the R and Python results (the first part disables scaling in R, the second part applies "manual" scaling in Python):
Update: actually, R and Python use slightly different definitions of the variance when scaling (1/(N-1)… in R vs. 1/N… in Python, where N is the sample size). For typical sample sizes, however, this should be negligible.

Are your training sets identical? I see a sample call in your R code without set.seed, so the split is random each time. You should split the dataset once and then use those same splits for the comparison in both R and Python.
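The 1/(N-1) vs. 1/N point can be checked directly; a small sketch using NumPy's ddof argument to mimic R's sd():

```python
import numpy as np
from sklearn.preprocessing import StandardScaler

x = np.array([1.0, 2.0, 3.0, 4.0])
sd_r = x.std(ddof=1)    # R's sd(): divides the sum of squares by N-1
sd_py = x.std(ddof=0)   # NumPy default and StandardScaler: divides by N

sc = StandardScaler().fit(x.reshape(-1, 1))
# StandardScaler's scale_ is the 1/N standard deviation, not R's 1/(N-1) one
```

For small samples the two differ noticeably (here about 1.291 vs. 1.118); as N grows the ratio sqrt(N/(N-1)) tends to 1.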
from sklearn.preprocessing import StandardScaler
sc_X = StandardScaler()
X = sc_X.fit_transform(X)
X_test = sc_X.transform(X_test)
# import modules
import matplotlib.pyplot as plt
import numpy as np
import sklearn
import sklearn.model_selection
import sklearn.datasets
import sklearn.preprocessing
import sklearn.svm
import rpy2
import rpy2.robjects
import rpy2.robjects.packages
# use R e1071 SVM function via rpy2
def RSVR(x_train, y_train, x_test,
         cost=1.0, epsilon=0.1, gamma=0.01, scale=False):
    # convert Python arrays to R matrices
    rx_train = rpy2.robjects.r['matrix'](rpy2.robjects.FloatVector(np.array(x_train).T.flatten()), nrow=len(x_train))
    ry_train = rpy2.robjects.FloatVector(np.array(y_train).flatten())
    rx_test = rpy2.robjects.r['matrix'](rpy2.robjects.FloatVector(np.array(x_test).T.flatten()), nrow=len(x_test))
    # train SVM
    e1071 = rpy2.robjects.packages.importr('e1071')
    rsvr = e1071.svm(x=rx_train,
                     y=ry_train,
                     kernel='radial',
                     cost=cost,
                     epsilon=epsilon,
                     gamma=gamma,
                     scale=scale)
    # run SVM
    predict = rpy2.robjects.r['predict']
    ry_pred = np.array(predict(rsvr, rx_test))
    return ry_pred
# define auxiliary function for plotting results
def plot_results(y_test, py_pred, ry_pred, title, lim=[-500, 500]):
    plt.title(title)
    plt.plot(lim, lim, lw=2, color='gray', zorder=-1)
    plt.scatter(y_test, py_pred, color='black', s=40, label='Python/sklearn')
    plt.scatter(y_test, ry_pred, color='orange', s=10, label='R/e1071')
    plt.xlabel('observed')
    plt.ylabel('predicted')
    plt.legend(loc=0)
    return None
# get example regression data
x_orig, y_orig = sklearn.datasets.make_regression(n_samples=100, n_features=10, random_state=42)
# split into train and test set
x_train, x_test, y_train, y_test = sklearn.model_selection.train_test_split(x_orig, y_orig, train_size=0.8)
# SVM parameters
# (identical but named differently for R/e1071 and Python/sklearn)
C = 1000.0
epsilon = 0.1
gamma = 0.01
# setup SVM and scaling classes
psvr = sklearn.svm.SVR(kernel='rbf', C=C, epsilon=epsilon, gamma=gamma)
x_sca = sklearn.preprocessing.StandardScaler()
y_sca = sklearn.preprocessing.StandardScaler()
# run R and Python SVMs without any scaling
# (see 'scale=False')
py_pred = psvr.fit(x_train, y_train).predict(x_test)
ry_pred = RSVR(x_train, y_train, x_test,
cost=C, epsilon=epsilon, gamma=gamma, scale=False)
# scale both x and y variables
sx_train = x_sca.fit_transform(x_train)
sy_train = y_sca.fit_transform(y_train.reshape(-1, 1))[:, 0]
sx_test = x_sca.transform(x_test)
sy_test = y_sca.transform(y_test.reshape(-1, 1))[:, 0]
# run Python SVM on scaled data and invert scaling afterwards
ps_pred = psvr.fit(sx_train, sy_train).predict(sx_test)
ps_pred = y_sca.inverse_transform(ps_pred.reshape(-1, 1))[:, 0]
# run R SVM with native scaling on original/unscaled data
# (see 'scale=True')
rs_pred = RSVR(x_train, y_train, x_test,
cost=C, epsilon=epsilon, gamma=gamma, scale=True)
# plot results
plt.subplot(121)
plot_results(y_test, py_pred, ry_pred, 'without scaling (Python/sklearn default)')
plt.subplot(122)
plot_results(y_test, ps_pred, rs_pred, 'with scaling (R/e1071 default)')
plt.tight_layout()
plt.show()