Difference between predict(model) and predict(model$finalModel) using caret for classification in R
What is the difference between

predict(rf, newdata=testSet)

and

predict(rf$finalModel, newdata=testSet)

I use preProcess=c("center", "scale"); the full training and prediction code is shown further below.
So it seems that $finalModel needs the testSet in the same format as the trainingSet, while the train object accepts only uncentered and unscaled data, regardless of the chosen preProcess parameter? (See the results further below.)
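One way to test this hypothesis (a sketch, assuming the RF.CS, testSet and outcome column Y from the code further below; when preProc is given, train stores the fitted preprocessing in RF.CS$preProcess):

predictors <- names(testSet) != "Y"                            # keep only the predictor columns
testSetPP  <- predict(RF.CS$preProcess, testSet[, predictors]) # the same transform train applies
p.train    <- predict(RF.CS, newdata = testSet)                # caret preprocesses internally
p.final    <- predict(RF.CS$finalModel, newdata = testSetPP)   # raw randomForest predict
table(p.train, p.final)                                        # the two should (nearly) always agree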
My prediction code (testSet is the raw data, testSetCS is centered and scaled) is shown in full further below.
This is pretty similar to your other question. You really need to 1) show the exact prediction code for each result and 2) give us a reproducible example. With the normal testSet, RF.CS and RF.CS$finalModel should not give the same results, and we should be able to reproduce those results. Also, there is a syntax error in your code, so it cannot be exactly what you executed.

Finally, I'm not really sure why you use the finalModel object at all. The point of train is to handle the details for you, and doing this (which is your choice) circumvents the complete set of code that would normally be applied.
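To make that concrete, a minimal sketch using the question's own objects: fit with preProc inside train and then predict only through the train object, so that the stored centering and scaling is applied automatically:

RF.CS <- train(Y~., data=trainingSet, method="rf",
               trControl=tc, preProc=c("center", "scale"))
predict(RF.CS, newdata=testSet)   # pass the raw testSet; no manual preprocessing needed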
Here is a reproducible example:
library(caret)
library(mlbench)
data(Sonar)

set.seed(1)
inTrain <- createDataPartition(Sonar$Class)
training <- Sonar[ inTrain[[1]], ]
testing  <- Sonar[-inTrain[[1]], ]

pp <- preProcess(training[, -ncol(Sonar)])   # center/scale estimated on the training predictors
training2 <- predict(pp, training[, -ncol(Sonar)])
training2$Class <- training$Class
testing2 <- predict(pp, testing[, -ncol(Sonar)])
testing2$Class <- testing$Class

tc <- trainControl("repeatedcv",
                   number = 10,
                   repeats = 10,
                   classProbs = TRUE,
                   savePred = TRUE)

set.seed(2)
RF <- train(Class ~ ., data = training,
            method = "rf",
            trControl = tc)           # normal trainingData

set.seed(2)
RF.CS <- train(Class ~ ., data = training,
               method = "rf",
               trControl = tc,
               preProc = c("center", "scale"))   # scaled and centered trainingData
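Not part of the original answer, but the Sensitivity and Specificity values discussed in the question can be computed for this example with caret's confusionMatrix():

confusionMatrix(predict(RF, testing), testing$Class)      # model without preprocessing
confusionMatrix(predict(RF.CS, testing), testing$Class)   # model with internal center/scale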
Max, could you post the prediction code for the last train objects, i.e. RF and RF.CS? I used the $finalModel object because I thought it contains the final (best) tree and can therefore calculate predictions and probabilities for new data sets.

It does, and that is what predict.train uses. However, predict.train may do things to the data in between the two.

Perhaps you could clarify the difference between testing and testing2 with respect to preProcess, i.e. that a call to predict.train applies the preprocessing internally while predict(xx$finalModel) does not. Otherwise the post reads a bit like "voodoo happens", since the role of preProcess is never spelled out. (+1, obviously.) Also, shouldn't the last example compare RF + testing2 and RF.CS + testing?
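A sketch of the clarification being asked for here: predict.train re-applies the preProcess step stored inside RF.CS, while predict on $finalModel uses the data exactly as given. Feeding the manually preprocessed testing2 to RF.CS$finalModel should therefore essentially reproduce predict.train on the raw testing data (unname() only drops the row names that randomForest adds):

all.equal(predict(RF.CS, testing, type = "prob")[,1],
          unname(predict(RF.CS$finalModel, testing2, type = "prob")[,1]))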
Here is the exact code, as requested. The models:

tc <- trainControl("repeatedcv", number=10, repeats=10, classProbs=TRUE, savePred=T)
RF <- train(Y~., data= trainingSet, method="rf", trControl=tc) #normal trainingData
RF.CS <- train(Y~., data= trainingSet, method="rf", trControl=tc, preProc=c("center", "scale")) #scaled and centered trainingData

The centered and scaled test set and the first predictions:

testSetCS <- testSet
xTrans <- preProcess(testSetCS)   #note: this preProcess is estimated on the test set itself
testSetCS <- predict(xTrans, testSet)
testSet$Prediction <- predict(rf, newdata=testSet)
testSetCS$Prediction <- predict(rf, newdata=testSetCS)

When I predict on the normal testSet:

RF predicts reasonably (Sensitivity = 0.33, Specificity = 0.97)
RF$finalModel predicts badly (Sensitivity = 0.74, Specificity = 0.36)
RF.CS predicts reasonably (Sensitivity = 0.31, Specificity = 0.97)
RF.CS$finalModel gives the same results as RF.CS (Sensitivity = 0.31, Specificity = 0.97)

And on the centered and scaled testSetCS:

RF predicts very badly (Sensitivity = 0.00, Specificity = 1.00)
RF$finalModel predicts reasonably (Sensitivity = 0.33, Specificity = 0.98)
RF.CS predicts like RF (Sensitivity = 0.00, Specificity = 1.00)
RF.CS$finalModel predicts like RF (Sensitivity = 0.00, Specificity = 1.00)
The full prediction code:

testSet$Prediction <- predict(RF, newdata=testSet)
testSet$PredictionFM <- predict(RF$finalModel, newdata=testSet)
testSet$PredictionCS <- predict(RF.CS, newdata=testSet)
testSet$PredictionCSFM <- predict(RF.CS$finalModel, newdata=testSet)
testSetCS$Prediction <- predict(RF, newdata=testSetCS)
testSetCS$PredictionFM <- predict(RF$finalModel, newdata=testSetCS)
testSetCS$PredictionCS <- predict(RF.CS, newdata=testSetCS)
testSetCS$PredictionCSFM <- predict(RF.CS$finalModel, newdata=testSetCS)
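Because trainControl was created with classProbs=TRUE, class probabilities are also available through the train objects. A sketch (which column to pick depends on the factor levels of Y):

testSet$ProbCS <- predict(RF.CS, newdata=testSet, type="prob")[,1]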
Using the objects from the reproducible example above:

> ## These should not be the same
> all.equal(predict(RF, testing, type = "prob")[,1],
+ predict(RF, testing2, type = "prob")[,1])
[1] "Mean relative difference: 0.4067554"
>
> ## Nor should these
> all.equal(predict(RF.CS, testing, type = "prob")[,1],
+ predict(RF.CS, testing2, type = "prob")[,1])
[1] "Mean relative difference: 0.3924037"
>
> all.equal(predict(RF.CS, testing, type = "prob")[,1],
+ predict(RF.CS$finalModel, testing, type = "prob")[,1])
[1] "names for current but not for target"
[2] "Mean relative difference: 0.7452435"
>
> ## These should be and are close (just based on the
> ## random sampling used in the final RF fits)
> all.equal(predict(RF, testing, type = "prob")[,1],
+ predict(RF.CS, testing, type = "prob")[,1])
[1] "Mean relative difference: 0.04198887"