Tree-based methods in R, such as randomForest and adaboost: interpreting results for the same data in different formats

Suppose my dataset is a 100x3 matrix consisting entirely of categorical variables, and I want to do binary classification on the response variable. Let's make up a dataset with the following code:

set.seed(2013)
# Binary response: 100 random 0/1 labels, stored as a factor
y <- as.factor(round(runif(n=100,min=0,max=1),0))
# Two categorical predictors, four levels each
var1 <- rep(c("red","blue","yellow","green"),each=25)
var2 <- rep(c("shortest","short","tall","tallest"),25)
df <- data.frame(y,var1,var2)
I tried running random forest and adaboost on these data in two different ways. The first way is to use the data as-is:

> library(randomForest)
> randomForest(y~var1+var2,data=df,ntree=500)

Call:
 randomForest(formula = y ~ var1 + var2, data = df, ntree = 500) 
               Type of random forest: classification
                     Number of trees: 500
No. of variables tried at each split: 1

        OOB estimate of  error rate: 44%
Confusion matrix:
   0  1 class.error
0 29 22   0.4313725
1 22 27   0.4489796

----------------------------------------------------
> library(ada)
> ada(y~var1+var2,data=df)

Call:
ada(y ~ var1 + var2, data = df)

Loss: exponential Method: discrete   Iteration: 50 

Final Confusion Matrix for Data:
          Final Prediction
True value  0  1
         0 34 17
         1 16 33

Train Error: 0.33 

Out-Of-Bag Error:  0.33  iteration= 11 

Additional Estimates of number of iterations:

train.err1 train.kap1 
        10         16 
The second way is to convert the dataset to wide format and treat each category as a variable of its own. My reason for doing this is that my actual dataset has more than 500 factor levels in var1 and var2, so each tree partition can only divide those 500 categories into 2 splits, and a lot of information is lost that way. To convert the data:

id <- 1:100
library(reshape2)
tmp1 <- dcast(melt(cbind(id,df),id.vars=c("id","y")),id+y~var1,fun.aggregate=length)
tmp2 <- dcast(melt(cbind(id,df),id.vars=c("id","y")),id+y~var2,fun.aggregate=length)
df2 <- merge(tmp1,tmp2,by=c("id","y"))
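
As an aside: when each observation really does have exactly one color and one height, a more direct way to build 0/1 dummy columns is model.matrix, which avoids the reshape entirely. This is only a sketch, not the code I used above (df_f and df2_alt are illustrative names):

# Sketch of an alternative dummy coding via model.matrix.
# factor() is applied explicitly so this also works in R >= 4.0,
# where data.frame() no longer converts strings to factors.
df_f <- data.frame(y = y, var1 = factor(var1), var2 = factor(var2))
# contrasts = FALSE keeps a 0/1 column for every level instead of
# dropping a baseline level.
X <- model.matrix(~ var1 + var2, data = df_f,
                  contrasts.arg = lapply(df_f[c("var1","var2")],
                                         contrasts, contrasts = FALSE))
df2_alt <- data.frame(y = df_f$y, X[, -1])  # drop the intercept column
head(df2_alt)  # one 0/1 column per level, e.g. var1blue, var2tallest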
I then apply random forest and adaboost to this new dataset:

> library(randomForest)
> randomForest(y~blue+green+red+yellow+short+shortest+tall+tallest,data=df2,ntree=500)

Call:
 randomForest(formula = y ~ blue + green + red + yellow + short +      shortest + tall + tallest, data = df2, ntree = 500) 
               Type of random forest: classification
                     Number of trees: 500
No. of variables tried at each split: 2

        OOB estimate of  error rate: 39%
Confusion matrix:
   0  1 class.error
0 32 19   0.3725490
1 20 29   0.4081633

----------------------------------------------------
> library(ada)
> ada(y~blue+green+red+yellow+short+shortest+tall+tallest,data=df2)
Call:
ada(y ~ blue + green + red + yellow + short + shortest + tall + 
tallest, data = df2)

Loss: exponential Method: discrete   Iteration: 50 

Final Confusion Matrix for Data:
          Final Prediction
True value  0  1
         0 36 15
         1 20 29

Train Error: 0.35 

Out-Of-Bag Error:  0.33  iteration= 26 

Additional Estimates of number of iterations:

train.err1 train.kap1 
         5         10 

The results of the two approaches are different, and the difference becomes more pronounced as we introduce more levels into each variable, i.e. into var1 and var2. My question is: since we are using exactly the same data, why do the results differ? How should we interpret the results from the two approaches, and which one is more reliable?

Although the two models look the same, they are fundamentally different from each other: in the second model, you implicitly allow a given observation to have multiple colors and multiple heights. The right choice between the two model formulations depends on the characteristics of your real observations. If the characteristics are exclusive (i.e., each observation is a single color and a single height), the first formulation is the correct one. However, if an observation can be both blue and green, or any other combination of colors, then the second formulation can be used. Going by the intuition behind your original data, the first seems most appropriate (i.e., how could one observation have more than one height??)
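
To make that concrete, here is a hypothetical row (illustrative values only) that fits the wide format's column layout but has no equivalent in the original format:

# A hypothetical multi-label observation: both blue AND green.
# It matches df2's columns, but there is no way to write it as a
# single (var1, var2) pair in the original df.
multi <- data.frame(id = 101, y = factor(0, levels = c(0, 1)),
                    blue = 1, green = 1, red = 0, yellow = 0,
                    short = 0, shortest = 1, tall = 0, tallest = 0)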

Also, why are the indicator columns in df2 coded as 0 and 2 rather than 0/1? I don't know whether this has any impact on the fit; it would depend on whether the data end up encoded as factors or as numerics:

> head(df2)
   id y blue green red yellow short shortest tall tallest
1   1 0    0     0   2      0     0        2    0       0
2  10 1    0     0   2      0     2        0    0       0
3 100 0    0     2   0      0     0        0    0       2
4  11 0    0     0   2      0     0        0    2       0
5  12 0    0     0   2      0     0        0    0       2
6  13 1    0     0   2      0     0        2    0       0
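
If the 0/2 coding is just a reshape artifact, a simple cleanup (a sketch, assuming the dummy columns are numeric as printed above) is to rescale them to 0/1 before refitting. For tree-based learners this monotone rescaling should not change any splits, but it makes the encoding easier to read:

# Rescale the 0/2 indicator columns to 0/1 (assumes numeric columns,
# as the head(df2) output above suggests).
ind_cols <- c("blue","green","red","yellow","short","shortest","tall","tallest")
df2[ind_cols] <- df2[ind_cols] / 2
stopifnot(all(unlist(df2[ind_cols]) %in% c(0, 1)))  # sanity check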