
Why are the logistic regression results different between Python's statsmodels and R?


I am trying to compare the logistic regression implementations in Python's statsmodels and in R.

Python version:

import statsmodels.api as sm
import pandas as pd

df = pd.read_csv("http://www.ats.ucla.edu/stat/data/binary.csv")
df.columns = list(df.columns)[:3] + ["prestige"]  # rename the last column ("rank")
# one 0/1 column per prestige level
dummy_ranks = pd.get_dummies(df["prestige"], prefix="prestige")
cols_to_keep = ["admit", "gre", "gpa"]
# keep prestige_2..4 only, so prestige 1 is the reference class (.ix is deprecated; use .loc)
data = df[cols_to_keep].join(dummy_ranks.loc[:, "prestige_2":])
data["intercept"] = 1.0  # statsmodels does not add an intercept automatically
train_cols = data.columns[1:]
logit = sm.Logit(data["admit"], data[train_cols])
result = logit.fit()
result.summary2()
Result:

                         Results: Logit
=================================================================
Model:              Logit            Pseudo R-squared: 0.083     
Dependent Variable: admit            AIC:              470.5175  
Date:               2014-12-19 01:11 BIC:              494.4663  
No. Observations:   400              Log-Likelihood:   -229.26   
Df Model:           5                LL-Null:          -249.99   
Df Residuals:       394              LLR p-value:      7.5782e-08
Converged:          1.0000           Scale:            1.0000    
No. Iterations:     6.0000                                       
------------------------------------------------------------------
               Coef.   Std.Err.     z     P>|z|    [0.025   0.975]
------------------------------------------------------------------
gre            0.0023    0.0011   2.0699  0.0385   0.0001   0.0044
gpa            0.8040    0.3318   2.4231  0.0154   0.1537   1.4544
prestige_2    -0.6754    0.3165  -2.1342  0.0328  -1.2958  -0.0551
prestige_3    -1.3402    0.3453  -3.8812  0.0001  -2.0170  -0.6634
prestige_4    -1.5515    0.4178  -3.7131  0.0002  -2.3704  -0.7325
intercept     -3.9900    1.1400  -3.5001  0.0005  -6.2242  -1.7557
=================================================================

R version:

data = read.csv("http://www.ats.ucla.edu/stat/data/binary.csv", head=T)
require(reshape2)
data1 = dcast(data, admit + gre + gpa ~ rank)
require(dplyr)
names(data1)[4:7] = paste("rank", 1:4, sep="")
data1 = data1[, -4]
summary(glm(admit ~ gre + gpa + rank2 + rank3 + rank4, family=binomial, data=data1))
Result:

Call:
glm(formula = admit ~ gre + gpa + rank2 + rank3 + rank4, family = binomial,
    data = data1)

Deviance Residuals:
    Min       1Q   Median       3Q      Max
-1.5133  -0.8661  -0.6573   1.1808   2.0629

Coefficients:
             Estimate Std. Error z value Pr(>|z|)
(Intercept) -4.184029   1.162421  -3.599 0.000319 ***
gre          0.002358   0.001112   2.121 0.033954 *
gpa          0.770591   0.343908   2.241 0.025046 *
rank2       -0.369711   0.310342  -1.191 0.233535
rank3       -1.015012   0.335147  -3.029 0.002457 **
rank4       -1.249251   0.414416  -3.014 0.002574 **
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

(Dispersion parameter for binomial family taken to be 1)

    Null deviance: 466.13  on 377  degrees of freedom
Residual deviance: 434.12  on 372  degrees of freedom
AIC: 446.12

Number of Fisher Scoring iterations: 4
The results are quite different: for example, the p-values for rank_2 are 0.03 and 0.23, respectively. What causes this difference? Note that I created dummy variables for both versions, plus a constant column for the Python version, which R handles automatically.
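
(As an aside: the intercept column can also be added with statsmodels' sm.add_constant helper instead of assigning 1.0 by hand. A minimal self-contained sketch, equivalent to the manual column above:)

import pandas as pd
import statsmodels.api as sm

df = pd.read_csv("http://www.ats.ucla.edu/stat/data/binary.csv")
# dummies for prestige levels 2-4, leaving level 1 as the reference class
dummies = pd.get_dummies(df["rank"], prefix="prestige").loc[:, "prestige_2":].astype(float)
# add_constant prepends a "const" column of 1.0 (same role as "intercept" above)
X = sm.add_constant(df[["gre", "gpa"]].join(dummies))
print(sm.Logit(df["admit"], X).fit(disp=0).params)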

Also, Python seems to be about 2x faster:

##################################################
# python timing
def test():
    for i in range(5000):
        logit = sm.Logit(data["admit"], data[train_cols])
        result = logit.fit(disp=0)
import time
start = time.time()
test()
print(time.time() - start)
10.099738836288452
##################################################
# R timing
> f = function() for(i in 1:5000) {mod = glm(admit ~ gre + gpa + rank2 + rank3 + rank4, family=binomial, data=data1)}
> system.time(f())
   user  system elapsed
 17.505   0.021  17.526

I'm not sure what your data manipulations intended, but they seem to have lost information in the R run. If I keep all of the rank information, I get this on the original data object (and the results look quite similar in the regions where they overlap). (The likelihood is only estimated up to an arbitrary constant, so you can only compare differences in log-likelihoods. Even with this caveat, the deviance should be twice the negative log-likelihood, so those results are comparable as well.)


I redid the R part as follows:

# 0/1 indicator for x == x1, preserving NAs
makeDummy = function(x, x1) { ifelse(is.na(x), NA, ifelse(x == x1, 1, 0)) }
data = read.csv("http://www.ats.ucla.edu/stat/data/binary.csv", head=T)
data$rank2 = makeDummy(data$rank, 2)
data$rank3 = makeDummy(data$rank, 3)
data$rank4 = makeDummy(data$rank, 4)
summary(glm(admit ~ gre + gpa + rank2 + rank3 + rank4, family=binomial, data=data))
and the results are identical to the statsmodels results:

Coefficients:
             Estimate Std. Error z value Pr(>|z|)
(Intercept) -3.989979   1.139951  -3.500 0.000465 ***
gre          0.002264   0.001094   2.070 0.038465 *
gpa          0.804038   0.331819   2.423 0.015388 *
rank2       -0.675443   0.316490  -2.134 0.032829 *
rank3       -1.340204   0.345306  -3.881 0.000104 ***
rank4       -1.551464   0.417832  -3.713 0.000205 ***
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

(Dispersion parameter for binomial family taken to be 1)

    Null deviance: 499.98  on 399  degrees of freedom
Residual deviance: 458.52  on 394  degrees of freedom
AIC: 470.52
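
A quick check of the "deviance is twice the negative log-likelihood" claim above, using the numbers printed in the two outputs (plain Python arithmetic; the values are copied from the summaries, not recomputed):

# deviance = -2 * log-likelihood
ll, ll_null = -229.26, -249.99   # Log-Likelihood and LL-Null from summary2() above
print(-2 * ll)       # 458.52, matching "Residual deviance: 458.52 on 394 degrees of freedom"
print(-2 * ll_null)  # 499.98, matching "Null deviance: 499.98 on 399 degrees of freedom"

With matching coefficients, log-likelihoods, and degrees of freedom, the two fits are now the same model.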

I suppose either I was using dplyr::dcast the wrong way, or dcast has a problem.

I can only post an answer, since I can't add a comment to the accepted answer. In Python you usually have to drop one dummy variable so that it serves as the reference class, but I don't think you need to do that for R, since glm handles it. Basically, if I understand your code correctly, you don't need this line:

data1 = data1[, -4]
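
(On the Python side, the same idea can be expressed more directly; a hedged sketch, assuming a pandas version with the drop_first argument to get_dummies (0.18+), which drops the reference class for you instead of slicing it off manually:)

import pandas as pd

df = pd.read_csv("http://www.ats.ucla.edu/stat/data/binary.csv")
# drop_first=True drops rank 1, making it the implicit reference class;
# this replaces the manual .loc[:, "prestige_2":] slice from the question
dummies = pd.get_dummies(df["rank"], prefix="rank", drop_first=True)
print(dummies.columns.tolist())  # ['rank_2', 'rank_3', 'rank_4']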

Try using prestige as it is, but first turn it into a factor with as.factor().

Look at the degrees of freedom in the two runs: one is 377, the other 394. Something in your data handling is probably wrong. This gives the same results in R as in statsmodels: your R model is not the same. @qed: you shouldn't bother creating dummies (and then throwing one away) in R; learn to use factors. The corresponding formula version in statsmodels is
print(smf.logit('admit ~ gre + gpa + C(rank)', df).fit().summary())
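
For completeness, a minimal runnable version of that one-liner (assuming the UCLA CSV is still reachable at the original URL; smf is the usual alias for statsmodels.formula.api):

import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("http://www.ats.ucla.edu/stat/data/binary.csv")
# C(rank) treats rank as categorical: patsy drops the first level as the
# reference class and adds the intercept automatically, just like an R factor
print(smf.logit("admit ~ gre + gpa + C(rank)", df).fit().summary())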