Python: unable to calculate the y-intercept of a multiple linear regression using statsmodels.api

My dataset of independent variables is as follows:

>>> reg_data_pd
             a         b         c
0     0.794527  0.033651  0.352414
1     0.794914  0.001086  0.093222
2     0.794476  0.004711  0.027977
3     0.776916  0.035780  0.023156
4     0.773526  0.002273  0.035269
5     0.797933  0.001838  0.131261
6     0.806997  0.011498  0.180022
7     0.780709  0.000766  0.522399
8     0.779954  0.001397  0.036386
9     0.756837  0.010448  0.035893
10    0.775064  0.029471  0.036798
11    0.787213  0.013467  0.081323
12    0.757511  0.016465  0.021611
13    0.794530  0.004141  0.157539
14    0.783696  0.019909  0.021765
15    0.793892  0.003597  0.063312
16    0.762702  0.003547  0.052479
17    0.780336  0.004958  0.084910
18    0.787005  0.006372  0.048153
19    0.824416  0.000513  0.045102
20    0.790552  0.009652  0.581571
21    0.773064  0.000889  0.263941
22    0.772039  0.021499  0.260455
23    0.780298  0.022814  0.061621
24    0.794924  0.020585  0.020638
25    0.772452  0.085798  0.215673
26    0.784202  0.000013  0.233638
27    0.822010  0.082684  0.028724
28    0.772587  0.027979  0.118953
29    0.765530  0.006655  0.018605
...        ...       ...       ...
4771  0.968364  0.227303  0.153739
4772  0.968401  0.159052  0.132388
4773  0.959733  0.278948  0.132163
4774  0.957354  0.315088  0.136973
4775  0.954627  0.447764  0.139494
4776  0.952442  0.305559  0.206204
4777  0.948925  0.235244  0.116273
4778  0.953192  0.228221  0.247231
4779  0.952769  0.327529  0.229617
4780  0.954471  0.396722  0.210942
4781  0.955292  0.336075  0.179493
4782  0.950516  0.320840  0.289505
4783  0.950454  0.316647  0.200065
4784  0.947313  0.291446  0.155215
4785  0.945677  0.292084  0.585302
4786  0.951083  0.285946  0.536361
4787  0.943909  0.346754  0.457234
4788  0.941971  0.276125  0.207159
4789  0.945111  0.440802  0.222561
4790  0.951011  0.407192  0.167613
4791  0.925485  0.464954  0.237568
4792  0.926332  0.252929  0.190035
4793  0.931606  0.020075  0.179730
4794  0.929963  0.426511  0.134418
4795  0.941986  0.640994  0.123444
4796  0.943526  0.232498  0.139800
4797  0.945268  0.460201  0.106471
4798  0.953572  0.398044  0.151489
4799  0.947673  0.479376  0.174330
4800  0.952663  0.532027  0.409197

[4801 rows x 3 columns]

My dataset of the dependent variable is:

>>> yu_pd
             y
0     0.290740
1     0.295920
2     0.295920
3     0.192100
4     0.266000
5     0.284700
6     0.284700
7     0.272300
8     0.282680
9     0.243260
10    0.243260
11    0.273150
12    0.273150
13    0.282850
14    0.300325
15    0.192525
16    0.192525
17    0.269620
18    0.286825
19    0.207700
20    0.207700
21    0.292380
22    0.292380
23    0.282600
24    0.278212
25    0.243512
26    0.243512
27    0.309025
28    0.361740
29    0.249520
...        ...
4771  0.251480
4772  0.287500
4773  0.287500
4774  0.282071
4775  0.313343
4776  0.287463
4777  0.287463
4778  0.298700
4779  0.272920
4780  0.272920
4781  0.371314
4782  0.388429
4783  0.305200
4784  0.305200
4785  0.296725
4786  0.287920
4787  0.271580
4788  0.305486
4789  0.318571
4790  0.337975
4791  0.337975
4792  0.319988
4793  0.192360
4794  0.312871
4795  0.323000
4796  0.347088
4797  0.347088
4798  0.324986
4799  0.184320
4800  0.352100

[4801 rows x 1 columns]
My code for computing the multiple linear regression is as follows:

>>> import statsmodels.api as sm
>>> model = sm.OLS(yu_pd,reg_data_pd)
>>> results = model.fit()
>>> results.summary()
<class 'statsmodels.iolib.summary.Summary'>
"""
                            OLS Regression Results                            
==============================================================================
Dep. Variable:                      y   R-squared:                       0.896
Model:                            OLS   Adj. R-squared:                  0.896
Method:                 Least Squares   F-statistic:                 1.379e+04
Date:                Thu, 28 Jan 2016   Prob (F-statistic):               0.00
Time:                        16:45:03   Log-Likelihood:                 6693.6
No. Observations:                4801   AIC:                        -1.338e+04
Df Residuals:                    4798   BIC:                        -1.336e+04
Df Model:                           3                                         
Covariance Type:            nonrobust                                         
==============================================================================
                 coef    std err          t      P>|t|      [95.0% Conf. Int.]
------------------------------------------------------------------------------
a              0.1933      0.002     78.058      0.000         0.188     0.198
b              0.0135      0.005      2.796      0.005         0.004     0.023
c             -0.0221      0.006     -3.984      0.000        -0.033    -0.011
==============================================================================
Omnibus:                      151.028   Durbin-Watson:                   0.452
Prob(Omnibus):                  0.000   Jarque-Bera (JB):              166.568
Skew:                           0.430   Prob(JB):                     6.77e-37
Kurtosis:                       3.306   Cond. No.                         6.75
==============================================================================

Warnings:
[1] Standard Errors assume that the covariance matrix of the errors is correctly specified.
"""

I got all the coefficients 'a', 'b' and 'c', but I did not get a value for the Y-intercept.

If you have a regression model with 3 independent variables, why would you expect a unique Y-intercept to be defined in your data summary? I think that is a concept that applies to a single function of a single independent variable.

Consider the following:

import numpy as np
import statsmodels.api as sm

# Two identical regressors, each equal to 0.45*x
x = np.linspace(0, 1, 100)
X = 0.45 * x
X1 = 0.45 * x

# Stack them as the columns of a 100x2 design matrix
x3 = np.zeros((100, 2))
x3[:, 0] = X
x3[:, 1] = X1

y = 0.45 * x

# No column of ones is added, so OLS fits without an intercept
model = sm.OLS(y, x3)
results = model.fit()
print(results.summary())
                            OLS Regression Results                            
==============================================================================
Dep. Variable:                      y   R-squared:                       1.000
Model:                            OLS   Adj. R-squared:                  1.000
Method:                 Least Squares   F-statistic:                 5.710e+33
Date:                Thu, 28 Jan 2016   Prob (F-statistic):               0.00
Time:                        15:11:53   Log-Likelihood:                 3649.3
No. Observations:                 100   AIC:                            -7297.
Df Residuals:                      99   BIC:                            -7294.
Df Model:                           1                                         
Covariance Type:            nonrobust                                         
==============================================================================
                 coef    std err          t      P>|t|      [95.0% Conf. Int.]
------------------------------------------------------------------------------
x1             0.5000   6.62e-18   7.56e+16      0.000         0.500     0.500
x2             0.5000   6.62e-18   7.56e+16      0.000         0.500     0.500
==============================================================================
This tells me that my coefficients are really the slopes of two lines. So in your case the values a, b and c are the slopes of three lines fit against the independent-variable inputs to match the Y-variable output.

If any of those lines has a Y-intercept, e.g. Y = a0 + a*x where x is the independent variable, you can solve for it by setting x and Y equal to values taken from your data; for example, if the point x = Y = 1 lies on that line, set x = Y = 1 to get the Y-intercept: a0 = 1 - a. You can do the same for the other two fitted lines. I believe this fully answers your question.
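
A small sketch of that back-substitution (the slope and the data point here are hypothetical, chosen only to illustrate the arithmetic):

# Solve y0 = a0 + a*x0 for the intercept a0, given a fitted slope
# and one point assumed to lie on that fitted line.
a = 0.1933           # hypothetical slope, e.g. the coefficient of 'a' above
x0, y0 = 1.0, 1.0    # hypothetical point assumed to come from the data
a0 = y0 - a * x0     # with x0 = y0 = 1 this is a0 = 1 - a
print(a0)            # 0.8067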

I have three independent variables, as I specified in my question. By inputs I mean independent! Corrected my post, sorry :) I am new to multiple linear regression, so please help me with some basics. I understand that 3 independent variables produce 3 lines; y = a + x1*a1 + x2*a2 + x3*a3 would be the equation, where a1, a2 and a3 are my slopes. To compute y I still need the value of one more variable, i.e. 'a'. I also don't know which line's slope (a1, a2 or a3) leads to the y-intercept. Can you explain which line produces the y-intercept?

@chickenjoe I did some numerical experiments to determine what the output values a, b and c really mean. I am quite sure the complete explanation is already given in my answer.

If you use the array/DataFrame interface, the array of explanatory variables is not changed by the model, i.e. no intercept is added. If you do not add an intercept yourself (a column of ones), then the model is estimated without one.
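
For completeness, the standard statsmodels way to obtain the intercept in the question's setup is to add that column of ones yourself with sm.add_constant; a minimal sketch, assuming yu_pd and reg_data_pd are the DataFrames shown above:

import statsmodels.api as sm

# add_constant prepends a 'const' column of ones to the regressors,
# so OLS estimates an explicit intercept alongside a, b and c.
X = sm.add_constant(reg_data_pd)
results = sm.OLS(yu_pd, X).fit()
print(results.summary())        # the summary now includes a 'const' row
print(results.params['const'])  # the fitted y-intercept

Alternatively, the formula interface (statsmodels.formula.api.ols with a formula like 'y ~ a + b + c') includes an intercept by default.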