A linear regression library for Go


I'm looking for a Go library that implements linear regression with MLE or LSE. Has anyone seen one?

There is this statistics library, but it doesn't seem to have what I need:

Thanks

Implementing LSE (least squared error) linear regression is pretty straightforward.

Here is a JavaScript implementation; it should be trivial to port to Go.


Here is an (untested) port:
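As a sketch of what such a port could look like (this is my own minimal version of closed-form simple OLS, not the JavaScript original), fitting y = a + b*x from the usual running sums:

```go
package main

import "fmt"

// linearRegression fits y = a + b*x by ordinary least squares,
// using the closed-form solution for the one-variable case.
func linearRegression(x, y []float64) (a, b float64) {
	n := float64(len(x))
	var sumX, sumY, sumXY, sumXX float64
	for i := range x {
		sumX += x[i]
		sumY += y[i]
		sumXY += x[i] * y[i]
		sumXX += x[i] * x[i]
	}
	// slope = (n*Σxy - Σx*Σy) / (n*Σx² - (Σx)²), intercept from the means
	b = (n*sumXY - sumX*sumY) / (n*sumXX - sumX*sumX)
	a = (sumY - b*sumX) / n
	return a, b
}

func main() {
	x := []float64{1, 2, 3, 4, 5}
	y := []float64{2, 4, 6, 8, 10} // exactly y = 2x
	a, b := linearRegression(x, y)
	fmt.Printf("intercept=%.3f slope=%.3f\n", a, b) // intercept=0.000 slope=2.000
}
```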

There is a project that has a bayes package, which should be able to do linear regression.


Unfortunately the documentation is somewhat lacking, so you will probably have to read the code to learn how to use it. I've dabbled with it a little myself, but haven't touched the bayes package yet.

My port of Gentleman's AS75 (online) linear regression algorithm is written in Go (golang). It does ordinary least squares regression. The "online" part means it can cope with unlimited rows of data. This is slightly different if you are used to supplying an (n x p) design matrix: instead, you call Includ() n times (or more, as additional data arrives), giving it a vector of p values each time. This copes efficiently with the situation where n grows large and you may have to stream the data from disk because it won't all fit in memory.
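To illustrate the streaming style of interface described above (this is a hypothetical sketch of the idea for the one-variable case with running sums, not the actual AS75 port's API):

```go
package main

import "fmt"

// OnlineRegression accumulates sufficient statistics for simple OLS
// one observation at a time, so the data never has to fit in memory.
type OnlineRegression struct {
	n                        float64
	sumX, sumY, sumXY, sumXX float64
}

// Includ adds one (x, y) observation, in the "call it n times" style.
func (r *OnlineRegression) Includ(x, y float64) {
	r.n++
	r.sumX += x
	r.sumY += y
	r.sumXY += x * y
	r.sumXX += x * x
}

// Coefficients returns the current intercept and slope estimates.
func (r *OnlineRegression) Coefficients() (a, b float64) {
	b = (r.n*r.sumXY - r.sumX*r.sumY) / (r.n*r.sumXX - r.sumX*r.sumX)
	a = (r.sumY - b*r.sumX) / r.n
	return a, b
}

func main() {
	var r OnlineRegression
	// Stream rows in one at a time, e.g. as read from disk; data is y = 2x + 1.
	for _, p := range [][2]float64{{1, 3}, {2, 5}, {3, 7}, {4, 9}} {
		r.Includ(p[0], p[1])
	}
	a, b := r.Coefficients()
	fmt.Printf("intercept=%.2f slope=%.2f\n", a, b) // intercept=1.00 slope=2.00
}
```

The real AS75 routine maintains a triangular factorization rather than raw sums, which is what makes it numerically robust for many predictors; the sketch only shows the one-pass, row-at-a-time calling convention.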


I implemented the following using gradient descent. It only gives the coefficients, but it takes any number of explanatory variables and is reasonably accurate:

package main

import "fmt"

// calc_ols_params fits the coefficients (thetas) by batch gradient descent.
// x holds one row per explanatory variable, one column per observation.
func calc_ols_params(y []float64, x [][]float64, n_iterations int, alpha float64) []float64 {
    thetas := make([]float64, len(x))
    for i := 0; i < n_iterations; i++ {
        my_diffs := calc_diff(thetas, y, x)
        my_grad := calc_gradient(my_diffs, x)
        for j := 0; j < len(my_grad); j++ {
            thetas[j] += alpha * my_grad[j]
        }
    }
    return thetas
}

// calc_diff returns the residual y[i] - prediction[i] for every observation.
func calc_diff(thetas []float64, y []float64, x [][]float64) []float64 {
    diffs := make([]float64, len(y))
    for i := 0; i < len(y); i++ {
        prediction := 0.0
        for j := 0; j < len(thetas); j++ {
            prediction += thetas[j] * x[j][i]
        }
        diffs[i] = y[i] - prediction
    }
    return diffs
}

// calc_gradient averages residual*x over all observations for each coefficient.
func calc_gradient(diffs []float64, x [][]float64) []float64 {
    gradient := make([]float64, len(x))
    for i := 0; i < len(diffs); i++ {
        for j := 0; j < len(x); j++ {
            gradient[j] += diffs[i] * x[j][i]
        }
    }
    for i := 0; i < len(x); i++ {
        gradient[i] /= float64(len(diffs))
    }
    return gradient
}

func main() {
    // The leading row of ones provides the intercept term.
    y := []float64{3, 4, 5, 6, 7}
    x := [][]float64{{1, 1, 1, 1, 1}, {4, 3, 2, 1, 3}}

    thetas := calc_ols_params(y, x, 100000, 0.001)
    fmt.Println("Thetas : ", thetas)

    y_2 := []float64{1, 2, 3, 4, 3, 4, 5, 4, 5, 5, 4, 5, 4, 5, 4, 5, 6, 5, 4, 5, 4, 3, 4}
    x_2 := [][]float64{
        {1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1},
        {4, 2, 3, 4, 5, 4, 5, 6, 7, 4, 8, 9, 8, 8, 6, 6, 5, 5, 5, 5, 5, 5, 5},
        {4, 1, 2, 3, 4, 5, 6, 7, 5, 8, 7, 8, 7, 8, 7, 8, 7, 7, 7, 7, 7, 6, 5},
        {4, 1, 2, 5, 6, 7, 8, 9, 7, 8, 7, 8, 7, 7, 7, 7, 7, 7, 6, 6, 4, 4, 4},
    }

    thetas_2 := calc_ols_params(y_2, x_2, 100000, 0.001)
    fmt.Println("Thetas_2 : ", thetas_2)
}

I checked my results with pandas and they were very close:

In [24]: from pandas.stats.api import ols

In [25]: df = pd.DataFrame(np.array(x).T, columns=['x1','x2','x3','y'])

In [26]: from pandas.stats.api import ols

In [27]: x = [
     [4,2,3,4,5,4,5,6,7,4,8,9,8,8,6,6,5,5,5,5,5,5,5],
     [4,1,2,3,4,5,6,7,5,8,7,8,7,8,7,8,7,7,7,7,7,6,5],
     [4,1,2,5,6,7,8,9,7,8,7,8,7,7,7,7,7,7,6,6,4,4,4]
     ]

In [28]: y = [1,2,3,4,3,4,5,4,5,5,4,5,4,5,4,5,6,5,4,5,4,3,4]

In [29]: x.append(y)

In [30]: df = pd.DataFrame(np.array(x).T, columns=['x1','x2','x3','y'])

In [31]: ols(y=df['y'], x=df[['x1', 'x2', 'x3']])
Out[31]: 

-------------------------Summary of Regression Analysis-------------------------

Formula: Y ~ <x1> + <x2> + <x3> + <intercept>

Number of Observations:         23
Number of Degrees of Freedom:   4

R-squared:         0.5348
Adj R-squared:     0.4614

Rmse:              0.8254

F-stat (3, 19):     7.2813, p-value:     0.0019

Degrees of Freedom: model 3, resid 19

-----------------------Summary of Estimated Coefficients------------------------
      Variable       Coef    Std Err     t-stat    p-value    CI 2.5%   CI 97.5%
--------------------------------------------------------------------------------
            x1    -0.0618     0.1446      -0.43     0.6741    -0.3453     0.2217
            x2     0.2360     0.1487       1.59     0.1290    -0.0554     0.5274
            x3     0.2424     0.1394       1.74     0.0983    -0.0309     0.5156
     intercept     1.5704     0.6331       2.48     0.0226     0.3296     2.8113
---------------------------------End of Summary---------------------------------

The program's output:

Thetas :  [6.999959251448524 -0.769216974483968]
Thetas_2 :  [1.5694174539341945 -0.06169183063112409 0.2359981255871977 0.2424327101610395]

Comments:

- If you can't find one, interop with a C or C++ library. That would be my fallback... someday someone will write a Go wrapper for the old FORTRAN libraries. Maybe user1094206.
- Thanks! That is pretty straightforward. I think I'm looking for a library that supports several different kinds of regression, and I figured linear regression would be a good starting point.
- Thanks for sharing! FWIW, this algorithm is not very numerically stable if the x values are very large and unevenly spread (e.g. timestamps), so x should be standardized: subtract the mean and divide by the standard deviation.
- This is an incredible library - I'm glad to have found it. Thank you for creating it!
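To illustrate the standardization one of the comments recommends (a minimal sketch using the population standard deviation; apply it to each explanatory row before fitting, and remember the fitted coefficients are then on the standardized scale):

```go
package main

import (
	"fmt"
	"math"
)

// standardize rescales a feature to zero mean and unit variance,
// which keeps gradient descent numerically stable when the raw
// values are large and unevenly spread (e.g. timestamps).
func standardize(xs []float64) []float64 {
	n := float64(len(xs))
	var mean float64
	for _, v := range xs {
		mean += v
	}
	mean /= n
	var variance float64
	for _, v := range xs {
		variance += (v - mean) * (v - mean)
	}
	std := math.Sqrt(variance / n)
	out := make([]float64, len(xs))
	for i, v := range xs {
		out[i] = (v - mean) / std
	}
	return out
}

func main() {
	// Large, unevenly spread values such as Unix timestamps.
	ts := []float64{1.6e9, 1.6e9 + 60, 1.6e9 + 3600, 1.6e9 + 86400}
	fmt.Println(standardize(ts)) // zero-mean, unit-variance rescaling
}
```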
And for the simple two-variable example:

In [34]: df_1 = pd.DataFrame(np.array([[3,4,5,6,7], [4,3,2,1,3]]).T, columns=['y', 'x'])

In [35]: df_1
Out[35]: 
   y  x
0  3  4
1  4  3
2  5  2
3  6  1
4  7  3

[5 rows x 2 columns]

In [36]: ols(y=df_1['y'], x=df_1['x'])
Out[36]: 

-------------------------Summary of Regression Analysis-------------------------

Formula: Y ~ <x> + <intercept>

Number of Observations:         5
Number of Degrees of Freedom:   2

R-squared:         0.3077
Adj R-squared:     0.0769

Rmse:              1.5191

F-stat (1, 3):     1.3333, p-value:     0.3318

Degrees of Freedom: model 1, resid 3

-----------------------Summary of Estimated Coefficients------------------------
      Variable       Coef    Std Err     t-stat    p-value    CI 2.5%   CI 97.5%
--------------------------------------------------------------------------------
             x    -0.7692     0.6662      -1.15     0.3318    -2.0749     0.5365
     intercept     7.0000     1.8605       3.76     0.0328     3.3534    10.6466
---------------------------------End of Summary---------------------------------

