R: error when converting from sae.dnn (deepnet) to mx.mlp (mxnet)

Tags: r, machine-learning, neural-network, mxnet

I am trying to translate code from deepnet to mxnet, but I am not sure what I am doing wrong. I get an error message that says:

"Error in nn$W[[i -1]] %*% t(post)". 
requires numeric/complex matrix/vector arguments 
Calls: neural.predict -> nn.predict -> t
The deepnet code (written by Johann C. Lotter) is:

library('deepnet', quietly = T)
library('caret', quietly = T)

neural.train = function(model, XY)
{
  XY ...

Please find the working code below. If for some reason it does not work on your machine, check your mxnet version; I am running mxnet version 0.10.1 on a Mac. (The error itself most likely comes from neural.predict still routing to deepnet's nn.predict, which looks up weight matrices in nn$W that an mx.mlp model does not have, so the %*% receives a NULL.)

Since you told me you want the code to stay as close to the example as possible, I have set the attribute values back to the original ones. Feel free to change them if needed. For example, a momentum of 0.5 seems too small: values of 0.9 or higher are commonly used. And a learning rate of 0.5 is too large; usually the learning rate is no higher than 0.1.

library('mxnet') 

neural.train = function(model, XY) 
{
  XY <- as.matrix(XY)
  X  <- XY[, -ncol(XY)]       # features: every column but the last
  Y  <- XY[, ncol(XY)]        # target: the last column
  Y  <- ifelse(Y > 0, 1, 0)   # binarize the target for classification
  # Train an MLP and store it in the global model registry.
  Models[[model]] <<- mx.mlp(X, Y,
                             hidden_node = c(30,30,30), 
                             activation = "tanh", 
                             momentum = 0.5, 
                             learning.rate = 0.5, 
                             out_activation = "softmax",
                             num.round = 100,
                             out_node = 2,
                             array.batch.size = 100,
                             dropout = 0,
                             array.layout = "rowmajor")  # samples in rows
}

neural.predict = function(model, X) 
{
  # A single observation arrives as a vector; promote it to a 1-row matrix.
  if (is.vector(X)) X <- t(X)
  return(predict(Models[[model]], X, array.layout = "rowmajor"))
}

neural.save = function(name)
{
  save(Models, file = name)  
}

neural.init = function()
{
  set.seed(365)
  Models <<- vector("list")   # global registry of trained models
}

# Two synthetic Gaussian clusters; Var3 is a random 0/1 target drawn
# independently of the features.
Var1 <- c(rnorm(50, 1, 0.5), rnorm(50, -0.6, 0.2))
Var2 <- c(rnorm(50, -0.8, 0.2), rnorm(50, 2, 1))
Var3 <- sample(c(0,1), replace=T, size=100)
training.data <- matrix(c(Var1, Var2, Var3), nrow = 100, ncol = 3)

# Test set drawn from the same distributions (features only).
Var4 <- c(rnorm(50, 1, 0.5), rnorm(50, -0.6, 0.2))
Var5 <- c(rnorm(50, -0.8, 0.2), rnorm(50, 2, 1))
test.data <- matrix(c(Var4, Var5), nrow = 100, ncol = 2)


neural.init()
neural.train("mx_mlp_model", training.data)
neural.predict("mx_mlp_model", test.data)

Hope it helps.


Great answer! Apart from momentum and learning rate, are there other values that should be changed? I do not need to keep the exact values; if there is a better solution, I would be glad to learn how to find optimal values for the hyperparameters (learning rate, momentum, and whatever else there may be).

That is a very hot research topic. Unfortunately there is no general answer; in each specific case the best parameters are found by trial and error. There are, however, a few general approaches to doing this. I would suggest starting from - (see the grid-search sketch below).
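A minimal trial-and-error sketch under stated assumptions: a plain grid search over learning rate and momentum, scored by accuracy on a held-out 20% split. The helper name tune.mlp, the candidate value grids, and the batch size are illustrative choices of mine, not from the thread.

tune.mlp = function(X, Y)
{
  n    <- nrow(X)
  idx  <- sample(n, size = round(0.8 * n))   # 80% train / 20% validation
  best <- list(acc = -Inf)
  for (lr in c(0.01, 0.05, 0.1)) {
    for (mom in c(0.5, 0.9, 0.99)) {
      mx.set.seed(365)                       # same init for a fair comparison
      fit <- mx.mlp(X[idx, ], Y[idx],
                    hidden_node = c(30,30,30),
                    activation = "tanh",
                    momentum = mom,
                    learning.rate = lr,
                    out_activation = "softmax",
                    num.round = 100,
                    out_node = 2,
                    array.batch.size = 32,
                    array.layout = "rowmajor")
      pred <- predict(fit, X[-idx, ], array.layout = "rowmajor")
      acc  <- mean((max.col(t(pred)) - 1) == Y[-idx])   # validation accuracy
      if (acc > best$acc) best <- list(acc = acc, learning.rate = lr, momentum = mom)
    }
  }
  best
}

# e.g. tune.mlp(training.data[, 1:2], training.data[, 3])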
Running the answer's code end-to-end prints essentially the same probabilities for every one of the 100 test rows:

> neural.predict("mx_mlp_model", test.data)
     [,1] [,2] [,3] [,4] [,5] ... [,100]
[1,] 0.47 0.47 0.47 0.47 0.47 ...  0.47
[2,] 0.53 0.53 0.53 0.53 0.53 ...  0.53
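
Those flat 0.47/0.53 columns are expected here: the toy target Var3 is a coin flip drawn independently of the features, so the best the network can do is learn the class base rates. Below is a hypothetical variant, my illustration rather than something from the thread, that ties the label to Var1 so there is something to learn, combined with the more conventional optimizer settings suggested in the answer; the exact values 0.9 and 0.05 are my assumptions.

# Illustrative only: the label now marks cluster membership (Var1 is
# centred at 1 for the first 50 rows and at -0.6 for the last 50), so
# the classes are separable and the predicted probabilities should
# spread away from 0.5.
Var3.sep  <- as.numeric(Var1 > 0)
X.sep     <- matrix(c(Var1, Var2), nrow = 100, ncol = 2)
model.sep <- mx.mlp(X.sep, Var3.sep,
                    hidden_node = c(30,30,30),
                    activation = "tanh",
                    momentum = 0.9,        # assumed value, per the advice above
                    learning.rate = 0.05,  # assumed value, per the advice above
                    out_activation = "softmax",
                    num.round = 100,
                    out_node = 2,
                    array.batch.size = 100,
                    dropout = 0,
                    array.layout = "rowmajor")
predict(model.sep, test.data, array.layout = "rowmajor")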