R: Why is deepnet so slow? How can I speed it up?
Recently, I have been using "deepnet" to train on MNIST. My code is:
testMNIST <- function() {
    mnist <- load.mnist("./mnist/")
    cat("Load MNIST data succeed!", "\n")
    train_x <- mnist$train$x
    train_y <- mnist$train$y
    train_y_mat <- mnist$train$yy
    test_x <- mnist$test$x
    test_y <- mnist$test$y
    test_y_mat <- mnist$test$yy
    dnn <- dbn.dnn.train(train_x, train_y, hidden = c(1000, 500, 200), learningrate = 0.01, numepochs = 100)
    err_rate <- nn.test(dnn, test_x, test_y)
    cat("The error rate of training the DBN with the label vector:", "\n")
    print(err_rate)
}
testMNIST()

First, deepnet is written in pure R, so it is somewhat slow. The most time-consuming operation in deepnet is matrix multiplication, so plugging a parallel BLAS in at the back end helps a lot: for example, OpenBLAS gives about a 2.5X speedup compared with native R + deepnet (this is one of the examples in the implicit parallel mode discussion).
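To confirm which BLAS your R session is actually linked against before and after the swap, a quick check is possible in base R (the BLAS/LAPACK fields of sessionInfo() exist since R 3.4.0; the 1000x1000 matrix size below is an arbitrary choice for a rough timing):

```r
# Report the BLAS/LAPACK libraries this R session is linked against.
si <- sessionInfo()
cat("BLAS:  ", si$BLAS, "\n")
cat("LAPACK:", si$LAPACK, "\n")

# Rough matrix-multiplication throughput under the current BLAS;
# this is the operation that dominates deepnet's training time.
n <- 1000
a <- matrix(rnorm(n * n), n, n)
print(system.time(b <- a %*% a))
```

Running this once with the default BLAS and once with OpenBLAS preloaded makes the source of the speedup visible before committing to a full training run.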
# install.packages("data.table")
# install.packages("deepnet")
library(data.table)
library(deepnet)

# Download the MNIST dataset from the links below:
# https://h2o-public-test-data.s3.amazonaws.com/bigdata/laptop/mnist/train.csv.gz
# https://h2o-public-test-data.s3.amazonaws.com/bigdata/laptop/mnist/test.csv.gz
mnist.train <- as.matrix(fread("./train.csv", header = F))
mnist.test  <- as.matrix(fread("./test.csv", header = F))

# V785 is the label
x <- mnist.train[, 1:784] / 255
y <- model.matrix(~ as.factor(mnist.train[, 785]) - 1)

system.time(
    nn <- dbn.dnn.train(x, y,
                        hidden = c(64),
                        # hidden = c(500, 500, 250, 125),
                        output = "softmax",
                        batchsize = 128,
                        numepochs = 100,
                        learningrate = 0.1)
)
Second, deep learning implementations with compiled back ends run faster than a pure-R package like deepnet.
Finally, note that your hidden = c(1000, 500, 200) network is very large, so it will be very slow even with multiple cores :(

(Comment) Hi, are there any other good R packages for "deep learning"?

(Comment) This code has two problems. First, you need to use mnist$train$yy to train the DNN. Second, there is no normalization, so you will get wrong results, because dnn.train does not normalize its input.
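The two fixes the comment describes can be sketched with base R alone (the toy labels and pixel block below are made-up stand-ins for mnist$train$y and mnist$train$x):

```r
# Fix 1: one-hot encode the digit labels (which is what mnist$train$yy
# already holds), so the target matrix matches a softmax output layer.
y_raw <- c(3, 1, 4, 1, 5)                      # toy digit labels
y_mat <- model.matrix(~ as.factor(y_raw) - 1)  # one row per sample, one column per class

# Fix 2: scale raw pixel intensities from [0, 255] into [0, 1],
# since the trainer does not normalize its input for you.
x_raw <- matrix(sample(0:255, 20, replace = TRUE), nrow = 5)  # toy pixel block
x <- x_raw / 255

# With both fixes applied, training would then look like:
# dnn <- dbn.dnn.train(x, y_mat, hidden = c(64), output = "softmax")
```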
> R CMD BATCH deepnet_mnist.R
> cat deepnet_mnist.Rout
deep nn has been trained.
    user   system  elapsed
2110.710    2.311 2115.042

> env LD_PRELOAD=/.../tools/OpenBLAS/lib/libopenblas.so R CMD BATCH deepnet_mnist.R
> cat deepnet_mnist.Rout
deep nn has been trained.
    user    system   elapsed
2197.394 10496.190   867.748

Elapsed time drops from about 2115 s to 868 s, i.e. roughly a 2.4X speedup from swapping in OpenBLAS alone.