MXNetR: Not enough information to get shape


I am implementing a neural network in MXNetR. I am trying to customize the loss function so that it computes the correlation between the output vector and the target vector. Here is my code:

# Generate training and testing data
train.x = matrix(data = rexp(200, rate = 10), nrow = 120, ncol = 6380)
test.x = matrix(data = rexp(200, rate = 10), nrow = 60, ncol = 6380)
train.y = matrix(data = rexp(200, rate = 10), nrow = 120, ncol = 319)
test.y = matrix(data = rexp(200, rate = 10), nrow = 60, ncol = 319)

# Reshape the data
train.array <- train.x
dim(train.array) <- c(20, 319, 1, ncol(train.x))
test.array <- test.x
dim(test.array) <- c(20, 319, 1, ncol(test.x))

# Define the input data
data <- mx.symbol.Variable("data")

# Define the first fully connected layer
fc1 <- mx.symbol.FullyConnected(data, num_hidden = 100)
act.fun <- mx.symbol.Activation(fc1, act_type = "relu") # create a hidden layer with Rectified Linear Unit as its activation function.
output <<- mx.symbol.FullyConnected(act.fun, num_hidden = 319)

# Customize loss function
label <- mx.symbol.Variable("label")

output_mean <- mx.symbol.mean(output)
label_mean <- mx.symbol.mean(label)

output_delta <- mx.symbol.broadcast_sub(output, output_mean)
label_delta <- mx.symbol.broadcast_sub(label, label_mean)

output_sqr <- mx.symbol.square(output_delta)
label_sqr <- mx.symbol.square(label_delta)

output_sd <- mx.symbol.sqrt(mx.symbol.sum(output_delta))
label_sd <- mx.symbol.sqrt(mx.symbol.sum(label_delta))

numerator <- mx.symbol.sum(output_delta * label_delta)
denominator <- output_sd * label_sd

lro <- mx.symbol.MakeLoss(numerator/denominator)

# Generate a new model
model <- mx.model.FeedForward.create(symbol=lro, X=train.array, y=train.y, 
                                 num.round=5000, array.batch.size=1, optimizer = "adam",
                                 learning.rate = 0.0003, eval.metric = mx.metric.rmse,
                                 epoch.end.callback = mx.callback.log.train.metric(20, logger))
I also tried wrapping the entire correlation formula in MXNet:

lro2 <- mx.symbol.MakeLoss(
    mx.symbol.negative((mx.symbol.sum(output * label) -
    (mx.symbol.sum(output) * mx.symbol.sum(label))) /
    mx.symbol.sqrt((mx.symbol.sum(mx.symbol.square(output)) -
    ((mx.symbol.sum(output)) * (mx.symbol.sum(output)))) *
    (mx.symbol.sum(mx.symbol.square(label)) - ((mx.symbol.sum(label)) * (mx.symbol.sum(label))))))
)

MXNet performs shape inference to determine the required shapes of the model parameters (weights and biases) so that it can allocate memory for them, and the first time it does this is when the model parameters are initialized.
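One way to see where inference gets stuck is to ask the symbol for its shapes directly. A sketch only: it assumes the symbols above are defined, the mxnet package is loaded, and the data shape below (one 6380-feature sample, column-major as the R bindings expect) is a guess that may need adjusting:

```r
# Ask MXNet to infer every argument shape from the data shape alone.
# Arguments whose shape comes back missing are the ones that break
# inference and trigger the "not enough information" error.
shapes <- mx.symbol.infer.shape(lro, data = c(6380, 1))
str(shapes$arg.shapes)
```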

Somewhere in your symbol you have a shape that cannot be inferred from its neighbors; I suspect it may be the broadcast_sub that you dropped in the inline definition. The errors in the reshaping make the exact problem hard to diagnose. You could also try testing the logic with NDArray first and then converting back to symbols.
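Following that suggestion, the correlation logic can first be checked on plain R vectors, using base operations that mirror the symbols one-to-one (a minimal sketch with made-up data; no mxnet required):

```r
# Pearson correlation built from the same primitives as the symbolic
# version: mean, subtract, square, sum, sqrt.
manual_cor <- function(output, label) {
  output_delta <- output - mean(output)  # mx.symbol.broadcast_sub
  label_delta  <- label - mean(label)
  numerator <- sum(output_delta * label_delta)
  # The standard deviations must come from the *squared* deltas;
  # sqrt(sum(output_delta)) would sum terms that cancel to ~0.
  output_sd <- sqrt(sum(output_delta^2))
  label_sd  <- sqrt(sum(label_delta^2))
  numerator / (output_sd * label_sd)
}

set.seed(1)
x <- rexp(50, rate = 10)
y <- x + rnorm(50, sd = 0.01)
all.equal(manual_cor(x, y), cor(x, y))  # evaluates to TRUE
```

Once this matches cor(), each base operation can be replaced by its mx.symbol counterpart with more confidence that any remaining error is a shape problem rather than a math problem.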


If you are processing samples in batches, you should change the array.batch.size parameter of mx.model.FeedForward.create rather than reshaping your data into batches.
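Concretely, that would mean passing the 2-D matrices directly and letting array.batch.size do the batching, roughly like this (a sketch only: it assumes the lro symbol above compiles, and the data layout may need adjusting for the R bindings):

```r
# Feed the raw (samples x features) matrices; FullyConnected layers
# do not need a 4-D array. Batching comes from array.batch.size.
model <- mx.model.FeedForward.create(
  symbol = lro,
  X = train.x,            # 120 samples x 6380 features
  y = train.y,            # 120 samples x 319 targets
  num.round = 50,
  array.batch.size = 20,  # 20 samples per batch
  optimizer = "adam",
  learning.rate = 0.0003,
  eval.metric = mx.metric.rmse
)
```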

Apologies, there were errors in the reshaping step. Could you update the code?