C++: returning an object from a template function

I am fairly new to template/generic programming in C++, and I have a question about how to return an object from a template function. This concerns the neural-network module of the mlpack library; the code comes from feed_forward_network_test.cpp. If I understand it correctly, the template function BuildVanillaNetwork is set up so that different kinds of network parameters can be passed in to build a neural network. I would like this function to return the FFN object it builds, so that I can access it from the call site. I made some small changes to the code there:

template <typename PerformanceFunction,
          typename OutputLayerType,
          typename PerformanceFunctionType,
          typename MatType = arma::mat>
mlpack::ann::FFN<> BuildVanillaNetwork(MatType& trainData,
                                       MatType& trainLabels,
                                       MatType& testData,
                                       MatType& testLabels,
                                       const size_t hiddenLayerSize,
                                       const size_t maxEpochs,
                                       const double classificationErrorThreshold)
{
    // input layer
    mlpack::ann::LinearLayer<> inputLayer(trainData.n_rows, hiddenLayerSize);
    mlpack::ann::BiasLayer<> inputBiasLayer(hiddenLayerSize);
    mlpack::ann::BaseLayer<PerformanceFunction> inputBaseLayer;

    // hidden layer
    mlpack::ann::LinearLayer<> hiddenLayer1(hiddenLayerSize, trainLabels.n_rows);
    mlpack::ann::BiasLayer<> hiddenBiasLayer1(trainLabels.n_rows);
    mlpack::ann::BaseLayer<PerformanceFunction> outputLayer;

    // output layer
    OutputLayerType classOutputLayer;

    auto modules = std::tie(inputLayer, inputBiasLayer, inputBaseLayer,
                            hiddenLayer1, hiddenBiasLayer1, outputLayer);
    mlpack::ann::FFN<decltype(modules), decltype(classOutputLayer),
                     mlpack::ann::RandomInitialization,
                     PerformanceFunctionType> net(modules, classOutputLayer);

    net.Train(trainData, trainLabels);

    MatType prediction;
    net.Predict(testData, prediction);

    double classificationError = 0;  // must be initialized before incrementing
    for (size_t i = 0; i < testData.n_cols; i++)
    {
        if (arma::sum(arma::sum(arma::abs(prediction.col(i) - testLabels.col(i)))) != 0)
        {
            classificationError++;
        }
    }
    classificationError = classificationError / testData.n_cols;
    std::cout << "Classification Error = " << classificationError * 100 << "%" << std::endl;

    return net;
}


mlpack::ann::FFN<> BuildVanillaNetwork(MatType& trainData, ...

Change it to something that includes the template parameters, for example:

mlpack::ann::FFN<
    PerformanceFunction,
    OutputLayerType,
    PerformanceFunctionType,
    MatType
> BuildVanillaNetwork(MatType& trainData,...

The problem is that you have defined your BuildVanillaNetwork function as:

mlpack::ann::FFN<> BuildVanillaNetwork(...)
When templates are involved, the error messages are often hard for a human to read, but working through those lines boils down to something like:

error: wrong number of template arguments (0, should be 4) provided for 'template class mlpack::ann::FFN'

All the remaining errors cascade from that one (basically, since the compiler cannot make sense of the function's return type, it assumes it is int, and then complains that it cannot convert net to int).

So you do have to spell out the template arguments of the return type. Inside the function body you use decltype to deduce them (which happens at compile time, not at runtime), but in the prototype it is not that easy. There is a way to use decltype to declare a function's return type, but it would not help you much in this case, so you might as well write them out explicitly.

You can simplify writing out the return type somewhat by using the following pattern:

template<typename PerformanceFunction, typename OutputLayerType,
    typename PerformanceFunctionType, typename MatType>
struct BuildVanillaNetworkHelper
{
    using LinearLayer = mlpack::ann::LinearLayer<>;
    using BiasLayer = mlpack::ann::BiasLayer<>;
    using BaseLayer = mlpack::ann::BaseLayer<PerformanceFunction>;
    using ModulesType = std::tuple<LinearLayer, BiasLayer, BaseLayer,
        LinearLayer, BiasLayer, BaseLayer>;
    using FFNType = mlpack::ann::FFN<ModulesType, OutputLayerType,
        mlpack::ann::RandomInitialization, PerformanceFunctionType>;
};

template <typename PerformanceFunction,
         typename OutputLayerType,
         typename PerformanceFunctionType,
         typename MatType = arma::mat,
         typename Helper = BuildVanillaNetworkHelper<
             PerformanceFunction, OutputLayerType,
             PerformanceFunctionType, MatType>
         >
typename Helper::FFNType BuildVanillaNetwork(...);

I'm not sure that is the problem, because I get the same errors during compilation even if I just call BuildVanillaNetwork without storing its return value anywhere. Also, I'm not sure how to work out the actual type, since the first template argument of FFN is decltype(modules), which as far as I understand determines the type at runtime.

OK, my apologies: the first error complains about missing template arguments. I have changed the answer accordingly. This error happens because

decltype(net) != mlpack::ann::FFN<...>

Whether or not you store the result, you are still telling the compiler that the function returns the latter, when it actually returns the former. The template arguments need to match for the types to be the same.

@flatmouse: If I try that, I get an error saying "invalid use of template-name without an argument list". And types are determined at compile time as well. Which C++ standard is this, 11 or 14?

@Joel: C++11 AFAIK, since I set CMAKE_CXX_FLAGS to -std=c++11 in my CMakeLists. I'm on my phone so I can't really write out a solution, but you do have to specify the proper return type. Also, if you are willing to switch to C++14, you can avoid all this nonsense by using auto.

@Ionut: I see. If I write them out explicitly, wouldn't I have to write multiple versions of the function with different FFN arguments? Also, the first template argument is decltype(modules), which is a std::tie of a bunch of things (and the number of arguments will vary), so I'm not sure how I would deduce the types in order to write them out explicitly.

The template arguments of FFN presumably depend on the function's template parameters, right? In that case you don't have to write multiple functions; and you cannot write multiple functions that differ only in their return type anyway. The arguments to std::tie can vary, but it is a class template, so its template arguments are fully resolved at compile time.

@Ionut: Judging from the FFN code, this seems to be the definition of FFN: FFN<LayerTypes, OutputLayerType, InitializationRuleType, PerformanceFunction>.

@Ionut: If I compile that, I get an error saying LayerTypes etc. are not declared in this scope. So I still don't know how to define my function's return type based on the object it returns. The object's type is de…
[100%] Building CXX object CMakeFiles/ff_nn.dir/src/ff_nn.cpp.o
In file included from /home/username/project-yanack/mlpack_nn/src/ff_nn.cpp:16:0:
/usr/local/include/mlpack/methods/ann/ffn.hpp: In instantiation of ‘class mlpack::ann::FFN<mlpack::ann::LogisticFunction, mlpack::ann::BinaryClassificationLayer, mlpack::ann::MeanSquaredErrorFunction, arma::Mat<double> >’:
/home/username/project-yanack/mlpack_nn/src/ff_nn.cpp:83:112:   required from here
/usr/local/include/mlpack/methods/ann/ffn.hpp:361:55: error: incomplete type ‘std::tuple_size<mlpack::ann::LogisticFunction>’ used in nested name specifier
       size_t Max = std::tuple_size<LayerTypes>::value - 1,
                                                       ^
/usr/local/include/mlpack/methods/ann/ffn.hpp:369:55: error: incomplete type ‘std::tuple_size<mlpack::ann::LogisticFunction>’ used in nested name specifier
       size_t Max = std::tuple_size<LayerTypes>::value - 1,
                                                       ^
/home/username/project-yanack/mlpack_nn/src/ff_nn.cpp: In instantiation of ‘mlpack::ann::FFN<PerformanceFunction, OutputLayerType, PerformanceFunctionType, MatType> BuildVanillaNetwork(MatType&, MatType&, MatType&, MatType&, size_t, size_t, double) [with PerformanceFunction = mlpack::ann::LogisticFunction; OutputLayerType = mlpack::ann::BinaryClassificationLayer; PerformanceFunctionType = mlpack::ann::MeanSquaredErrorFunction; MatType = arma::Mat<double>; size_t = long unsigned int]’:
/home/username/project-yanack/mlpack_nn/src/ff_nn.cpp:83:112:   required from here
/home/username/project-yanack/mlpack_nn/src/ff_nn.cpp:64:12: error: could not convert ‘net’ from ‘mlpack::ann::FFN<std::tuple<mlpack::ann::LinearLayer<arma::Mat<double>, arma::Mat<double> >&, mlpack::ann::BiasLayer<arma::Mat<double>, arma::Mat<double> >&, mlpack::ann::BaseLayer<mlpack::ann::LogisticFunction, arma::Mat<double>, arma::Mat<double> >&, mlpack::ann::LinearLayer<arma::Mat<double>, arma::Mat<double> >&, mlpack::ann::BiasLayer<arma::Mat<double>, arma::Mat<double> >&, mlpack::ann::BaseLayer<mlpack::ann::LogisticFunction, arma::Mat<double>, arma::Mat<double> >&>, mlpack::ann::BinaryClassificationLayer, mlpack::ann::RandomInitialization, mlpack::ann::MeanSquaredErrorFunction>’ to ‘mlpack::ann::FFN<mlpack::ann::LogisticFunction, mlpack::ann::BinaryClassificationLayer, mlpack::ann::MeanSquaredErrorFunction, arma::Mat<double> >’
     return net;
            ^
make[2]: *** [CMakeFiles/ff_nn.dir/src/ff_nn.cpp.o] Error 1
make[1]: *** [CMakeFiles/ff_nn.dir/all] Error 2
make: *** [all] Error 2