Java DL4J - Accuracy with a Stacked Denoising Autoencoder

Tags: java, machine-learning, neural-network, deep-learning, deeplearning4j

I have been trying to run the example code DL4J provides for a stacked denoising autoencoder. However, my results are very poor:

Warning: class 0 was never predicted by the model. This class was excluded from the average precision
Warning: class 1 was never predicted by the model. This class was excluded from the average precision
Warning: class 2 was never predicted by the model. This class was excluded from the average precision
Warning: class 3 was never predicted by the model. This class was excluded from the average precision
Warning: class 4 was never predicted by the model. This class was excluded from the average precision
Warning: class 5 was never predicted by the model. This class was excluded from the average precision
Warning: class 6 was never predicted by the model. This class was excluded from the average precision
Warning: class 7 was never predicted by the model. This class was excluded from the average precision
Warning: class 9 was never predicted by the model. This class was excluded from the average precision

==========================Scores========================================
 Accuracy:  0.0944
 Precision: 0.0944
 Recall:    0.1
 F1 Score:  0.0971
I am not sure what is causing such poor results. I am using the MNIST dataset, and the stacked denoising autoencoder code is provided by DL4J. My code is below:

/**
 * Created by chris on 1/31/16.
 * Import statements above
 */
public class StackedDenoisingAutoencoderExample {

    private static Logger log = LoggerFactory.getLogger(StackedDenoisingAutoencoderExample.class);

    public static void main(String[] args) throws Exception {
        final int numRows = 28;
        final int numColumns = 28;
        int outputNum = 10;
        int numSamples = 60000;
        int batchSize = 100;
        int iterations = 10;
        int seed = 123;
        int listenerFreq = batchSize / 5;

        log.info("Load data....");
        DataSetIterator iter = new MnistDataSetIterator(batchSize,numSamples,true);

        log.info("Build model....");

        MultiLayerConfiguration conf = new NeuralNetConfiguration.Builder()
            .seed(seed)
            .gradientNormalization(GradientNormalization.ClipElementWiseAbsoluteValue)
            .gradientNormalizationThreshold(1.0)
            .iterations(iterations)
            .momentum(0.5)
            .momentumAfter(Collections.singletonMap(3, 0.9))
            .optimizationAlgo(OptimizationAlgorithm.CONJUGATE_GRADIENT)
            .list(4)
            .layer(0, new AutoEncoder.Builder().nIn(numRows * numColumns).nOut(500)
                    .weightInit(WeightInit.XAVIER).lossFunction(LossFunctions.LossFunction.RMSE_XENT)
                    .corruptionLevel(0.3)
                    .build())
            .layer(1, new AutoEncoder.Builder().nIn(500).nOut(250)
                    .weightInit(WeightInit.XAVIER).lossFunction(LossFunctions.LossFunction.RMSE_XENT)
                    .corruptionLevel(0.3)
                    .build())
            .layer(2, new AutoEncoder.Builder().nIn(250).nOut(200)
                    .weightInit(WeightInit.XAVIER).lossFunction(LossFunctions.LossFunction.RMSE_XENT)
                    .corruptionLevel(0.3)
                    .build())
            .layer(3, new OutputLayer.Builder(LossFunctions.LossFunction.NEGATIVELOGLIKELIHOOD).activation("softmax")
                    .nIn(200).nOut(outputNum).build())
            .pretrain(true).backprop(false)
            .build();

        MultiLayerNetwork model = new MultiLayerNetwork(conf);
        model.init();
        model.setListeners(Arrays.asList((IterationListener) new ScoreIterationListener(listenerFreq)));

        log.info("Train model....");
        model.fit(iter); // achieves end to end pre-training

        log.info("Evaluate model....");
        Evaluation eval = new Evaluation(outputNum);

        DataSetIterator testIter = new MnistDataSetIterator(100,10000);
        while(testIter.hasNext()) {
            DataSet testMnist = testIter.next();
            INDArray predict2 = model.output(testMnist.getFeatureMatrix());
            eval.eval(testMnist.getLabels(), predict2);
        }

        log.info(eval.stats());
        log.info("****************Example finished********************");

    }
}

Thank you.

Since you seem to be new to neural networks, I would take a look at CNNs:
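If you do want to try a CNN on MNIST, a rough sketch along the lines of DL4J's LeNet example from the same era might look like the following. Builder method names change between DL4J releases, so treat the exact calls (including the `ConvolutionLayerSetup` helper) as assumptions to check against your version:

```java
// Sketch of a small CNN for MNIST, modeled on DL4J's LeNet MNIST example.
// Exact builder APIs differ across DL4J releases; verify against your version.
MultiLayerConfiguration.Builder builder = new NeuralNetConfiguration.Builder()
        .seed(123)
        .iterations(1)
        .optimizationAlgo(OptimizationAlgorithm.STOCHASTIC_GRADIENT_DESCENT)
        .learningRate(0.01)
        .list(4)
        .layer(0, new ConvolutionLayer.Builder(5, 5)
                .nIn(1)            // one input channel (grayscale)
                .stride(1, 1)
                .nOut(20)
                .activation("relu")
                .build())
        .layer(1, new SubsamplingLayer.Builder(SubsamplingLayer.PoolingType.MAX)
                .kernelSize(2, 2)
                .stride(2, 2)
                .build())
        .layer(2, new DenseLayer.Builder().nOut(500).activation("relu")
                .weightInit(WeightInit.XAVIER)
                .build())
        .layer(3, new OutputLayer.Builder(LossFunctions.LossFunction.NEGATIVELOGLIKELIHOOD)
                .nOut(10).activation("softmax")
                .build())
        .backprop(true).pretrain(false);

// In older DL4J versions the input shape was attached with a helper:
new ConvolutionLayerSetup(builder, 28, 28, 1);
MultiLayerNetwork cnn = new MultiLayerNetwork(builder.build());
cnn.init();
```

A CNN trained purely with supervised backprop typically reaches well over 90% accuracy on MNIST without any pretraining stage.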

Ask in the Gitter chat on the nd4j page.
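For what it's worth, the reported accuracy (0.0944) is essentially random guessing over 10 classes, which points at the training configuration rather than the data: with `.pretrain(true).backprop(false)`, only the unsupervised layer-wise pretraining runs, so the softmax `OutputLayer` is never trained on the labels. A minimal sketch of the change (assuming the builder methods behave as in the DL4J version used above):

```java
// Enable supervised fine-tuning after the unsupervised pretraining pass,
// so the OutputLayer actually learns the label mapping:
MultiLayerConfiguration conf = new NeuralNetConfiguration.Builder()
        // ... same seed, optimizer, and layer configuration as above ...
        .pretrain(true)   // still run layer-wise autoencoder pretraining
        .backprop(true)   // then fine-tune the whole network with backprop
        .build();

// With both flags enabled, model.fit(iter) should pretrain each AutoEncoder
// layer and then run supervised backprop through the full network,
// including the softmax output layer.
```

If that is the issue, the "class X was never predicted" warnings should disappear once the output layer has actually been trained.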