Deep learning: How to classify CIFAR images with AlexNet on Deeplearning4j

I am a beginner with Deeplearning4j and want to try the CIFAR-10 image classification task. I simply copied AlexNet from the DL4J examples (animalClassification.java), as shown in the configuration below:


When I run the code, it throws an exception saying the "layer-9" configuration with new int[]{3,3} has a problem: the kernel should be greater than 0 and less than pHeight + 2*padH. When I change width*height from 32*32 to 100*100 in the Java code, it runs fine, but I don't think the results are good. So I am a bit confused about how to configure AlexNet's layers to handle 32*32 images.
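
The exception can be reproduced with the standard output-size formula out = (in + 2*pad - kernel) / stride + 1. The sketch below traces the feature-map size through the layers; it assumes the helper defaults from the example this configuration was copied from (maxPool = 3x3 kernel with stride 2, conv5x5 = stride 1 with padding 2, conv3x3 = stride 1 with padding 1):

    // Trace the feature-map size through the AlexNet stack using
    // out = (in + 2*pad - kernel) / stride + 1 (integer division).
    // Assumed helper defaults: maxPool 3x3/stride 2, conv5x5 stride 1/pad 2,
    // conv3x3 stride 1/pad 1.
    public class TraceShapes {
        static int out(int in, int kernel, int stride, int pad) {
            return (in + 2 * pad - kernel) / stride + 1;
        }
        public static void main(String[] args) {
            int s = 32;               // input height/width; try 32, then 100
            s = out(s, 11, 4, 3);     // cnn1     : 32 -> 7   (100 -> 24)
            s = out(s, 3, 2, 0);      // maxpool1 :  7 -> 3   ( 24 -> 11)
            s = out(s, 5, 1, 2);      // cnn2     :  3 -> 3   ( 11 -> 11)
            s = out(s, 3, 2, 0);      // maxpool2 :  3 -> 1   ( 11 ->  5)
            s = out(s, 3, 1, 1);      // cnn3-5   :  1 -> 1   (  5 ->  5)
            System.out.println("input to maxpool3: " + s + "x" + s);
        }
    }

By layer 9 a 32x32 input has shrunk to 1x1 activations, so the 3x3 pooling kernel no longer fits within pHeight + 2*padH and the configuration is rejected; a 100x100 input still has 5x5 activations at that point, which is why it runs.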

Answer: This is not the right example to copy. Please wait until we finish the new model import from Keras; that will also include pretrained models.
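
For context, the Keras import the answer refers to lives in DL4J's deeplearning4j-modelimport module. A minimal usage sketch, assuming a Sequential model saved from Keras as an HDF5 file ("alexnet.h5" below is a placeholder path, not a file shipped with DL4J):

    import org.deeplearning4j.nn.modelimport.keras.KerasModelImport;
    import org.deeplearning4j.nn.multilayer.MultiLayerNetwork;

    public class KerasImportSketch {
        public static void main(String[] args) throws Exception {
            // "alexnet.h5" stands in for a model saved in Keras with
            // model.save("alexnet.h5")
            MultiLayerNetwork net =
                KerasModelImport.importKerasSequentialModelAndWeights("alexnet.h5");
            System.out.println(net.summary());
        }
    }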

    // Network configuration from the question; helper methods such as
    // convInit, conv5x5, conv3x3, maxPool and fullyConnected come from
    // the DL4J example it was copied from.
    MultiLayerConfiguration conf = new NeuralNetConfiguration.Builder()
        .seed(seed)
        .weightInit(WeightInit.DISTRIBUTION)
        .dist(new NormalDistribution(0.0, 0.01))
        .activation(Activation.RELU)
        .updater(Updater.NESTEROVS)
        .iterations(iterations)
        .gradientNormalization(GradientNormalization.RenormalizeL2PerLayer) // normalize to prevent vanishing or exploding gradients
        .optimizationAlgo(OptimizationAlgorithm.STOCHASTIC_GRADIENT_DESCENT)
        .learningRate(1e-2)
        .biasLearningRate(1e-2*2)
        .learningRateDecayPolicy(LearningRatePolicy.Step)
        .lrPolicyDecayRate(0.1)
        .lrPolicySteps(100000)
        .regularization(true)
        .l2(5 * 1e-4)
        .momentum(0.9)
        .miniBatch(false)
        .list()
        .layer(0, convInit("cnn1", channels, 96, new int[]{11, 11}, new int[]{4, 4}, new int[]{3, 3}, 0))
        .layer(1, new LocalResponseNormalization.Builder().name("lrn1").build())
        .layer(2, maxPool("maxpool1", new int[]{3,3}))
        .layer(3, conv5x5("cnn2", 256, new int[] {1,1}, new int[] {2,2}, nonZeroBias))
        .layer(4, new LocalResponseNormalization.Builder().name("lrn2").build())
        .layer(5, maxPool("maxpool2", new int[]{3,3}))
        .layer(6,conv3x3("cnn3", 384, 0))
        .layer(7,conv3x3("cnn4", 384, nonZeroBias))
        .layer(8,conv3x3("cnn5", 256, nonZeroBias))
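        // For 32x32 inputs the activations are already 1x1 at this point,
        // so the 3x3 kernel below triggers the "layer-9" exception: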
        .layer(9, maxPool("maxpool3", new int[]{3,3}))
        .layer(10, fullyConnected("ffn1", 4096, nonZeroBias, dropOut, new GaussianDistribution(0, 0.005)))
        .layer(11, fullyConnected("ffn2", 4096, nonZeroBias, dropOut, new GaussianDistribution(0, 0.005)))
        .layer(12, new OutputLayer.Builder(LossFunctions.LossFunction.NEGATIVELOGLIKELIHOOD)
            .name("output")
            .nOut(numLabels)
            .activation(Activation.SOFTMAX)
            .build())
        .backprop(true)
        .pretrain(false)
        .setInputType(InputType.convolutional(height, width, channels))
        .build();
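
As for configuring the layers for 32x32 images: one illustrative adaptation (not taken from any official DL4J example) is to replace the aggressive 11x11/stride-4 first convolution with a small stride-1 one, which keeps the feature maps large enough for all three pooling stages:

    // Hypothetical replacement for layers 0-2, sized for 32x32 CIFAR images.
    // ConvolutionLayer and SubsamplingLayer are the standard DL4J builders.
    .layer(0, new ConvolutionLayer.Builder(3, 3)
        .name("cnn1")
        .nIn(channels)
        .nOut(96)
        .stride(1, 1)
        .padding(1, 1)
        .build())                                            // 32x32 -> 32x32
    .layer(1, new LocalResponseNormalization.Builder().name("lrn1").build())
    .layer(2, new SubsamplingLayer.Builder(SubsamplingLayer.PoolingType.MAX)
        .name("maxpool1")
        .kernelSize(3, 3)
        .stride(2, 2)
        .build())                                            // 32x32 -> 15x15

With this change the sizes through the rest of the stack become 32 -> 15 -> 15 -> 7 -> 7 -> 3, so maxpool3's 3x3 kernel fits. More generally, smaller CIFAR-scale networks built from a few 3x3 or 5x5 stride-1 convolutions tend to suit 32x32 images better than a full AlexNet, which was designed for 224x224 inputs.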