
Java Deep Learning 4J (DL4J) low precision, recall and F1


I am using DL4J to find a good model based on a condition matrix. I have prepared a CSV-like dataset (a sample is shown below), but after fine-tuning the hyperparameters and training the model many times, I still cannot get reasonable precision, recall, and F1 results. Could you tell me whether I am doing something wrong?

Sample dataset:

Basically, each column indicates whether a given condition is present (1) or absent (0) for that sample. The first column is the label class, with only 2 possible outputs, i.e. 1/0.

The DataVec part:

int OUTPUT_NEURONS = 2;  // Only 2 classes for output
int CLASS_INDEX = 0;     // First column is the label
int FILE_SIZE = 0;       // FILE_SIZE will be calculated while preparing the datavecRecords below

List<List<Writable>> datavecRecords = new ArrayList<>();

// ......
// Prepare the datavecRecords using the CSV data above
// ......

CollectionRecordReader crr = new CollectionRecordReader(datavecRecords);
RecordReaderDataSetIterator iter = new RecordReaderDataSetIterator(crr, FILE_SIZE, CLASS_INDEX, OUTPUT_NEURONS);
allData = iter.next();

SplitTestAndTrain testAndTrain = allData.splitTestAndTrain(0.6);
DataSet trainingData = testAndTrain.getTrain();
DataSet testData = testAndTrain.getTest();

DataNormalization normalizer = new NormalizerStandardize();
normalizer.fit(trainingData);
normalizer.transform(trainingData);
normalizer.transform(testData);

// For early stopping use
DataSetIterator trainSetIterator = new ListDataSetIterator(trainingData.asList()); 
DataSetIterator testSetIterator = new ListDataSetIterator(testData.asList()); 

// sortedKeys holds all of the column keys (including the label column)

INPUT_NEURONS = sortedKeys.size() - 1; 
HIDDEN_NEURONS = FILE_SIZE / (2 * (INPUT_NEURONS + OUTPUT_NEURONS));
HIDDEN_NEURONS = HIDDEN_NEURONS <= 0 ? 1 : HIDDEN_NEURONS;
int n=0;
MultiLayerConfiguration conf = new NeuralNetConfiguration.Builder()
    .seed(12345)
    .iterations(1)
    .learningRate(0.001)
    .weightInit(WeightInit.XAVIER)
    .optimizationAlgo(OptimizationAlgorithm.STOCHASTIC_GRADIENT_DESCENT)
    .regularization(true).l2(1e-4)
    .updater(new Nesterovs(0.001,0.9))
    .list()
        .layer(n++, new DenseLayer.Builder()
            .nIn(INPUT_NEURONS)
            .nOut(HIDDEN_NEURONS)
            .activation(Activation.RELU)
            .build())
        .layer(n++, new OutputLayer.Builder(LossFunction.NEGATIVELOGLIKELIHOOD)
            .nIn(HIDDEN_NEURONS)
            .nOut(OUTPUT_NEURONS)
            .activation(Activation.SOFTMAX)
            .build())
    .pretrain(false).backprop(true).build();            

EarlyStoppingConfiguration esConf = new EarlyStoppingConfiguration.Builder()
    .epochTerminationConditions(
        new MaxEpochsTerminationCondition(10000), 
        new ScoreImprovementEpochTerminationCondition(50))
    .iterationTerminationConditions(new MaxTimeIterationTerminationCondition(5, TimeUnit.MINUTES))
    .scoreCalculator(new DataSetLossCalculator(testSetIterator, true))
    .evaluateEveryNEpochs(1)
    .modelSaver(saver)
    .build();
Output:

Termination reason: EpochTerminationCondition
Termination details: ScoreImprovementEpochTerminationCondition(maxEpochsWithNoImprovement=50, minImprovement=0.0)
Total epochs: 55
Best epoch number: 4
Score at best epoch: 0.6579822991097982

Examples labeled as 0 classified by model as 0: 397 times
Examples labeled as 0 classified by model as 1: 58 times
Examples labeled as 1 classified by model as 0: 190 times
Examples labeled as 1 classified by model as 1: 55 times


==========================Scores========================================
 # of classes:    2
 Accuracy:        0.6457
 Precision:       0.5815
 Recall:          0.5485
 F1 Score:        0.3073
========================================================================
Pattern1 :      Accuracy: 0.6457142857142857 | Precision: 0.5815229681446081 | Recall: 0.54850863422292 | F1: 0.3072625698324022
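For what it's worth, the printed scores can be reproduced from the confusion matrix above; the low F1 is driven by class 1 being missed most of the time (190 of 245 class-1 test examples are predicted as 0). A quick stand-alone check in plain Java, treating class 1 as the positive class:

public class MetricsCheck {
    public static void main(String[] args) {
        // Confusion-matrix counts copied from the output above
        double tp = 55;   // labeled 1, classified as 1
        double fn = 190;  // labeled 1, classified as 0
        double fp = 58;   // labeled 0, classified as 1
        double tn = 397;  // labeled 0, classified as 0

        double accuracy = (tp + tn) / (tp + tn + fp + fn);          // 452 / 700 = 0.6457...
        double prec1 = tp / (tp + fp);                              // 55 / 113  = 0.4867...
        double rec1  = tp / (tp + fn);                              // 55 / 245  = 0.2245...
        double prec0 = tn / (tn + fn);                              // 397 / 587 = 0.6763...
        double rec0  = tn / (tn + fp);                              // 397 / 455 = 0.8725...
        double f1Class1 = 2 * prec1 * rec1 / (prec1 + rec1);        // = 0.3072... (the printed F1)

        System.out.println("Accuracy  : " + accuracy);              // 0.6457...
        System.out.println("Precision : " + (prec0 + prec1) / 2);   // 0.5815... (average over both classes)
        System.out.println("Recall    : " + (rec0 + rec1) / 2);     // 0.5485... (average over both classes)
        System.out.println("F1 class 1: " + f1Class1);              // 0.3072...
    }
}

So accuracy looks acceptable mainly because class 0 dominates the test set (455 vs. 245 examples), while recall on class 1 is only about 0.22.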
The train and test code:

StatsStorage statsStorage = new InMemoryStatsStorage();
MultiLayerNetwork networkModel = new MultiLayerNetwork(conf);
networkModel.setListeners(new StatsListener(statsStorage), new ScoreIterationListener(10));


IEarlyStoppingTrainer trainer = new EarlyStoppingTrainer(esConf, networkModel, trainSetIterator);
EarlyStoppingResult<MultiLayerNetwork> result = trainer.fit();


// -------------------------- Evaluate the trained model and print results --------------------------
System.out.println("Termination reason: " + result.getTerminationReason());
System.out.println("Termination details: " + result.getTerminationDetails());
System.out.println("Total epochs: " + result.getTotalEpochs());
System.out.println("Best epoch number: " + result.getBestModelEpoch());
System.out.println("Score at best epoch: " + result.getBestModelScore());

MultiLayerNetwork bestNetwork = result.getBestModel();
Evaluation eval1 = new Evaluation(OUTPUT_NEURONS);
testSetIterator.reset();

for (int i = 0; i < testData.numExamples(); i++) {
    DataSet t = testData.get(i);
    INDArray features = t.getFeatureMatrix();
    INDArray labels = t.getLabels();
    INDArray output = bestNetwork.output(features, false);
    eval1.eval(labels, output);
}

M.messageln(eval1.stats());   // print the evaluation stats (M appears to be a custom logging helper)
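Assuming the same 0.9.1 API, the manual loop above can also be written more compactly with the network's built-in evaluate helper (a sketch, not part of the original code):

// Equivalent evaluation over the whole test iterator
testSetIterator.reset();
Evaluation eval2 = bestNetwork.evaluate(testSetIterator);
System.out.println(eval2.stats());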
No matter how I tune the learning rate, the input and output activation functions, the updater, the regularization, and so on, I still cannot get satisfactory results. I would really appreciate any help on how to use DL4J better. I have also been trying Arbiter, but with no luck; I am not sure whether it is because I am on the 0.9.1 stable version.


Many thanks.

You can find answers to questions like this in the user community here:

The code looks fine. You aren't doing anything wrong. I think it just needs some tuning.

Your 60/40 train/test split holds back more data for testing than the 70/30 or 80/20 splits I usually see. Is there a reason you want to keep more data for testing?
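For example, an 80/20 split only changes the fraction passed to splitTestAndTrain (a sketch using the question's own variables):

// Hold out 20% for testing instead of 40% (hypothetical change)
SplitTestAndTrain testAndTrain = allData.splitTestAndTrain(0.8);
DataSet trainingData = testAndTrain.getTrain();
DataSet testData = testAndTrain.getTest();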

When you say "reasonable", do you mean compared to a baseline? If not, I would start with a network that is just the output layer, which is essentially logistic regression, and use that as a baseline.
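A network with no hidden layer, just the softmax output layer, is effectively logistic regression. A minimal sketch using the same 0.9.1-style builder and variables as the question:

// Baseline: input -> softmax output only (logistic regression)
MultiLayerConfiguration baselineConf = new NeuralNetConfiguration.Builder()
    .seed(12345)
    .iterations(1)
    .learningRate(0.001)
    .weightInit(WeightInit.XAVIER)
    .optimizationAlgo(OptimizationAlgorithm.STOCHASTIC_GRADIENT_DESCENT)
    .updater(new Nesterovs(0.001, 0.9))
    .list()
        .layer(0, new OutputLayer.Builder(LossFunction.NEGATIVELOGLIKELIHOOD)
            .nIn(INPUT_NEURONS)
            .nOut(OUTPUT_NEURONS)
            .activation(Activation.SOFTMAX)
            .build())
    .pretrain(false).backprop(true).build();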


Have you tried adding more layers? Adding another layer with dropout might be useful, as in the sketch below.
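For instance, the configuration from the question could gain a second hidden layer with dropout (the layer size and the 0.5 dropout value are just example choices):

// Hypothetical variant of the question's configuration: one extra hidden layer with dropout
MultiLayerConfiguration deeperConf = new NeuralNetConfiguration.Builder()
    .seed(12345)
    .iterations(1)
    .learningRate(0.001)
    .weightInit(WeightInit.XAVIER)
    .optimizationAlgo(OptimizationAlgorithm.STOCHASTIC_GRADIENT_DESCENT)
    .regularization(true).l2(1e-4)
    .updater(new Nesterovs(0.001, 0.9))
    .list()
        .layer(0, new DenseLayer.Builder()
            .nIn(INPUT_NEURONS).nOut(HIDDEN_NEURONS)
            .activation(Activation.RELU)
            .build())
        .layer(1, new DenseLayer.Builder()
            .nIn(HIDDEN_NEURONS).nOut(HIDDEN_NEURONS)
            .activation(Activation.RELU)
            .dropOut(0.5)   // example dropout value
            .build())
        .layer(2, new OutputLayer.Builder(LossFunction.NEGATIVELOGLIKELIHOOD)
            .nIn(HIDDEN_NEURONS).nOut(OUTPUT_NEURONS)
            .activation(Activation.SOFTMAX)
            .build())
    .pretrain(false).backprop(true).build();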

I will try adding that layer and see the results. By the way, I am looking into the Arbiter approach to find a better hyperparameter combination, but I had no luck on version 0.9.1; I ran into Maven issues, and when I changed the Maven property to 1.0.0-beta2, all of my code broke.