Problem implementing a decision tree in Java with Apache Spark


I am trying to implement a simple demo of a decision tree classifier in Java with Apache Spark 1.0.0. I am basing it on . So far I have written the code listed below.

With the following code I get an error on this line:

org.apache.spark.mllib.tree.impurity.Impurity impurity = new org.apache.spark.mllib.tree.impurity.Entropy();
Type mismatch: cannot convert from Entropy to Impurity. This seems strange to me, since the Entropy class implements the Impurity interface.

I am looking for an answer to why I cannot make this assignment.

package decisionTree;

import java.util.regex.Pattern;

import org.apache.spark.api.java.JavaRDD;
import org.apache.spark.api.java.JavaSparkContext;
import org.apache.spark.api.java.function.Function;
import org.apache.spark.mllib.linalg.Vectors;
import org.apache.spark.mllib.regression.LabeledPoint;
import org.apache.spark.mllib.tree.DecisionTree;
import org.apache.spark.mllib.tree.configuration.Algo;
import org.apache.spark.mllib.tree.configuration.Strategy;
import org.apache.spark.mllib.tree.impurity.Gini;
import org.apache.spark.mllib.tree.impurity.Impurity;

import scala.Enumeration.Value;

public final class DecisionTreeDemo {

    static class ParsePoint implements Function<String, LabeledPoint> {
        private static final Pattern COMMA = Pattern.compile(",");
        private static final Pattern SPACE = Pattern.compile(" ");

        @Override
        public LabeledPoint call(String line) {
            String[] parts = COMMA.split(line);
            double y = Double.parseDouble(parts[0]);
            String[] tok = SPACE.split(parts[1]);
            double[] x = new double[tok.length];
            for (int i = 0; i < tok.length; ++i) {
                x[i] = Double.parseDouble(tok[i]);
            }
            return new LabeledPoint(y, Vectors.dense(x));
        }
    }

    public static void main(String[] args) throws Exception {

        if (args.length < 1) {
            System.err.println("Usage:DecisionTreeDemo <file>");
            System.exit(1);
        }

        JavaSparkContext ctx = new JavaSparkContext("local[4]", "Log Analizer",
                System.getenv("SPARK_HOME"),
                JavaSparkContext.jarOfClass(DecisionTreeDemo.class));

        JavaRDD<String> lines = ctx.textFile(args[0]);
        JavaRDD<LabeledPoint> points = lines.map(new ParsePoint()).cache();

        int iterations = 100;

        int maxBins = 2;
        int maxMemory = 512;
        int maxDepth = 1;

        org.apache.spark.mllib.tree.impurity.Impurity impurity = new org.apache.spark.mllib.tree.impurity.Entropy();

        Strategy strategy = new Strategy(Algo.Classification(), impurity, maxDepth,
                maxBins, null, null, maxMemory);

        ctx.stop();
    }
}   
The error then changes to: The constructor Entropy() is undefined.

[Edited] I found what I believe is the correct invocation of the method:


Unfortunately, I ran into a bug :(

A Java solution for bug 2197 is now available via:

Added other improvements to DecisionTree to make it easier to use from Java:
* Impurity classes: added an instance() method to help with the Java interface.
* Strategy: added a Java-friendly constructor.
  --> Note: I removed quantileCalculationStrategy from the Java-friendly constructor since (a) it is a special class and (b) there is currently only one option. I suspect we will redo the API before other options are included.
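Concretely, that instance() method should let Java code obtain an Impurity without any constructor call. A minimal sketch (Gini.instance() also appears in the example mentioned below; applying the same pattern to Entropy is my assumption from the "Impurity classes" wording):

// Java-friendly way to get the impurity singletons after the fix
org.apache.spark.mllib.tree.impurity.Impurity gini =
        org.apache.spark.mllib.tree.impurity.Gini.instance();
// presumably analogous for Entropy (not verified here)
org.apache.spark.mllib.tree.impurity.Impurity entropy =
        org.apache.spark.mllib.tree.impurity.Entropy.instance();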

You can see a complete example that solves your issue by using the instance() method of the Gini class.


Strange. Try inlining it instead of assigning it to a variable; you only use the variable once anyway. As a side note, I would also suggest using the Scala API rather than the Java one: you could do the whole thing in a few lines, and it would be much easier to read.
// Inlined variant (this in turn fails with "The constructor Entropy() is undefined"):
Strategy strategy = new Strategy(Algo.Classification(), new org.apache.spark.mllib.tree.impurity.Entropy(), maxDepth, maxBins, null, null, maxMemory);
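One likely reason for that constructor error (my reading of the Scala/Java interop, rather than anything confirmed in this thread): in Spark 1.0.0, Entropy is a Scala object, i.e. a singleton without a public constructor, so from Java its instance would normally be reached through the generated MODULE$ field instead of new. An untested sketch of that assignment:

// Untested sketch: reference the Scala singleton rather than calling a
// constructor that the compiled object does not expose
org.apache.spark.mllib.tree.impurity.Impurity impurity =
        org.apache.spark.mllib.tree.impurity.Entropy$.MODULE$;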
// Alternative: build the Strategy with an anonymous Impurity that delegates
// to Gini's static calculate(...) methods
Strategy strategy = new Strategy(Algo.Classification(), new Impurity() {
    @Override
    public double calculate(double arg0, double arg1, double arg2) {
        return Gini.calculate(arg0, arg1, arg2);
    }

    @Override
    public double calculate(double arg0, double arg1) {
        return Gini.calculate(arg0, arg1);
    }
}, 5, 100, QuantileStrategy.Sort(), null, 256);
// After the fix: the Java-friendly Strategy constructor with Gini.instance(),
// then training via the DecisionTree companion object
Strategy strategy = new Strategy(Algo.Classification(), Gini.instance(), maxDepth, numClasses, maxBins, categoricalFeaturesInfo);
DecisionTreeModel model = DecisionTree$.MODULE$.train(rdd.rdd(), strategy);
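To tie the detached snippets above together, here is a rough, self-contained sketch of how the Gini.instance() variant could replace the Strategy part of the question's main(), assuming a Spark release that already contains the fix discussed above. The class name, the parameter values, and the empty categoricalFeaturesInfo map are illustrative choices, not taken from the original post:

package decisionTree;

import java.util.HashMap;

import org.apache.spark.api.java.JavaRDD;
import org.apache.spark.mllib.regression.LabeledPoint;
import org.apache.spark.mllib.tree.DecisionTree$;
import org.apache.spark.mllib.tree.configuration.Algo;
import org.apache.spark.mllib.tree.configuration.Strategy;
import org.apache.spark.mllib.tree.impurity.Gini;
import org.apache.spark.mllib.tree.model.DecisionTreeModel;

public final class DecisionTreeTrainer {

    // Trains a classification tree on already-parsed points, e.g. the output
    // of the ParsePoint mapper from the question.
    public static DecisionTreeModel train(JavaRDD<LabeledPoint> points) {
        int maxDepth = 5;      // illustrative values, not tuned
        int numClasses = 2;
        int maxBins = 100;

        // Empty map: every feature is treated as continuous
        HashMap<Integer, Integer> categoricalFeaturesInfo =
                new HashMap<Integer, Integer>();

        // Java-friendly constructor plus Gini.instance(), as in the snippet above
        Strategy strategy = new Strategy(Algo.Classification(), Gini.instance(),
                maxDepth, numClasses, maxBins, categoricalFeaturesInfo);

        // train(...) lives on the Scala companion object, hence
        // DecisionTree$.MODULE$ and the JavaRDD-to-RDD conversion
        return DecisionTree$.MODULE$.train(points.rdd(), strategy);
    }
}

With something like this in place, the main() from the question would only need to call DecisionTreeTrainer.train(points) after building the points RDD.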