
How to convert a DataFrame to a Dataset in Apache Spark in Java?

Tags: java, apache-spark, spark-dataframe, apache-spark-dataset

In Scala I can easily convert a DataFrame to a Dataset:

case class Person(name:String, age:Long)
val df = ctx.read.json("/tmp/persons.json")
val ds = df.as[Person]
ds.printSchema
But in Java I can't figure out how to do the same conversion. Any ideas?

My attempt:

DataFrame df = ctx.read().json(logFile);
Encoder<Person> encoder = new Encoder<>();  // does not compile: Encoder is an interface, not a class
Dataset<Person> ds = new Dataset<Person>(ctx, df.logicalPlan(), encoder);
ds.printSchema();
EDIT (solution):
Based on @Leet Falcon's answer:

DataFrame df = ctx.read().json(logFile);
Encoder<Person> encoder = Encoders.bean(Person.class);
Dataset<Person> ds = new Dataset<Person>(ctx, df.logicalPlan(), encoder);
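Note that Encoders.bean(Person.class) requires Person to be a JavaBean: a public class with a no-argument constructor and getters/setters for every field. A minimal sketch of such a bean, mirroring the Scala case class above (the Serializable marker is a common convention, not something the encoder demands):

import java.io.Serializable;

// Minimal JavaBean sketch for Encoders.bean(Person.class);
// fields mirror the Scala case class Person(name: String, age: Long).
public class Person implements Serializable {
    private String name;
    private long age;

    public Person() {}  // no-arg constructor required by the bean encoder

    public String getName() { return name; }
    public void setName(String name) { this.name = name; }

    public long getAge() { return age; }
    public void setAge(long age) { this.age = age; }
}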

The official Spark documentation suggests the following:

Java encoders are specified by calling static methods on Encoders (examples of these factory methods are collected at the end of this page).


If you want to convert a generic DataFrame into a Dataset in Java, you can use the RowEncoder class as shown below:

DataFrame df = sql.read().json(sc.parallelize(ImmutableList.of(
        "{\"id\": 0, \"phoneNumber\": 109, \"zip\": \"94102\"}"
)));

Dataset<Row> dataset = df.as(RowEncoder$.MODULE$.apply(df.schema()));
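For completeness: in Spark 2.x the standalone DataFrame class no longer exists; DataFrame is just a type alias for Dataset<Row>, so the conversion collapses to a single as call. A minimal sketch, assuming a SparkSession named spark and the Person bean sketched earlier:

import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Encoders;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.SparkSession;

SparkSession spark = SparkSession.builder().appName("df-to-ds").getOrCreate();

// In Spark 2.x a DataFrame is simply a Dataset<Row>.
Dataset<Row> df = spark.read().json("/tmp/persons.json");

// .as() is a lazy transformation: the encoder is only exercised
// when a typed operation actually runs.
Dataset<Person> ds = df.as(Encoders.bean(Person.class));
ds.printSchema();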

Comments:

Java with Spark 1.6 seems to be missing this API. How do you write the equivalent in Java of the Scala val encoder = Encoders.product[Foo]; df.as[Foo](encoder)?

The problem is that the DataFrame class no longer exists in Spark 2!

It doesn't work for me: Dataset<Row> dataset = df.as(RowEncoder$.MODULE$.apply(df.schema())); gives an "incompatible types" error.

Is there a performance cost when converting a DataFrame to a Dataset? Specifically, what happens when I attach an encoder to a DataFrame: is the encoder "lazy" (i.e. it does nothing until a typed operation is called), or does the whole DataFrame have to be processed first?
Examples of the static Encoders factory methods mentioned in the documentation quote above:

List<String> data = Arrays.asList("abc", "abc", "xyz");
Dataset<String> ds = context.createDataset(data, Encoders.STRING());

Encoder<Tuple2<Integer, String>> encoder2 = Encoders.tuple(Encoders.INT(), Encoders.STRING());
List<Tuple2<Integer, String>> data2 = Arrays.asList(new scala.Tuple2<>(1, "a"));
Dataset<Tuple2<Integer, String>> ds2 = context.createDataset(data2, encoder2);

Encoder<MyClass> beanEncoder = Encoders.bean(MyClass.class);  // encoder for a custom JavaBean class