
Converting CSV values to a vector in a Spark DataFrame in Java


I have a CSV file with two columns:

id, features
The id column is a string, and the features column is a comma-separated list of feature values for a machine learning algorithm, e.g. "[1,4,5]". I basically just need to call Vectors.parse() on that value to get a vector, but I don't want to convert to an RDD first.
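For reference, a minimal sketch of what that parse step does on its own (assuming Spark's MLlib is on the classpath; the literal "[1,4,5]" is just the example value from above):

  import org.apache.spark.mllib.linalg.Vector;
  import org.apache.spark.mllib.linalg.Vectors;

  // Parses the bracketed string into a dense vector: [1.0,4.0,5.0]
  Vector v = Vectors.parse("[1,4,5]");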

I want to load it into a Spark DataFrame where the features column is an
org.apache.spark.mllib.linalg.Vector.

I'm reading the file into a DataFrame with the Databricks spark-csv API and trying to convert the features column to a vector.


Does anyone know how to do this in Java?

I found a way to do it with a UDF. Is there another way?

  import java.util.HashMap;
  import org.apache.spark.mllib.linalg.Vector;
  import org.apache.spark.mllib.linalg.VectorUDT;
  import org.apache.spark.mllib.linalg.Vectors;
  import org.apache.spark.sql.DataFrame;
  import org.apache.spark.sql.api.java.UDF1;
  import org.apache.spark.sql.functions;
  import org.apache.spark.sql.types.DataTypes;
  import org.apache.spark.sql.types.Metadata;
  import org.apache.spark.sql.types.StructField;
  import org.apache.spark.sql.types.StructType;

  // sqlc is the existing SQLContext; input is the CSV path from the command line
  HashMap<String, String> options = new HashMap<String, String>();
  options.put("header", "true");
  String input = args[0];

  // Register a UDF that parses the bracketed string into an MLlib Vector
  sqlc.udf().register("toVector", new UDF1<String, Vector>() {
     @Override
     public Vector call(String t1) throws Exception {
        return Vectors.parse(t1);
     }
  }, new VectorUDT());

  // Read both columns as strings first, then replace "features" via the UDF
  StructField[] fields = {new StructField("id", DataTypes.StringType, false, Metadata.empty()),
     new StructField("features", DataTypes.StringType, false, Metadata.empty())};
  StructType schema = new StructType(fields);

  DataFrame df = sqlc.read().format("com.databricks.spark.csv").schema(schema).options(options).load(input);

  df = df.withColumn("features", functions.callUDF("toVector", df.col("features")));
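
As a quick sanity check (a hedged sketch, reusing df from the snippet above), the schema should now report features as a vector type rather than a string:

  df.printSchema();  // "features" should show up with the vector user-defined type
  df.show(5);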