Combining data from multiple RDDs in Java


I have 3 CSV files, shown below. I'm trying to create RDDs and combine them into a final output that I can then apply filters to. I'm not sure where to start with this. Any suggestions?

JavaRDD<String> file1 = sc.textFile("D:\\tmp\\file1.csv");
JavaRDD<String> file2 = sc.textFile("D:\\tmp\\file2.csv");
JavaRDD<String> file3 = sc.textFile("D:\\tmp\\file3.csv");

JavaRDD<String> combRDD = file1.union(file2).union(file3); //doesn't give expected output
CSV file 2

"user","url","type"
"abc","/test","TWO"
"xyz","/wonder","TWO"
CSV file 3

"user","total_time","type","status"
"abc","5min","THREE","true"
"xyz","2min","THREE","fail"
Final expected output

"user","source_ip","action","type","url","total_time","status"
"abc","10.0.0.1","login","ONE","","",""
"xyz","10.0.1.1","login","ONE","","",""
"abc","10.0.0.1","playing","ONE","","",""
"def","10.1.0.1","login","ONE","","",""
"abc","","","TWO","/test","",""
"xyz","","","TWO","/wonder","",""
"abc","","","THREE","","5min","true"
"xyz","","","THREE","","2min","fail"

Each CSV file is generated daily in the same format. So, if you have a SparkSession object named spark, you can read the files as DataFrames and combine them:

spark.read.option("header", "true").csv("file1.csv").join(
  spark.read.option("header", "true").csv("file2.csv"), "user"
).join(
  spark.read.option("header", "true").csv("file3.csv"), "user"
).write.csv("some_output");
spark.read.option("header", "true").csv("file1.csv").join(
  spark.read.option("header", "true").csv("file2.csv"), "user"
).join(
  spark.read.option("header", "true").csv("file3.csv"), "user"
).write.csv("some_output");