How can I aggregate the elements of a list in Scala?

Tags: scala, apache-spark

I have some lists of this kind (List[Array[String]]):

1) List(Array("Mark", "2000", "2002"), Array("John", "2001", "2003"), Array("Andrew", "1999", "2001"), Array("Erik", "1996", "1998"))

2) List(Array("Steve", "2000", "2005"))

Based on this condition:

if the years overlap, it means that the two guys know each other, otherwise they don't.

What I expect is to group the data in this way:

Array(name, startYear, endYear, knownPeople, unknownPeople)

So, for the concrete example 1), the final result would be:

List(
  Array("Mark",   "2000", "2002", "John#Andrew", "Erik"), 
  Array("John",   "2001", "2003", "Mark#Andrew", "Erik"), 
  Array("Andrew", "1999", "2001", "Mark#John",   "Erik"), 
  Array("Erik",   "1996", "1998", "",            "Mark#John#Andrew")
)
while for the second case it would be:

List(Array("Steve", "2000", "2005", "", ""))

I don't know how to do this, because I got stuck doing a cartesian product and filtering out the identical names, like:

my_list.cartesian(my_list).filter { case (a, b) => a(0) != b(0) }

But at this point I can't make an aggregateByKey work.

Any ideas?
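
For reference, a minimal sketch of how the cartesian-product idea could be completed with aggregateByKey (this assumes an RDD[Array[String]] named people and is only a sketch; the single-person case from example 2 is not handled):

// Sketch only: pair every person with every other one, classify each pair
// by interval overlap, then aggregate the two name lists per person.
val pairs = people.cartesian(people)
  .filter { case (a, b) => a(0) != b(0) }            // drop self-pairs
  .map { case (a, b) =>
    // intervals overlap iff a.to >= b.from && a.from <= b.to
    val overlaps = a(2).toInt >= b(1).toInt && a(1).toInt <= b(2).toInt
    ((a(0), a(1), a(2)),
     if (overlaps) (List(b(0)), List.empty[String]) else (List.empty[String], List(b(0))))
  }

val grouped = pairs.aggregateByKey((List.empty[String], List.empty[String]))(
  { case ((k1, u1), (k2, u2)) => (k1 ++ k2, u1 ++ u2) },   // seqOp: fold one pair in
  { case ((k1, u1), (k2, u2)) => (k1 ++ k2, u1 ++ u2) }    // combOp: merge partitions
)

val result = grouped.map { case ((name, from, to), (knows, doesNotKnow)) =>
  Array(name, from, to, knows.mkString("#"), doesNotKnow.mkString("#"))
}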

Answer

Code

case class Person(name: String, fromYear: Int, toYear: Int)

class UnsortedTestSuite3 extends SparkFunSuite {
  configuredUnitTest("SO - aggregateByKey") { sc =>
    val sqlContext = new SQLContext(sc)
    import sqlContext.implicits._
    import org.apache.spark.rdd.RDD
    import org.apache.spark.sql.Row
    import org.apache.spark.sql.functions._
    import org.apache.spark.sql.types._
    import org.apache.spark.sql.{UserDefinedFunction, Column, SQLContext, DataFrame}

    val persons = Seq(
      Person("Mark",   2000, 2002),
      Person("John",   2001, 2003),
      Person("Andrew", 1999, 2001),
      Person("Erik",   1996, 1998)
    )

    // input
    val personDF = sc.parallelize( persons ).toDF
    val personRenamedDF = personDF.select(
      col("name").as("right_name"),
      col("fromYear").as("right_fromYear"),
      col("toYear").as("right_toYear")
    )

    /**
      * Group the entries of a DataFrame by the entries in its second column.
      * @param df a dataframe with two string columns
      * @return dataframe where the second column contains the list of values for each identical entry in the first column
      */
    def groupBySecond( df: DataFrame ) : DataFrame = {
      val st: StructType = df.schema
      // validate: exactly two columns, both of type String
      if ( (st.size != 2) ||
           (! st(0).dataType.equals(StringType) ) ||
           (! st(1).dataType.equals(StringType) ) ) throw new RuntimeException("Wrong schema for groupBySecond.")

      df.rdd
        .map( row => (row.getString(0), row.getString(1)) )
        .groupByKey().map( x => ( x._1, x._2.toList))
        .toDF( st(0).name, st(1).name )
    }

    val joined = personDF.join(personRenamedDF, col("name") !== col("right_name"), "inner")
    val intervalOverlaps = (col("toYear") >= col("right_fromYear")) && (col("fromYear") <= col("right_toYear"))
    val known = groupBySecond( joined.filter( intervalOverlaps ).select(col("name"), col("right_name").as("knows")) )
    val unknown = groupBySecond( joined.filter( !intervalOverlaps ).select(col("name"), col("right_name").as("does_not_know")) )

    personDF.join( known, "name").join(unknown, "name").show()
  }
}
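
The listing above runs inside the author's own test harness (SparkFunSuite / configuredUnitTest). A minimal standalone driver for Spark 1.x that provides the same sc and could host the body of the test might look like this sketch (the object name and master setting are assumptions):

// Sketch of a standalone Spark 1.x driver in place of the test harness.
import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.sql.SQLContext

object KnowsEachOther {
  def main(args: Array[String]): Unit = {
    val conf = new SparkConf().setAppName("knows-each-other").setMaster("local[*]")
    val sc   = new SparkContext(conf)
    val sqlContext = new SQLContext(sc)
    import sqlContext.implicits._

    // ... body of the configuredUnitTest block from above goes here ...

    sc.stop()
  }
}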
Explanation
  • Model your persons with a case class, so that you don't have to fiddle with Array
  • Use Spark SQL, because it is the most concise
  • Technically:
    • Create all pairs of persons with an inner join; pairs with identical names are discarded by the join condition
    • Use filters to find the overlapping and the non-overlapping intervals
    • Then use the helper method groupBySecond to do a groupBy on the DataFrame. This is currently not possible in Spark SQL, since UDAFs (user defined aggregation functions) do not exist yet; a follow-up SO ticket will be raised to get the experts' opinion on this (see the collect_list sketch after this list)
    • Join the original DataFrame personDF with the known and unknown DataFrames to produce the final result
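
As an aside, Spark 1.6 and later ship collect_list as a built-in aggregate, which could stand in for the groupBySecond helper. A sketch, reusing joined and intervalOverlaps from the code above (not part of the original answer):

// Sketch, assuming Spark 1.6+: collect_list gathers the partner names per person.
import org.apache.spark.sql.functions.collect_list

val knownAlt = joined
  .filter(intervalOverlaps)
  .groupBy(col("name"))
  .agg(collect_list(col("right_name")).as("knows"))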
Edit 2015-11-13, 2 pm: I just found out that the current code does not deliver the correct result. (Erik is missing!)

(From the comments: "Do you want the answer to respect your List structure, or an answer on the plain RDD[Array[String]] type? Or do you actually have an RDD[List[Array[String]]]?" Reply: "Originally it was the result returned by an aggregateByKey: RDD[((String, String, String), List[Array[String]])]. From there I did a map(_._2), which returned RDD[List[Array[String]]].")

The code above produces:
+------+--------+------+--------------+-------------+
|  name|fromYear|toYear|         knows|does_not_know|
+------+--------+------+--------------+-------------+
|  John|    2001|  2003|[Mark, Andrew]|       [Erik]|
|  Mark|    2000|  2002|[John, Andrew]|       [Erik]|
|Andrew|    1999|  2001|  [Mark, John]|       [Erik]|
+------+--------+------+--------------+-------------+
Therefore, the corrected code:

case class Person(name: String, fromYear: Int, toYear: Int)

class UnsortedTestSuite3 extends SparkFunSuite {
  configuredUnitTest("SO - aggregateByKey") { sc =>
    val sqlContext = new SQLContext(sc)
    import sqlContext.implicits._
    import org.apache.spark.rdd.RDD
    import org.apache.spark.sql.Row
    import org.apache.spark.sql.functions._
    import org.apache.spark.sql.types._
    import org.apache.spark.sql.{UserDefinedFunction, Column, SQLContext, DataFrame}

    val persons = Seq(
      Person("Mark",   2000, 2002),
      Person("John",   2001, 2003),
      Person("Andrew", 1999, 2001),
      Person("Erik",   1996, 1998)
    )

    // input
    val personDF = sc.parallelize( persons ).toDF
    val personRenamedDF = personDF.select(
      col("name").as("right_name"),
      col("fromYear").as("right_fromYear"),
      col("toYear").as("right_toYear")
    )

    /**
      * Group the entries of a DataFrame by the entries in its second column.
      * @param df a dataframe with two string columns
      * @return dataframe where the second column contains the list of values for each identical entry in the first column
      */
    def groupBySecond( df: DataFrame ) : DataFrame = {
      val st: StructType = df.schema
      // validate: exactly two columns, both of type String
      if ( (st.size != 2) ||
           (! st(0).dataType.equals(StringType) ) ||
           (! st(1).dataType.equals(StringType) ) ) throw new RuntimeException("Wrong schema for groupBySecond.")

      df.rdd
        .map( row => (row.getString(0), row.getString(1)) )
        // a single null value means the left outer join found no partner
        .groupByKey().map( x => ( x._1, if (x._2 == List(null)) List() else x._2.toList ))
        .toDF( st(0).name, st(1).name )
    }

    val distinctName = col("name") !== col("right_name")
    val intervalOverlaps = (col("toYear") >= col("right_fromYear")) && (col("fromYear") <= col("right_toYear"))

    val knownDF_t = personDF.join(personRenamedDF, distinctName && intervalOverlaps, "leftouter")
    val knownDF = groupBySecond( knownDF_t.select(col("name").as("kname"), col("right_name").as("knows")) )

    val unknownDF_t = personDF.join(personRenamedDF, distinctName && !intervalOverlaps, "leftouter")
    val unknownDF = groupBySecond( unknownDF_t.select(col("name").as("uname"), col("right_name").as("does_not_know")) )

    personDF
      .join( knownDF, personDF("name") === knownDF("kname"), "leftouter")
      .join( unknownDF, personDF("name") === unknownDF("uname"), "leftouter")
      .select( col("name"), col("fromYear"), col("toYear"), col("knows"), col("does_not_know"))
      .show()

  }
}
This now yields:

+------+--------+------+--------------+--------------------+
|  name|fromYear|toYear|         knows|       does_not_know|
+------+--------+------+--------------+--------------------+
|  John|    2001|  2003|[Mark, Andrew]|              [Erik]|
|  Mark|    2000|  2002|[John, Andrew]|              [Erik]|
|Andrew|    1999|  2001|  [Mark, John]|              [Erik]|
|  Erik|    1996|  1998|            []|[Mark, John, Andrew]|
+------+--------+------+--------------+--------------------+
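
If the List[Array[String]] shape from the question is needed at the end, the final DataFrame can be mapped back to it. A sketch (an assumption, not part of the original answer): suppose the last select(...) above is kept in a val finalDF instead of calling show() directly:

// Sketch: turn each row of finalDF back into Array(name, fromYear, toYear, "A#B", "C"),
// joining the name lists with "#" as in the question.
val asArrays: List[Array[String]] = finalDF.rdd.map { row =>
  Array(
    row.getString(0),                                                   // name
    row.getInt(1).toString,                                             // fromYear
    row.getInt(2).toString,                                             // toYear
    Option(row.getSeq[String](3)).getOrElse(Seq.empty).mkString("#"),   // knows
    Option(row.getSeq[String](4)).getOrElse(Seq.empty).mkString("#")    // does_not_know
  )
}.collect().toList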