
Spark - finding overlapping values, or a variation of finding mutual friends

Tags: hadoop, apache-spark, mapreduce, apache-spark-sql

I have a problem that I am trying to solve with Spark. I am new to Spark, so I am not sure what the best way to design it would be.

Input: each line lists a group and its member users, in the form groupN=userA,userB,... (the full sample data set appears in the session output at the end of this post). I want to find the number of mutual groups between each pair of users, so for that input my expected output is a table of user pairs with their mutual (intersection) count and union count, like the one shown further below.

I think there are several ways to solve this problem, and one of them could be:

  • Create key/value pairs where the key is the user and the value is the group
  • Group by key, so that we have the list of groups each user belongs to
  • Then find the intersection/union of the group lists of every pair of users
For example (a rough Spark sketch of these two stages follows this listing):

(1st stage): Map
group1=user1,user2 ==>
          user1, group1
          user2, group1
group2=user1,user2,user3 ==>
          user1, group2
          user2, group2
          user3, group2
....
....
....


(2nd stage): Reduce by key
user1 -> group1, group2, group4, group8
user2 -> group1, group2, group3, group7, group9
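A rough Spark sketch of the two stages above (my own illustration; it assumes the groupN=userA,userB,... line format used in this post, and uses groupByKey purely for clarity):

// Stage 1: map each "groupN=userA,userB,..." line to (user, group) pairs
val userToGroup = sc.textFile("input/test.log").flatMap { line =>
  val Array(group, members) = line.split("=")
  members.split(",").map(user => (user, group))
}

// Stage 2: group by key to get, per user, the set of groups the user belongs to
val userGroups = userToGroup.groupByKey().mapValues(_.toSet)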
But my problem is: after reducing by key, what is the best way to represent the counts in the form I want?

Is there a better way to handle this problem? The maximum number of users is constant and will not exceed 5000, so that is the maximum number of keys it will create. But the input could contain close to 1B lines. I don't think that will be a problem, but correct me if I'm wrong.
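(For scale: 5000 users means at most 5000 × 4999 / 2 ≈ 12.5 million distinct unordered user pairs, or 25 million if ordered pairs are counted, so even a pair-keyed representation stays within a modest number of keys.)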

Update: here is a piece of code I came up with to solve this, using the little Spark I have learned (I only started learning Spark last month). It is listed further down, after the answer.

I would love to get some feedback on my code and on what I am missing. Please feel free to criticize my code, since I have only just started learning Spark. Thanks again to @axiom for the answer, which is a smaller and neater solution than I had expected.

The answer (from @axiom):

Summary:

Get the pair counts, and then use the fact that

union(a, b) = count(a) + count(b) - intersection(a, b)
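As a quick check against the sample data further down: user1 belongs to 4 groups, user2 to 5, and they have 2 groups in common, so union(user1, user2) = 4 + 5 - 2 = 7, which matches the expected output.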


Details:

  • With 5000 users in total, 25 million keys (one for each pair) should not be too much. We can use reduceByKey to compute the intersection counts.

  • The individual per-user counts can easily be broadcast in a map.

  • Now, as noted above:

    union(user1, user2) = count(user1) + count(user2) - intersection(user1, user2)

  • The first two counts are read from the broadcast map while we map over the RDD of pair counts.

    Code: the full listing and a demo session appear at the end of this post (under "@axiom's code"), after the expected output and the code from the question's update.
    
    Expected output (from the question):

    1st user || 2nd user || mutual/intersection count || union count
    ------------------------------------------------------------
    user1        user2           2                       7
    user1        user3           1                       6
    user1        user4           1                       9
    user2        user4           3                       8

    The code from the question's update:

    // Map each "groupN=userA,userB,..." line to (user, group) pairs
    def createPair(line: String): Array[(String, String)] = {
        val splits = line.split("=")
        val kuid = splits(0)
        splits(1).split(",").map { segment => (segment, kuid) }
    }
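    // For example (illustrative only, mirroring stage 1 above):
    // createPair("group1=user1,user2") == Array(("user1", "group1"), ("user2", "group1"))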
    
    
    import scala.collection.mutable.WrappedArray
    import sqlContext.implicits._   // needed for toDF() and the $"col" syntax

    val input = sc.textFile("input/test.log")
    val pair = input.flatMap { line => createPair(line) }

    // Collect, per user, the list of groups that user belongs to
    val pairListDF = pair
      .aggregateByKey(scala.collection.mutable.ListBuffer.empty[String])(
        (kuidList, kuid) => { kuidList += kuid; kuidList },
        (kuidList1, kuidList2) => { kuidList1.appendAll(kuidList2); kuidList1 })
      .mapValues(_.toList).toDF().select($"_1".alias("user"), $"_2".alias("groups"))

    // Spark 1.x API; in Spark 2.x this would be createOrReplaceTempView
    pairListDF.registerTempTable("table")

    // UDFs for the intersection and union sizes of two users' group lists
    sqlContext.udf.register("intersectCount", (list1: WrappedArray[String], list2: WrappedArray[String]) => list1.intersect(list2).size)
    sqlContext.udf.register("unionCount", (list1: WrappedArray[String], list2: WrappedArray[String]) => list1.union(list2).distinct.size)

    // Self-join on user, keeping each unordered pair once via t1.user < t2.user
    val populationDF = sqlContext.sql("SELECT t1.user AS user_first,"
      + "t2.user AS user_second,"
      + "intersectCount(t1.groups, t2.groups) AS intersect_count,"
      + "unionCount(t1.groups, t2.groups) AS union_count"
      + " FROM table t1 INNER JOIN table t2"
      + " ON t1.user < t2.user"
      + " ORDER BY user_first,user_second")

    // populationDF.show() would then print something like:
    
    +----------+-----------+---------------+-----------+
    |user_first|user_second|intersect_count|union_count|
    +----------+-----------+---------------+-----------+
    |     user1|      user2|              2|          7|
    |     user1|      user3|              1|          6|
    |     user1|      user4|              1|          9|
    |     user1|      user5|              1|          8|
    |     user2|      user3|              1|          7|
    |     user2|      user4|              3|          8|
    |     user2|      user5|              1|          9|
    |     user3|      user4|              1|          8|
    |     user3|      user5|              2|          6|
    |     user4|      user5|              3|          8|
    +----------+-----------+---------------+-----------+
    
    @axiom's code (the helpers pairs and singles are defined just below):

    val data = sc.textFile("test")
    //optionally data.cache(), depending on size of data.
    val pairCounts  = data.flatMap(pairs).reduceByKey(_ + _)
    val singleCounts = data.flatMap(singles).reduceByKey(_ + _)
    //per-user counts are small, so broadcast them as a map
    val singleCountMap = sc.broadcast(singleCounts.collectAsMap())
    //union(u1, u2) = count(u1) + count(u2) - intersection(u1, u2)
    val result = pairCounts.map { case ((user1, user2), intersectionCount) =>
      (user1, user2, intersectionCount, singleCountMap.value(user1) + singleCountMap.value(user2) - intersectionCount) }
    
    //generate ((user1, user2), 1) for pair counts
    def pairs(str: String) = {
      val users = str.split("=")(1).split(",")
      val n = users.length
      for (i <- 0 until n; j <- i + 1 until n) yield {
        //order of the users in a pair shouldn't matter
        val pair = if (users(i) < users(j)) (users(i), users(j)) else (users(j), users(i))
        (pair, 1)
      }
    }

    //generate (user, 1), to obtain single counts
    def singles(str: String) = {
      for (user <- str.split("=")(1).split(",")) yield (user, 1)
    }
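    // For example (illustrative only): pairs("group2=user1,user2,user3") yields
    // ((user1,user2),1), ((user1,user3),1), ((user2,user3),1), and
    // singles("group2=user1,user2,user3") yields (user1,1), (user2,1), (user3,1).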
    
    
    //read the rdd
    scala> val data = sc.textFile("test")
    scala> data.collect.map(println)
    group1=user1,user2
    group2=user1,user2,user3
    group3=user2,user4
    group4=user1,user4
    group5=user3,user5
    group6=user3,user4,user5
    group7=user2,user4
    group8=user1,user5
    group9=user2,user4,user5
    group10=user4,user5
    
    //get the pair counts
    scala> val pairCounts  = data.flatMap(pairs).reduceByKey(_ + _)
    pairCounts: org.apache.spark.rdd.RDD[((String, String), Int)] = ShuffledRDD[16] at reduceByKey at <console>:25
    
    
    
    //just checking
    scala> pairCounts.collect.map(println)
    ((user2,user3),1)
    ((user1,user3),1)
    ((user3,user4),1)
    ((user2,user5),1)
    ((user1,user5),1)
    ((user2,user4),3)
    ((user4,user5),3)
    ((user1,user4),1)
    ((user3,user5),2)
    ((user1,user2),2)
    
    //single counts
    scala> val singleCounts = data.flatMap(singles).reduceByKey(_ + _)
    singleCounts: org.apache.spark.rdd.RDD[(String, Int)] = ShuffledRDD[20] at reduceByKey at <console>:25
    
    scala> singleCounts.collect.map(println)
    
    (user5,5)
    (user3,3)
    (user1,4)
    (user2,5)
    (user4,6)
    
    
    //broadcast single counts
    scala> val singleCountMap = sc.broadcast(singleCounts.collectAsMap())
    
    //calculate the results:
    
    scala> val res = pairCounts.map{case ((user1, user2), intersectionCount) => (user1, user2, intersectionCount, singleCountMap.value(user1) + singleCountMap.value(user2) - intersectionCount)}
    res: org.apache.spark.rdd.RDD[(String, String, Int, Int)] = MapPartitionsRDD[23] at map at <console>:33
    
    scala> res.collect.map(println)
    (user2,user3,1,7)
    (user1,user3,1,6)
    (user3,user4,1,8)
    (user2,user5,1,9)
    (user1,user5,1,8)
    (user2,user4,3,8)
    (user4,user5,3,8)
    (user1,user4,1,9)
    (user3,user5,2,6)
    (user1,user2,2,7)