How to extract complex JSON structures using Apache Spark 1.4.0 DataFrames

I am using the new Apache Spark 1.4.0 DataFrames API to extract information from Twitter's status JSON, focusing mainly on the entities object. The part relevant to this question looks like this:

{
  ...
  ...
  "entities": {
    "hashtags": [],
    "trends": [],
    "urls": [],
    "user_mentions": [
      {
        "screen_name": "linobocchini",
        "name": "Lino Bocchini",
        "id": 187356243,
        "id_str": "187356243",
        "indices": [ 3, 16 ]
      },
      {
        "screen_name": "jeanwyllys_real",
        "name": "Jean Wyllys",
        "id": 111123176,
        "id_str": "111123176",
        "indices": [ 79, 95 ]
      }
    ],
    "symbols": []
  },
  ...
  ...
}
There are several examples of how to extract fields of primitive types such as string and integer, but I could not find anything about how to handle complex structures like this one.

I tried the code below, but it does not work; it throws an exception:

val sqlContext = new org.apache.spark.sql.hive.HiveContext(sc)

val tweets = sqlContext.read.json("tweets.json")

// this function is just to filter out empty entities.user_mentions[] nodes
// some tweets don't contain any mentions
import org.apache.spark.sql.functions.udf
val isEmpty = udf((value: List[Any]) => value.isEmpty)

import org.apache.spark.sql._
import sqlContext.implicits._
case class UserMention(id: Long, idStr: String, indices: Array[Long], name: String, screenName: String)

val mentions = tweets.select("entities.user_mentions").
  filter(!isEmpty($"user_mentions")).
  explode($"user_mentions") {
  case Row(arr: Array[Row]) => arr.map { elem =>
    UserMention(
      elem.getAs[Long]("id"),
      elem.getAs[String]("is_str"),
      elem.getAs[Array[Long]]("indices"),
      elem.getAs[String]("name"),
      elem.getAs[String]("screen_name"))
  }
}

mentions.first
The exception is thrown when I try to call .first:

scala>     mentions.first
15/06/23 22:15:06 ERROR Executor: Exception in task 0.0 in stage 5.0 (TID 8)
scala.MatchError: [List([187356243,187356243,List(3, 16),Lino Bocchini,linobocchini], [111123176,111123176,List(79, 95),Jean Wyllys,jeanwyllys_real])] (of class org.apache.spark.sql.catalyst.expressions.GenericRowWithSchema)
    at $line37.$read$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$anonfun$1.apply(<console>:34)
    at $line37.$read$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$anonfun$1.apply(<console>:34)
    at scala.Function1$$anonfun$andThen$1.apply(Function1.scala:55)
    at org.apache.spark.sql.catalyst.expressions.UserDefinedGenerator.eval(generators.scala:81)
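The MatchError above hints at the root cause: Spark hands array-typed column values back as a scala.collection.Seq (a List in the error message), never as a JVM Array, so a pattern demanding Array[Row] cannot match. A minimal plain-Scala sketch of the mismatch, using an ordinary Seq as a stand-in for what Spark returns:

```scala
// Stand-in for an ArrayType column value coming out of Spark:
// concretely a Seq (List / WrappedArray), not a JVM Array.
val columnValue: Any = Seq(1, 2, 3)

// A pattern that demands Array[_] never fires on a Seq value.
val matched = columnValue match {
  case _: Array[_] => "Array"
  case _: Seq[_]   => "Seq"
}
// matched == "Seq"
```

This is why the suggestion further down, to write case Row(arr: Seq[Row]) instead of case Row(arr: Array[Row]), addresses the MatchError.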
Note 1: I know it is possible to solve this with HiveQL, but I would like to use DataFrames now that there is so much momentum behind them:

SELECT explode(entities.user_mentions) as mentions
FROM tweets

Note 2: the UDF val isEmpty = udf((value: List[Any]) => value.isEmpty) is an ugly hack and I am probably missing something here, but it was the only way I found to avoid an NPE.
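The NPE most likely comes from tweets where the user_mentions column surfaces as null rather than an empty array, so the UDF body dereferences null. A null-safe variant of the predicate is one way to make the hack explicit; the Seq[Any] element type is an assumption based on how Spark passes array columns to UDFs, and the predicate itself can be checked outside Spark:

```scala
// Null-safe emptiness check: guards against rows where the
// user_mentions column is null instead of an empty array.
val isEmptySafe: Seq[Any] => Boolean =
  value => value == null || value.isEmpty

// The bare predicate behaves as expected:
// isEmptySafe(null)      -> true
// isEmptySafe(Seq.empty) -> true
// isEmptySafe(Seq(1))    -> false
```

Wrapped with udf(isEmptySafe) it slots into the same filter as the original.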

Here is a working solution, with just one small hack.

The main idea is to work around the type problem by declaring a List[String] instead of a List[Row]:

val mentions = tweets.explode("entities.user_mentions", "mention"){m: List[String] => m}
This creates a second column named "mention" of type Struct.

Now do a map() to extract the fields inside mention. The getStruct(1) call gets the value in column 1 of each row:

case class Mention(id: Long, id_str: String, indices: Seq[Int], name: String, screen_name: String)
val mentionsRdd = mentions.map(
  row => 
    {  
      val mention = row.getStruct(1)
      Mention(mention.getLong(0), mention.getString(1), mention.getSeq[Int](2), mention.getString(3), mention.getString(4))
    }
)
Then convert the RDD back to a DataFrame:

val mentionsDf = mentionsRdd.toDF()
And there you have it:

+---------+---------+------------+-------------+---------------+
|       id|   id_str|     indices|         name|    screen_name|
+---------+---------+------------+-------------+---------------+
|187356243|187356243| List(3, 16)|Lino Bocchini|   linobocchini|
|111123176|111123176|List(79, 95)|  Jean Wyllys|jeanwyllys_real|
+---------+---------+------------+-------------+---------------+
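One detail worth noting about the result: the columns come out as id, id_str, indices, name, screen_name even though the raw JSON lists screen_name first. Spark's JSON schema inference sorts struct fields alphabetically, which is what makes the positional getters getLong(0) through getString(4) in the map() line up with the Mention case class. A quick plain-Scala check of that ordering, using the field names from the JSON above:

```scala
// Fields as they appear in the raw user_mentions JSON:
val jsonOrder = Seq("screen_name", "name", "id", "id_str", "indices")

// Spark's JSON schema inference canonicalizes struct fields by
// sorting them alphabetically, matching the table above:
val inferredOrder = jsonOrder.sorted
// inferredOrder == Seq("id", "id_str", "indices", "name", "screen_name")
```

If a field were added to the JSON, the inferred positions would shift, so getAs[T]("name")-style access by field name is the safer long-term choice.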
Try this instead:

case Row(arr: Seq[Row]) => arr.map { elem =>

I think your case Row(arr: Array[Row]) does not match your input.

Hi @elmalto, I tried both List and Array, but I get the same error either way.

Thanks Xinh Huynh. My concern with this hack is that it runs Row.toString() over the whole dataset before the elements are extracted; I have no concrete benchmarks, but it seems we would waste a lot of machine time on that step. That is the only reason I have not accepted your answer as correct!

Please add some comments to your solution explaining why and how it solves the problem.