
How to extract RDD content and put it into a DataFrame using Spark (Scala)


What I am trying to do is simply to extract some information from an RDD using Spark (Scala) and put it into a DataFrame.

So far, I have created a streaming pipeline that connects to a Kafka topic and puts the topic content into an RDD:

import org.apache.kafka.common.serialization.StringDeserializer
import org.apache.spark.SparkConf
import org.apache.spark.rdd.RDD
import org.apache.spark.sql.SparkSession
import org.apache.spark.streaming.Time
import org.apache.spark.streaming.kafka010.KafkaUtils
import org.apache.spark.streaming.kafka010.LocationStrategies.PreferConsistent
import org.apache.spark.streaming.kafka010.ConsumerStrategies.Subscribe

val kafkaParams = Map[String, Object](
      "bootstrap.servers" -> "localhost:9092",
      "key.deserializer" -> classOf[StringDeserializer],
      "value.deserializer" -> classOf[StringDeserializer],
      "group.id" -> "test",
      "auto.offset.reset" -> "latest",
      "enable.auto.commit" -> (false: java.lang.Boolean)
    )



   .outputMode("complete")


    val topics = Array("vittorio")
    val stream = KafkaUtils.createDirectStream[String, String](
      ssc,
      PreferConsistent,
      Subscribe[String, String](topics, kafkaParams)
    )

    val row = stream.map(record => record.value)
    row.foreachRDD { (rdd: RDD[String], time: Time) =>


      rdd.collect.foreach(println)

      val spark = SparkSessionSingleton.getInstance(rdd.sparkContext.getConf)
      import spark.implicits._
      val DF = rdd.toDF()

      DF.show()
    }

    ssc.start()             // Start the computation
    ssc.awaitTermination()

  }

  /** Lazily instantiated singleton instance of SparkSession. */
  object SparkSessionSingleton {

    @transient  private var instance: SparkSession = _

    def getInstance(sparkConf: SparkConf): SparkSession = {
      if (instance == null) {
        instance = SparkSession
          .builder
          .config(sparkConf)
          .getOrCreate()
      }
      instance
    }
  }
Now, the content of my RDD is:

{"event":"bank.legal.patch","ts":"2017-04-15T15:18:32.469+02:00","svc":"dpbank.stage.tlc-1","request":{"ts":"2017-04-15T15:18:32.993+02:00","aw":"876e6d71-47c4-40f6-8c49-5dbd7b8e246b","end_point":"/bank/v1/legal/mxHr+bhbNqEwFvXGn4l6jQ==","method":"PATCH","app_instance":"e73e93d9-e70d-4873-8f98-b00c6fe4d036-1491406011","user_agent":"Dry/1.0.st/Android/5.0.1/Sam-SM-N910C","user_id":53,"user_ip":"151.14.81.82","username":"7cV0Y62Rud3MQ==","app_id":"db2ffeac6c087712530981e9871","app_name":"DrApp"},"operation":{"scope":"mdpapp","result":{"http_status":200}},"resource":{"object_id":"mxHr+bhbNqEwFvXGn4l6jQ==","request_attributes":{"legal_user":{"sharing_id":"mxHr+bhbNqEwFvXGn4l6jQ==","ndg":"","taxcode":"IQ7hUUphxFBXnI0u2fxuCg==","status":"INCOMPLETE","residence":{"city":"CAA","address":"Via Batto 44","zipcode":"926","country_id":18,"city_id":122},"business_categories":[5],"company_name":"4Gzb+KJk1XAQ==","vat_number":"162340159"}},"response_attributes":{"legal_user":{"sharing_id":"mGn4l6jQ==","taxcode":"IQ7hFBXnI0u2fxuCg==","status":"INCOMPLETE","residence":{"city":"CATA","address":"Via Bllo 44","zipcode":"95126","country_id":128,"city_id":12203},"business_categories":[5],"company_name":"4GnU/Nczb+KJk1XAQ==","vat_number":"12960159"}}},"class":"DPAPI"}
Executing
val DF = rdd.toDF()
shows:

+--------------------+
|               value|
+--------------------+
|{"event":"bank.le...|
+--------------------+
What I want to achieve is a DataFrame that keeps being populated as new RDDs arrive from the stream; something like a union, but I'm not sure that is the right approach, because I'm not sure all the RDDs will have the same schema.

For example, this is what I would like to achieve:

+--------------------+------------+----------+-----+
|                 _id|     user_ip|    status|_type|
+--------------------+------------+----------+-----+
|AVtJFVOUVxUyIIcAklfZ|151.14.81.82|INCOMPLETE|DPAPI|
|AVtJFVOUVxUyIIcAklfZ|151.14.81.82|INCOMPLETE|DPAPI|
+--------------------+------------+----------+-----+
Thanks.

If your rdd is

{"event":"bank.legal.patch","ts":"2017-04-15T15:18:32.469+02:00","svc":"dpbank.stage.tlc-1","request":{"ts":"2017-04-15T15:18:32.993+02:00","aw":"876e6d71-47c4-40f6-8c49-5dbd7b8e246b","end_point":"/bank/v1/legal/mxHr+bhbNqEwFvXGn4l6jQ==","method":"PATCH","app_instance":"e73e93d9-e70d-4873-8f98-b00c6fe4d036-1491406011","user_agent":"Dry/1.0.st/Android/5.0.1/Sam-SM-N910C","user_id":53,"user_ip":"151.14.81.82","username":"7cV0Y62Rud3MQ==","app_id":"db2ffeac6c087712530981e9871","app_name":"DrApp"},"operation":{"scope":"mdpapp","result":{"http_status":200}},"resource":{"object_id":"mxHr+bhbNqEwFvXGn4l6jQ==","request_attributes":{"legal_user":{"sharing_id":"mxHr+bhbNqEwFvXGn4l6jQ==","ndg":"","taxcode":"IQ7hUUphxFBXnI0u2fxuCg==","status":"INCOMPLETE","residence":{"city":"CAA","address":"Via Batto 44","zipcode":"926","country_id":18,"city_id":122},"business_categories":[5],"company_name":"4Gzb+KJk1XAQ==","vat_number":"162340159"}},"response_attributes":{"legal_user":{"sharing_id":"mGn4l6jQ==","taxcode":"IQ7hFBXnI0u2fxuCg==","status":"INCOMPLETE","residence":{"city":"CATA","address":"Via Bllo 44","zipcode":"95126","country_id":128,"city_id":12203},"business_categories":[5],"company_name":"4GnU/Nczb+KJk1XAQ==","vat_number":"12960159"}}},"class":"DPAPI"}
then you can use sqlContext's read.json to read the rdd into a valid dataframe, and then select only the required fields, as below:

// rdd here is the RDD[String] of JSON records; read.json infers the schema from it.
val df = sqlContext.read.json(rdd)

import sqlContext.implicits._   // for the $"..." column syntax

df.select($"request.user_id".as("user_id"),
          $"request.user_ip".as("user_ip"),
          $"request.app_id".as("app_id"),
          $"resource.request_attributes.legal_user.status".as("status"),
          $"class")
  .show(false)
This will give you the following dataframe:

+-------+------------+---------------------------+----------+-----+
|user_id|user_ip     |app_id                     |status    |class|
+-------+------------+---------------------------+----------+-----+
|53     |151.14.81.82|db2ffeac6c087712530981e9871|INCOMPLETE|DPAPI|
+-------+------------+---------------------------+----------+-----+

You can get what you need using the above approach. I hope the answer is helpful.
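If you want to do this extraction inside the streaming loop, a minimal sketch could look like the following (assuming Spark 2.x, where read.json still accepts an RDD[String]; the selected columns approximate the desired output above, and _id is omitted because it does not appear in the JSON):

row.foreachRDD { (rdd: RDD[String], time: Time) =>
  if (!rdd.isEmpty()) {
    val spark = SparkSessionSingleton.getInstance(rdd.sparkContext.getConf)
    import spark.implicits._

    // Parse the JSON records of this micro-batch and keep only the fields of interest.
    val parsed = spark.read.json(rdd)
    parsed.select(
        $"request.user_ip".as("user_ip"),
        $"resource.request_attributes.legal_user.status".as("status"),
        $"class".as("_type"))
      .show(false)
  }
}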

You can union the current DataFrame with the existing one:

First, create an empty DataFrame with the required schema at the start of the program:

val df = // here create DF with required schema
df.createOrReplaceTempView("savedDF")
Now, inside foreachRDD:

// here we are in foreachRDD
val df = // create DataFrame from RDD
val existingCachedDF = spark.table("savedDF") // get reference to existing DataFrame
val union = existingCachedDF.union(df)
union.createOrReplaceTempView("savedDF")
A good idea is to checkpoint the DataFrame every few micro-batches, to keep its logical plan from growing too long.
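As an illustrative sketch of that idea (the counter and the interval of 10 batches are arbitrary choices, not part of the answer), checkpointing needs a checkpoint directory and can be applied to the accumulated DataFrame like this:

// Once, at start-up:
spark.sparkContext.setCheckpointDir("/tmp/spark-checkpoints")
var batchCounter = 0L

// Inside foreachRDD, after building `union`:
batchCounter += 1
val toSave =
  if (batchCounter % 10 == 0) union.checkpoint()  // materialises the data and truncates the lineage
  else union
toSave.createOrReplaceTempView("savedDF")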

Another idea is to use Structured Streaming, which will replace Spark Streaming.
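A minimal Structured Streaming sketch of the same pipeline could look like this (the broker address and topic name come from the question, the schema lists only a few fields, and the whole thing is just an illustration; it needs the spark-sql-kafka-0-10 connector on the classpath):

import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions.{col, from_json}
import org.apache.spark.sql.types._

val spark = SparkSession.builder.appName("KafkaToDF").getOrCreate()

// Declare only the JSON fields we care about; everything else is ignored.
val schema = new StructType()
  .add("class", StringType)
  .add("request", new StructType().add("user_ip", StringType))
  .add("resource", new StructType()
    .add("request_attributes", new StructType()
      .add("legal_user", new StructType().add("status", StringType))))

val events = spark.readStream
  .format("kafka")
  .option("kafka.bootstrap.servers", "localhost:9092")
  .option("subscribe", "vittorio")
  .load()
  .select(from_json(col("value").cast("string"), schema).as("e"))
  .select(
    col("e.request.user_ip").as("user_ip"),
    col("e.resource.request_attributes.legal_user.status").as("status"),
    col("e.class").as("_type"))

// Print each micro-batch to the console; in practice you would write to a real sink instead.
events.writeStream
  .format("console")
  .outputMode("append")
  .start()
  .awaitTermination()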