Apache Spark: Spark Streaming not executing Spark SQL query


I am facing an issue when executing Spark SQL on top of Spark Streaming.

The value of x from the line var x = sqlContext.sql("select count(*) from prices") is not being printed.

Please find my code below:

import spark.implicits._
import org.apache.spark.SparkConf
import org.apache.spark.storage.StorageLevel
import org.apache.spark.streaming.{Seconds, StreamingContext}
import org.apache.spark.sql._
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.Encoders
import org.apache.spark.sql.functions._
import org.apache.spark.sql.types.{StructType, StructField, StringType, IntegerType}
import org.apache.spark.sql.hive.HiveContext
import org.apache.spark.sql.streaming.Trigger
import java.util.regex.{Matcher, Pattern}

val conf = new SparkConf().setAppName("streamHive").setMaster("local[*]").set("spark.driver.allowMultipleContexts", "true")
val ssc = new StreamingContext(conf, Seconds(5))
val sc = ssc.sparkContext

val lines = ssc.textFileStream("file:///home/sdf/testHive")
case class Prices(name: String, age: String, sex: String, location: String)
val sqlContext = new org.apache.spark.sql.SQLContext(sc)

def parse(rdd: org.apache.spark.rdd.RDD[String]) = {
  val l = rdd.map(_.split(","))
  val prices = l.map(p => Prices(p(0), p(1), p(2), p(3)))
  val pricesDf = sqlContext.createDataFrame(prices)
  pricesDf.registerTempTable("prices")
  println("showing printdfShow")
  pricesDf.show()
  var x = sqlContext.sql("select count(*) from prices")
  println("hello")
  println(x)
}

lines.foreachRDD { rdd => parse(rdd)}

ssc.start()
I get the following output; the Spark SQL result is not printed:

   [count(1): bigint]
   showing printdfShow
   +----+---+---+--------+
   |name|age|sex|location|
   +----+---+---+--------+
   +----+---+---+--------+

   hello
   [count(1): bigint]
   showing printdfShow
   +----+---+---+--------+
   |name|age|sex|location|
   +----+---+---+--------+
   | rop| 22|  M|      uk|
   | fop| 24|  F|      us|
   | dop| 23|  M|     fok|
   +----+---+---+--------+

   hello
   [count(1): bigint]
   showing printdfShow
   +----+---+---+--------+
   |name|age|sex|location|
   +----+---+---+--------+
   +----+---+---+--------+

   hello
   [count(1): bigint]

Please help me with how to use Spark SQL within Spark Streaming, as I am new to Spark.

Please try this in your code after pricesDf.show():

println(pricesDf.count)
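
For context, here is a minimal sketch (reusing the question's Prices case class, sqlContext, and parse names) of where that line fits; everything else is unchanged from the question:

def parse(rdd: org.apache.spark.rdd.RDD[String]) = {
  val prices = rdd.map(_.split(",")).map(p => Prices(p(0), p(1), p(2), p(3)))
  val pricesDf = sqlContext.createDataFrame(prices)
  pricesDf.registerTempTable("prices")
  pricesDf.show()
  // count() is an action that returns a Long, so println prints the number itself
  println(pricesDf.count)
}
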
If you want to keep the select count(*) query in the same code, try the following instead of println(x):

x is a DataFrame, not a value, which is why running println(x) only prints its schema ([count(1): bigint]) rather than the count. To get the value out of it, you can try the following:

println(x.rdd.map(r => r.getLong(0)).collect()(0))
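
As a further sketch (not part of the original answer): since the count(*) query yields a single-row DataFrame with one bigint column, you can also read the value from the first Row, or simply display the one-row result:

// read the count as a Long from the first (and only) Row
val total: Long = x.first().getLong(0)
println(total)

// or just print the one-row result as a table
x.show()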
