
Apache Spark: how do I run Spark to read a JSON file and display its contents?


The spark_job.py file contains the following:

from pyspark import SparkContext
from pyspark.sql import SparkSession
from pyspark.streaming import StreamingContext
from pyspark.sql.types import IntegerType, LongType, DecimalType,StructType, StructField, StringType
from pyspark.sql import Row
from pyspark.sql.functions import col
import pyspark.sql.functions as F
from pyspark.sql import Window

def readMyStream(rdd):
  if not rdd.isEmpty():
    df = spark.read.json(rdd)
    print('Started the Process')
    print('Selection of Columns')
    df = df.select('t1','t2','t3','timestamp').where(col("timestamp").isNotNull())
    df.show()

if __name__ == '__main__':
    sc = SparkContext.getOrCreate()
    spark = SparkSession(sc)
    ssc = StreamingContext(sc, 5)

    stream_data = ssc.textFileStream("jsondata.json")
    stream_data.foreachRDD( lambda rdd: readMyStream(rdd) )
    ssc.start()
    ssc.stop()
[{"timestamp": "1571053218000","t1": "55.23","t2": "10","t3": "ON"},

{"timestamp": "1571053278000","t1": "63.23","t2": "11","t3": "OFF"},

{"timestamp": "1571053338000","t1": "73.23","t2": "12","t3": "ON"},

{"timestamp": "1571053398000","t1": "83.23","t2": "13","t3": "ON"}]
jsondata.json文件包含以下内容:

Running:

python spark_job.py
just gives me this:

Using Spark's default log4j profile: org/apache/spark/log4j-defaults.properties
Setting default log level to "WARN".
To adjust logging level use sc.setLogLevel(newLevel). For SparkR, use setLogLevel(newLevel).
PS C:\Users\Admin\Desktop\madi_kafka> SUCCESS: The process with PID 10272 (child process of PID 2544) has been terminated.
SUCCESS: The process with PID 2544 (child process of PID 10652) has been terminated.
SUCCESS: The process with PID 10652 (child process of PID 4516) has been terminated.
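
A note on the log above: ssc.stop() is called immediately after ssc.start(), so the streaming context is shut down before any micro-batch can be processed, and textFileStream monitors a directory for newly created files rather than reading a single existing file. A minimal sketch of the more usual pattern (assuming a hypothetical json_dir/ directory into which files are dropped after the stream starts) could look like this:

from pyspark import SparkContext
from pyspark.sql import SparkSession
from pyspark.streaming import StreamingContext
from pyspark.sql.functions import col

def read_my_stream(rdd):
    # Only build a DataFrame when the micro-batch actually contains data
    if not rdd.isEmpty():
        df = spark.read.json(rdd)
        df.select('t1', 't2', 't3', 'timestamp') \
          .where(col('timestamp').isNotNull()) \
          .show()

if __name__ == '__main__':
    sc = SparkContext.getOrCreate()
    spark = SparkSession(sc)
    ssc = StreamingContext(sc, 5)

    # textFileStream watches a directory and picks up files created in it
    # after the stream has started
    stream_data = ssc.textFileStream("json_dir/")   # hypothetical directory
    stream_data.foreachRDD(read_my_stream)

    ssc.start()
    ssc.awaitTermination()   # block here instead of stopping immediately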
The show function can help you; I think this code example may help solve your problem:

val data = session.sqlContext.read.format("json").load("data/input.json")
val first = data.show()

In most cases, Spark can infer the schema of the data implicitly.
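
To illustrate, here is a minimal sketch in Python (since the question uses PySpark); the data/input.json path is simply the one from the Scala snippet above:

from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("read-json").getOrCreate()

# Spark samples the JSON documents and infers the column names and types
df = spark.read.json("data/input.json")
df.printSchema()   # prints the inferred schema
df.show()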

I used the following code in Scala, which I think may help:

import session.implicits._
import org.apache.spark.sql.functions.col

case class TClass(timestamp: String, t1: String, t2: String, t3: String)

// multiline = true is needed because the file is one JSON array spread over several lines
val jsonData = session.read.option("inferSchema","true").option("multiline","true").option("header","true").json("data/jsondata.json").as[TClass]
jsonData.printSchema()
jsonData.show()
print("Started the Process")
print("Selection of Columns")
val df = jsonData.select("timestamp","t1","t2","t3").where(col("timestamp") isNotNull)
df.show()
This produces the following output:

+-------------+-----+---+---+
|    timestamp|   t1| t2| t3|
+-------------+-----+---+---+
|1571053218000|55.23| 10| ON|
|1571053278000|63.23| 11|OFF|
|1571053338000|73.23| 12| ON|
|1571053398000|83.23| 13| ON|
+-------------+-----+---+---+

I hope it helps you.
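
For completeness, a rough PySpark equivalent of the Scala snippet above (a sketch only; it assumes the same data/jsondata.json path and the multi-line JSON array shown in the question):

from pyspark.sql import SparkSession
from pyspark.sql.functions import col

spark = SparkSession.builder.appName("read-json").getOrCreate()

# multiLine=true because jsondata.json is a single JSON array spanning several lines
df = spark.read.option("multiLine", "true").json("data/jsondata.json")

df.select("timestamp", "t1", "t2", "t3") \
  .where(col("timestamp").isNotNull()) \
  .show()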

It's not "just" text, it is structured JSON, so use
spark.readStream.format("json").schema(my_schema).load("path/to/data")

Yes, that would be Scala. Do you need streaming? If not, you can try it in Python:
spark.read.schema(schema).json(inputPath)
Otherwise, you should still use the JSON loader rather than a plain text file reader.

@UninformedUser, thx, but what is the schema? In my case I simply don't have one.

I think you can omit the schema, as you did in your separate method. Still not sure, though: do you need streaming here? I mean, you are basically reading from a static file, aren't you? Or is this just for testing, and a JSON stream will be provided later?

@UninformedUser, yes, it is a static file, and I'm trying to run any basic example but can't, lol. What I'm actually doing is a bit different: I have a stream of comments to process (up to 10 words each).

thx, but it's not the complete code, I know show()
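
As a sketch of what the explicit schema mentioned in the comments could look like for the sample data (the field names come from jsondata.json; keeping every field as a string matches the quoted values in the file, but that choice is an assumption):

from pyspark.sql import SparkSession
from pyspark.sql.types import StructType, StructField, StringType

spark = SparkSession.builder.appName("read-json-with-schema").getOrCreate()

# Every value in the sample file is quoted, so all fields are declared as strings;
# they could be cast to numeric or timestamp types afterwards if needed.
schema = StructType([
    StructField("timestamp", StringType(), True),
    StructField("t1", StringType(), True),
    StructField("t2", StringType(), True),
    StructField("t3", StringType(), True),
])

df = spark.read.schema(schema).option("multiLine", "true").json("data/jsondata.json")
df.show()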