Python: streaming and processing data based on timestamp values (using Kafka and Spark Streaming)


I'll try to simplify the problem I'm trying to solve. I have a stream of employee data read from a JSON file, with the following schema:

StructType([
    StructField("timeStamp", TimestampType()),
    StructField("emp_id", LongType()),
    StructField("on_duty", LongType())])
# on_duty is an int boolean -> 0, 1
Sample:

{"timeStamp": 1514765160, "emp_id": 12471979, "on_duty": 0}
{"timeStamp": 1514765161, "emp_id": 12472154, "on_duty": 1}
Every minute, I want to know two things: the total number of employees online and the number not on duty, processing the stream with Spark Structured Streaming.

The per-minute buckets should be based on the timestamp in the data (event time), not on system time.
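From the Structured Streaming docs I gather that event-time grouping looks roughly like the sketch below (the parsed DataFrame df and the 10-minute watermark are just placeholders of mine), but I don't know where it fits in my pipeline:

from pyspark.sql import functions as F

# Group by one-minute windows over the event-time column itself,
# not over processing (system) time.
counts = df \
    .withWatermark("timeStamp", "10 minutes") \
    .groupBy(F.window(F.col("timeStamp"), "1 minute")) \
    .count()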

Kafka producer? Spark Streaming? I'm not sure how to proceed: do I make the change in the Kafka producer, or when processing the stream with Spark? And how would I do it?

Any hints or help would be greatly appreciated.


UPDATE
Per @Srinivas's solution, I now get output like this:

....-------------------------------------------+-------------------+----+--------------------------------...
|[1970-01-18 04:46:00, 1970-01-18 04:47:00]|1970-01-18 04:46:05|1070|[1, 1, 1, 0, 1, 1, 1, 1, 1, 1, 1, 1, ....
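Those 1970 dates suggest the epoch seconds were divided by 1000 as if they were milliseconds. A quick sanity check of my own:

from datetime import datetime, timezone

# 1514765160 is epoch *seconds*; dividing it by 1000 lands in January 1970,
# which matches the window above (modulo the console's local timezone).
print(datetime.fromtimestamp(1514765160 / 1000, tz=timezone.utc))  # 1970-01-18 12:46:05.160000+00:00
print(datetime.fromtimestamp(1514765160, tz=timezone.utc))         # 2018-01-01 00:06:00+00:00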


UPDATE 2

How can I get output like the following:

Time    Online_Emp  Available_Emp
2019-01-01 00:00:00 52  23
2019-01-01 00:01:00 58  19
2019-01-01 00:02:00 65  28

using the window function?
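A minimal sketch of the aggregation that would produce those columns (assuming a parsed DataFrame df whose timestamp column is already converted to event time; my reading of the columns follows the answer's total_employees / not_on_duty split, i.e. Online_Emp counts all employees reporting in the window and Available_Emp those with on_duty == 0):

from pyspark.sql import functions as F

per_minute = df \
    .groupBy(F.window(F.col("timestamp"), "1 minute")) \
    .agg(F.count("*").alias("Online_Emp"),
         F.sum(F.when(F.col("on_duty") == 0, 1).otherwise(0)).alias("Available_Emp")) \
    .select(F.col("window.start").alias("Time"), "Online_Emp", "Available_Emp")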

Sample data in Kafka:

{"timeStamp": 1592669811475, "emp_id": 12471979, "on_duty": 0}
{"timeStamp": 1592669811475, "emp_id": 12472154, "on_duty": 1}
{"timeStamp": 1592669811475, "emp_id": 12471980, "on_duty": 0}
{"timeStamp": 1592669811475, "emp_id": 12472181, "on_duty": 1}
{"timeStamp": 1592669691475, "emp_id": 12471982, "on_duty": 0}
{"timeStamp": 1592669691475, "emp_id": 12472183, "on_duty": 1}
{"timeStamp": 1592669691475, "emp_id": 12471984, "on_duty": 0}
{"timeStamp": 1592669571475, "emp_id": 12472185, "on_duty": 1}
{"timeStamp": 1592669571475, "emp_id": 12472186, "on_duty": 1}
{"timeStamp": 1592669571475, "emp_id": 12472187, "on_duty": 0}
{"timeStamp": 1592669571475, "emp_id": 12472188, "on_duty": 1}
{"timeStamp": 1592669631475, "emp_id": 12472185, "on_duty": 1}
{"timeStamp": 1592669631475, "emp_id": 12472186, "on_duty": 1}
{"timeStamp": 1592669631475, "emp_id": 12472187, "on_duty": 0}
{"timeStamp": 1592669631475, "emp_id": 12472188, "on_duty": 1}
Output:

+---------------------------------------------+-------------------+---------------+------------+-----------+
|window                                       |timestamp          |total_employees|on_duty     |not_on_duty|
+---------------------------------------------+-------------------+---------------+------------+-----------+
|[2020-06-20 21:42:00.0,2020-06-20 21:43:00.0]|2020-06-20 21:42:51|4              |[1, 1, 0, 1]|1          |
|[2020-06-20 21:44:00.0,2020-06-20 21:45:00.0]|2020-06-20 21:44:51|3              |[0, 1, 0]   |2          |
|[2020-06-20 21:46:00.0,2020-06-20 21:47:00.0]|2020-06-20 21:46:51|4              |[0, 1, 0, 1]|2          |
|[2020-06-20 21:43:00.0,2020-06-20 21:44:00.0]|2020-06-20 21:43:51|4              |[1, 1, 0, 1]|1          |
+---------------------------------------------+-------------------+---------------+------------+-----------+
Spark batch (one-shot write of the JSON file to Kafka; uses spark, F, and the LongType schema from the answer below):

spark \
    .read \
    .schema(schema) \
    .json("/tmp/data/emp_data.json") \
    .select(F.to_json(F.struct("*")).cast("string").alias("value")) \
    .write \
    .format("kafka") \
    .option("kafka.bootstrap.servers", "localhost:9092") \
    .option("topic", "emp_data") \
    .save()
Spark streaming (continuous write of the JSON file to Kafka):

spark \
    .readStream \
    .schema(schema) \
    .json("/tmp/data/emp_data.json") \
    .select(F.to_json(F.struct("*")).cast("string").alias("value")) \
    .writeStream \
    .format("kafka") \
    .option("kafka.bootstrap.servers", "localhost:9092") \
    .option("topic", "emp_data") \
    .start()

JSON data in Kafka:

/tmp/data> kafka-console-consumer --bootstrap-server localhost:9092 --topic emp_data
{"timeStamp":1592669811475,"emp_id":12471979,"on_duty":0}
{"timeStamp":1592669811475,"emp_id":12472154,"on_duty":1}
{"timeStamp":1592669811475,"emp_id":12471980,"on_duty":0}
{"timeStamp":1592669811475,"emp_id":12472181,"on_duty":1}
{"timeStamp":1592669691475,"emp_id":12471982,"on_duty":0}
{"timeStamp":1592669691475,"emp_id":12472183,"on_duty":1}
{"timeStamp":1592669691475,"emp_id":12471984,"on_duty":0}
{"timeStamp":1592669571475,"emp_id":12472185,"on_duty":1}
{"timeStamp":1592669571475,"emp_id":12472186,"on_duty":1}
{"timeStamp":1592669571475,"emp_id":12472187,"on_duty":0}
{"timeStamp":1592669571475,"emp_id":12472188,"on_duty":1}
{"timeStamp":1592669631475,"emp_id":12472185,"on_duty":1}
{"timeStamp":1592669631475,"emp_id":12472186,"on_duty":1}
{"timeStamp":1592669631475,"emp_id":12472187,"on_duty":0}
{"timeStamp":1592669631475,"emp_id":12472188,"on_duty":1}
^CProcessed a total of 15 messages

Comments

- I'm working on a similar problem myself; the documentation left me a bit confused. Looking forward to a solution.
- Do you mean, am I sure? Yes, these are timestamp values. If I change the LongType in the schema to TimestampType, …
- Now I see: I was using timestamps in milliseconds, but your timestamps are not in milliseconds. You have to change
  .withColumn("timestamp", F.from_unixtime(F.col("timestamp") / 1000))
  to
  .withColumn("timestamp", F.from_unixtime(F.col("timestamp")))
- Yes, that works, but every batch repeats the same information; it never updates with what the Kafka producer sends. Am I producing the Kafka stream incorrectly by sending the records one line at a time with sleep(1)?
- Thanks for the answer, but my console output is completely weird; I've posted part of it in the question (see UPDATE above). Going by datetime.fromtimestamp(item['timestamp']).strftime("%Y-%m-%d %H:%M:%S"), these timestamps are from 2017-2019.
- It was very helpful :) thanks. One small thing I couldn't find anywhere else: how do I split the JSON and send it to Kafka piece by piece at intervals? It has to be the Unix split command, right?
- No, you can use a DataFrame directly: convert your data into a string-typed value column, then send the messages with the kafka format. Check the updated Spark batch & streaming code (above) that sends the messages to Kafka.
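For reference, the line-by-line producer with sleep(1) discussed above might look like this minimal sketch (it uses the kafka-python client; the topic name emp_dstream comes from the answer below, while the file path is an assumption):

import json
import time

from kafka import KafkaProducer  # kafka-python client, an assumed choice

producer = KafkaProducer(
    bootstrap_servers="localhost:9092",
    value_serializer=lambda v: json.dumps(v).encode("utf-8"))

# Send one JSON record per second instead of splitting the file beforehand.
with open("/tmp/data/emp_data.json") as f:  # assumed path
    for line in f:
        producer.send("emp_dstream", json.loads(line))
        time.sleep(1)

producer.flush()

Sending one record per second this way only changes arrival time; the windows are still computed from the timeStamp field inside each record.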
Answer (from @Srinivas):

from pyspark.sql import SparkSession
from pyspark.sql import functions as F
from pyspark.sql.types import StructType, StructField, LongType

spark = SparkSession.builder.getOrCreate()

# Read timeStamp as a plain long (epoch milliseconds here) and convert it below.
schema = StructType([
    StructField("timeStamp", LongType()),
    StructField("emp_id", LongType()),
    StructField("on_duty", LongType())])

(spark
    .readStream
    .format("kafka")
    .option("kafka.bootstrap.servers", "localhost:9092")
    .option("subscribe", "emp_dstream")
    .option("startingOffsets", "earliest")
    .load()
    # Kafka delivers raw bytes: decode the JSON payload into typed columns.
    .selectExpr("CAST(value AS STRING)")
    .select(F.from_json(F.col("value"), schema).alias("value"))
    .select(F.col("value.*"))
    # Epoch milliseconds -> seconds -> "yyyy-MM-dd HH:mm:ss" string
    # (drop the "/ 1000" if the source timestamps are already in seconds).
    .withColumn("timestamp", F.from_unixtime(F.col("timestamp") / 1000))
    # One-minute event-time windows; also keyed by the raw timestamp, which is
    # why each distinct event time gets its own row in the output below.
    .groupBy(F.window(F.col("timestamp"), "1 minutes"), F.col("timestamp"))
    .agg(F.count(F.col("timeStamp")).alias("total_employees"),
         F.collect_list(F.col("on_duty")).alias("on_duty"),
         F.sum(F.when(F.col("on_duty") == 0, F.lit(1)).otherwise(F.lit(0))).alias("not_on_duty"))
    .writeStream
    .format("console")
    .outputMode("complete")
    .option("truncate", "false")
    .start()
    .awaitTermination())
+---------------------------------------------+-------------------+---------------+------------+-----------+
|window                                       |timestamp          |total_employees|on_duty     |not_on_duty|
+---------------------------------------------+-------------------+---------------+------------+-----------+
|[2020-06-20 21:42:00.0,2020-06-20 21:43:00.0]|2020-06-20 21:42:51|4              |[1, 1, 0, 1]|1          |
|[2020-06-20 21:44:00.0,2020-06-20 21:45:00.0]|2020-06-20 21:44:51|3              |[0, 1, 0]   |2          |
|[2020-06-20 21:46:00.0,2020-06-20 21:47:00.0]|2020-06-20 21:46:51|4              |[0, 1, 0, 1]|2          |
|[2020-06-20 21:43:00.0,2020-06-20 21:44:00.0]|2020-06-20 21:43:51|4              |[1, 1, 0, 1]|1          |
+---------------------------------------------+-------------------+---------------+------------+-----------+
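A note on the repeated output mentioned in the comments: outputMode("complete") re-emits the entire aggregation table on every trigger, so earlier windows keep reappearing in each batch. Switching to outputMode("update") (my suggestion, not part of the original answer) prints only the windows whose counts changed in the latest batch.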