Python: process data in Spark Structured Streaming before outputting to the console


I'll try to keep this simple. I periodically read some data from a Kafka producer and output the following using Spark Structured Streaming.

The data currently comes out like this:

+------------------------------------------+-------------------+--------------+-----------------+
|window                                    |timestamp          |Online_Emp    |Available_Emp    |
+------------------------------------------+-------------------+--------------+-----------------+
|[2017-12-31 16:01:00, 2017-12-31 16:02:00]|2017-12-31 16:01:27|1             |0                |
|[2017-12-31 16:00:00, 2017-12-31 16:01:00]|2017-12-31 16:00:41|1             |0                |
|[2017-12-31 16:01:00, 2017-12-31 16:02:00]|2017-12-31 16:01:29|1             |0                |
|[2017-12-31 16:01:00, 2017-12-31 16:02:00]|2017-12-31 16:01:20|1             |0                |
|[2017-12-31 16:01:00, 2017-12-31 16:02:00]|2017-12-31 16:01:23|2             |0                |
|[2017-12-31 16:00:00, 2017-12-31 16:01:00]|2017-12-31 16:00:52|1             |0                |
|[2017-12-31 16:01:00, 2017-12-31 16:02:00]|2017-12-31 16:01:08|1             |0                |
|[2017-12-31 16:01:00, 2017-12-31 16:02:00]|2017-12-31 16:01:12|1             |0                |
|[2017-12-31 16:00:00, 2017-12-31 16:01:00]|2017-12-31 16:00:02|1             |1                |
|[2017-12-31 16:00:00, 2017-12-31 16:01:00]|2017-12-31 16:00:11|1             |0                |
+------------------------------------------+-------------------+--------------+-----------------+
I want it to output like this:

Time                 Online_Emp  Available_Emp
2017-01-01 00:00:00  52          23
2017-01-01 00:01:00  58          19
2017-01-01 00:02:00  65          28
So basically, it counts per minute how many employees are online (by their unique driver id) and shows how many of them are available.

Note that a particular employee id may switch between AVAILABLE and ON-DUTY within the same minute; we need the final count as of the end of the minute.

Given my Kafka producer and my Spark Structured Streaming job, how do I get the desired output?


Any tips or help would be greatly appreciated!
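The Kafka producer and Spark Structured Streaming code did not survive in this copy of the post. As a point of reference, here is a minimal sketch of what the streaming side might look like, assuming a hypothetical topic name employee_status and the JSON fields shown in the answer below (timeStamp in epoch milliseconds, emp_id, on_duty):

from pyspark.sql import SparkSession
from pyspark.sql import functions as F
from pyspark.sql.types import StructType, StructField, LongType, IntegerType

spark = SparkSession.builder.appName("EmployeeStatus").getOrCreate()

# Payload schema matching the messages shown in the answer below.
schema = StructType([
    StructField("timeStamp", LongType()),   # epoch milliseconds
    StructField("emp_id", LongType()),
    StructField("on_duty", IntegerType()),  # 1 = on duty, 0 = available
])

raw = (spark.readStream
       .format("kafka")
       .option("kafka.bootstrap.servers", "localhost:9092")  # assumption
       .option("subscribe", "employee_status")                # hypothetical topic
       .load())

events = (raw
          .select(F.from_json(F.col("value").cast("string"), schema).alias("j"))
          .select("j.*")
          # epoch millis -> Spark timestamp (divide by 1000 first)
          .withColumn("timestamp", (F.col("timeStamp") / 1000).cast("timestamp")))

# One row per one-minute tumbling window.
agg = (events
       .groupBy(F.window("timestamp", "1 minute"))
       .agg(F.approx_count_distinct("emp_id").alias("Online_Emp"),
            F.sum(F.when(F.col("on_duty") == 0, 1).otherwise(0)).alias("Available_Emp")))

Note that this sketch counts distinct ids per window but does not yet pin down each employee's final status before the minute ends, which is exactly what the question is asking about.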

Your code works perfectly. Please check the Kafka data and the Spark Streaming output below.

Batch 5 is your final result; ignore the earlier batches (0 through 4). Always treat the latest batch as holding the up-to-date records for the data available in Kafka.
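For context: per-batch dumps like the ones below are what a console sink prints in "complete" output mode (the mode referenced in the comments at the end), where every trigger re-emits the whole aggregate table. A minimal sketch, with agg standing in for the aggregated stream from the sketch above:

# "complete" mode re-prints the entire result table on every trigger,
# which is why each "Batch: N" below shows cumulative state rather
# than only the newly arrived rows.
query = (agg.writeStream
         .outputMode("complete")
         .format("console")
         .option("truncate", "false")
         .start())
query.awaitTermination()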

Batch: 0

No data in Kafka.

Spark Streaming
+------+---------+---------------+-------+-----------+
|window|timestamp|total_employees|on_duty|not_on_duty|
+------+---------+---------------+-------+-----------+
+------+---------+---------------+-------+-----------+

Batch: 1

Published to Kafka.

{"timeStamp": 1592669691475, "emp_id": 12471114, "on_duty": 0} //2020-06-20T21:44:51
{"timeStamp": 1592669691475, "emp_id": 12471124, "on_duty": 0} //2020-06-20T21:44:51
{"timeStamp": 1592669691475, "emp_id": 12471134, "on_duty": 0} //2020-06-20T21:44:51

Spark Streaming
+---------------------------------------------+-------------------+---------------+---------+-----------+
|window                                       |timestamp          |total_employees|on_duty  |not_on_duty|
+---------------------------------------------+-------------------+---------------+---------+-----------+
|[2020-06-20 21:44:00.0,2020-06-20 21:45:00.0]|2020-06-20 21:44:51|3              |[0, 0, 0]|3          |
+---------------------------------------------+-------------------+---------------+---------+-----------+
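In these tables the on_duty column is a list of the raw flags and not_on_duty counts the zeros, so the aggregation behind them was presumably something along these lines (a guess from the output shape, not the answerer's actual code):

from pyspark.sql import functions as F

# Guessed from the output shape: keep the raw on_duty flags as a list,
# count rows for total_employees, and count the zero flags separately.
status_agg = (events
              .groupBy(F.window("timestamp", "1 minute"))
              .agg(F.max("timestamp").alias("timestamp"),
                   F.count("emp_id").alias("total_employees"),
                   F.collect_list("on_duty").alias("on_duty"),
                   F.sum(F.when(F.col("on_duty") == 0, 1).otherwise(0)).alias("not_on_duty")))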

Batch: 2

Published to Kafka.
{"timeStamp": 1592669691475, "emp_id": 12471144, "on_duty": 0} //2020-06-20T21:44:51 // seconds difference
{"timeStamp": 1592669691575, "emp_id": 12471124, "on_duty": 0} //2020-06-20T21:44:51
{"timeStamp": 1592669691575, "emp_id": 12471234, "on_duty": 0} //2020-06-20T21:44:51
{"timeStamp": 1592669691575, "emp_id": 12471334, "on_duty": 1} //2020-06-20T21:44:51

Spark Streaming
+---------------------------------------------+-------------------+---------------+---------------------+-----------+
|window                                       |timestamp          |total_employees|on_duty              |not_on_duty|
+---------------------------------------------+-------------------+---------------+---------------------+-----------+
|[2020-06-20 21:44:00.0,2020-06-20 21:45:00.0]|2020-06-20 21:44:51|7              |[0, 0, 0, 1, 0, 0, 0]|6          |
+---------------------------------------------+-------------------+---------------+---------------------+-----------+
Batch: 3

Published to Kafka.
{"timeStamp": 1592669691575, "emp_id": 12471124, "on_duty": 0} // 2020-06-20T21:44:51
{"timeStamp": 1592669691575, "emp_id": 12471424, "on_duty": 1} // 2020-06-20T21:44:51
{"timeStamp": 1592669631475, "emp_id": 12472188, "on_duty": 1} // 2020-06-20T21:43:51
{"timeStamp": 1592669631475, "emp_id": 12472288, "on_duty": 0} // 2020-06-20T21:43:51
{"timeStamp": 1592669631475, "emp_id": 12472388, "on_duty": 0} // 2020-06-20T21:43:51
{"timeStamp": 1592669631475, "emp_id": 12472488, "on_duty": 1} // 2020-06-20T21:43:51

Spark Streaming
+---------------------------------------------+-------------------+---------------+---------------------------+-----------+
|window                                       |timestamp          |total_employees|on_duty                    |not_on_duty|
+---------------------------------------------+-------------------+---------------+---------------------------+-----------+
|[2020-06-20 21:44:00.0,2020-06-20 21:45:00.0]|2020-06-20 21:44:51|9              |[0, 1, 0, 0, 0, 1, 0, 0, 0]|7          |
|[2020-06-20 21:43:00.0,2020-06-20 21:44:00.0]|2020-06-20 21:43:51|4              |[1, 0, 0, 1]               |2          |
+---------------------------------------------+-------------------+---------------+---------------------------+-----------+
Batch: 4

Published to Kafka.
{"timeStamp": 1592669691575, "emp_id": 12471524, "on_duty": 0} // 2020-06-20T21:44:51
{"timeStamp": 1592669691575, "emp_id": 12471624, "on_duty": 0} // 2020-06-20T21:44:51
{"timeStamp": 1592669631475, "emp_id": 12471188, "on_duty": 1} // 2020-06-20T21:43:51
{"timeStamp": 1592669631475, "emp_id": 12472288, "on_duty": 0} // 2020-06-20T21:43:51
{"timeStamp": 1592669631475, "emp_id": 12473388, "on_duty": 0} // 2020-06-20T21:43:51
{"timeStamp": 1592669631475, "emp_id": 12474488, "on_duty": 1} // 2020-06-20T21:43:51

Spark Streaming
+---------------------------------------------+-------------------+---------------+---------------------------------+-----------+
|window                                       |timestamp          |total_employees|on_duty                          |not_on_duty|
+---------------------------------------------+-------------------+---------------+---------------------------------+-----------+
|[2020-06-20 21:44:00.0,2020-06-20 21:45:00.0]|2020-06-20 21:44:51|11             |[0, 0, 0, 1, 0, 0, 0, 1, 0, 0, 0]|9          |
|[2020-06-20 21:43:00.0,2020-06-20 21:44:00.0]|2020-06-20 21:43:51|8              |[1, 0, 0, 1, 1, 0, 0, 1]         |4          |
+---------------------------------------------+-------------------+---------------+---------------------------------+-----------+
Batch: 5

Published to Kafka.
{"timeStamp": 1592669571475, "emp_id": 12482185, "on_duty": 1} // 2020-06-20T21:42:51
{"timeStamp": 1592669571475, "emp_id": 12483185, "on_duty": 1} // 2020-06-20T21:42:51
{"timeStamp": 1592669631475, "emp_id": 12484488, "on_duty": 1} // 2020-06-20T21:43:51
{"timeStamp": 1592669691575, "emp_id": 12491524, "on_duty": 0} // 2020-06-20T21:44:51
{"timeStamp": 1592669091575, "emp_id": 12491124, "on_duty": 0} // 2020-06-20T21:34:51
{"timeStamp": 1592669091575, "emp_id": 12491224, "on_duty": 1} // 2020-06-20T21:34:51

Spark Streaming
+---------------------------------------------+-------------------+---------------+------------------------------------+-----------+
|window                                       |timestamp          |total_employees|on_duty                             |not_on_duty|
+---------------------------------------------+-------------------+---------------+------------------------------------+-----------+
|[2020-06-20 21:34:00.0,2020-06-20 21:35:00.0]|2020-06-20 21:34:51|2              |[0, 1]                              |1          |
|[2020-06-20 21:42:00.0,2020-06-20 21:43:00.0]|2020-06-20 21:42:51|2              |[1, 1]                              |0          |
|[2020-06-20 21:44:00.0,2020-06-20 21:45:00.0]|2020-06-20 21:44:51|12             |[0, 0, 0, 0, 1, 0, 0, 0, 1, 0, 0, 0]|10         |
|[2020-06-20 21:43:00.0,2020-06-20 21:44:00.0]|2020-06-20 21:43:51|9              |[1, 1, 0, 0, 1, 1, 0, 0, 1]         |4          |
+---------------------------------------------+-------------------+---------------+------------------------------------+-----------+
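One caveat in the walkthrough above: emp_id 12472288 is published in both batch 3 and batch 4, and the 21:43 window's total_employees jumps from 4 to 8, so repeated events for the same employee get counted again. Since the question wants counts by unique driver id, a hedged sketch of deduplicating per employee and minute before aggregating, assuming the parsed events stream from the earlier sketch:

from pyspark.sql import functions as F

# Keep only the first event seen per (emp_id, minute), so re-sent records
# such as emp_id 12472288 above are not double counted. The watermark
# bounds how long the deduplication state is kept.
deduped = (events
           .withWatermark("timestamp", "2 minutes")
           .withColumn("minute", F.date_trunc("minute", F.col("timestamp")))
           .dropDuplicates(["emp_id", "minute"]))

dropDuplicates keeps the first event per key, though; taking each employee's last status before the minute ends would need stateful processing (for example a foreachBatch with an ordered dedup), which goes beyond this sketch.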

Change this .outputMode("complete") to .outputMode("update") and check whether you get the expected output.

That output shows the new results in a separate table, but it is still messy. The idea is to merge identical time windows into a single result (e.g., just one entry for 16:00-16:01).

Say that at 10:01 there are 10 records available in Kafka. Spark will read those records and aggregate them within that minute. If data with the same timestamps arrives at 10:10, it is treated as new data and goes into another batch, so your final output will contain multiple records for the same window. Look into the other window functions in Spark Streaming.

Thanks. Yes, I understand that, but I'm trying to get everything within the same minute onto a single row. Maybe if I change the timestamp to ignore the seconds? Would that do it?

Check: there is a window function that takes three arguments; the last argument defines how long your data is considered part of the current group or batch.
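On that last exchange: the optional arguments of window() actually control the slide duration and start offset, so what bounds how long data is still considered for a window is a watermark rather than window() itself. A sketch (assumed, not from the thread) of getting one finalized row per minute with a watermark and "update" mode:

from pyspark.sql import functions as F

# With a watermark, Spark stops updating a window once event time moves
# more than a minute past the window's end; in "update" mode each trigger
# prints only the windows whose aggregates changed, not the whole table.
final_agg = (events
             .withWatermark("timestamp", "1 minute")
             .groupBy(F.window("timestamp", "1 minute"))
             .agg(F.approx_count_distinct("emp_id").alias("Online_Emp")))

query = (final_agg.writeStream
         .outputMode("update")
         .format("console")
         .option("truncate", "false")
         .start())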