SQL AWS Kinesis Analytics - GROUP BY
I need some help with the AWS Kinesis Analytics feature. I have a stream containing the following data:
hubId (Integer)
datetime (timestamp)
fid (varchar)
path (varchar)
I want to aggregate this data into another stream that counts the rows per hour (pageviews) and the distinct fid values per hour (visitors), grouped by hubId.

Target stream:

profilesite_id (Integer) = hubId from the source stream
datetime (timestamp)
visitors (Integer)
pageviews (Integer)
So in MySQL my query looks like this:
SELECT hubId, CONCAT_WS(':', SUBSTR(datetime, 1, 13), '00:00') datetime, COUNT(*) pageviews, COUNT(DISTINCT(fid)) visitors
FROM tableStream
WHERE datetime >= CURDATE()
GROUP BY hubId, CONCAT_WS(':', SUBSTR(datetime, 1, 13), '00:00');
I'm trying to convert this query to Kinesis Analytics, but it's proving quite difficult (my first time... sorry :)):
- The CURDATE() function doesn't exist in Kinesis Analytics
- Neither does CONCAT_WS
CREATE OR REPLACE STREAM "bore_agg" (profilsite_id SMALLINT, datetime TIMESTAMP, visitors INT, pageviews INT);
-- Create pump to insert into output
CREATE OR REPLACE PUMP "STREAM_PUMP" AS INSERT INTO "bore_agg"
-- Select all columns from source stream
SELECT
SOURCE_SQL_STREAM_001."hubId" profilsite_id,
CHAR_TO_TIMESTAMP('yyyy-MM-DD hh:mm:ss', TIMESTAMP_TO_CHAR('YYYY-MM-DD HH:00:00', SOURCE_SQL_STREAM_001."datetime")) datetime,
COUNT(DISTINCT(SOURCE_SQL_STREAM_001."fid")) visitors,
COUNT(*) pageviews
FROM SOURCE_SQL_STREAM_001
WHERE SOURCE_SQL_STREAM_001."datetime" >= CHAR_TO_TIMESTAMP('yyyy-MM-DD hh:mm:ss', TIME_TO_CHAR('YYYY-MM-DD HH:00:00', CURRENT_TIME))
GROUP BY
SOURCE_SQL_STREAM_001."hubId",
CHAR_TO_TIMESTAMP('yyyy-MM-DD hh:mm:ss', TIMESTAMP_TO_CHAR('YYYY-MM-DD HH:00:00', SOURCE_SQL_STREAM_001."datetime"));
But I get this error, and I really don't know what to do with it:
There is an error in your SQL code. There was a problem updating your SQL code.
Application error message: SQL command failed: CREATE OR REPLACE PUMP
"STREAM_PUMP" AS INSERT INTO "bore_agg" SELECT
SOURCE_SQL_STREAM_001."hubId" profilsite_id,
CHAR_TO_TIMESTAMP('yyyy-MM-DD hh:mm:ss', TIMESTAMP_TO_CHAR('YYYY-MM-DD
HH:00:00', SOURCE_SQL_STREAM_001."datetime")) datetime,
COUNT(DISTINCT(SOURCE_SQL_STREAM_001."fid")) visitors, COUNT(*)
pageviews FROM SOURCE_SQL_STREAM_001 WHERE
SOURCE_SQL_STREAM_001."datetime" >= CHAR_TO_TIMESTAMP('yyyy-MM-DD
hh:mm:ss', TIME_TO_CHAR('YYYY-MM-DD HH:00:00', CURRENT_TIME)) GROUP BY
SOURCE_SQL_STREAM_001."hubId", CHAR_TO_TIMESTAMP('yyyy-MM-DD
hh:mm:ss', TIMESTAMP_TO_CHAR('YYYY-MM-DD HH:00:00',
SOURCE_SQL_STREAM_001."datetime")). SQL error message: From line 9,
column 1 to line 11, column 120: Cannot aggregate an infinite stream:
the GROUP BY clause is not specified or does not contain any monotonic
expressions.
Could someone point me in the right direction?

Thanks in advance :)
Thomas

I know it's been a while since Adam's (only) reply here. So, just in case: as Adam pointed out, if you think about it, a Kinesis Analytics stream can be an "infinite" input, so you need to tell it where to stop; i.e. "aggregate the last minute (or the last hour) of my stream's data". In the example below, it aggregates the stream's incoming data up to the specified minute or hour.

Note: remember that you first need to create a stream with the same structure (number of columns returned and their data types), and then run the "INSERT ... SELECT" into that new stream by creating a pump, which is the process that scans the incoming data and returns the results (inserting them into the stream created in the first step).

Example:
-- ** Aggregate (COUNT, AVG, etc.) + Tumbling Time Window **
-- Performs a function on the aggregated rows over a tumbling time window for a specified column.
-- .----------. .----------. .----------.
-- | SOURCE | | INSERT | | DESTIN. |
-- Source-->| STREAM |-->| & SELECT |-->| STREAM |-->Destination
-- | | | (PUMP) | | |
-- '----------' '----------' '----------'
-- STREAM (in-application): a continuously updated entity that you can SELECT from and INSERT into like a TABLE
-- PUMP: an entity used to continuously 'SELECT ... FROM' a source STREAM, and INSERT SQL results into an output STREAM
-- Create output stream, which can be used to send to a destination
CREATE OR REPLACE STREAM DESTINATION_SQL_STREAM (ingest_time TIMESTAMP, vendorid int, count_vs_time int);
CREATE OR REPLACE PUMP "STREAM_PUMP" AS INSERT INTO "DESTINATION_SQL_STREAM"
-- Query 1):
-- Group by VendorID over the last 60 seconds of the stream.
SELECT STREAM STEP("SOURCE_SQL_STREAM_001".ROWTIME BY INTERVAL '60' SECOND) AS ingest_time, "vendorid", COUNT(*) AS count_vs_time
FROM "SOURCE_SQL_STREAM_001"
GROUP BY STEP("SOURCE_SQL_STREAM_001".ROWTIME BY INTERVAL '60' SECOND), "vendorid";
--Query 2)
-- Group by VendorID and count, over the last hour of the stream.
CREATE OR REPLACE STREAM DESTINATION_SQL_STREAM (hour_range TIMESTAMP, vendorid int, count_last_hr int);
CREATE OR REPLACE PUMP "STREAM_PUMP" AS INSERT INTO "DESTINATION_SQL_STREAM"
SELECT STREAM FLOOR("SOURCE_SQL_STREAM_001".ROWTIME TO HOUR) AS hour_range, "vendorid", COUNT(*) as count_last_hr
FROM "SOURCE_SQL_STREAM_001"
GROUP BY FLOOR("SOURCE_SQL_STREAM_001".ROWTIME TO HOUR), "vendorid";
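Applying Query 2's FLOOR pattern back to the original question's schema might look like the following sketch (untested; it assumes the in-application source stream is "SOURCE_SQL_STREAM_001" with the "hubId" and "fid" columns from the question, and it buckets on the monotonic ROWTIME column rather than the "datetime" payload field, which is exactly what the "monotonic expression" error is asking for):

```sql
-- Hourly visitors/pageviews per hubId, grouped on the monotonic ROWTIME.
CREATE OR REPLACE STREAM "bore_agg" (
    profilsite_id SMALLINT,
    datetime      TIMESTAMP,
    visitors      INTEGER,
    pageviews     INTEGER);

CREATE OR REPLACE PUMP "AGG_PUMP" AS INSERT INTO "bore_agg"
SELECT STREAM
    "hubId" AS profilsite_id,
    FLOOR("SOURCE_SQL_STREAM_001".ROWTIME TO HOUR) AS datetime,
    -- Depending on the service version, COUNT(DISTINCT ...) in a windowed
    -- aggregate may need to be replaced by an approximate distinct-count
    -- function such as COUNT_DISTINCT_ITEMS_TUMBLING.
    COUNT(DISTINCT "fid") AS visitors,
    COUNT(*) AS pageviews
FROM "SOURCE_SQL_STREAM_001"
GROUP BY "hubId", FLOOR("SOURCE_SQL_STREAM_001".ROWTIME TO HOUR);
```

Note that the `WHERE timestamp >= CURDATE()` filter from the MySQL version is no longer needed: the tumbling window itself restricts each emitted row to the data of the hour being closed.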
Regards,
Carlos.
Here's something I just learned: basically, think of the Kinesis Analytics GROUP BY as a kind of gatekeeper. When you group, it needs to know when a group ends. For example, given 1, 1, 1, 2, 2, 3: for lack of a better word, the GROUP BY can "release" the group of 1s once it sees that the next value is 2. So if you group by STEP(ROWTIME BY INTERVAL '1' SECOND), you will get one row per second; if you group by STEP(ROWTIME BY INTERVAL '1' HOUR), you will get one row per hour, i.e. when the timestamp rolls over into the next hour.
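As a minimal sketch of that "release" behavior (hypothetical "vendorid" column, per-minute buckets):

```sql
-- Each (minute, vendorid) group is only emitted once ROWTIME advances
-- past the end of that minute, because STEP(ROWTIME ...) is monotonic.
SELECT STREAM
    STEP("SOURCE_SQL_STREAM_001".ROWTIME BY INTERVAL '1' MINUTE) AS minute_bucket,
    "vendorid",
    COUNT(*) AS events_in_minute
FROM "SOURCE_SQL_STREAM_001"
GROUP BY STEP("SOURCE_SQL_STREAM_001".ROWTIME BY INTERVAL '1' MINUTE), "vendorid";
```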