Apache Flink: GROUP BY with a time-windowed join

The two streams are shown in Table 1 and Table 2. We know that a GROUP BY with a regular join will not work, so we have to use a time-windowed join. Here is what my Flink SQL looks like:
SELECT
    a.account account,
    SUM(a.value) + SUM(b.value),
    UNIX_TIMESTAMP(TUMBLE_START(a.producer_timestamp, INTERVAL '3' MINUTE))
FROM
    (SELECT
        account,
        value,
        producer_timestamp
    FROM
        table1) a,
    (SELECT
        account,
        value,
        producer_timestamp
    FROM
        table2) b
WHERE
    a.account = b.account AND
    a.producer_timestamp BETWEEN b.producer_timestamp - INTERVAL '3' MINUTE AND b.producer_timestamp
GROUP BY
    a.account,
    TUMBLE(a.producer_timestamp, INTERVAL '3' MINUTE)
But I still get this error from Flink:
Rowtime attributes must not be in the input rows of a regular join. As a workaround you can cast the time attributes of input tables to TIMESTAMP before.
Please check the documentation for the set of currently supported SQL features.
at org.apache.flink.table.api.TableEnvironment.runVolcanoPlanner(TableEnvironment.scala:450)
at org.apache.flink.table.api.TableEnvironment.optimizePhysicalPlan(TableEnvironment.scala:369)
at org.apache.flink.table.api.StreamTableEnvironment.optimize(StreamTableEnvironment.scala:814)
at org.apache.flink.table.api.StreamTableEnvironment.translate(StreamTableEnvironment.scala:860)
at org.apache.flink.table.api.StreamTableEnvironment.writeToSink(StreamTableEnvironment.scala:344)
at org.apache.flink.table.api.TableEnvironment.insertInto(TableEnvironment.scala:1048)
at org.apache.flink.table.api.TableEnvironment.sqlUpdate(TableEnvironment.scala:962)
at org.apache.flink.table.api.TableEnvironment.sqlUpdate(TableEnvironment.scala:922)
....
I think I am using a time-windowed join just as this documentation describes: . But Flink tells me this is a regular join. Is there a mistake I haven't noticed?

Can you share the optimized plan? Use TableEnvironment.explain()?

You are right, it should not be interpreted as a regular join. But perhaps the tumbling window takes higher priority.
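This error usually means the planner treated the join as a regular join, which invalidates the rowtime attribute before the window aggregation can use it. One common workaround (a sketch under the assumption that the restructuring is what the planner needs here, not a verified fix for this exact query) is to perform the time-windowed join in an inner query that forwards a single rowtime column, and apply TUMBLE only in the outer query. Note that over the joined rows, SUM(a.value) + SUM(b.value) equals SUM(a.value + b.value), so the aggregate can be folded into one column:

```sql
-- Sketch: join first (time-windowed), then window-aggregate the join result.
-- Table and column names are taken from the question; the restructuring
-- itself is an assumption, not the confirmed fix.
SELECT
    account,
    SUM(total_value),
    UNIX_TIMESTAMP(TUMBLE_START(ts, INTERVAL '3' MINUTE))
FROM (
    SELECT
        a.account AS account,
        a.value + b.value AS total_value,
        a.producer_timestamp AS ts  -- forward a's rowtime for the outer TUMBLE
    FROM table1 a, table2 b
    WHERE
        a.account = b.account AND
        a.producer_timestamp BETWEEN b.producer_timestamp - INTERVAL '3' MINUTE
                                 AND b.producer_timestamp
)
GROUP BY
    account,
    TUMBLE(ts, INTERVAL '3' MINUTE)
```

The key point is that only one rowtime attribute survives the join (here `a.producer_timestamp`); if both rowtime columns are forwarded, or if the join is not recognized as time-windowed, the planner falls back to a regular join and raises the same error.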