SQL: how do I show the same value for all rows in a partition?

Tags: sql, apache-spark, hive, alias, partition-by

I have a list of consumer purchases, and some consumers made multiple purchases within the time range. I want to populate a column with the location of each consumer's first purchase, but I get the following error:

Error in SQL statement: ParseException: 
mismatched input '(' expecting <EOF>(line 2, pos 25)

== SQL ==
SELECT consumer_id
       ,location OVER(partition BY table.consumer_id) AS first_purchase_site
---------------------^^^
FROM table
"I want to populate a column with the location of each consumer's first purchase"

Are you looking for the first value per consumer?

The query fails because there is no window function in front of the OVER() clause — location is a plain column. You need the window function FIRST_VALUE:
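As a runnable sketch of this answer (using SQLite's window-function support; the `purchases` table and its data are hypothetical stand-ins for the question's `table`, with the same column names):

```python
import sqlite3

# Hypothetical sample data; column names follow the question.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE purchases (
    consumer_id INTEGER,
    location TEXT,
    consumer_purchase_order_sequence INTEGER
);
INSERT INTO purchases VALUES
    (1, 'London', 1), (1, 'Paris',  2),
    (2, 'Berlin', 1), (2, 'Madrid', 2);
""")

# FIRST_VALUE(location) OVER (...) needs an ORDER BY inside the window
# to define which row counts as "first" within each partition.
rows = conn.execute("""
    SELECT consumer_id,
           location,
           FIRST_VALUE(location) OVER (
               PARTITION BY consumer_id
               ORDER BY consumer_purchase_order_sequence
           ) AS first_purchase_site
    FROM purchases
    ORDER BY consumer_id, consumer_purchase_order_sequence
""").fetchall()

for row in rows:
    print(row)
# Each consumer's rows all repeat that consumer's first location:
# (1, 'London', 'London'), (1, 'Paris', 'London'),
# (2, 'Berlin', 'Berlin'), (2, 'Madrid', 'Berlin')
```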


Use the column that sequences each consumer's purchases. This is hard to get right with a window calculation alone, so you can use a join instead:

SELECT
  table.consumer_id,
  table.location,
  a.first_purchase_site
FROM table
LEFT JOIN (
  SELECT consumer_id, location AS first_purchase_site
  FROM table
  WHERE consumer_purchase_order_sequence = 1
) a ON a.consumer_id = table.consumer_id
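A sketch of this join approach on the same hypothetical data (SQLite; the table is named `purchases` here because `table` itself is a reserved word in most dialects):

```python
import sqlite3

# Hypothetical sample data matching the question's columns.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE purchases (
    consumer_id INTEGER,
    location TEXT,
    consumer_purchase_order_sequence INTEGER
);
INSERT INTO purchases VALUES
    (1, 'London', 1), (1, 'Paris',  2),
    (2, 'Berlin', 1), (2, 'Madrid', 2);
""")

# The subquery keeps only each consumer's first purchase (sequence = 1);
# the LEFT JOIN then tags every row with that first location.
rows = conn.execute("""
    SELECT t.consumer_id,
           t.location,
           a.first_purchase_site
    FROM purchases t
    LEFT JOIN (SELECT consumer_id,
                      location AS first_purchase_site
               FROM purchases
               WHERE consumer_purchase_order_sequence = 1) a
           ON a.consumer_id = t.consumer_id
    ORDER BY t.consumer_id, t.consumer_purchase_order_sequence
""").fetchall()

print(rows)
# Same result as the window-function version:
# [(1, 'London', 'London'), (1, 'Paris', 'London'),
#  (2, 'Berlin', 'Berlin'), (2, 'Madrid', 'Berlin')]
```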

You should use PARTITION BY together with a window function; you cannot apply OVER directly to the location column. Alternatively, use ROW_NUMBER in a derived table:

SELECT consumer_id, location
FROM (SELECT a.*,
             ROW_NUMBER() OVER (PARTITION BY a.consumer_id
                                ORDER BY a.consumer_purchase_order_sequence) AS rn
      FROM table a) rs
WHERE rs.rn = 1
SELECT consumer_id,
       FIRST_VALUE(location) OVER (PARTITION BY consumer_id) AS first_purchase_site
FROM table;
SELECT DISTINCT consumer_id,
       FIRST_VALUE(location) OVER(PARTITION BY consumer_id ORDER BY consumer_purchase_order_sequence) AS first_purchase_site
FROM table
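With SELECT DISTINCT, the per-row window result collapses to one row per consumer. A runnable sketch of that variant on the same hypothetical data:

```python
import sqlite3

# Hypothetical sample data matching the question's columns.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE purchases (
    consumer_id INTEGER,
    location TEXT,
    consumer_purchase_order_sequence INTEGER
);
INSERT INTO purchases VALUES
    (1, 'London', 1), (1, 'Paris',  2),
    (2, 'Berlin', 1), (2, 'Madrid', 2);
""")

# The window function runs first; DISTINCT then removes the duplicate
# (consumer_id, first_purchase_site) pairs, leaving one row per consumer.
rows = conn.execute("""
    SELECT DISTINCT consumer_id,
           FIRST_VALUE(location) OVER (
               PARTITION BY consumer_id
               ORDER BY consumer_purchase_order_sequence
           ) AS first_purchase_site
    FROM purchases
    ORDER BY consumer_id
""").fetchall()

print(rows)
# [(1, 'London'), (2, 'Berlin')]
```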