Hadoop: how to generate a sequence number per column group in a Pig script?


Currently my data arrives as shown below, but I want a rank that restarts whenever the pid field changes. My script is as follows. I tried the RANK and DENSE RANK operators, but still did not get the desired output.

trans_c1 = LOAD '/mypath/data_file.csv' using PigStorage(',') as (date,Product_id);  



    (DATE,Product id)
    (2015-01-13T18:00:40.622+05:30,B00XT)
    (2015-01-13T18:00:40.622+05:30,B00XT)
    (2015-01-13T18:00:40.622+05:30,B00XT)
    (2015-01-13T18:00:40.622+05:30,B00XT)
    (2015-01-13T18:00:40.622+05:30,B00OZ)
    (2015-01-13T18:00:40.622+05:30,B00OZ)
    (2015-01-13T18:00:40.622+05:30,B00OZ)
    (2015-01-13T18:00:40.622+05:30,B00VB)
    (2015-01-13T18:00:40.622+05:30,B00VB)
    (2015-01-13T18:00:40.622+05:30,B00VB)
    (2015-01-13T18:00:40.622+05:30,B00VB)
The final output should look like this, where the rank column restarts from 1 whenever (Product_id) changes. Is this possible in Pig?

    (1,2015-01-13T18:00:40.622+05:30,B00XT)
    (2,2015-01-13T18:00:40.622+05:30,B00XT)
    (3,2015-01-13T18:00:40.622+05:30,B00XT)
    (4,2015-01-13T18:00:40.622+05:30,B00XT)
    (1,2015-01-13T18:00:40.622+05:30,B00OZ)
    (2,2015-01-13T18:00:40.622+05:30,B00OZ)
    (3,2015-01-13T18:00:40.622+05:30,B00OZ)
    (1,2015-01-13T18:00:40.622+05:30,B00VB)
    (2,2015-01-13T18:00:40.622+05:30,B00VB)
    (3,2015-01-13T18:00:40.622+05:30,B00VB)
    (4,2015-01-13T18:00:40.622+05:30,B00VB)

This problem can be solved with the Piggybank functions Stitch and Over. It can also be solved with DataFu's Enumerate function.
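Before the Pig scripts, the underlying semantics (a row number that restarts at 1 whenever pid changes) can be sketched in plain Python with itertools.groupby; this is only an illustration of what the scripts compute, with made-up sample rows:

```python
from itertools import groupby

# Hypothetical sample rows in (date, pid) form, already ordered by pid runs
rows = [
    ("2015-01-13T18:00:40.622+05:30", "B00XT"),
    ("2015-01-13T18:00:40.622+05:30", "B00XT"),
    ("2015-01-13T18:00:40.622+05:30", "B00OZ"),
    ("2015-01-13T18:00:40.622+05:30", "B00VB"),
    ("2015-01-13T18:00:40.622+05:30", "B00VB"),
]

# Number rows within each run of identical pid values, restarting at 1
# whenever pid changes -- the same idea as row_number() per group.
ranked = []
for pid, group in groupby(rows, key=lambda r: r[1]):
    for i, (date, _) in enumerate(group, start=1):
        ranked.append((i, date, pid))

for row in ranked:
    print(row)
```

Note that groupby only groups consecutive rows, so the input must already be ordered by pid, just as Pig's GROUP BY collects each pid's rows into one bag.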

Script using the Piggybank functions:

REGISTER <path to piggybank folder>/piggybank.jar;
DEFINE Stitch org.apache.pig.piggybank.evaluation.Stitch;
DEFINE Over org.apache.pig.piggybank.evaluation.Over('int');

input_data = LOAD 'data_file.csv' USING PigStorage(',') AS (date:chararray, pid:chararray);
-- Group by pid so that each pid's rows land in one bag
group_data = GROUP input_data BY pid;
-- Over(..., 'row_number') numbers the rows inside each bag;
-- Stitch glues those numbers back onto the original tuples
rank_grouped_data = FOREACH group_data GENERATE FLATTEN(Stitch(input_data, Over(input_data, 'row_number')));
display_data = FOREACH rank_grouped_data GENERATE stitched::result AS rank_number, stitched::date AS date, stitched::pid AS pid;
DUMP display_data;
Script using DataFu's Enumerate function:

REGISTER <path to pig libraries>/datafu-1.2.0.jar;
-- Enumerate('1') starts numbering at 1 instead of the default 0
DEFINE Enumerate datafu.pig.bags.Enumerate('1');

input_data = LOAD 'data_file.csv' USING PigStorage(',') AS (date:chararray, pid:chararray);
group_data = GROUP input_data BY pid;
-- Enumerate appends the row number as the last field of each tuple
data = FOREACH group_data GENERATE FLATTEN(Enumerate(input_data));
-- Reorder the fields so the rank ($2) comes first
display_data = FOREACH data GENERATE $2, $0, $1;
DUMP display_data;
Output:

(1,2015-01-13T18:00:40.622+05:30,B00OZ)
(2,2015-01-13T18:00:40.622+05:30,B00OZ)
(3,2015-01-13T18:00:40.622+05:30,B00OZ)
(1,2015-01-13T18:00:40.622+05:30,B00VB)
(2,2015-01-13T18:00:40.622+05:30,B00VB)
(3,2015-01-13T18:00:40.622+05:30,B00VB)
(4,2015-01-13T18:00:40.622+05:30,B00VB)
(1,2015-01-13T18:00:40.622+05:30,B00XT)
(2,2015-01-13T18:00:40.622+05:30,B00XT)
(3,2015-01-13T18:00:40.622+05:30,B00XT)
(4,2015-01-13T18:00:40.622+05:30,B00XT)