Apache Pig query for the set operation A-B


I have the following requirement.

I have a large file containing rows of JSON-formatted data:

{
    "_length": "88",
    "_id" : "1",
    "_store": {
        "meta": {
            "value": {
                "uid": "sam",
            }
        }
    }
}
{
    "_length": "22",
    "_id" : "2",
    "_store": {
        "meta": {
            "value": {
                "uid": "uncle",
            }
        }
    }
}

I have another file with the following content:

{
    "uid" : "sam",
    "zid" : "121212121"
}
{
    "uid" : "aborted",
    "zid" : "9989821"
}

Now I need to generate a new file from the first file containing all records whose uid does not appear in the second file.


I am new to Pig, and I would like to know which kinds of joins or set operations are supported for this.
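In relational terms, what is needed here is an anti-join: file A minus file B, keyed on uid. As a quick sanity check of the desired output (a minimal Python sketch, not Pig), applying that set difference to the sample data above:

```python
import json

# Records from the first file (same structure as the sample above)
records = [
    {"_length": "88", "_id": "1", "_store": {"meta": {"value": {"uid": "sam"}}}},
    {"_length": "22", "_id": "2", "_store": {"meta": {"value": {"uid": "uncle"}}}},
]

# The second file: rows whose uid should be excluded
lookup = [
    {"uid": "sam", "zid": "121212121"},
    {"uid": "aborted", "zid": "9989821"},
]
known_uids = {row["uid"] for row in lookup}

# Keep only records whose nested uid does NOT appear in the second file
result = [r for r in records
          if r["_store"]["meta"]["value"]["uid"] not in known_uids]

print(json.dumps(result))
```

Only the record with uid "uncle" survives, which is exactly the A-B behavior the question asks Pig to produce.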

I think elephant-bird can help you here. I have never tried anything quite like this, but since your data is nested JSON, you can use elephant-bird to load the two files into two relations, then join them to achieve your goal.

Here are a couple of links to help you get started with elephant-bird.


Here are the sample files along with the corresponding intermediate and final results:

cat ids_test.json
{"A":"a1","B":"a2"}

cat part-test
{"content":"both_A_a1_B_a2","meta":{"A":"a1","B":"a2"}}
{"content":"only_B_a2","meta":{"A":"","B":"a2"}}
{"content":"only_A_a1","meta":{"A":"a1","B":""}}
{"content":"both_A_b1_B_b2","meta":{"A":"b1","B":"b2"}}
{"content":"only_A_c1","meta":{"A":"c1","B":""}}

cat /tmp/j1/part-m-00000
{"user_data::json":{"meta":"{B=a2, A=a1}","content":"both_A_a1_B_a2"},"ids::json":{"B":"a2","A":"a1"}}
{"user_data::json":{"meta":"{B=a2, A=}","content":"only_B_a2"},"ids::json":null}
{"user_data::json":{"meta":"{B=, A=a1}","content":"only_A_a1"},"ids::json":{"B":"a2","A":"a1"}}
{"user_data::json":{"meta":"{B=b2, A=b1}","content":"both_A_b1_B_b2"},"ids::json":null}
{"user_data::json":{"meta":"{B=, A=c1}","content":"only_A_c1"},"ids::json":null}

cat /tmp/j1_filter/part-m-00000
{"user_data::json":{"meta":"{B=a2, A=}","content":"only_B_a2"},"ids::json":null}
{"user_data::json":{"meta":"{B=b2, A=b1}","content":"both_A_b1_B_b2"},"ids::json":null}
{"user_data::json":{"meta":"{B=, A=c1}","content":"only_A_c1"},"ids::json":null}

cat /tmp/j2/part-m-00000
{"J1_FILTER::user_data::json":{"meta":"{B=a2, A=}","content":"only_B_a2"},"J1_FILTER::ids::json":null,"ids::json":{"B":"a2","A":"a1"}}
{"J1_FILTER::user_data::json":{"meta":"{B=b2, A=b1}","content":"both_A_b1_B_b2"},"J1_FILTER::ids::json":null,"ids::json":null}
{"J1_FILTER::user_data::json":{"meta":"{B=, A=c1}","content":"only_A_c1"},"J1_FILTER::ids::json":null,"ids::json":null}

cat /tmp/results/part-m-00000
{"J1_FILTER::user_data::json":{"meta":"{B=b2, A=b1}","content":"both_A_b1_B_b2"}}
{"J1_FILTER::user_data::json":{"meta":"{B=, A=c1}","content":"only_A_c1"}}
Below is the script:

user_data = LOAD 'part-test' USING com.twitter.elephantbird.pig.load.JsonLoader('-nestedLoad') as (json:map[]);
ids = LOAD 'ids_test.json' USING com.twitter.elephantbird.pig.load.JsonLoader('-nestedLoad') as (json:map[]);
J1 = JOIN user_data BY json#'meta'#'A' LEFT OUTER, ids BY json#'A' USING 'replicated';

rmf /tmp/j1
STORE J1 INTO '/tmp/j1' USING JsonStorage;

J1_FILTER = FILTER J1 BY ids::json is null;

rmf /tmp/j1_filter
STORE J1_FILTER INTO '/tmp/j1_filter' USING JsonStorage;

J2 = JOIN J1_FILTER BY user_data::json#'meta'#'B' LEFT OUTER, ids BY json#'B' USING 'replicated';

rmf /tmp/j2
STORE J2 INTO '/tmp/j2' USING JsonStorage;

J2_FILTER = FILTER J2 BY ids::json is null;

RESULTS = FOREACH J2_FILTER GENERATE J1_FILTER::user_data::json;
rmf /tmp/results
STORE RESULTS INTO '/tmp/results' USING JsonStorage;
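The core trick in the script above is that a LEFT OUTER join followed by an `is null` filter on the right-hand side acts as an anti-join, and the script applies it once per key (A, then B). As a rough illustration (a Python simulation of the pipeline, not Pig itself), the same two-step filter reproduces the /tmp/results content shown earlier:

```python
# Rows from part-test, reduced to the fields the joins use
user_data = [
    {"content": "both_A_a1_B_a2", "meta": {"A": "a1", "B": "a2"}},
    {"content": "only_B_a2",      "meta": {"A": "",   "B": "a2"}},
    {"content": "only_A_a1",      "meta": {"A": "a1", "B": ""}},
    {"content": "both_A_b1_B_b2", "meta": {"A": "b1", "B": "b2"}},
    {"content": "only_A_c1",      "meta": {"A": "c1", "B": ""}},
]
# Rows from ids_test.json
ids = [{"A": "a1", "B": "a2"}]

def anti_join(rows, ids, key):
    # LEFT OUTER join + "right side is null" filter: keep only the rows
    # whose meta[key] matches no row in ids.
    keys = {row[key] for row in ids}
    return [r for r in rows if r["meta"][key] not in keys]

j1_filter = anti_join(user_data, ids, "A")  # drop rows matched on A (J1 + J1_FILTER)
results = anti_join(j1_filter, ids, "B")    # drop rows matched on B (J2 + J2_FILTER)
print([r["content"] for r in results])
```

The intermediate `j1_filter` list matches the three rows in /tmp/j1_filter, and the final `results` list matches the two rows in /tmp/results.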

See here, I am already using elephant-bird, and I found that I needed two 'replicated' joins with filters to implement this scenario. Would you mind posting the code? It might help anyone working on joins in the future!