Parsing a JSON object with an array and mapping it to multiple pairs with Apache Spark in Java


I have been googling all day and could not find a straightforward answer, so I'm finally posting the question here.

I have a file containing line-delimited JSON objects:

{"device_id": "103b", "timestamp": 1436941050, "rooms": ["Office", "Foyer"]}
{"device_id": "103b", "timestamp": 1435677490, "rooms": ["Office", "Lab"]}
{"device_id": "103b", "timestamp": 1436673850, "rooms": ["Office", "Foyer"]}
My goal is to parse this file with Apache Spark in Java. Following a reference I found, I can so far successfully parse each line of JSON into a JavaRDD<JsonObject> using Gson.
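A minimal sketch of what that parsing step can look like, assuming Spark 1.x with Gson (the input path and variable names are placeholders):

import org.apache.spark.api.java.JavaRDD;
import org.apache.spark.api.java.function.Function;
import com.google.gson.JsonObject;
import com.google.gson.JsonParser;

// sc is an existing JavaSparkContext; each input line holds one JSON record
JavaRDD<JsonObject> records = sc.textFile("events.json")
    .map(new Function<String, JsonObject>() {
        public JsonObject call(String line) throws Exception {
            return new JsonParser().parse(line).getAsJsonObject();
        }
    });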

In other words, from this line:

{"device_id": "103b", "timestamp": 1436941050, "rooms": ["Office", "Foyer"]}
I want to create two Event objects in Spark:

obj1: deviceId = "103b", timestamp = 1436941050, room = "Office"
obj2: deviceId = "103b", timestamp = 1436941050, room = "Foyer"
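The Event class is not shown in the question; a minimal POJO consistent with the constructor used below might look like this (the field and getter names are assumptions):

// Spark may ship Event instances across the cluster, hence Serializable
public class Event implements java.io.Serializable {
    private final String deviceId;
    private final int timestamp;
    private final String room;

    public Event(String deviceId, int timestamp, String room) {
        this.deviceId = deviceId;
        this.timestamp = timestamp;
        this.room = room;
    }

    public String getDeviceId() { return deviceId; }
    public int getTimestamp() { return timestamp; }
    public String getRoom() { return room; }
}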
I did a bit of searching and tried flatMapValues, but no luck... it gives me an error:

JavaRDD<Event> events = records.flatMapValues(new Function<JsonObject, Iterable<Event>>() {
    public Iterable<Event> call(JsonObject json) throws Exception {
        JsonArray rooms = json.get("rooms").getAsJsonArray();
        List<Event> data = new LinkedList<Event>();
        for (JsonElement room : rooms) {
            data.add(new Event(json.get("device_id").getAsString(), json.get("timestamp").getAsInt(), room.toString()));
        }
        return data;
    }
});

I am very new to Spark and MapReduce, so I would really appreciate any help. Thanks in advance!

If you load the JSON data into a DataFrame, you can do this easily using explode.

Input:

{"device_id": "1", "timestamp": 1436941050, "rooms": ["Office", "Foyer"]}
{"device_id": "2", "timestamp": 1435677490, "rooms": ["Office", "Lab"]}
{"device_id": "3", "timestamp": 1436673850, "rooms": ["Office", "Foyer"]}
You will get:

+---------+------+----------+
|device_id|  room| timestamp|
+---------+------+----------+
|        1|Office|1436941050|
|        1| Foyer|1436941050|
|        2|Office|1435677490|
|        2|   Lab|1435677490|
|        3|Office|1436673850|
|        3| Foyer|1436673850|
+---------+------+----------+

Here is the code:

DataFrame df = sqlContext.read().json("/path/to/json");
// explode() produces one output row per element of the "rooms" array
df.select(
    df.col("device_id"),
    df.col("timestamp"),
    org.apache.spark.sql.functions.explode(df.col("rooms")).as("room")
).show();
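If you would rather stay with the RDD API from the question, note that flatMapValues exists only on JavaPairRDD, which is why calling it on a plain JavaRDD fails; flatMap is the method you want here. A sketch, assuming Spark 1.x (where FlatMapFunction.call returns an Iterable) and the Gson types from the question:

import java.util.LinkedList;
import java.util.List;
import org.apache.spark.api.java.function.FlatMapFunction;
import com.google.gson.JsonElement;
import com.google.gson.JsonObject;

JavaRDD<Event> events = records.flatMap(new FlatMapFunction<JsonObject, Event>() {
    public Iterable<Event> call(JsonObject json) throws Exception {
        List<Event> data = new LinkedList<Event>();
        // emit one Event per element of the "rooms" array
        for (JsonElement room : json.get("rooms").getAsJsonArray()) {
            data.add(new Event(json.get("device_id").getAsString(),
                               json.get("timestamp").getAsInt(),
                               room.getAsString()));  // getAsString() drops the quotes that toString() would keep
        }
        return data;
    }
});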
{"device_id": "1", "timestamp": 1436941050, "rooms": ["Office", "Foyer"]}
{"device_id": "2", "timestamp": 1435677490, "rooms": ["Office", "Lab"]}
{"device_id": "3", "timestamp": 1436673850, "rooms": ["Office", "Foyer"]}
+---------+------+----------+
|device_id|  room| timestamp|
+---------+------+----------+
|        1|Office|1436941050|
|        1| Foyer|1436941050|
|        2|Office|1435677490|
|        2|   Lab|1435677490|
|        3|Office|1436673850|
|        3| Foyer|1436673850|
+---------+------+----------+
val formatrecord = records.map(fromJson[mapClass](_))
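Note that fromJson is not a Spark built-in; it is presumably a small helper that deserializes one JSON string into mapClass via a JSON library such as json4s or Gson, with mapClass declared as a case class whose fields mirror the records (device_id, timestamp, rooms). You would still need to flatten the rooms array afterwards, for example with the explode approach shown above.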