
Python Elasticsearch aggregations to a DataFrame


I am working with some ElasticSearch data and I want to generate tables from the aggregations, much as in Kibana. Sample output from the aggregation produced by the following code looks like this:

    s.aggs.bucket("name1", "terms", field="field1").bucket(
        "name2", "terms", field="innerField1"
    ).bucket("name3", "terms", field="InnerAgg1")
    response = s.execute()
    resp_dict = response.aggregations.name1.buckets




{
    "key": "Locationx",
    "doc_count": 12,
    "name2": {
        "doc_count_error_upper_bound": 0,
        "sum_other_doc_count": 0,
        "buckets": [{
            "key": "Sub-Loc1",
            "doc_count": 1,
            "name3": {
                "doc_count_error_upper_bound": 0,
                "sum_other_doc_count": 0,
                "buckets": [{
                    "key": "super-Loc1",
                    "doc_count": 1
                }]
            }
        }, {
            "key": "Sub-Loc2",
            "doc_count": 1,
            "name3": {
                "doc_count_error_upper_bound": 0,
                "sum_other_doc_count": 0,
                "buckets": [{
                    "key": "super-Loc1",
                    "doc_count": 1
                }]
            }
        }]
    }
}
In this case, the expected output is a flat table, one row per innermost bucket, presumably along these lines:
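
    name1        name2       name3        doc_count
    Locationx    Sub-Loc1    super-Loc1   1
    Locationx    Sub-Loc2    super-Loc1   1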

Now, I have tried multiple approaches; briefly, here is what went wrong with each:

Pandasticsearch = fails completely even with just a single dictionary. The dictionary never gets created because it struggles with the keys, even when each dictionary is handled separately:

    from pandasticsearch import Select

    for d in resp_dict:
        x = d.to_dict()
        pandas_df = Select.from_dict(x).to_pandas()
        print(pandas_df)

In particular, the error received relates to the fact that no dictionary is built, so ['take'] is not a key.

Pandas (pd.DataFrame.from_records()) = only gave me the first aggregation, with one column containing the inner dictionaries; applying pd.apply(pd.Series) to that column just produced another table of result dictionaries (see the sketch after this list).


StackOverflow posts = the dictionaries here look completely different from the ones used in the examples, and no amount of tinkering got me anywhere unless I thoroughly changed the input.
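
To illustrate the second point, a quick sketch of the from_records route (using the resp_dict from above, converted to plain dicts first):

    import pandas as pd

    records = [b.to_dict() for b in resp_dict]

    df = pd.DataFrame.from_records(records)
    # -> columns key, doc_count, name2 -- but "name2" still holds raw dicts
    inner = df["name2"].apply(pd.Series)
    # -> expands one level, yet "buckets" is again a column of lists of dicts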

I have been struggling with exactly the same problem, and I have come to believe it happens because the response is not made of normal dicts and lists, but of
elasticsearch_dsl.utils.AttrDict
and
elasticsearch_dsl.utils.AttrList
objects.

If you have
AttrDicts
inside an
AttrList
, you can do the following:

    resp_dict = response.aggregations.name1.buckets
    new_response = [i._d_ for i in resp_dict]

to get a list of normal dicts instead. That will probably play more nicely with other libraries.
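
For instance, once you have plain dicts, pandas' json_normalize (pd.json_normalize, pandas >= 0.25) can flatten the nested buckets in one call. A minimal sketch, with record_path and meta hard-coded for the three-level sample in the question:

    import pandas as pd

    # new_response is the list of plain dicts from above
    df = pd.json_normalize(
        new_response,
        # walk name1 bucket -> name2 buckets -> name3 buckets
        record_path=["name2", "buckets", "name3", "buckets"],
        # carry the outer keys along with every leaf bucket
        meta=["key", ["name2", "buckets", "key"]],
        # prefix leaf columns so they don't collide with the meta "key"
        record_prefix="name3.",
    )
    # columns: name3.key, name3.doc_count, key, name2.buckets.key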

Edit:

I wrote a recursive function that can handle at least some cases, though it has not been tested extensively and is not wrapped in a nice module or anything; it is just a script. The
one_lvl
function keeps track of all the siblings, and the siblings of the parents, in the tree in a dictionary called
tmp
, and recurses whenever it finds a new named aggregation. It makes a lot of assumptions about the structure of the data, which I am not sure are warranted in the general case.

The
lvl
suffix is necessary, I think, because you may have duplicate names, so that a
key
exists at several aggregation levels.

#!/usr/bin/env python3

from elasticsearch import Elasticsearch
from elasticsearch_dsl import Search, A
from elasticsearch_dsl.query import QueryString
import pandas as pd

PORT = 9250
TIMEOUT = 10000
USR = "someusr"
PW = "somepw"
HOST = "test.com"
INDEX = "my_index"
QUERY = "foobar"

client = Elasticsearch([HOST], port=PORT, http_auth=(USR, PW), timeout=TIMEOUT)

qs = QueryString(query=QUERY)
s = Search(using=client, index=INDEX).query(qs)

# We only need the aggregations, not the hits themselves
s = s.params(size=0)

# Three nested aggregations: monthly date histogram -> region -> county
agg = {
    "dates": A("date_histogram", field="date", interval="1M", time_zone="Europe/Berlin"),
    "region": A("terms", field="region", size=10),
    "county": A("terms", field="county", size=10)
}

s.aggs.bucket("dates", agg["dates"]). \
       bucket("region", agg["region"]). \
       bucket("county", agg["county"])

resp = s.execute()

# Convert the AttrDicts to plain dicts, as described above
data = {"buckets": [i._d_ for i in resp.aggregations.dates.buckets]}

# Keys we recurse into rather than copy into the current row
rec_list = ["buckets"] + [*agg.keys()]

def get_fields(i, lvl):
    # Copy every scalar field of this bucket, suffixed with its level
    return {(k + f"{lvl}"): v for k, v in i.items() if k not in rec_list}

def one_lvl(data, tmp, lvl, rows, maxlvl):
    # tmp accumulates everything seen on the way down the tree;
    # each innermost bucket then contributes one complete row
    tmp = {**tmp, **get_fields(data, lvl)}

    if "buckets" not in data:
        rows.append(tmp)

    for d in data:
        if d in ["buckets"]:
            for v, b in enumerate(data[d]):
                tmp = {**tmp, **get_fields(data[d][v], lvl)}
                for k in b:
                    if k in agg.keys():
                        # Found a named sub-aggregation: recurse one level down
                        one_lvl(data[d][v][k], tmp, lvl + 1, rows, maxlvl)
                    else:
                        if lvl == maxlvl:
                            # Innermost level: finish the row and store it
                            tmp = {**tmp, (k + f"{lvl}"): data[d][v][k]}
                            rows.append(tmp)

    return rows


rows = one_lvl(data, {}, 1, [], len(agg))
df = pd.DataFrame(rows)
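
With the three aggregations above, the rows should come out with level-suffixed columns roughly like key1/doc_count1, key2/doc_count2, key3/doc_count3 (plus bookkeeping fields such as sum_other_doc_count2), one row per innermost bucket.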


Honestly, this is the best that can be done at the moment. I would love to find a better way, but turning everything into a pile of dicts and working with those is how I got it done. I tried to find a recursive solution and edited the post accordingly.