
How do I fetch the complete data of a large dataset from Elasticsearch in Python? By default a "match all" search returns 10 hits, with a maximum of 10,000


self.conn.search(index=index_name, body={"query": {"match_all": {}}}, size=10000)
This currently works, but it will fail if there are a million documents in Elasticsearch. I even tried a scroll operation on an Elasticsearch index containing 136 records, but got only 36 of them back. Here is the code snippet:

resp = self.conn.search(index=index_name, body={"query": {"match_all": {}}}, size=1000, scroll='10s')
scroll_id = resp['_scroll_id']
resp = self.conn.scroll(scroll_id=scroll_id, scroll='5s')
This returns only 36 hits. How can I retrieve the full result set in one go, or stream the data?
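The snippet above calls `scroll()` only once, so it retrieves a single additional page and then stops; the scroll API must be called repeatedly until an empty batch comes back. Below is a minimal sketch of such a loop, assuming an `elasticsearch-py` style client (the function name `fetch_all_with_scroll` is illustrative, not part of the library):

```python
def fetch_all_with_scroll(client, index_name, page_size=1000, keep_alive="2m"):
    """Yield every hit in `index_name`, one scroll page at a time."""
    # Initial search opens the scroll context and returns the first page.
    resp = client.search(
        index=index_name,
        body={"query": {"match_all": {}}},
        size=page_size,
        scroll=keep_alive,
    )
    scroll_id = resp["_scroll_id"]
    hits = resp["hits"]["hits"]
    # Keep calling scroll() until a batch comes back empty.
    while hits:
        for hit in hits:
            yield hit
        resp = client.scroll(scroll_id=scroll_id, scroll=keep_alive)
        scroll_id = resp["_scroll_id"]
        hits = resp["hits"]["hits"]
    # Free the server-side scroll context when done.
    client.clear_scroll(scroll_id=scroll_id)
```

For large exports, `elasticsearch.helpers.scan` in the official Python client wraps essentially this loop for you.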