
Transferring a CSV from Python to Elasticsearch with a CSV field as the document ID


I want to transfer the following CSV to Elasticsearch:

|hcode|hname|
|1|aaaa|
|2|bbbbb|
|3|ccccc|
|4|dddd|
|5|eeee|
|6|ffff|
and the hcode field needs to be used as the document ID. I am getting the following error:

  File "C:\Users\Namali\Anaconda3\lib\site-packages\elasticsearch\connection\base.py", line 181, in _raise_error
    status_code, error_message, additional_info

RequestError: RequestError(400, 'mapper_parsing_exception', 'failed to parse')
The Elasticsearch version is 7.1.1 and the Python version is 3.7.6. Python code:

import csv
import json

from elasticsearch import Elasticsearch

es = Elasticsearch([{'host': 'localhost', 'port': 9200}])

def csv_reader(file_obj, delimiter=','):
    reader_ = csv.reader(file_obj, delimiter=delimiter, quotechar='"')

    i = 1
    results = []
    for row in reader_:
        # try:
        #     es.index(index='hb_hotel_raw', doc_type='hb_hotel_raw', id=row[0],
        #              body=json.dump([row for row in reader_], file_obj))
        es.index(index='test', doc_type='test', id=row[0], body=json.dumps(row))
        # except:
        #     print("error")
        i = i + 1
        results.append(row)
        print(row)

if __name__ == "__main__":
    with open("D:\\namali\\rez\\data_mapping\\test.csv") as f_obj:
        csv_reader(f_obj)

First, omit doc_type in Elasticsearch 7. Second, you need to pass valid JSON to Elasticsearch. I edited your code as follows:

for row in reader_:
    _id = row[0].split("|")[1]
    text = row[0].split("|")[2]
    my_dict = {"hname" : text}
    es.index(index='test', id=_id, body=my_dict)
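
For reference, here is how that fix could look as a complete script. This is only a minimal sketch under the assumptions above: the file uses '|' as the delimiter with leading and trailing pipes as shown, the index is called 'test', and index_csv is just an illustrative helper name.

import csv

from elasticsearch import Elasticsearch

es = Elasticsearch([{'host': 'localhost', 'port': 9200}])

def index_csv(path, index_name='test'):
    # Hypothetical helper: reads the pipe-delimited file and indexes one
    # document per data row, using hcode as the document id.
    with open(path, newline='') as f:
        reader = csv.reader(f, delimiter='|')
        next(reader)  # skip the |hcode|hname| header row
        for row in reader:
            # A line like "|1|aaaa|" parses as ['', '1', 'aaaa', ''],
            # so row[1] is hcode and row[2] is hname.
            es.index(index=index_name, id=row[1], body={"hname": row[2]})

if __name__ == "__main__":
    index_csv("D:\\namali\\rez\\data_mapping\\test.csv")

Passing delimiter='|' to csv.reader avoids the manual row[0].split("|") step, and the dict body is serialized to JSON by the client, so json.dumps is no longer needed.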

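If the file is large, the rows can also be streamed through the bulk helper instead of calling es.index once per row. Again, this is only a sketch under the same assumptions (pipe-delimited file, index 'test'); generate_actions is an illustrative name.

import csv

from elasticsearch import Elasticsearch
from elasticsearch.helpers import bulk

es = Elasticsearch([{'host': 'localhost', 'port': 9200}])

def generate_actions(path, index_name='test'):
    # Yields one bulk action per data row, with hcode as _id and hname as the source.
    with open(path, newline='') as f:
        reader = csv.reader(f, delimiter='|')
        next(reader)  # skip the header row
        for row in reader:
            yield {
                "_index": index_name,
                "_id": row[1],
                "_source": {"hname": row[2]},
            }

if __name__ == "__main__":
    bulk(es, generate_actions("D:\\namali\\rez\\data_mapping\\test.csv"))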