How do I use jsonpath-ng arithmetic in Python?
The jsonpath-ng package claims to support basic arithmetic, but the parser does not accept arithmetic expressions. Here is one of them:
from jsonpath_ng import parse

jsonpath_expr = parse('$.objects[*].cow + $.objects[*].cat')
obj = {'objects': [
    {'cow': 2, 'cat': 3},
    {'cow': 4, 'cat': 6}
]}
values = [match.value for match in jsonpath_expr.find(obj)]
print(values)
This raises an error:
Traceback (most recent call last):
File "test.py", line 8, in <module>
jsonpath_expr = parse('$.objects[*].cow + $.objects[*].cat')
File "C:\Users\micha\AppData\Roaming\Python\Python38\site-packages\jsonpath_ng\parser.py", line 14, in parse
return JsonPathParser().parse(string)
File "C:\Users\micha\AppData\Roaming\Python\Python38\site-packages\jsonpath_ng\parser.py", line 32, in parse
return self.parse_token_stream(lexer.tokenize(string))
File "C:\Users\micha\AppData\Roaming\Python\Python38\site-packages\jsonpath_ng\parser.py", line 55, in parse_token_stream
return new_parser.parse(lexer = IteratorToTokenStream(token_iterator))
File "C:\Users\micha\AppData\Roaming\Python\Python38\site-packages\ply\yacc.py", line 333, in parse
return self.parseopt_notrack(input, lexer, debug, tracking, tokenfunc)
File "C:\Users\micha\AppData\Roaming\Python\Python38\site-packages\ply\yacc.py", line 1063, in parseopt_notrack
lookahead = get_token() # Get the next token
File "C:\Users\micha\AppData\Roaming\Python\Python38\site-packages\jsonpath_ng\parser.py", line 179, in token
return next(self.iterator)
File "C:\Users\micha\AppData\Roaming\Python\Python38\site-packages\jsonpath_ng\lexer.py", line 35, in tokenize
t = new_lexer.token()
File "C:\Users\micha\AppData\Roaming\Python\Python38\site-packages\ply\lex.py", line 386, in token
newtok = self.lexerrorf(tok)
File "C:\Users\micha\AppData\Roaming\Python\Python38\site-packages\jsonpath_ng\lexer.py", line 167, in t_error
raise JsonPathLexerError('Error on line %s, col %s: Unexpected character: %s ' % (t.lexer.lineno, t.lexpos - t.lexer.latest_newline, t.value[0]))
jsonpath_ng.lexer.JsonPathLexerError: Error on line 1, col 17: Unexpected character: +
Am I missing something? (I am using the latest version: 1.5.2)

You need to use the extended parser to make it work:
#from jsonpath_ng import jsonpath
from jsonpath_ng.ext import parser
jsonpath_expr = parser.parse('$.objects[*].cow + $.objects[*].cat')
obj = {'objects': [
    {'cow': 2, 'cat': 3},
    {'cow': 4, 'cat': 6}
]}
print([match.value for match in jsonpath_expr.find(obj)])
This prints: [5, 10]. So it does add the cow and cat values row by row.