Regex: parsing k6 output to extract specific information


I'm trying to extract data from the k6 output:

The data comes in the format above, and I'm trying to find a way to pull out each of those lines with only the values. For example:

http_req_duration: 197.41ms, 70.32ms, 91.56ms, 619.44ms, 288.2ms, 326.23ms

I have to do this for roughly 50-100 files, and I'd like to find a regex or some similarly quick approach that doesn't require writing much code. Is that possible?

Here's a simple Python solution:

import re

FIELD = re.compile(r"(\w+)\.*:(.*)", re.DOTALL)  # split the line to name:value
VALUES = re.compile(r"(?<==).*?(?=\s|$)")  # match individual values from http_req_* fields

# open the input file `k6_input.log` for reading and `k6_parsed.log` for writing
with open("k6_input.log", "r") as f_in, open("k6_parsed.log", "w") as f_out:
    for line in f_in:  # read the input file line by line
        field = FIELD.match(line)  # first match all <field_name>...:<values> fields
        if field:
            name = field.group(1)  # get the field name from the first capture group
            f_out.write(name + ": ")  # write the field name to the output file
            value = field.group(2)  # get the field value from the second capture group
            if name[:9] == "http_req_":  # parse out only http_req_* fields
                f_out.write(", ".join(VALUES.findall(value)) + "\n")  # extract the values
            else:  # verbatim copy of other fields
                f_out.write(value)
        else:  # encountered unrecognizable field, just copy the line
            f_out.write(line)
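
To see what those two patterns actually capture, here is a small standalone check. The sample line is an assumption based on the usual k6 end-of-test summary layout (field name padded with dots, then avg=/min=/med=/max=/p(90)=/p(95)= pairs), so adjust it to match your real output:

import re

FIELD = re.compile(r"(\w+)\.*:(.*)", re.DOTALL)  # same patterns as above
VALUES = re.compile(r"(?<==).*?(?=\s|$)")

# hypothetical sample line in the typical k6 summary layout
sample = "http_req_duration.........: avg=197.41ms min=70.32ms med=91.56ms max=619.44ms p(90)=288.2ms p(95)=326.23ms"

field = FIELD.match(sample)
print(field.group(1))                  # -> http_req_duration
print(VALUES.findall(field.group(2)))  # -> ['197.41ms', '70.32ms', '91.56ms', '619.44ms', '288.2ms', '326.23ms']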
From the comments:

What language do you want to process the files in?

@zwer I'm not tied to any particular language - the script could be written in Java, C#, or even Python, Perl, or JavaScript.

Wait, why don't you export the data to JSON instead of grabbing it from the standard output? Then you wouldn't have to deal with parsing details at all and could shape the data any way you want...

The JSON output provides far too much data, most of which is redundant for what I'm trying to do :-(

The problem here is that there are special cases depending on the data type - for example, how should data_sent, http_reqs and vus_max be converted?
For the sample k6 summary, the resulting k6_parsed.log looks like this:

data_received: 246 kB 21 kB/s
data_sent: 174 kB 15 kB/s
http_req_blocked: 26.24ms, 0s, 13.5ms, 145.27ms, 61.04ms, 70.04ms
http_req_connecting: 23.96ms, 0s, 12ms, 145.27ms, 57.03ms, 66.04ms
http_req_duration: 197.41ms, 70.32ms, 91.56ms, 619.44ms, 288.2ms, 326.23ms
http_req_receiving: 141.82µs, 0s, 0s, 1ms, 1ms, 1ms
http_req_sending: 8.15ms, 0s, 0s, 334.23ms, 1ms, 1ms
http_req_waiting: 189.12ms, 70.04ms, 91.06ms, 343.42ms, 282.2ms, 309.22ms
http_reqs: 190 16.054553/s
iterations: 5 0.422488/s
vus: 200 min=200 max=200
vus_max: 200 min=200 max=200
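
Since the question mentions 50-100 files, the same logic can be wrapped in a function and run over a whole directory. This is only a sketch under the assumption that the logs sit in a single folder and end in .log (the k6_logs directory name and the *.log / .parsed naming are made up for the example):

import re
from pathlib import Path

FIELD = re.compile(r"(\w+)\.*:(.*)", re.DOTALL)  # split the line to name:value
VALUES = re.compile(r"(?<==).*?(?=\s|$)")        # match individual values from http_req_* fields

def parse_k6_log(src: Path, dst: Path) -> None:
    """Copy src to dst, rewriting http_req_* lines to comma-separated values."""
    with src.open("r") as f_in, dst.open("w") as f_out:
        for line in f_in:
            field = FIELD.match(line)
            if not field:            # unrecognizable line, copy it verbatim
                f_out.write(line)
                continue
            name, value = field.group(1), field.group(2)
            f_out.write(name + ": ")
            if name.startswith("http_req_"):  # extract only the values for http_req_* fields
                f_out.write(", ".join(VALUES.findall(value)) + "\n")
            else:                             # verbatim copy of other fields
                f_out.write(value)

# hypothetical input directory and output naming, e.g. run1.log -> run1.parsed
for src in Path("k6_logs").glob("*.log"):
    parse_k6_log(src, src.with_suffix(".parsed"))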