
Java: How to loop a block of 5 variables into an ArrayList - the block occurs multiple times with different values


I need to parse log files into an ArrayList of ArrayLists. The regexes are working and I can get the correct results into variables or .csv output. The problem is that I need to manipulate the output: add values to entries where a condition is not true, and append other values based on an index [0] (file name) match between the original row and the row to be appended.
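By appending based on an index [0] match I mean something along these lines (sketch only; "existingRows" and "rowToAppend" are illustrative names, not from my code):

    for (List<String> existingRow : existingRows)
    {
        if (existingRow.get(0).equals(rowToAppend.get(0)))   //both rows keep the file name at index [0]
        {
            existingRow.addAll(rowToAppend.subList(1, rowToAppend.size()));   //append everything after the file name
            break;
        }
    }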

Each log file can have anywhere from 1 to 200 entries, depending on how many field-collection inputs there were. The log file entries are multi-line and variable, but the format is structured, so all the variations are known (n = 18 regexes - not all of them are relevant to the snippet below). I need to be able to manipulate the row contents based on those variations.

That means I need to iterate across individual rows of potentially unequal length (i.e., across the table) to edit and append, and also down through every row (i.e., along the table). So simple arrays do not work as well as ArrayLists here.
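Roughly, the structure I am after is one inner ArrayList per log entry inside an outer ArrayList, where the inner lists can end up with different lengths (a sketch; the values are placeholders in the same shape as my data):

    List<List<String>> coverage = new ArrayList<>();

    List<String> entry = new ArrayList<>();   //one inner list per log entry
    entry.add("AA-123-12345-SP1.SSF");        //index [0] = file name, used for matching later
    entry.add("TRUE");                        //"No matching base data found"
    entry.add("0");                           //total coverage %
    coverage.add(entry);                      //the next entry may be longer, e.g. with two base stations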

What I am successfully creating is an ArrayList of a single ArrayList (everything that should be separate rows goes into one inner ArrayList, which then goes into the parent ArrayList...).

Trying to get individual ArrayLists by moving "covArrayList = new ArrayList<String>(covArrayList);" between the "while ((corrLine..." and "for (String..." loops, or into the "if (file1Matcher.find())" block, returns multiple outputs per regex match and changes the order, so I cannot link each value back to a specific "fileName1".

FYI: I am using JDK 10. I will have to refactor so the program can run on JRE 8, but for development reasons I would rather do that later.

Here is a subset of my code, all inside the main method:

//arraylist of covArrayLists init:
    List<List<String>> coverage = new ArrayList<>();
//coverage arrayList init:
    List<String> covArrayList = new ArrayList<String>();
//log file Reader init:
    File corrFile = new File("D:\\Utilities\\Development\\Java\\HPGPSLogParser\\Correct_2015-10-13_10-51.txt");
    BufferedReader corrReader = new BufferedReader(new InputStreamReader(new FileInputStream(corrFile),"UTF-16LE"));
        //NOTE: PFO differential correction log files are encoded in UTF-16 LE
    String corrText = "";
    String corrLine = "";
//corrWriter init:
    File stateCSV = new File("D:\\Utilities\\Development\\Java\\HPGPSLogParser\\tcov.csv");
    BufferedWriter corrWriter = new BufferedWriter(new FileWriter(stateCSV, true));
    String coverageOutput = "";
    String processingOutput = "";
//regex variables:
        //Coverage Details regex
    Pattern fileName1 = Pattern.compile("Rover file: (?<fileName1>[A-Z]{2}-\\d{3}-\\d{5}-SP\\d\\.SSF)+");
    String firstFileName =  "";
    Pattern noBase = Pattern.compile("(?<noBase>No matching base data found)");
    String noBaseText =  "";
    Pattern totalCoverage = Pattern.compile("(?<totalCoverage>[\\d]{1,3})\\% total coverage");
    String totalCovText =  "";
    Pattern coverageBy = Pattern.compile("(?<coverageBy>[\\d]{1,3})+\\% coverage by (?<baseStation>\\b\\w+\\b\\.[zZ].*)+", Pattern.CASE_INSENSITIVE | Pattern.UNICODE_CASE);
    String covByPct =  "";
    String covByProvider =  "";

    try(corrReader)
    {
        while ((corrLine = corrReader.readLine())!=null)
        {
            corrText = corrLine.trim();
            String delim = " ";
            String[] words = corrLine.split(delim);
            covArrayList = new ArrayList<String>(covArrayList);
            for (String s : words)
            {
            //Coverage details regex search begin - write to coverageOutput
                Matcher file1Matcher = fileName1.matcher(corrText); 
                if(file1Matcher.find())
                {
                    firstFileName = file1Matcher.group("fileName1");
                    covArrayList.add(firstFileName);
                } //end if(file1Matcher)
                Matcher baseMatcher = noBase.matcher(corrText);
                if (baseMatcher.find()) 
                {
                    noBaseText = baseMatcher.group("noBase");
                    covArrayList.add("TRUE");
                } //end if(baseMatcher)
                Matcher totCovMatcher = totalCoverage.matcher(corrText);
                if(totCovMatcher.matches()) 
                {
                    totalCovText = totCovMatcher.group("totalCoverage");
                    covArrayList.add(totalCovText);
                } //end if(totCovMatcher)
                Matcher covByMatcher = coverageBy.matcher(corrText);
                if(covByMatcher.matches()) 
                {
                    covByPct = covByMatcher.group("coverageBy");
                    covArrayList.add(covByPct);
                    covByProvider = covByMatcher.group("baseStation");
                    covArrayList.add(covByProvider);
                } //end if(covByMatcher)
            } //end for(String)
        } //end while loop - regex searches & initial output file end
        coverage.add(covArrayList);
        processing.add(procArrayList);

        corrWriter.write(coverage.toString());
        corrWriter.flush();
        outWriter.write(processing.toString());
        outWriter.flush();
The catch/finally blocks are in my code, just not in this snippet.

Here is a snippet of a log file with the three potential variations for this section:

-------- Coverage Details: --------------
Rover file: AA-123-12345-SP1.SSF
Local time: 3/2/2015 4:06:14 PM to 3/2/2015 4:06:44 PM
0% total coverage.
No matching base data found.
Rover file: AA-123-12345-SP2.SSF
Local time: 2/17/2014 5:51:01 PM to 2/17/2014 6:18:57 PM
100% total coverage
4% coverage by guug04914003.zip
100% coverage by guug04914022.zip
Rover file: AA-123-12345-SP3.SSF
Local time: 2/17/2014 9:53:40 PM to 2/17/2014 10:45:59 PM
100% total coverage
100% coverage by guug04914044.zip
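From that snippet, the rows I am trying to end up with would look roughly like this (one inner ArrayList per Rover file; the values come straight from the snippet above, so the rows have unequal lengths):

    [AA-123-12345-SP1.SSF, TRUE, 0]
    [AA-123-12345-SP2.SSF, 100, 4, guug04914003.zip, 100, guug04914022.zip]
    [AA-123-12345-SP3.SSF, 100, 100, guug04914044.zip]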

NOTE: the line endings are not being recognized:

The closest match I could get to the log file's encoding is UTF-16LE; none of the other options came any closer.
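For reference, here is a sketch of opening the reader through java.nio with an explicit charset (same file path as above; StandardCharsets.UTF_16, without forcing LE, decodes the byte-order mark itself, which may handle the encoding and line endings better than my current setup):

    import java.io.BufferedReader;
    import java.io.IOException;
    import java.nio.charset.StandardCharsets;
    import java.nio.file.Files;
    import java.nio.file.Paths;

    //UTF_16 auto-detects the BOM; it falls back to big-endian if the file has none
    try (BufferedReader corrReader = Files.newBufferedReader(
            Paths.get("D:\\Utilities\\Development\\Java\\HPGPSLogParser\\Correct_2015-10-13_10-51.txt"),
            StandardCharsets.UTF_16))
    {
        String corrLine;
        while ((corrLine = corrReader.readLine()) != null)
        {
            //same regex matching as in the snippet above
        }
    }
    catch (IOException e)
    {
        e.printStackTrace();
    }

And for completeness, this is the variant where the new inner list is created inside the if (file1Matcher.find()) block: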
        try(corrReader)
    {
        while ((corrLine = corrReader.readLine())!=null)
        {
            corrText = corrLine.trim();
        //Coverage details regex search begin - write to coverageOutput
            Matcher file1Matcher = fileName1.matcher(corrText); 
            if(file1Matcher.find())
            {
                coverageOutput = new ArrayList<String>(); //in this version coverageOutput needs to be a List<String>, not the String declared earlier
                coverageOutput.add(file1Matcher.group("fileName1"));
                coverage.add(coverageOutput);
            } //end if(file1Matcher)

            Matcher baseMatcher = noBase.matcher(corrText);
            if (baseMatcher.find()) 
            {
                noBaseText = baseMatcher.group("noBase");
                noBaseText = "noBaseData";
                coverageOutput.add(noBaseText);
            } //end if(baseMatcher)
            Matcher totCovMatcher = totalCoverage.matcher(corrText);
            if(totCovMatcher.matches()) 
            {
                totalCovText = totCovMatcher.group("totalCoverage");
                coverageOutput.add(totalCovText);
            } //end if(totCovMatcher)
            Matcher covByMatcher = coverageBy.matcher(corrText);
            if(covByMatcher.matches()) 
            {
                covByPct = covByMatcher.group("coverageBy");
                covByProvider = covByMatcher.group("baseStation");
                coverageOutput.add(covByPct);
                coverageOutput.add(covByProvider);
            } //end if(covByMatcher)