Parsing Apache logs with bash on Linux


I want to parse an Apache log file, for example:

1.1.1.1 - - [12/Dec/2019:18:25:11 +0100] "GET /endpoint1/ HTTP/1.1" 200 4263 "-" "Mozilla/5.0 (Windows NT 6.0; rv:34.0) Gecko/20100101 Firefox/34.0" "-"
1.1.1.1 - - [13/Dec/2019:18:25:11 +0100] "GET /endpoint1/ HTTP/1.1" 200 4263 "-" "Mozilla/5.0 (Windows NT 6.0; rv:34.0) Gecko/20100101 Firefox/34.0" "-"
2.2.2.2 - - [13/Dec/2019:18:27:11 +0100] "GET /endpoint1/ HTTP/1.1" 200 4263 "-" "Mozilla/5.0 (Windows NT 6.0; rv:34.0) Gecko/20100101 Firefox/34.0" "-"
2.2.2.2 - - [13/Jan/2020:17:15:13 +0100] "GET /endpoint2/ HTTP/1.1" 200 4263 "-" "Mozilla/5.0 (Windows NT 6.0; rv:34.0) Gecko/20100101 Firefox/34.0" "-"
3.3.3.3 - - [13/Jan/2020:17:15:13 +0100] "GET /endpoint2/ HTTP/1.1" 200 4263 "-" "Mozilla/5.0 (Windows NT 6.0; rv:34.0) Gecko/20100101 Firefox/34.0" "-"
1.1.1.1 - - [13/Feb/2020:17:15:13 +0100] "GET /endpoint2/ HTTP/1.1" 200 4263 "-" "Mozilla/5.0 (Windows NT 6.0; rv:34.0) Gecko/20100101 Firefox/34.0" "-"
4.4.4.4 - - [13/Feb/2020:17:15:13 +0100] "GET /endpoint2/ HTTP/1.1" 200 4263 "-" "Mozilla/5.0 (Windows NT 6.0; rv:34.0) Gecko/20100101 Firefox/34.0" "-"
4.4.4.4 - - [13/Feb/2020:17:15:13 +0100] "GET /endpoint2/ HTTP/1.1" 200 4263 "-" "Mozilla/5.0 (Windows NT 6.0; rv:34.0) Gecko/20100101 Firefox/34.0" "-"
4.4.4.4 - - [13/Feb/2020:17:15:13 +0100] "GET /endpoint2/ HTTP/1.1" 200 4263 "-" "Mozilla/5.0 (Windows NT 6.0; rv:34.0) Gecko/20100101 Firefox/34.0" "-"
I need to get the list of client IPs that visited in each month. I have something like this:

awk '{print $1,$4}' access.log | grep Dec | cut -d" " -f1 | uniq -c
but this is wrong, because it counts visits per day.
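For reference, the reason the attempt above counts runs per day is that `uniq -c` only collapses adjacent identical lines. Building a "month year IP" key and sorting before counting gives monthly totals. A minimal sketch, with a few of the sample lines inlined for illustration:

```shell
# split($4, t, "[/:]") breaks "[12/Dec/2019:18:25:11" apart, so t[2] is the
# month and t[3] the year; sort makes identical keys adjacent for uniq -c.
printf '%s\n' \
  '1.1.1.1 - - [12/Dec/2019:18:25:11 +0100] "GET /endpoint1/ HTTP/1.1" 200 4263' \
  '1.1.1.1 - - [13/Dec/2019:18:25:11 +0100] "GET /endpoint1/ HTTP/1.1" 200 4263' \
  '2.2.2.2 - - [13/Jan/2020:17:15:13 +0100] "GET /endpoint2/ HTTP/1.1" 200 4263' |
awk '{ split($4, t, "[/:]"); print t[2], t[3], $1 }' | sort | uniq -c
```

This prints one line per (month, IP) pair, prefixed with the visit count.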

The expected result looks like this (indentation is not important):

Dec 2019
2 1.1.1.1
1 2.2.2.2

Jan 2020
1 2.2.2.2
1 3.3.3.3

Feb 2020
1 1.1.1.1
3 4.4.4.4

where 2 is the total number of visits from IP 1.1.1.1 in December 2019.


Could you suggest an approach?

Although your sample expected output doesn't quite seem to match the shown samples, based on the shown sample output and the description, could you please try the following. Also, since this is a log file with a fixed pattern, I am using awk's field-separator approach.

awk -F':| |-|/+|]' '
{
  ind[$7 OFS $8 OFS $1]++
  value[$7 OFS $8 OFS $1]=$1
}
END{
  for(i in value){
    split(i,arr," ")
    print arr[1],arr[2] ORS value[i],ind[i]
  }
}' Input_file
Explanation: adding a detailed explanation of the above.

awk -F':| |-|/+|]' '                             ##Starting awk program from here and setting field separators as : space - / ] here.
{
  ind[$7 OFS $8 OFS $1]++                        ##Creating ind array whose index is 7th 8th and 1st field and keep increasing value with 1 here.
  value[$7 OFS $8 OFS $1]=$1                     ##Creating value with index of 7th, 8th and 1st field and its value is 1st field.
}
END{                                             ##Starting END block of this program from here.
  for(i in value){                               ##Traversing through value elements here.
    split(i,arr," ")                             ##Splitting i into array arr with delimiter as space here.
    print arr[1],arr[2] ORS value[i],ind[i]      ##Printing 1st and 2nd element of arr with ORS(new line) and array value and ind value here.
  }
}' Input_file                                    ##Mentioning Input_file name here.
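If it helps to see why the program reads `$7`, `$8` and `$1`: with that separator list, the doubled `- -` in each log line produces empty fields, which pushes the month into `$7` and the year into `$8`. One way to check, on a single sample line:

```shell
# The separators ':', ' ', '-', '/' and ']' leave the IP in $1, the month in $7
# and the year in $8 ($2-$5 are the empty fields between "- -" separators).
echo '1.1.1.1 - - [12/Dec/2019:18:25:11 +0100] "GET /endpoint1/ HTTP/1.1" 200 4263' |
awk -F':| |-|/+|]' '{ print "$1=" $1, "$7=" $7, "$8=" $8 }'
# prints: $1=1.1.1.1 $7=Dec $8=2019
```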
Try this.

Shell:

#!/usr/bin/env bash
LOG_FILE=$1

#regex to find mmm/yyyy
dateUniq=`grep -oP '(Jan|Feb|Mar|Apr|May|Jun|Jul|Aug|Sep|Oct|Nov|Dec)\/\d{4}' $LOG_FILE | sort | uniq`


for i in $dateUniq
do  
    #output mmm yyyy
    echo $i | sed 's/\// /g'
    
    #regex to find ip
    ipUniq=`grep $i $LOG_FILE | grep -oP '(25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)\.(25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)\.(25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)\.(25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)'  | sort | uniq`
    
    for x in $ipUniq
    do  
        count=`grep $i $LOG_FILE |grep -c $x`
        #output count ip
        echo $count $x
    done
    echo
done
Output:

Dec 2019
2 1.1.1.1
1 2.2.2.2

Feb 2020
1 1.1.1.1
3 4.4.4.4

Jan 2020
1 2.2.2.2
1 3.3.3.3
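A portability note from the comment thread: the `-P` (Perl regex) option used above is not available in macOS's BSD grep. A hedged, PCRE-free variant of the date extraction using `-E`, with two sample lines inlined and `sample.log` standing in for the script's `$LOG_FILE`:

```shell
printf '%s\n' \
  '1.1.1.1 - - [12/Dec/2019:18:25:11 +0100] "GET /endpoint1/ HTTP/1.1" 200 4263' \
  '2.2.2.2 - - [13/Jan/2020:17:15:13 +0100] "GET /endpoint2/ HTTP/1.1" 200 4263' \
  > sample.log
# ERE equivalent of the -P pattern: \d{4} becomes [0-9]{4}; sort -u replaces sort | uniq
grep -oE '(Jan|Feb|Mar|Apr|May|Jun|Jul|Aug|Sep|Oct|Nov|Dec)/[0-9]{4}' sample.log | sort -u
# prints "Dec/2019" and "Jan/2020", one per line
```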

One for GNU awk, which outputs in the order the data was fed in (i.e., time-series data such as log records should be output in that order):


Comments:

"Thanks for the edit, but it looks like the sample input and the sample expected output are out of sync. For example, the output has 2019 and 2020, but the sample input doesn't."
"Sure, it's just an example of the output :)"
"I see, but for future users who find this thread it could be helpful, cheers. You could also look at my answer; we can discuss it in its comment section too. Thanks."
"I see your point and changed the sample and the expected result."
"The number of records in the expected output (10) doesn't match the actual number in the sample data (9)."
"Thanks, I tried your example but it doesn't seem to work as expected :("
"@Andrii, I have edited as per your samples, could you check it now :)"
"Thanks! I extended the sample and the expected result, but it seems to count wrong :("
"@Andrii, edited again, please check now and let me know."
"Thanks a lot, it works! But I don't know how to get the output sorted by month/year, since I don't quite understand how it works :) If you have the time and the inclination, could you add a small explanation of your awk command? I think it would help a lot of users! Thanks a lot."
"It seems the -P option (Perl regexp) doesn't work on macOS :( I think this would also work for ipUniq: grep $i $LOG_FILE | grep -oP '\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3}' | sort | uniq #regex to find ip"
"Thank you! Is there also a way to sort it by visits? For example: Feb 2020, then 4.4.4.4 3, then 1.1.1.1 1."
"If you uncomment and edit it to: PROCINFO["sorted_in"]="@val_num_desc"."
"Yes, that seems to work."
$ gawk '                     # using GNU awk
BEGIN {
    a[""][""]                # initialize a 2D array
}
{
    split($4,t,/[/:]/)       # split datetime 
    my=t[2] OFS t[3]         # my=month year
    if(!(my in mye)) {       # if current my unseen
        mye[my]=++myi        # update month year exists array with new index
        mya[myi]=my          # chronology is made
    }
    a[mye[my]][$1]++         # update record to a hash
}
END {                        # in the end
    # PROCINFO["sorted_in"]="@val_num_desc"  # this may work for ordering visits
    for(i=1;i<=myi;i++) {    # in fed order 
        print mya[i]         # print month year
        for(j in a[i])       # then related ips in no particular order
            print j,a[i][j]  # output ip and count
    }
}' file
Dec 2019
1.1.1.1 2
2.2.2.2 1
Jan 2020
2.2.2.2 1
3.3.3.3 1
Feb 2020
1.1.1.1 1
4.4.4.4 3