Python: how do I find the number of IPs that pinged on a particular date?

I have a Python script to which I pass a log file and a URL. The output file contains the number of times each IP address accessed that URL.

#!/usr/bin/env python
# 
# Counts the IP addresses of a log file.
# 
# Assumption: the IP address is logged in the first column.
# Example line: 117.195.185.130 - - [06/Mar/2012:00:00:00 -0800] \
#    "GET /mysidebars/newtab.html HTTP/1.1" 404 0 - -
#

import sys

def urlcheck(line, url):
    '''Checks if the url is part of the log line.'''
    lsplit = line.split()
    if len(lsplit)<7:
        return False
    return url==lsplit[6]

def extract_ip(line):
    '''Extracts the IP address from the line.
       Currently it is assumed, that the IP address is logged in
       the first column and the columns are space separated.'''
    return line.split()[0]

def increase_count(ip_dict, ip_addr):
    '''Increases the count of the IP address.
       If an IP address is not in the given dictionary,
       it is initially created and the count is set to 1.'''
    if ip_addr in ip_dict:
        ip_dict[ip_addr] += 1
    else:
        ip_dict[ip_addr] = 1

def read_ips(infilename, url):
    '''Read the IP addresses from the file and store (count)
       them in a dictionary - returns the dictionary.'''
    res_dict = {}
    log_file = open(infilename)
    for line in log_file:
        if line.isspace():
            continue
        if not urlcheck(line, url):
            continue
        ip_addr = extract_ip(line)
        increase_count(res_dict, ip_addr)
    return res_dict

def write_ips(outfilename, ip_dict):
    '''Write out the count and the IP addresses.'''
    out_file = open(outfilename, "w")
    for ip_addr, count in ip_dict.iteritems():
        out_file.write("%s\t%5d\n" % (ip_addr, count))
    out_file.close()

def parse_cmd_line_args():
    '''Return the input file name, output file name and url.
       If there are not exactly three parameters,
       an error is printed and the program is exited.'''
    if len(sys.argv)!=4:
        print("Usage: %s [infilename] [outfilename] [url]" % sys.argv[0])
        sys.exit(1)
    return sys.argv[1], sys.argv[2], sys.argv[3]

def main():
    infilename, outfilename, url = parse_cmd_line_args()
    ip_dict = read_ips(infilename, url)
    write_ips(outfilename, ip_dict)

if __name__ == "__main__":
    main()
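
The title asks for counts on a specific date rather than (or in addition to) a specific URL. Here is a minimal sketch of a date filter in the same style as urlcheck, assuming the timestamp sits in the fourth space-separated column as in the example line above; the name datecheck and the date argument format ("06/Mar/2012") are illustrative:

def datecheck(line, date):
    '''Checks if the log line was recorded on the given date.
       Assumes the timestamp is the fourth space-separated column,
       e.g. [06/Mar/2012:00:00:00 -0800], and that date is passed
       as "06/Mar/2012".'''
    lsplit = line.split()
    if len(lsplit) < 4:
        return False
    return lsplit[3].startswith("[" + date)

Calling this alongside urlcheck inside read_ips (and skipping lines where it returns False) would restrict the per-IP counts to one day; len() of the returned dictionary would then be the number of distinct IPs for that date.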

What's wrong with grep, cut, sort and uniq?

grep "\[07/Mar/2012" logfile.txt | cut -d " " -f 1 | sort | uniq

Possible duplicate. Please stop using stackoverflow to write your entire script for you.
grep "\[07/Mar/2012" logfile.txt | cut -d " " -f 1 | sort | uniq