
Python: splitting a csv file by time interval


I have exported a Wireshark pcap file to csv. I need to split these csv files based on time intervals. The csv file has a "Time" column, and I want to split the file into 1-second intervals: the packets arriving in the first second go into one file, the packets arriving in the next second go into another file, and so on. If the input file is named AAA.csv, the split files should get the same name with a number appended at the end: AAA1.csv, ... AAA5.csv, etc. I am new to programming, so I'm not sure how to proceed from here. Please help. Thanks.


Here is an excerpt of the csv file covering two consecutive seconds:

"No.","Time","Time delta from previous displayed frame","Length","Source","Destination","Protocol","Info"
"100","23:39:52.634388","0.000502000","28","HuaweiTe_3a:d0:1a (8c:15:c7:3a:d0:1a) (TA)","Htc_9b:92:24 (ac:37:43:9b:92:24) (RA)","802.11","802.11 Block Ack, Flags=........"
"101","23:39:52.634393","0.000005000","102","Htc_9b:92:24","HuaweiTe_3a:d0:16","802.11","QoS Data, SN=45, FN=0, Flags=.p.....T"
"102","23:39:52.695277","0.060884000","28","Microsof_d2:8b:4f (30:59:b7:d2:8b:4f) (TA)","Sagemcom_28:38:64 (d0:6e:de:28:38:64) (RA)","802.11","802.11 Block Ack, Flags=........"
"103","23:39:52.695278","0.000001000","10","","Sagemcom_28:38:64 (d0:6e:de:28:38:64) (RA)","802.11","Clear-to-send, Flags=........"
"104","23:39:52.717845","0.022567000","16","HuaweiTe_3a:d0:1a (8c:15:c7:3a:d0:1a) (TA)","Htc_9b:92:24 (ac:37:43:9b:92:24) (RA)","802.11","Request-to-send, Flags=........"
"105","23:39:52.717845","0.000000000","406","HuaweiTe_3a:d0:16","Htc_9b:92:24","802.11","QoS Data, SN=3446, FN=0, Flags=.p....F."
"106","23:39:52.717852","0.000007000","28","Htc_9b:92:24 (ac:37:43:9b:92:24) (TA)","HuaweiTe_3a:d0:1a (8c:15:c7:3a:d0:1a) (RA)","802.11","802.11 Block Ack, Flags=........"
"107","23:39:52.717853","0.000001000","10","","HuaweiTe_3a:d0:1a (8c:15:c7:3a:d0:1a) (RA)","802.11","Clear-to-send, Flags=........"
"108","23:39:52.719380","0.001527000","28","HuaweiTe_3a:d0:1a (8c:15:c7:3a:d0:1a) (TA)","Htc_9b:92:24 (ac:37:43:9b:92:24) (RA)","802.11","802.11 Block Ack, Flags=........"
"109","23:39:52.719384","0.000004000","102","Htc_9b:92:24","HuaweiTe_3a:d0:16","802.11","QoS Data, SN=46, FN=0, Flags=.p.....T"
"110","23:39:52.719389","0.000005000","10","","Htc_9b:92:24 (ac:37:43:9b:92:24) (RA)","802.11","Clear-to-send, Flags=........"
"111","23:39:53.109091","0.389702000","24","Htc_9b:92:24","HuaweiTe_3a:d0:1a","802.11","Null function (No data), SN=4069, FN=0, Flags=...P...T"
"112","23:39:53.109586","0.000495000","10","","Htc_9b:92:24 (ac:37:43:9b:92:24) (RA)","802.11","Acknowledgement, Flags=........"
"113","23:39:53.149481","0.039895000","28","Sagemcom_28:38:64 (d0:6e:de:28:38:64) (TA)","Microsof_a0:a4:2c (58:82:a8:a0:a4:2c) (RA)","802.11","802.11 Block Ack, Flags=........"
"114","23:39:53.157218","0.007737000","24","Htc_9b:92:24","HuaweiTe_3a:d0:1a","802.11","Null function (No data), SN=4070, FN=0, Flags=.......T"
"115","23:39:53.159251","0.002033000","10","","Htc_9b:92:24 (ac:37:43:9b:92:24) (RA)","802.11","Acknowledgement, Flags=........"
"116","23:39:53.159252","0.000001000","16","HuaweiTe_3a:d0:1a (8c:15:c7:3a:d0:1a) (TA)","Htc_9b:92:24 (ac:37:43:9b:92:24) (RA)","802.11","Request-to-send, Flags=........"
"117","23:39:53.159267","0.000015000","10","","HuaweiTe_3a:d0:1a (8c:15:c7:3a:d0:1a) (RA)","802.11","Clear-to-send, Flags=........"
"118","23:39:53.160276","0.001009000","16","HuaweiTe_3a:d0:1a (8c:15:c7:3a:d0:1a) (TA)","Htc_9b:92:24 (ac:37:43:9b:92:24) (RA)","802.11","Request-to-send, Flags=........"
"119","23:39:53.160277","0.000001000","1500","HuaweiTe_3a:d0:16","Htc_9b:92:24","802.11","QoS Data, SN=3447, FN=0, Flags=.p....F."
"120","23:39:53.160290","0.000013000","28","Htc_9b:92:24 (ac:37:43:9b:92:24) (TA)","HuaweiTe_3a:d0:1a (8c:15:c7:3a:d0:1a) (RA)","802.11","802.11 Block Ack, Flags=........"

This should get you started. It splits your sample csv into 11 different files. I suggest creating a test directory and running the code below there to check that it does what you want:

import os
# pandas to read / write csv and process the data
import pandas as pd

startdir = '.'
suffix = '.csv'
for root, dirs, files in os.walk(startdir):
    for name in files:
        if name.endswith(suffix):
            filename = os.path.join(root, name)
            df = pd.read_csv(filename)
            # Extract the time for grouping
            col_time = pd.to_datetime(df['Time'])
            # Group the values by minute and second (minute might not be needed)
            df2 = df.groupby([col_time.dt.minute, col_time.dt.second])
            # now split the data frame according to group and put them in a list
            list_of_df = [df2.get_group(x) for x in df2.groups]
            # get each data frame from the list and write it out
            for i in range(len(list_of_df)):
                list_of_df[i].to_csv(filename[:-4] + str(i) + ".csv")
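If the interval needs to be something other than a whole second, a hedged alternative to grouping on .dt.second is to floor the parsed timestamps to the chosen frequency and group on that. The sketch below assumes the same "Time" column format as the sample above; the small inline DataFrame is only illustrative:

```python
import pandas as pd

# Illustrative rows taken from the sample above (not a real file read).
df = pd.DataFrame({
    "Time": ["23:39:52.634388", "23:39:52.695277", "23:39:53.109091"],
    "Length": [28, 28, 24],
})
# Parse the times and floor them to the chosen interval
# (here 1 second; use "500ms" for half-second buckets).
buckets = pd.to_datetime(df["Time"], format="%H:%M:%S.%f").dt.floor("1s")
for i, (_, chunk) in enumerate(df.groupby(buckets), start=1):
    # each chunk could be written out with chunk.to_csv(...)
    print(i, len(chunk))
```

Changing the floor frequency string is then the only edit needed to switch interval lengths.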

The csv module is sufficient here. You only need to read each file one line at a time. If the first 8 characters of the Time field (the second) are the same as in the previous line, copy the line into the same output file; otherwise create a new output file.

This can be coded as:

import os
import csv

startdir = '.'
suffix = '.csv'
for root, dirs, files in os.walk(startdir):
    for name in files:
        if name.endswith(suffix):
            filename = os.path.join(root, name)
            with open(filename, newline='') as fd:  # open the csv file
                rd = csv.reader(fd)           #  as a csv input file
                old = None                    # no previous second seen yet
                fdout = None                  # no output file open yet
                i = 0                         # output files will be numbered from 1
                header = next(rd)             # store the header line
                for row in rd:
                    if row[1][:8] != old:     # a different second (or the first one)
                        if fdout is not None: # close the previous output file, if any
                            fdout.close()
                        old = row[1][:8]      # store current second for next rows
                        i += 1                # increase output file number
                        fdout = open(filename[:-4] + str(i) + filename[-4:],
                                     'w', newline='')  # open a new output file
                        wr = csv.writer(fdout, quoting=csv.QUOTE_ALL)  # with expected csv params
                        wr.writerow(header)   # write the header
                    wr.writerow(row)          # copy the row to the current output file
                if fdout is not None:
                    fdout.close()

The code above relies on the fact that the second can be determined directly from the time string without any parsing. If the desired interval is shorter than a second, you need to parse the time string, convert it into a (floating-point) number of seconds, and divide that by the chosen duration in seconds:

import datetime
...
sec_duration = 0.5   # for half a second
                ...
                for row in rd:
                    # convert the Time field to a total number of seconds in the day,
                    #  as a float
                    cur = datetime.datetime.strptime(row[1], "%H:%M:%S.%f")
                    cur -= cur.replace(hour=0, minute=0, second=0, microsecond=0)
                    # make it a number of periods of sec_duration
                    cur = int(cur.total_seconds() / sec_duration)
                    if cur != old:            # a different period (or the first one)
                        if old is not None:   # close previous output file, if any
                            fdout.close()
                        old = cur             # store current period for next rows
                        i += 1                # increase output file number
                ...
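To make the interval an input rather than a hard-coded constant, the period computation in the fragment above can be pulled into a small helper. This is only a sketch; bucket_index is a name introduced here, not part of the answers above:

```python
import datetime

def bucket_index(time_str, sec_duration):
    """Map an 'HH:MM:SS.ffffff' string to the index of its interval.

    sec_duration is the interval length in seconds (e.g. 0.5).
    Rows with equal bucket_index belong in the same output file.
    """
    t = datetime.datetime.strptime(time_str, "%H:%M:%S.%f")
    midnight = t.replace(hour=0, minute=0, second=0, microsecond=0)
    return int((t - midnight).total_seconds() / sec_duration)

# Rows from the same half-second share a bucket; rows a second apart do not.
a = bucket_index("23:39:52.634388", 0.5)
b = bucket_index("23:39:52.695277", 0.5)
c = bucket_index("23:39:53.160277", 0.5)
```

The interval value could then be read once (e.g. with input() or a command-line argument) and passed through to the loop.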

You need to post at least a meaningful sample of such a CSV file.
@Tomalak Thanks, I have added one now.
Please add a meaningful sample (about 10 rows) to the post itself, not a link to the full 60+ KB file on an external site. When that link dies, the question becomes useless again. Thanks.
Where should I set the split interval (e.g. if I want to split the packets into 1-second or 2-millisecond intervals)? I assume you are splitting the table based on its Time column, so the line df2 = df.groupby([col_time.dt.second]) is what groups the table. Do I understand that correctly?
Pandas is overkill here, since no processing is involved beyond comparing one field with the previous one.
That's a fair assessment, and I agree it may not be the most efficient approach.
Yes, I have to split based on the Time column. So if I want to split into 0.5-second chunks, do I change df2 = df.groupby([col_time.dt.second]) to df2 = df.groupby([0.5])? Is that right? Thanks.
Using the glob module would simplify the filesystem walking and filtering.
@Serge Thanks. But if I want to divide by 0.5 seconds, which line of the code do I change?
@user3535695: That is slightly more complex: besides the first 8 characters of the time field, you also have to look at the 10th character: if it is less than 5 you are in the first half of the second, if it is 5 or above, the second half.
@SergeBallesta Thanks. I am really new to programming, so sorry for all the questions. The thing is, I have to split the file into various intervals, e.g. 1 s, 0.5 s, 0.05 s, 2 s. Is it possible to write a Python script that lets me enter the interval I want (e.g. 0.5) and, based on the "Time" column, puts the packets arriving in the first 0.5 s (23:39:52.634388 to 23:39:53.134388 in the sample above) into one file, and so on? I really don't know how to write this :/
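Regarding the glob suggestion in the comments above: the os.walk loop plus endswith filter used in both answers can be replaced by a single recursive glob pattern. A minimal sketch, with find_csv_files a helper name introduced here:

```python
import glob
import os

def find_csv_files(startdir):
    """Recursively collect every .csv file under startdir, mirroring the
    os.walk + name.endswith('.csv') filtering used in the answers above."""
    return sorted(glob.glob(os.path.join(startdir, '**', '*.csv'),
                            recursive=True))

for filename in find_csv_files('.'):
    # process each file as in the answers above
    print(filename)
```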