Time-series data analysis using scientific python: continuous analysis over multiple files


Question

I'm doing time-series analysis. Measurement data comes from sampling the voltage output of a sensor at 50 kHz and then dumping that data to disk as hour-long, separate files. Data is saved to an HDF5 file using pytables as a CArray. This format was chosen to maintain interoperability with MATLAB.
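For reference, here's a minimal sketch of how one hour of data might be written in this layout. The /data/voltage/raw node path and the NODE##-YY-MM-DD-HH.h5 filename convention are the ones my code below assumes; the snake_case PyTables calls are the current API.

import numpy as np
import tables

fs = 50000                                    # sample rate [Hz]
hour = np.zeros(fs * 3600, dtype=np.float32)  # placeholder for one hour of voltages

with tables.open_file('NODE01-13-05-17-00.h5', mode='w') as f:
    f.create_group('/', 'data')
    f.create_group('/data', 'voltage')
    # A CArray is chunked on disk and is readable from MATLAB via h5read.
    f.create_carray('/data/voltage', 'raw', obj=hour)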

The full dataset is now multiple TB in size, too large to load into memory.

Some of my analyses require me to iterate over the full dataset. For analyses that grab chunks of data, I can see a path forward by creating a generator method. I'm a bit uncertain of how to proceed with analyses that require a continuous time series.
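For the chunked case, the generator could be as simple as this sketch (non-overlapping blocks, using the same node path as the rest of this post):

import tables

def iter_chunks(filenames, chunksize):
    """Yield fixed-size blocks of samples, file by file (no overlap)."""
    for fname in filenames:
        with tables.open_file(fname) as f:
            node = f.get_node('/data/voltage/raw')
            for start in range(0, node.nrows, chunksize):
                yield node[start:start + chunksize]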

Example

For example, say I'm looking to find and classify transients using some moving-window process (e.g. wavelet analysis) or by applying a FIR filter. How do I handle the boundaries, either at the end or beginning of a file, or at chunk boundaries? I would like the data to appear as one continuous dataset.
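For the FIR-filter part specifically, one standard technique (a sketch of the general idea, not necessarily what I'll end up using) is to carry the filter's delay-line state across chunk boundaries with scipy.signal.lfilter's zi argument, which makes the output identical to filtering one continuous array:

import numpy as np
from scipy import signal

def filter_stream(chunks, taps):
    """Apply a FIR filter continuously across an iterable of chunks."""
    zi = np.zeros(len(taps) - 1)  # filter state carried between chunks
    for chunk in chunks:
        out, zi = signal.lfilter(taps, 1.0, chunk, zi=zi)
        yield out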

Request

I would love to:

  • Load data only as necessary, keeping the memory footprint low
  • Keep a map of the entire dataset in memory so that I can address the dataset as I would a regular pandas Series object, e.g. data[time1:time2]
I'm using scientific python (Enthought distribution) with all the regular stuff: numpy, scipy, pandas, matplotlib, etc. I've only recently started incorporating pandas into my workflow, and I'm still unfamiliar with all of its capabilities.

I've looked over related StackExchange threads and didn't see anything that exactly addresses my issue.

EDIT: Final solution.

Based upon these helpful hints, I built an iterator that steps over files and returns chunks of arbitrary size: a moving window that will hopefully handle file boundaries with grace. I've added the option of padding the front and back of each window with data (overlapping windows). I can then apply a succession of filters to the overlapping windows and remove the overlaps at the end. This, I hope, gives me continuity.

I haven't yet implemented __getitem__, but it's on my list of things to do.

Here's the final code. A few details are omitted for brevity.

import os
import re
import fnmatch
import datetime
from collections import OrderedDict

import numpy as np
import tables

import readdata  # author's own support module (details omitted)


class FolderContainer(readdata.DataContainer):

    def __init__(self,startdir):
        readdata.DataContainer.__init__(self,startdir)

        self.filelist = None
        self.fs = None
        self.nsamples_hour = None
        # Build the file list
        self._build_filelist(startdir)


    def _build_filelist(self,startdir):
        """
        Populate the filelist dictionary with active files and their associated
        file date (YYYY,MM,DD) and hour.

        Each entry in 'filelist' has the form (abs. path : datetime) where the
        datetime object contains the complete date and hour information.
        """
        print('Building file list....',end='')
        # Use the full file path instead of a relative path so that we don't
        # run into problems if we change the current working directory.
        filelist = { os.path.abspath(f):self._datetime_from_fname(f)
                for f in os.listdir(startdir)
                if fnmatch.fnmatch(f,'NODE*.h5')}

        # If we haven't found any files, raise an error
        if not filelist:
            msg = "Input directory does not contain Illionix h5 files."
            raise IOError(msg)
        # Sort by path (and hence by time) before saving as an ordered
        # dictionary.
        self.filelist = OrderedDict(sorted(filelist.items(),
                key=lambda t: t[0]))
        print('done')
    
    def _datetime_from_fname(self,fname):
        """
        Return the year, month, day, and hour from a filename as a datetime
        object
        
        """
        # Filename has the prototype: NODE##-YY-MM-DD-HH.h5. Split this up and
        # take only the date parts. Convert the year from YY to YYYY.
        (year,month,day,hour) = [int(d) for d in re.split(r'-|\.',fname)[1:-1]]
        year+=2000
        return datetime.datetime(year,month,day,hour)


    def chunk(self,tstart,dt,**kwargs):
        """
        Generator yielding consecutive (optionally overlapping) chunks of
        data from the entire set of Illionix data files.

        Parameters
        ----------
        Arguments:
            tstart: UTC start time [provided as a datetime or date string]
            dt: Chunk size [integer number of samples]

        Keyword arguments:
            tend: UTC end time [provided as a datetime or date string].
            frontpad: Padding in front of sample [integer number of samples].
            backpad: Padding in back of sample [integer number of samples]

        Yields:
            chunk: array of consecutive samples (plus any requested padding)

        """
        # PARSE INPUT ARGUMENTS

        # Ensure 'tstart' is a datetime object.
        tstart = self._to_datetime(tstart)
        # Find the offset, in samples, of the starting position of the window
        # in the first data file
        tstart_samples = self._to_samples(tstart)

        # Convert dt to samples. When dt is a timedelta object we can't use
        # '_to_samples' for the conversion.
        if isinstance(dt,int):
            dt_samples = dt
        elif isinstance(dt,datetime.timedelta):
            # timedelta stores days/seconds/microseconds, so convert each
            # field to seconds before scaling by the sample rate.
            dt_samples = np.int64((dt.days*24*3600 + dt.seconds +
                    dt.microseconds/1e6) * self.fs)
        else:
            # FIXME: Pandas 0.13 includes a 'to_timedelta' function. Change
            # below when EPD pushes the update.
            t = self._parse_date_str(dt)
            dt_samples = np.int64((t.minute*60 + t.second) * self.fs)

        # Read keyword arguments. 'tend' defaults to the end of the last file
        # if a time is not provided.
        default_tend = (list(self.filelist.values())[-1]
                + datetime.timedelta(hours=1))
        tend = self._to_datetime(kwargs.get('tend',default_tend))
        tend_samples = self._to_samples(tend)

        frontpad = kwargs.get('frontpad',0)
        backpad = kwargs.get('backpad',0)


        # CREATE FILE LIST

        # Build the list of data files we will iterate over, based upon the
        # start and stop times.
        print('Pruning file list...',end='')
        tstart_floor = datetime.datetime(tstart.year,tstart.month,tstart.day,
                tstart.hour)
        filelist_pruned = OrderedDict([(k,v) for k,v in self.filelist.items()
                if v >= tstart_floor and v <= tend])
        print('done.')
        # Check to ensure that we're not missing files by enforcing that there
        # is exactly an hour offset between all files.
        if not all([gap == datetime.timedelta(hours=1)
                for gap in np.diff(np.array(list(filelist_pruned.values())))]):
            raise readdata.DataIntegrityError("Hour gap(s) detected in data")


        # MOVING WINDOW GENERATOR ALGORITHM

        # Keep two files open: the current file and the next in line (the
        # queue file).
        fname_generator = self._file_iterator(filelist_pruned)
        fname_current = next(fname_generator)
        fname_next = next(fname_generator)

        # Iterate over all the files. 'lastfile' indicates when we're
        # processing the last file in the queue.
        lastfile = False
        i = tstart_samples
        while True:
            with tables.open_file(fname_current) as fcurrent, \
                    tables.open_file(fname_next) as fnext:
                # Point to the data nodes inside each file.
                data_current = fcurrent.get_node('/data/voltage/raw')
                data_next = fnext.get_node('/data/voltage/raw')
                # Process all data windows associated with the current pair of
                # files. Avoid unnecessary file access operations as we move
                # the sliding window.
                while True:
                    # Conditionals that depend on if our slice is:
                    #   (1) completely into the next hour
                    #   (2) partially spills into the next hour
                    #   (3) completely in the current hour.
                    if i - backpad >= self.nsamples_hour:
                        # If we're already on the last file in the processing
                        # queue, we can't continue to the next. Exit; the
                        # generator is finished.
                        if lastfile:
                            return
                        # Advance the active and queue file names.
                        fname_current = fname_next
                        try:
                            fname_next = next(fname_generator)
                        except StopIteration:
                            # We've reached the end of our file processing
                            # queue. Indicate this is the last file so that if
                            # we try to pull data across the next file
                            # boundary, we'll exit.
                            lastfile = True
                        # Our data slice has completely moved into the next
                        # hour.
                        i-=self.nsamples_hour
                        # Return the data
                        yield data_next[i-backpad:i+dt_samples+frontpad]
                        # Move window by amount dt
                        i+=dt_samples
                        # We've completely moved on to the next pair of files.
                        # Move to the outer scope to grab the next set of
                        # files.
                        break  
                    elif i + dt_samples + frontpad >= self.nsamples_hour:
                        if lastfile:
                            return
                        # Slice spills over into the next hour
                        yield np.r_[data_current[i-backpad:],
                                data_next[:i+dt_samples+frontpad-self.nsamples_hour]]
                        i+=dt_samples
                    else:
                        if lastfile:
                            # Exit once our slice crosses the boundary of the
                            # last file.
                            if i + dt_samples + frontpad > tend_samples:
                                return
                        # Slice is completely within the current hour
                        yield data_current[i-backpad:i+dt_samples+frontpad]
                        i+=dt_samples


    def _to_samples(self,input_time):
        """Convert input time, if not in samples, to samples"""
        if isinstance(input_time,int):
            # Input time is already in samples
            return input_time
        elif isinstance(input_time,datetime.datetime):
            # Input time is a datetime object; return its offset, in samples,
            # into the hour-long file.
            return self.fs * (input_time.minute * 60 + input_time.second)
        else:
            raise ValueError("Invalid input 'tstart' parameter")


    def _to_datetime(self,input_time):
        """Return the passed time as a datetime object"""
        if isinstance(input_time,datetime.datetime):
            converted_time = input_time
        elif isinstance(input_time,str):
            converted_time = self._parse_date_str(input_time)
        else:
            raise TypeError("A datetime object or string date/time were "
                    "expected")
        return converted_time


    def _file_iterator(self,filelist):
        """Generator for iterating over file names."""
        for fname in filelist:
            yield fname
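A usage sketch (assuming fs and nsamples_hour get populated during initialization, which is among the details omitted above): pad each window, filter it, then trim the padding so that only samples with full filter support survive.

import datetime

from scipy import signal

taps = signal.firwin(101, 0.1)               # placeholder FIR filter
pad = len(taps)                              # overlap on each side of a window

fc = FolderContainer('/path/to/node/files')  # hypothetical data directory
t0 = datetime.datetime(2013, 5, 17, 0)

for window in fc.chunk(t0, 50000, frontpad=pad, backpad=pad):
    filtered = signal.lfilter(taps, 1.0, window)
    core = filtered[pad:-pad]                # drop the overlapped edges
    # ... detect and classify transients in 'core' ...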
For reference, here is the out-of-core wrapper sketched in the answers, which is what the solution above was built from ('Results' and 'read_file' are placeholders left unimplemented):

class OutOfCoreSeries(object):

    def __init__(self, dir):
        # ... load a list of the files in the dir where you have them ...
        ...

    def __getitem__(self, key):
        # ... map the selection key (say it's a slice, which 'time1:time2'
        #     resolves to) to the files that make it up, then return a new
        #     Series holding only those file pointers ...
        ...

    def apply(self, func, **kwargs):
        """ Apply a function to each file's data. """
        results = []
        for f in self.files:
            results.append(func(self.read_file(f)))
        return Results(results)
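Usage would then look something like this (hypothetical; the slice parsing and file resolution are exactly what __getitem__ has to supply):

import numpy as np

data = OutOfCoreSeries('/path/to/node/files')
segment = data['2013-05-17 00:30':'2013-05-17 02:15']  # touches only 3 files
rms = data.apply(lambda s: np.sqrt((s ** 2).mean()))

The appeal of this design is that the in-memory "map of the entire dataset" is just the file list plus timestamps; actual samples are read only when a slice or an apply touches them.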