
Python 2.7: merging dataframes on irregular time intervals


I am wondering how to speed up a merge between two dataframes. One dataframe has time-stamped data points (a time column and a value column).

The other has time-interval information (start_time, end_time, and an associated interval_id).

I would like to merge these two dataframes more efficiently than with the for loop below:

data['interval_id'] = np.nan
for index, ser in intervals.iterrows():
    in_interval = (data['time'] >= ser['start_time']) & \
                  (data['time'] <= ser['end_time'])
    data.loc[in_interval, 'interval_id'] = ser['interval_id']

result = data.merge(intervals, how='outer').sort_values('time').reset_index(drop=True)
Any advice from someone familiar with time series would be greatly appreciated.


Update after Jeff's answer:


The main problem is that interval_id has no relation to any regular time interval (for example, the intervals are not always roughly 10 seconds long). One interval might be 10 seconds, the next might be 2 seconds, and the one after that might be 100 seconds, so I cannot use the kind of regular rounding scheme Jeff suggested. Unfortunately, my minimal example above did not make this clear.
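For what it's worth, later pandas versions (0.20+) provide pd.IntervalIndex, which handles exactly this case of irregular but disjoint intervals without any rounding. A minimal sketch, reusing the toy data from the answers below:

```python
import numpy as np
import pandas as pd

# Same toy data as in the answers: 50 timestamped points, 9 disjoint intervals
np.random.seed(1)
data = pd.DataFrame({'time': np.sort(np.random.uniform(0, 100, size=50)),
                     'value': np.random.uniform(-1, 1, size=50)})
intervals = pd.DataFrame(
    {'interval_id': np.arange(9),
     'start_time': np.random.uniform(0, 5, size=9) + np.arange(0, 90, 10),
     'end_time': np.random.uniform(5, 10, size=9) + np.arange(0, 90, 10)})

# Build an IntervalIndex over the intervals; widths can vary freely
iv = pd.IntervalIndex.from_arrays(intervals['start_time'],
                                  intervals['end_time'],
                                  closed='both')

# get_indexer returns, for each time, the position of the containing
# interval, or -1 when the point falls in a gap between intervals
pos = iv.get_indexer(data['time'].values)
data['interval_id'] = np.where(pos >= 0,
                               intervals['interval_id'].values[pos],
                               np.nan)
```

Note that this requires the intervals to be non-overlapping, which matches the disjointness assumption the searchsorted answer below relies on.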

You may want to specify the 'time' intervals slightly differently, but this should give you a start:

In [34]: data['on'] = np.round(data['time']/10)

In [35]: data.merge(intervals,left_on=['on'],right_on=['interval_id'],how='outer')
Out[35]: 
         time     value  on   end_time  interval_id  start_time
0    1.301658 -0.462594   0   7.630243            0    0.220746
1    2.202654  0.054903   0   7.630243            0    0.220746
2   10.253593  0.329947   1  17.715596            1   10.299464
3   13.803064 -0.601021   1  17.715596            1   10.299464
4   17.086290  0.484119   2  27.175455            2   24.710704
5   21.797655  0.988212   2  27.175455            2   24.710704
6   26.265165  0.491410   3  37.702968            3   30.670753
7   27.777182 -0.121691   3  37.702968            3   30.670753
8   34.066473  0.659260   3  37.702968            3   30.670753
9   34.786337 -0.230026   3  37.702968            3   30.670753
10  35.343021  0.364505   4  49.489028            4   42.948486
11  35.506895  0.953562   4  49.489028            4   42.948486
12  36.129951 -0.703457   4  49.489028            4   42.948486
13  38.794690 -0.510535   4  49.489028            4   42.948486
14  40.508702 -0.763417   4  49.489028            4   42.948486
15  43.974516 -0.149487   4  49.489028            4   42.948486
16  46.219554  0.893025   5  57.086065            5   53.124795
17  50.206860  0.729106   5  57.086065            5   53.124795
18  50.395082 -0.807557   5  57.086065            5   53.124795
19  50.410783  0.996247   5  57.086065            5   53.124795
20  51.602892  0.144483   5  57.086065            5   53.124795
21  52.006921 -0.979778   5  57.086065            5   53.124795
22  52.682896 -0.593500   5  57.086065            5   53.124795
23  52.836037  0.448370   5  57.086065            5   53.124795
24  53.052130 -0.227245   5  57.086065            5   53.124795
25  57.169775  0.659673   6  65.927106            6   61.590948
26  59.336176 -0.893004   6  65.927106            6   61.590948
27  60.297771  0.897418   6  65.927106            6   61.590948
28  61.151664  0.176229   6  65.927106            6   61.590948
29  61.769023  0.894644   6  65.927106            6   61.590948
30  64.221220  0.893012   6  65.927106            6   61.590948
31  67.907417 -0.859734   7  78.192671            7   72.463468
32  71.460483 -0.271364   7  78.192671            7   72.463468
33  74.514028  0.621174   7  78.192671            7   72.463468
34  75.822643 -0.351684   8  88.820139            8   83.183825
35  84.252778 -0.685043   8  88.820139            8   83.183825
36  84.838361  0.354365   8  88.820139            8   83.183825
37  85.770611 -0.089678   9        NaN          NaN         NaN
38  85.957559  0.649995   9        NaN          NaN         NaN
39  86.498339  0.569793   9        NaN          NaN         NaN
40  91.006735  0.731006   9        NaN          NaN         NaN
41  91.941862  0.964376   9        NaN          NaN         NaN
42  94.617522  0.626889   9        NaN          NaN         NaN
43  95.318288 -0.088918  10        NaN          NaN         NaN
44  95.595243  0.539685  10        NaN          NaN         NaN
45  95.818267 -0.989647  10        NaN          NaN         NaN
46  98.240444  0.931445  10        NaN          NaN         NaN
47  98.722869  0.442502  10        NaN          NaN         NaN
48  99.349198  0.585264  10        NaN          NaN         NaN
49  99.829372 -0.743697  10        NaN          NaN         NaN

[50 rows x 6 columns]
You could use np.searchsorted to find the indices representing where each value in data['time'] would fit among intervals['start_time']. Then you could call np.searchsorted again to find the indices representing where each value in data['time'] would fit among intervals['end_time']. Note that using np.searchsorted relies on intervals['start_time'] and intervals['end_time'] being in sorted order.

For each corresponding location in the arrays where these two indices are equal, data['time'] fits between intervals['start_time'] and intervals['end_time']. Note that this relies on the intervals being disjoint.

Using searchsorted this way is about 5 times faster than using the for loop:

import pandas as pd
import numpy as np

np.random.seed(1)
data = pd.DataFrame({'time':np.sort(np.random.uniform(0,100,size=50)),
                     'value':np.random.uniform(-1,1,size=50)})

intervals = pd.DataFrame(
    {'interval_id':np.arange(9),
     'start_time':np.random.uniform(0,5,size=9) + np.arange(0,90,10),
     'end_time':np.random.uniform(5,10,size=9) + np.arange(0,90,10)})

def using_loop():
    data['interval_id'] = np.nan
    for index, ser in intervals.iterrows():
        in_interval = (data['time'] >= ser['start_time']) & \
                      (data['time'] <= ser['end_time'])
        data.loc[in_interval, 'interval_id'] = ser['interval_id']

    result = data.merge(intervals, how='outer').sort_values('time').reset_index(drop=True)
    return result

def using_searchsorted():
    # index of the interval whose start_time is the last one <= each time
    start_idx = np.searchsorted(intervals['start_time'].values, data['time'].values)-1
    # index of the interval whose end_time is the first one >= each time
    end_idx = np.searchsorted(intervals['end_time'].values, data['time'].values)
    # the two indices agree exactly when the time falls inside that interval
    mask = (start_idx == end_idx)
    result = data.copy()
    result['interval_id'] = result['start_time'] = result['end_time'] = np.nan
    result.loc[mask, 'interval_id'] = start_idx[mask]
    result.loc[mask, 'start_time'] = intervals['start_time'].values[start_idx[mask]]
    result.loc[mask, 'end_time'] = intervals['end_time'].values[end_idx[mask]]
    return result

Thanks Jeff! Unfortunately, I don't think this works in all cases. For example, if I have an event at time 0.19 but the first interval doesn't start until 3.21, rounding down will still put that event into the first interval. There are workarounds here (I guess that's what you meant by specifying the time slightly differently), but I think there is a deeper problem, namely that the intervals are irregular. I improved my question after seeing your solution, so thank you (and sorry for not being clearer).

Are the intervals always disjoint?

In this case, yes, the intervals are always disjoint. Since you ask, a solution that handles overlapping intervals would be interesting too!
     time      value     interval_id  start_time   end_time
0    0.575976  0.022727          NaN         NaN        NaN
1    4.607545  0.222568            0    3.618715   8.294847
2    5.179350  0.438052            0    3.618715   8.294847
3   11.069956  0.641269            1   10.301728  19.870283
4   12.387854  0.344192            1   10.301728  19.870283
5   18.889691  0.582946            1   10.301728  19.870283
6   20.850469 -0.027436          NaN         NaN        NaN
7   23.199618  0.731316            2   21.488868  28.968338
8   26.631284  0.570647            2   21.488868  28.968338
9   26.996397  0.597035            2   21.488868  28.968338
10  28.601867 -0.131712            2   21.488868  28.968338
11  28.660986  0.710856            2   21.488868  28.968338
12  28.875395 -0.355208            2   21.488868  28.968338
13  28.959320 -0.430759            2   21.488868  28.968338
14  29.702800 -0.554742          NaN         NaN        NaN
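On the overlapping-intervals question raised in the comments: the searchsorted trick no longer applies there, because a point can belong to several intervals at once. One simple option, workable for modest sizes, is a full broadcast comparison followed by np.nonzero, producing one row per (point, interval) match. A sketch, with made-up overlapping intervals for illustration:

```python
import numpy as np
import pandas as pd

np.random.seed(1)
data = pd.DataFrame({'time': np.sort(np.random.uniform(0, 100, size=50)),
                     'value': np.random.uniform(-1, 1, size=50)})

# Deliberately overlapping intervals (hypothetical, just for illustration)
intervals = pd.DataFrame({'interval_id': np.arange(4),
                          'start_time': [0.0, 5.0, 30.0, 35.0],
                          'end_time': [10.0, 40.0, 50.0, 45.0]})

# Compare every point against every interval: shape (n_points, n_intervals)
t = data['time'].values[:, None]
inside = (t >= intervals['start_time'].values) & (t <= intervals['end_time'].values)

# One output row per (point, interval) containment pair; points in two
# overlapping intervals appear twice
point_idx, interval_idx = np.nonzero(inside)
matches = pd.concat([data.iloc[point_idx].reset_index(drop=True),
                     intervals.iloc[interval_idx].reset_index(drop=True)],
                    axis=1)
```

Memory here is O(n_points x n_intervals), so for large inputs an interval tree (e.g. the intervaltree package) would be a better fit.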
In [254]: %timeit using_loop()
100 loops, best of 3: 7.74 ms per loop

In [255]: %timeit using_searchsorted()
1000 loops, best of 3: 1.56 ms per loop

In [256]: 7.74/1.56
Out[256]: 4.961538461538462