Vectorized integration of a pandas.DataFrame in Python
I have a DataFrame of force-displacement data. The displacement array has been set as the DataFrame index, and the columns are the force curves from various tests.

How do I calculate the work done (i.e. the "area under the curve")?

I looked at np.trapz, which seems to do what I need, but I think I can avoid looping over every column like this:

import numpy as np
import pandas as pd

forces = pd.read_csv(...)

work_done = {}
for col in forces.columns:
    work_done[col] = np.trapz(forces.loc[col], forces.index))

I was hoping to create a new DataFrame of the areas under the curves rather than a dict, and thought DataFrame.apply() or something might be suitable, but I don't know where to start looking.

In short: can I integrate the columns of a DataFrame?
Thanks in advance for your help.

You can vectorize this by passing the whole DataFrame to np.trapz and specifying the axis= argument, e.g.:
import numpy as np
import pandas as pd
# some random input data
gen = np.random.RandomState(0)
x = gen.randn(100, 10)
names = [chr(97 + i) for i in range(10)]
forces = pd.DataFrame(x, columns=names)
# vectorized version
wrk = np.trapz(forces, x=forces.index, axis=0)
work_done = pd.DataFrame(wrk[None, :], columns=forces.columns)
# non-vectorized version for comparison
work_done2 = {}
for col in forces.columns:
    work_done2.update({col: np.trapz(forces.loc[:, col], forces.index)})
These give the following output:
from pprint import pprint
pprint(work_done.T)
# 0
# a -24.331560
# b -10.347663
# c 4.662212
# d -12.536040
# e -10.276861
# f 3.406740
# g -3.712674
# h -9.508454
# i -1.044931
# j 15.165782
pprint(work_done2)
# {'a': -24.331559643023006,
# 'b': -10.347663159421426,
# 'c': 4.6622123535050459,
# 'd': -12.536039649161403,
# 'e': -10.276861220217308,
# 'f': 3.4067399176289994,
# 'g': -3.7126739591045541,
# 'h': -9.5084536839888187,
# 'i': -1.0449311137294459,
# 'j': 15.165781517623724}
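As an aside, the same vectorized call also handles a non-uniformly spaced displacement index correctly, since np.trapz takes the spacing from the x values. A minimal sketch with made-up displacement data (the trapz/trapezoid fallback covers NumPy 2.0, where np.trapz was renamed np.trapezoid):

```python
import numpy as np
import pandas as pd

# np.trapz was renamed np.trapezoid in NumPy 2.0; fall back accordingly
trapz = getattr(np, "trapz", None) or np.trapezoid

# hypothetical non-uniformly spaced displacement values used as the index
displacement = [0.0, 0.1, 0.25, 0.45, 0.7, 1.0]
forces = pd.DataFrame(
    {"test_a": [0.0, 1.0, 2.0, 3.0, 4.0, 5.0],
     "test_b": [0.0, 2.0, 4.0, 6.0, 8.0, 10.0]},
    index=displacement,
)

# one call integrates every column along the (non-uniform) index
work = trapz(forces, x=forces.index, axis=0)
# test_b is exactly twice test_a, so work[1] == 2 * work[0]
```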
There were also a couple of other problems in your original example: col is a column name rather than a row index, so it needs to index the second dimension of your DataFrame (i.e. .loc[:, col] rather than .loc[col]). Also, the last line had an extra trailing parenthesis.
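To make the .loc distinction concrete, here is a toy frame (not the original data) showing that df.loc[col] looks up a row label, while df.loc[:, col] selects a column:

```python
import pandas as pd

df = pd.DataFrame({"a": [1, 2, 3], "b": [4, 5, 6]})
col = "a"

print(df.loc[:, col].tolist())  # column selection: [1, 2, 3]

# df.loc[col] would instead look for a ROW labelled "a", and raises
# a KeyError here because the index labels are 0, 1 and 2
try:
    df.loc[col]
except KeyError:
    print("no row labelled 'a'")
```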
Edit: You could also generate the output DataFrame directly by applying np.trapz to each column, e.g.:
work_done = forces.apply(np.trapz, axis=0, args=(forces.index,))
However, this isn't really "proper" vectorization - you are still calling np.trapz separately on each column. You can see this by comparing the speed of the .apply version against calling np.trapz directly:
In [1]: %timeit forces.apply(np.trapz, axis=0, args=(forces.index,))
1000 loops, best of 3: 582 µs per loop
In [2]: %timeit np.trapz(forces, x=forces.index, axis=0)
The slowest run took 6.04 times longer than the fastest. This could mean that an
intermediate result is being cached
10000 loops, best of 3: 53.4 µs per loop
This isn't an entirely fair comparison, since the second version excludes the extra time taken to construct the DataFrame from the output numpy array, but this should still be smaller than the difference in time taken to perform the actual integration.

Comments:

Excellent answer, thank you. I will test it some time over the weekend and tick the answer if it works. If I use the .apply option, the result of my integration is a timedelta. Is there a way to avoid this?

@rubenbaetens That is probably because your columns contain timedelta values. What type would you like the result to be? For example, you could use total_seconds() to convert a timedelta column to floats.

@ali_m My x axis is a series created as pd.to_datetime(time1, unit='s'). Indeed, this sometimes even leads to a ValueError with the .apply option.

Here is how to get the cumulative integral along the columns of a DataFrame using the trapezoidal rule. Alternatively, the following creates a pandas.Series method for choosing among the trapezoidal, Simpson's, or Romberg rules:
import pandas as pd
from scipy import integrate
import numpy as np

#%% Setup Functions
def integrate_method(self, how='trapz', unit='s'):
    '''Numerically integrate the time series.

    @param how: the method to use (trapz by default)
    @return

    Available methods:
     * trapz - trapezoidal
     * cumtrapz - cumulative trapezoidal
     * simps - Simpson's rule
     * romb - Romberg's rule

    See http://docs.scipy.org/doc/scipy/reference/integrate.html for the method details,
    or the source code:
    https://github.com/scipy/scipy/blob/master/scipy/integrate/quadrature.py
    '''
    available_rules = set(['trapz', 'cumtrapz', 'simps', 'romb'])
    if how in available_rules:
        rule = integrate.__getattribute__(how)
    else:
        print('Unsupported integration rule: %s' % (how))
        print('Expecting one of these sample-based integration rules: %s' % (str(list(available_rules))))
        raise AttributeError
    if how == 'cumtrapz':
        result = rule(self.values)
        result = np.insert(result, 0, 0, axis=0)
    else:
        result = rule(self.values)
    return result

pd.Series.integrate = integrate_method
#%% Setup (random) data
gen = np.random.RandomState(0)
x = gen.randn(100, 10)
names = [chr(97 + i) for i in range(10)]
df = pd.DataFrame(x, columns=names)

#%% Cumulative Integral
df_cumulative_integral = df.apply(lambda x: x.integrate('cumtrapz'))
df_integral = df.apply(lambda x: x.integrate('trapz'))

df_do_they_match = df_cumulative_integral.tail(1).round(3) == df_integral.round(3)
if df_do_they_match.all().all():
    print("Trapz produces the last row of cumtrapz")
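One caveat for modern environments: the sample-based rules used above were renamed in SciPy (cumtrapz is now scipy.integrate.cumulative_trapezoid, simps is simpson, trapz is trapezoid), and the old aliases have been removed in recent releases. A sketch of the same cumulative integral using the newer name, falling back to the old alias on older installs:

```python
import numpy as np
import pandas as pd
from scipy import integrate

# prefer the modern name; fall back to the old alias on older SciPy
cumulative_trapezoid = getattr(integrate, "cumulative_trapezoid", None) \
    or integrate.cumtrapz

gen = np.random.RandomState(0)
df = pd.DataFrame(gen.randn(100, 10),
                  columns=[chr(97 + i) for i in range(10)])

# initial=0 prepends the zero row that integrate_method inserted by hand
df_cumulative = df.apply(
    lambda s: pd.Series(cumulative_trapezoid(s.values, initial=0),
                        index=s.index))

# the last row of the cumulative integral is the full trapezoidal integral
```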