When reading a CSV file in Python, how do I get the rows for the most recent day, sorted in ascending time order?
I want to get the rows for the most recent day, sorted in ascending time order.
The dataframe I have looks like this:
label uId adId operTime siteId slotId contentId netType
0 0 u147333631 3887 2019-03-30 15:01:55.617 10 30 2137 1
1 0 u146930169 1462 2019-03-31 09:51:15.275 3 32 1373 1
2 0 u139816523 2084 2019-03-27 08:10:41.769 10 30 2336 1
3 0 u106546472 1460 2019-03-31 08:51:41.085 3 32 1371 4
4 0 u106642861 2295 2019-03-27 22:58:03.679 3 32 2567 4
# This is not real data, just an example.
label uId adId operTime siteId slotId contentId netType
0 0 u147336431 3887 2019-04-04 00:08:42.315 1 54 2427 2
1 0 u146933269 1462 2019-04-04 01:06:16.417 30 36 1343 6
2 0 u139536523 2084 2019-04-04 02:08:58.079 15 23 1536 7
3 0 u106663472 1460 2019-04-04 03:21:13.050 32 45 1352 2
4 0 u121642861 2295 2019-04-04 04:36:08.653 3 33 3267 4
Since this CSV file has about 100 million rows, it is impossible to load all of them into my computer's memory. So I would like to get the rows for the most recent day, in ascending time order, while reading the CSV file.
For example, if the most recent day is 2019-04-04, the output would be:
label uId adId operTime siteId slotId contentId netType
0 0 u147336431 3887 2019-04-04 00:08:42.315 1 54 2427 2
1 0 u146933269 1462 2019-04-04 01:06:16.417 30 36 1343 6
2 0 u139536523 2084 2019-04-04 02:08:58.079 15 23 1536 7
3 0 u106663472 1460 2019-04-04 03:21:13.050 32 45 1352 2
4 0 u121642861 2295 2019-04-04 04:36:08.653 3 33 3267 4
Can anyone help me? Thanks for any suggestions.
As @anky_91 mentioned, you can use sort_values. Here is a simple example of how it works:
import pandas as pd

df = pd.DataFrame({'Symbol': ['A', 'A', 'A'],
                   'Date': ['02/20/2015', '01/15/2016', '08/21/2015']})
df.sort_values(by='Date')
Output:
         Date Symbol
1  01/15/2016      A
0  02/20/2015      A
2  08/21/2015      A
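Note that `Date` holds strings in this toy example, so `sort_values` compares them lexicographically, which does not match chronological order for the `MM/DD/YYYY` format. A small sketch (not part of the original answer) that parses the strings first:

```python
import pandas as pd

df = pd.DataFrame({'Symbol': ['A', 'A', 'A'],
                   'Date': ['02/20/2015', '01/15/2016', '08/21/2015']})

# parse the strings into real datetimes, then sort chronologically
df['Date'] = pd.to_datetime(df['Date'], format='%m/%d/%Y')
df_sorted = df.sort_values(by='Date')
print(df_sorted['Date'].dt.strftime('%m/%d/%Y').tolist())
# ['02/20/2015', '08/21/2015', '01/15/2016']
```

With real datetimes, 01/15/2016 correctly sorts last instead of first.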
Seconding what anky_91 said, sort_values() will be helpful here:
import pandas as pd
df = pd.read_csv('file.csv')
# >>> df
# label uId adId operTime siteId slotId contentId netType
# 0 0 u147333631 3887 2019-03-30 15:01:55.617 10 30 2137 1
# 1 0 u146930169 1462 2019-03-31 09:51:15.275 3 32 1373 1
# 2 0 u139816523 2084 2019-03-27 08:10:41.769 10 30 2336 1
# 3 0 u106546472 1460 2019-03-31 08:51:41.085 3 32 1371 4
# 4 0 u106642861 2295 2019-03-27 22:58:03.679 3 32 2567 4
sub_df = df[(df['operTime']>'2019-03-31') & (df['operTime']<'2019-04-01')]
# >>> sub_df
# label uId adId operTime siteId slotId contentId netType
# 1 0 u146930169 1462 2019-03-31 09:51:15.275 3 32 1373 1
# 3 0 u106546472 1460 2019-03-31 08:51:41.085 3 32 1371 4
final_df = sub_df.sort_values(by=['operTime'])
# >>> final_df
# label uId adId operTime siteId slotId contentId netType
# 3 0 u106546472 1460 2019-03-31 08:51:41.085 3 32 1371 4
# 1 0 u146930169 1462 2019-03-31 09:51:15.275 3 32 1373 1
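The day boundaries above are hardcoded; since the question asks for the most recent day in the file, one way to derive them (a sketch, assuming `operTime` parses as shown and the data fits in memory) is to normalize the maximum timestamp:

```python
import pandas as pd

# small stand-in for the question's dataframe
df = pd.DataFrame({
    'uId': ['u146930169', 'u106546472', 'u147333631'],
    'operTime': ['2019-03-31 09:51:15.275',
                 '2019-03-31 08:51:41.085',
                 '2019-03-30 15:01:55.617'],
})
df['operTime'] = pd.to_datetime(df['operTime'])

# the most recent day is the date (midnight) of the latest timestamp
recent_day = df['operTime'].max().normalize()
sub_df = df[df['operTime'] >= recent_day].sort_values('operTime')
print(sub_df['uId'].tolist())  # ['u106546472', 'u146930169']
```

`Timestamp.normalize()` drops the time-of-day component, so the filter keeps every row from that calendar day.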
I think you could also use a DatetimeIndex here; that may be necessary if the file is large enough. I'm assuming you can't read the whole file into memory and that the file is in random order. You can read the file in chunks and iterate over the chunks:
import pandas as pd

# read 500,000 rows of the file at a time
reader = pd.read_csv(
    'csv_file.csv',
    parse_dates=['operTime'],
    chunksize=500_000,
    header=0
)
recent_day = pd.Timestamp(2019, 4, 4)
next_day = recent_day + pd.Timedelta(days=1)
df_list = []
for chunk in reader:
    # keep only the rows that fall inside the date range
    date_rows = chunk.loc[
        (chunk['operTime'] >= recent_day) &
        (chunk['operTime'] < next_day)
    ]
    # append the dataframe of matching rows to the list
    if not date_rows.empty:
        df_list.append(date_rows)
final_df = pd.concat(df_list)
final_df = final_df.sort_values('operTime')
Check out sort_values().
Since you can't read the file into memory, you can read it in chunks and iterate. Check out the 'chunksize' parameter of read_csv.