Python regex on a pandas DataFrame
Input CSV file:
regex CSV table:
My code (including the Python modules) defines a DataFrame from the CSV file:
df = pd.read_csv("Default Profile.csv")
Remove underscores (_) and hyphens (-) from the Series field_name in df:
df.field_name = df.field_name.str.replace("[_-]", "", regex=True)
Change all characters in the Series field_name in df to lowercase:
df.field_name = df.field_name.str.lower()
Define the regex table:
regex_table = pd.read_csv("regex.csv")
Code is needed to update field_friendly_name & is_included_in_report:
For each regex in the regex table, look for the pattern in df.field_name. If a match is found, update the column field_friendly_name with the personal-information value and set the last column (is_included_in_report) to TRUE; if no match is found, set field_friendly_name to not_found and the last column to FALSE.
Example:
The word should consist only of the alternatives full|name|nm|txt|dsc and should also contain full.
Personal_Inforamtion,regex,addiitional_grep
Full Name,full|name|nm|txt|dsc,full
Then df should be updated as follows:
_id,field_name,field_friendly_name,purpose_of_use,category,data_source,schema,table,attribute_type,sample_values,mask_it,is_included_in_report
5e95a49b0985567430f8fc00,FullName,Full Name,,,,,,,,,TRUE
5e95a4dd0985567430f9ef16,xyz,not_found,,,,,,,,,FALSE
5e95a4dd0985567430f9ef17,FullNm,Full Name,,,,,,,,,TRUE
This is the desired output.
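The matching rule described above can be sketched row by row with `str.fullmatch` and `str.contains`. This is a minimal illustration with toy data; the column names (including the `Personal_Inforamtion` and `addiitional_grep` spellings) follow the sample CSVs shown in the question.

```python
import pandas as pd

# Toy data mirroring the question's sample (field names already lowercased)
df = pd.DataFrame({"field_name": ["fullname", "xyz", "fullnm"]})
regex_table = pd.DataFrame({
    "Personal_Inforamtion": ["Full Name"],
    "regex": ["full|name|nm|txt|dsc"],
    "addiitional_grep": ["full"],
})

# Defaults for the "no match" case
df["field_friendly_name"] = "not_found"
df["is_included_in_report"] = "FALSE"

# For each regex row: the field name must be built only from the
# alternatives (a full match of the repeated group) and must also
# contain the additional grep word
for _, row in regex_table.iterrows():
    mask = (
        df["field_name"].str.fullmatch(f"({row['regex']})+")
        & df["field_name"].str.contains(row["addiitional_grep"])
    )
    df.loc[mask, "field_friendly_name"] = row["Personal_Inforamtion"]
    df.loc[mask, "is_included_in_report"] = "TRUE"

print(df)
```

This loops once per regex-table row, so it is simple but not vectorized; the answer below shows a single-pass alternative built on `str.extract`.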
Alternatively, you can create a set of regex groups from the regex table file:
(full)|(first)|(last)|(legal)|(nick)
You can still adjust the last column of the regex table to get the more specific output you need. You can then append a not_found case to the regex DataFrame to prepare the data for use with str.extract, which extracts the groups from the first matching pattern. From the group matches we can then get the regex group index via idxmax on the row axis. After that, update df using the group index information:
import pandas as pd
import re

df = pd.read_csv("data.csv")
print(df)
regxt = pd.read_csv("regex_table.csv")
print(regxt)
# append the not_found fallback case
# (DataFrame.append was removed in pandas 2.0, so use pd.concat)
not_found = pd.DataFrame([["not_found", "", ""]], columns=regxt.columns)
regxt = pd.concat([regxt, not_found], ignore_index=True)
# create regex groups from the words in the last column;
# the not_found row contributes an empty group () that matches anything
regxl = regxt.iloc[:, 2].to_list()
regx_grps = "|".join(["(" + i + ")" for i in regxl])
# get the regex group match index: str.extract fills only the group
# of the first matching alternative
grp_match = df["field_name"].str.extract(regx_grps, flags=re.IGNORECASE)
grp_idx = (~grp_match.isnull()).idxmax(axis=1)
# map the group index back to the friendly name; the last row is the
# not_found fallback, so it is excluded from the report
df["field_friendly_name"] = grp_idx.map(lambda r: regxt.loc[r, "Personal_Inforamtion"])
df["is_included_in_report"] = grp_idx.map(lambda r: str(r != len(regxt) - 1).upper())
print(df)
Output df:
_id field_name field_friendly_name ... mask_it is_included_in_report
0 5e95a49b0985567430f8fc00 FullName Full Name ... NaN TRUE
1 5e95a4dd0985567430f9ef16 xyz not_found ... NaN FALSE
2 5e95a4dd0985567430f9ef17 FullNm Full Name ... NaN TRUE
3 5e95a4dd0985567430f9ef18 FirstName First Name ... NaN TRUE
4 5e95a49b0985567430f8fc01 abc not_found ... NaN FALSE
5 5e95a4dd0985567430f9ef19 FirstNm First Name ... NaN TRUE
6 5e95a4dd0985567430f9ef20 LastName Last Name ... NaN TRUE
7 5e95a4dd0985567430f9ef21 LastNm Last Name ... NaN TRUE
8 5e95a49b0985567430f8fc02 LegalName Legal Name ... NaN TRUE
9 5e95a4dd0985567430f9ef22 LegalNm Legal Name ... NaN TRUE
10 5e95a4dd0985567430f9ef23 NickName Nick Name ... NaN TRUE
11 5e95a4dd0985567430f9ef24 pqr not_found ... NaN FALSE
12 5e95a49b0985567430f8fc03 NickNm Nick Name ... NaN TRUE
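To see why the idxmax step picks the right regex-table row, here is a small standalone demonstration with a toy Series (the names are illustrative, not the question's data):

```python
import re
import pandas as pd

s = pd.Series(["FullName", "xyz", "NickNm"])
# One capture group per regex-table row, plus a final empty group that
# plays the not_found role: an empty pattern matches any string
pattern = "(full)|(nick)|()"
groups = s.str.extract(pattern, flags=re.IGNORECASE)
# str.extract fills only the group of the first matching alternative;
# idxmax over the non-null mask returns that group's column index per row
idx = (~groups.isnull()).idxmax(axis=1)
print(idx.tolist())  # [0, 2, 1]
```

"FullName" hits group 0, "xyz" falls through to the empty not_found group 2, and "NickNm" hits group 1; each index then maps straight back to a row of the regex table.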
Please don't flag this question as too complex; I found it very difficult even after a lot of Python training, and help from a Python expert would be greatly appreciated.