Python: perform an operation on columns based on the values of other columns in a table
I have a dataframe:
df = pd.DataFrame([["A",1,98,88,"",567,453,545,656,323,756], ["B",1,99,"","",231,232,234,943,474,345], ["C",1,97,67,23,543,458,456,876,935,876], ["B",1,"",79,84,895,237,678,452,545,453], ["A",1,45,"",58,334,778,234,983,858,657], ["C",1,23,55,"",183,565,953,565,234,234]], columns=["id","date","col1","col2","col3","col1_num","col1_deno","col3_num","col3_deno","col2_num","col2_deno"])
I need to set the `_num` and `_deno` columns to NaN/empty based on their base column. For example: if 'col1' is empty for a particular row, set the values of 'col1_num' and 'col1_deno' to NaN/empty for that row. Repeat the same process for 'col2' (with 'col2_num' and 'col2_deno') and 'col3' (with 'col3_num' and 'col3_deno').
Expected output:
df_out = pd.DataFrame([["A",1,98,88,"",567,453,"","",323,756], ["B",1,99,"","",231,232,"","","",""], ["C",1,97,67,23,543,458,456,876,935,876], ["B",1,"",79,84,"","",678,452,545,453], ["A",1,45,"",58,334,778,234,983,"",""], ["C",1,23,55,"",183,565,"","",234,234]], columns=["id","date","col1","col2","col3","col1_num","col1_deno","col3_num","col3_deno","col2_num","col2_deno"])
How can I do this?

Let's try boolean masking:
# select the columns
c = pd.Index(['col1', 'col2', 'col3'])
# create boolean mask
m = df[c].eq('').to_numpy()
# mask the values in `_num` and `_deno` like columns
df[c + '_num'] = df[c + '_num'].mask(m, '')
df[c + '_deno'] = df[c + '_deno'].mask(m, '')
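A minimal runnable sketch of the masking step above, on a trimmed version of the question's sample frame (only 'col1' and its dependents), assuming empty strings mark missing values:

```python
import pandas as pd

df = pd.DataFrame(
    [["A", 98, 567, 453], ["B", "", 231, 232]],
    columns=["id", "col1", "col1_num", "col1_deno"],
)

c = pd.Index(["col1"])
m = df[c].eq("").to_numpy()  # True where the base column is empty

# blank out the paired _num/_deno cells wherever the base column is empty
df[c + "_num"] = df[c + "_num"].mask(m, "")
df[c + "_deno"] = df[c + "_deno"].mask(m, "")
```

Row "B" has an empty 'col1', so its 'col1_num' and 'col1_deno' become empty; row "A" is left untouched.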
@shubham's answer is simple and to the point, and I believe it will be faster too; this is just an option in case you cannot (or don't want to) list all the columns. Get the list of columns that need changing:
cols = [col for col in df if col.startswith('col')]
['col1',
'col2',
'col3',
'col1_num',
'col1_deno',
'col3_num',
'col3_deno',
'col2_num',
'col2_deno']
Create a dictionary pairing col1 with the columns to be changed, the same for col2, and so on:
from collections import defaultdict
d = defaultdict(list)
for col in cols:
    if "_" in col:
        d[col.split("_")[0]].append(col)
d
defaultdict(list,
{'col1': ['col1_num', 'col1_deno'],
'col3': ['col3_num', 'col3_deno'],
'col2': ['col2_num', 'col2_deno']})
Iterate over the dict to assign the new values:
for key, val in d.items():
    df.loc[df[key].eq(""), val] = ""
id date col1 col2 col3 col1_num col1_deno col3_num col3_deno col2_num col2_deno
0 A 1 98 88 567 453 323 756
1 B 1 99 231 232
2 C 1 97 67 23 543 458 456 876 935 876
3 B 1 79 84 678 452 545 453
4 A 1 45 58 334 778 234 983
5 C 1 23 55 183 565 234 234
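Putting the pieces of this answer together as one self-contained sketch (with a trimmed version of the question's sample data, keeping only 'col1' and 'col2'):

```python
from collections import defaultdict

import pandas as pd

df = pd.DataFrame(
    [["A", 98, 88, 567, 453, 323, 756], ["B", "", 79, 231, 232, 474, 345]],
    columns=["id", "col1", "col2", "col1_num", "col1_deno", "col2_num", "col2_deno"],
)

# pair each base column with its dependent _num/_deno columns
d = defaultdict(list)
for col in (c for c in df if c.startswith("col")):
    if "_" in col:
        d[col.split("_")[0]].append(col)

# wherever a base column is empty, blank out its dependent columns
for key, val in d.items():
    df.loc[df[key].eq(""), val] = ""
```

Row "B" has an empty 'col1', so 'col1_num' and 'col1_deno' are blanked for that row, while its 'col2' dependents are untouched.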
Solution with MultiIndex:
import numpy as np

#first move the identifier/testing columns into the index, so only data columns remain
df1 = df.set_index(['id','date'])
cols = df1.columns
#split column names on _ to build a MultiIndex
df1.columns = df1.columns.str.split('_', expand=True)
#base columns (no _) get NaN in the second level; compare them to empty string
m = df1.xs(np.nan, axis=1, level=1).eq('')
#broadcast the mask to all columns sharing the same first level
mask = m.reindex(df1.columns, axis=1, level=0)
#set new values by mask, then restore the original column names
df1 = df1.mask(mask, '').set_axis(cols, axis=1).reset_index()
print (df1)
id date col1 col2 col3 col1_num col1_deno col3_num col3_deno col2_num \
0 A 1 98 88 567 453 323
1 B 1 99 231 232
2 C 1 97 67 23 543 458 456 876 935
3 B 1 79 84 678 452 545
4 A 1 45 58 334 778 234 983
5 C 1 23 55 183 565 234
col2_deno
0 756
1
2 876
3 453
4
5 234
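The subtle step in this answer is `xs(np.nan, axis=1, level=1)`: after the split, a base column like 'col1' has no '_', so its second MultiIndex level becomes NaN, and selecting that NaN level picks out exactly the base columns. A minimal illustration:

```python
import numpy as np
import pandas as pd

df = pd.DataFrame([[98, 567, 453]], columns=["col1", "col1_num", "col1_deno"])
# splitting on '_' builds a MultiIndex; 'col1' gets NaN in the second level
df.columns = df.columns.str.split("_", expand=True)

# selecting the NaN second level returns only the base columns
base = df.xs(np.nan, axis=1, level=1)
```

Here `base` is a one-column frame containing just 'col1', which is what gets compared to the empty string above.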