Splitting a Python dataframe based on column values, then using the pieces in an algorithm
I'm currently using the Apriori algorithm from mlxtend for a simple frequent-pattern analysis. At the moment I look at all transactions together, but I would like to separate the analysis by country. My current script looks like this:
import pandas as pd
import numpy as np
import pyodbc
from mlxtend.preprocessing import TransactionEncoder
from mlxtend.frequent_patterns import apriori
from mlxtend.frequent_patterns import association_rules
dataset = pd.read_sql_query("""some query""", cnxn)
# Transform/prep dataset into list data
dataset_tx = dataset.groupby(['ReceiptCode'])['ItemCategoryName'].apply(list).values.tolist()
# Define classifier
te = TransactionEncoder()
# Binary-transform dataset
te_ary = te.fit(dataset_tx).transform(dataset_tx)
# Fit to new dataframe (sparse dataframe)
# Note: pd.SparseDataFrame was removed in pandas 1.0; on newer pandas,
# use a regular pd.DataFrame(te_ary, columns=te.columns_) instead
df = pd.SparseDataFrame(te_ary, columns=te.columns_)
# Run algorithm
frequent_itemsets = apriori(df, min_support=0.10, use_colnames=True)
frequent_itemsets['length'] = frequent_itemsets['itemsets'].apply(lambda x: len(x))
rules = association_rules(frequent_itemsets, metric="confidence", min_threshold=0.3)
Below is a sample of the dataset:
+----------------------+--+------------------+--+------------------+
| ReceiptCode | | ItemCategoryName | | StoreCountryName |
+----------------------+--+------------------+--+------------------+
| 0000P70322000031467 | | Food | | Denmark |
| 0000P70322000031867 | | Food | | Denmark |
| 0000P70322000051467 | | Interior | | Germany |
| 0000P70322000087468 | | Kitchen | | Switzerland |
| 0000P70322000031469 | | Leisure | | Germany |
| 0000P70322000031439 | | Food | | Switzerland |
+----------------------+--+------------------+--+------------------+
Is it possible to "automatically" create multiple dataframes based on the column
StoreCountryName
and then use them in the algorithm, i.e. run the analysis on a country-specific dataframe and loop over all countries? I know I could create the dataframes manually and then just apply the transformation and the analysis.

You can groupby and use a list comprehension to store the dataframes in a list, then iterate over it:
g = df.groupby('StoreCountryName')
dfs = [group for _,group in g]
for i in range(len(dfs)):
    dfs[i]['iteration'] = i  # do stuff to each frame
    print(f"{dfs[i]} \n")
ReceiptCode ItemCategoryName StoreCountryName iteration
0 0000P70322000031467 Food Denmark 0
1 0000P70322000031867 Food Denmark 0
ReceiptCode ItemCategoryName StoreCountryName iteration
2 0000P70322000051467 Interior Germany 1
4 0000P70322000031469 Leisure Germany 1
ReceiptCode ItemCategoryName StoreCountryName iteration
3 0000P70322000087468 Kitchen Switzerland 2
5 0000P70322000031439 Food Switzerland 2
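Applied to the sample data above, the list-comprehension split can be sketched as follows (the dataframe is reconstructed here from the sample table purely for illustration):

```python
import pandas as pd

# Reconstruct the sample dataset from the table above
df = pd.DataFrame({
    'ReceiptCode': ['0000P70322000031467', '0000P70322000031867',
                    '0000P70322000051467', '0000P70322000087468',
                    '0000P70322000031469', '0000P70322000031439'],
    'ItemCategoryName': ['Food', 'Food', 'Interior',
                         'Kitchen', 'Leisure', 'Food'],
    'StoreCountryName': ['Denmark', 'Denmark', 'Germany',
                         'Switzerland', 'Germany', 'Switzerland'],
})

# One dataframe per country (groupby sorts group keys by default)
dfs = [group for _, group in df.groupby('StoreCountryName')]

for i, country_df in enumerate(dfs):
    print(f"Group {i}: {country_df['StoreCountryName'].iloc[0]}, "
          f"{len(country_df)} rows")
```

Each element of `dfs` now contains only one country's receipts and can be passed through the TransactionEncoder/apriori steps from the question.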
Or you can define a function and use groupby and apply:
def myFunc(country):
    # do stuff
    ...

df.groupby('StoreCountryName').apply(myFunc)
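As a concrete (hypothetical) example of the apply approach, the per-group function could build the transaction lists for one country, mirroring the `groupby(['ReceiptCode'])` step in the question:

```python
import pandas as pd

df = pd.DataFrame({
    'ReceiptCode': ['R1', 'R1', 'R2', 'R3'],
    'ItemCategoryName': ['Food', 'Kitchen', 'Interior', 'Food'],
    'StoreCountryName': ['Denmark', 'Denmark', 'Denmark', 'Germany'],
})

def to_transactions(country_df):
    # Collapse each receipt into a list of item categories
    return (country_df.groupby('ReceiptCode')['ItemCategoryName']
                      .apply(list).tolist())

# One list of transactions per country, indexed by country name
tx_by_country = df.groupby('StoreCountryName').apply(to_transactions)
print(tx_by_country['Denmark'])  # [['Food', 'Kitchen'], ['Interior']]
```

The resulting Series maps each country to the nested-list format that `TransactionEncoder.fit` expects.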
How about

for store_country_name in dataset['StoreCountryName'].unique():
    ...

and then passing each subset to your algorithm? Alternatively, you can store them in a dict, e.g.:

store_country_dict = {}
for store_country_name in dataset['StoreCountryName'].unique():
    store_country_dict[store_country_name] = dataset.loc[dataset['StoreCountryName'] == store_country_name]
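Put together, the dict approach keeps each country's rows addressable by name; a minimal runnable sketch using the column names from the sample data:

```python
import pandas as pd

dataset = pd.DataFrame({
    'ReceiptCode': ['A1', 'A2', 'B1'],
    'ItemCategoryName': ['Food', 'Leisure', 'Kitchen'],
    'StoreCountryName': ['Denmark', 'Germany', 'Denmark'],
})

# Build one country-specific dataframe per unique country
store_country_dict = {}
for store_country_name in dataset['StoreCountryName'].unique():
    store_country_dict[store_country_name] = dataset.loc[
        dataset['StoreCountryName'] == store_country_name
    ]

# Each value can now be fed into the TransactionEncoder/apriori
# pipeline from the question, one country at a time
print(sorted(store_country_dict))  # ['Denmark', 'Germany']
```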