Python Counter: count items that appear in multiple lists and return tuples (item, count in list 1, count in list 2)
Tags: python, pandas, tuples, nltk

I'm taking part in the Kaggle "What's Cooking" competition and want to improve the counting of distinct ingredients. I realized it's silly to treat raw strings like "ground black pepper" and "black pepper" as different things, so I want to compare two lists:

1. the raw list, containing arbitrarily long ingredient strings such as "ground black pepper"
2. the processed ingredient list, where every ingredient string of >= 3 words is converted into a list of bigrams such as "ground black" and "black pepper"

After processing and preparing the lists (I ran a bigram generator over all ingredient phrases of 3 or more words), I ran a Counter, so for each list (raw and processed) I now have a sorted list of ("ingredient string", count) tuples:
import json
import string
import pandas as pd
from collections import Counter
from nltk.corpus import stopwords
from nltk.util import ngrams

with open('data/train.json', 'r') as f:
    train_json = f.read()
stop = stopwords.words('english') + list(string.punctuation)
trainset = json.loads(train_json)
# read the data in and export the ingredient column into a list
df = pd.read_json('data/train.json')
all_recipes = df.ingredients.tolist()
# extract all ingredients from all cuisines into one flat list
all_ingr_raw = [ingr.lower() for recipe in all_recipes for ingr in recipe]
# compare 2 lists for most common ingredients - raw unprocessed ingredients vs processed,
# where processed = ingredients of 1-2 words plus bigrams of every >=3 word ingredient
# take all ingredient strings of 3 or more words, i.e. more than 2 after split()
three_word_ingr = [ingr for ingr in all_ingr_raw if len(ingr.split()) > 2]
# make a list of sublists of bigram tuples for each string from above
raw_three_word_ngrams = [list(ngrams(phrase.split(),2)) for phrase in three_word_ingr]
# turn tuples into strings and flatten the list
proc_three_word_ngrams = [' '.join(pair) for sublist in raw_three_word_ngrams for pair in sublist]
# join all ingredient strings of 2 words or less with the flat list of all bigrams out of >=3 word strings
all_ingr_ngrams = [ingr for ingr in all_ingr_raw if len(ingr.split()) <= 2] + proc_three_word_ngrams
# return a sorted (descending in count) set of tuples (ingredient, count)
count_ingr_raw = Counter(all_ingr_raw).most_common()
count_ingr_ngrams = Counter(all_ingr_ngrams).most_common()
# NOTE: x[0] is a string, but count_ingr_raw / count_ingr_ngrams are lists of
# (ingredient, count) tuples, so these membership tests never match — hence common == []
common = [x for x in count_ingr_raw if x[0] in count_ingr_ngrams]
unique_raw = [x for x in count_ingr_raw if x[0] not in count_ingr_ngrams]
unique_proc = [x for x in count_ingr_ngrams if x[0] not in count_ingr_raw]
# print the shared and unique most common ingredients
print(common[:20])
print(unique_raw[:20])
print(unique_proc[:20])
[]
[(u'salt', 18049), (u'olive oil', 7972), (u'onions', 7972), (u'water', 7457), (u'garlic', 7380), (u'sugar', 6434), (u'garlic cloves', 6237), (u'butter', 4848), (u'ground black pepper', 4785), (u'all-purpose flour', 4632), (u'pepper', 4438), (u'vegetable oil', 4385), (u'eggs', 3388), (u'soy sauce', 3296), (u'kosher salt', 3113), (u'green onions', 3078), (u'tomatoes', 3058), (u'large eggs', 2948), (u'carrots', 2814), (u'unsalted butter', 2782)]
[(u'salt', 18049), (u'olive oil', 10916), (u'black pepper', 8039), (u'onions', 7972), (u'water', 7457), (u'garlic', 7380), (u'garlic cloves', 7110), (u'sugar', 6434), (u'ground black', 5004), (u'butter', 4848), (u'soy sauce', 4822), (u'vegetable oil', 4731), (u'all-purpose flour', 4632), (u'pepper', 4438), (u'bell pepper', 4190), (u'green onions', 3550), (u'eggs', 3388), (u'chicken broth', 3386), (u'kosher salt', 3179), (u'red pepper', 3169)]
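The empty `common` list above comes from testing a string key against a list of `(item, count)` tuples. A minimal sketch of a fix (the helper name `compare_counts` is illustrative, not from the original post): build the two `Counter` objects and intersect their key sets, which directly yields the `(item, count in list 1, count in list 2)` tuples the question asks for:

```python
from collections import Counter

def compare_counts(raw_items, processed_items):
    """Return (item, raw_count, processed_count) tuples for items present
    in both lists, sorted by raw count descending."""
    raw_counts = Counter(raw_items)
    proc_counts = Counter(processed_items)
    # Counters are dicts, so their key views support set intersection
    shared = raw_counts.keys() & proc_counts.keys()
    return sorted(
        ((item, raw_counts[item], proc_counts[item]) for item in shared),
        key=lambda t: t[1],
        reverse=True,
    )

# toy example with a few ingredient strings
raw = ['salt', 'salt', 'black pepper', 'olive oil']
proc = ['salt', 'black pepper', 'black pepper']
print(compare_counts(raw, proc))
# → [('salt', 2, 1), ('black pepper', 1, 2)]
```

The `unique_raw` / `unique_proc` lists follow the same pattern with key-set difference instead of intersection.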
First put the ngrams into a dataframe and then query the dataframe, rather than processing the ngrams first and only then putting them into a dataframe. Could you post your json file, or a sample of it, somewhere? That would make it easier to help you.
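The dataframe suggestion above can be sketched roughly as follows (column names are illustrative, not from the answer): load each counter into a DataFrame and merge on the ingredient column, so the shared/unique queries become joins instead of list comprehensions:

```python
import pandas as pd
from collections import Counter

# toy stand-ins for count_ingr_raw and count_ingr_ngrams
raw = ['salt', 'salt', 'black pepper', 'olive oil']
proc = ['salt', 'black pepper', 'black pepper']

df_raw = pd.DataFrame(list(Counter(raw).items()),
                      columns=['ingredient', 'count_raw'])
df_proc = pd.DataFrame(list(Counter(proc).items()),
                       columns=['ingredient', 'count_proc'])

# inner join keeps only ingredients present in both lists,
# giving one row per (ingredient, count_raw, count_proc)
common = df_raw.merge(df_proc, on='ingredient', how='inner')
print(common.sort_values('count_raw', ascending=False))
```

An outer join with `how='outer'` (NaN counts marking the "unique to one list" rows) would recover `unique_raw` and `unique_proc` in the same pass.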