Python: matching words from a list against words in lines
Below are two examples from the many lines I need to analyze and extract specific words from:
[40.748330000000003, -73.878609999999995] 6 2011-08-28 19:52:47 Sometimes I wish my life was a movie; #unreal I hate the fact I feel lonely surrounded by so many ppl
[37.786221300000001, -122.1965002] 6 2011-08-28 19:55:26 I wish I could lay up with the love of my life And watch cartoons all day.
The coordinates and numbers should be ignored.
The goal is to find out how many words in each tweet line appear in this keyword list:
['hate', 1]
['hurt', 1]
['hurting', 1]
['like', 5]
['lonely', 1]
['love', 10]
In addition, I want the sum of the values of the keywords found in each tweet line (e.g. ['love', 10]).
For example, for the sentence

'I hate to feel lonely at times'

the sum of the sentiment values hate = 1 and lonely = 1 equals 2, and the word count of the line is 7.
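The arithmetic in that example can be sketched in a few lines of Python; the dictionary form of the keyword list below is an assumption about how it might be stored:

```python
# Keyword sentiment values from the list above, stored as a dict
keywords = {'hate': 1, 'hurt': 1, 'hurting': 1, 'like': 5, 'lonely': 1, 'love': 10}

sentence = 'I hate to feel lonely at times'
words = sentence.split()
word_count = len(words)                                       # 7 words in the line
sentiment = sum(keywords[w] for w in words if w in keywords)  # hate + lonely = 2
print(word_count, sentiment)
```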
I have tried a list-against-list approach, and even tried looping over every sentence and keyword, but none of it worked: the number of tweets and keywords is large, so I need a loop-based approach to find the values.
What I want to know is the sum of the sentiment values of the keywords found in each line, and how many words each line contains.
Thanks in advance for your insight!! :)
我的代码:
try:
    KeywordFileName = input('Input keyword file name: ')
    KeywordFile = open(KeywordFileName, 'r')
except FileNotFoundError:
    print('The file you entered does not exist or is not in the directory')
    exit()
KeyLine = KeywordFile.readline()
while KeyLine != '':
    KeyLine = KeyLine.rstrip()
    # each keyword line is "word,value"; avoid shadowing the built-in name list
    pair = KeyLine.split(',')
    pair[1] = int(pair[1])
    print(pair)
    KeyLine = KeywordFile.readline()
try:
    TweetFileName = input('Input Tweet file name: ')
    TweetFile = open(TweetFileName, 'r')
except FileNotFoundError:
    print('The file you entered does not exist or is not in the directory')
    exit()
TweetLine = TweetFile.readline()
while TweetLine != '':
    TweetLine = TweetLine.rstrip()
    TweetLine = TweetFile.readline()
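One of the comments suggests storing the keyword list as a dictionary rather than printing each pair; a minimal sketch of that parsing step, using io.StringIO as a stand-in for the keyword file (the sample contents are hypothetical):

```python
import io

# Stand-in for the keyword file; each line is "word,value"
keyword_file = io.StringIO('hate,1\nhurt,1\nlike,5\nlove,10\n')
keywords = {}
for raw in keyword_file:
    word, value = raw.rstrip().split(',')
    keywords[word] = int(value)
print(keywords)  # {'hate': 1, 'hurt': 1, 'like': 5, 'love': 10}
```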
The easiest way is to use word_tokenize from the nltk library on a per-tweet basis:
from nltk.tokenize import word_tokenize
import collections
import re
# Sample text from above
s = '[40.748330000000003, -73.878609999999995] 6 2011-08-28 19:52:47 Sometimes I wish my life was a movie; #unreal I hate the fact I feel lonely surrounded by so many ppl'
num_regex = re.compile(r"[+-]?\d+(?:\.\d+)?")
# Removing the numbers from the text
s = num_regex.sub('',s)
# Tokenization
tokens = word_tokenize(s)
# Counting the words
fdist = collections.Counter(tokens)
print(fdist)
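From the token counts, the sentiment total follows by intersecting the Counter with the keyword values; a minimal, self-contained sketch (using str.split in place of word_tokenize so it runs without nltk installed):

```python
import collections

# Keyword values from the question
keywords = {'hate': 1, 'hurt': 1, 'hurting': 1, 'like': 5, 'lonely': 1, 'love': 10}

s = ('Sometimes I wish my life was a movie; #unreal I hate the fact '
     'I feel lonely surrounded by so many ppl')
tokens = s.split()  # stand-in for nltk's word_tokenize
fdist = collections.Counter(tokens)
# Sum value * occurrence count for every token that is a keyword
sentiment = sum(keywords[w] * count for w, count in fdist.items() if w in keywords)
print(len(tokens), sentiment)  # 21 words, sentiment total 2
```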
If your tweets are in a .txt file and the tweet lines follow the pattern you described in the question, you can try the following:
import re
import json

pattern = r'\d{2}:\d{2}:\d{2}\s([a-zA-Z].+)'
sentiment_dict = {'hate': 1, 'hurt': 1, 'hurting': 1, 'like': 5, 'lonely': 1, 'love': 10}
final = []
with open('senti.txt', 'r+') as f:
    for line in f:
        # capture the tweet text to the right of the HH:MM:SS timestamp
        match = re.finditer(pattern, line)
        for find in match:
            if find.group(1).split():
                final.append(find.group(1).split())
line = []
for item in final:
    final_dict = {}
    for sub_item in item:
        if sub_item in sentiment_dict:
            if sub_item not in final_dict:
                final_dict[sub_item] = [sentiment_dict.get(sub_item)]
            else:
                final_dict[sub_item].append(sentiment_dict.get(sub_item))
    # (words, word count, summed sentiment per keyword)
    line.append((item, len(item), {key: sum(value) for key, value in final_dict.items()}))
result = json.dumps(line, indent=2)
print(result)
Output:
[
[
[
"Sometimes", #tweets line or all words
"I",
"wish",
"my",
"life",
"was",
"a",
"movie;",
"#unreal",
"I",
"hate",
"the",
"fact",
"I",
"feel",
"lonely",
"surrounded",
"by",
"so",
"many",
"ppl"
],
21, #count of words in tweets
{
"lonely": 1, #sentiment count
"hate": 1
}
],
[
[
"I",
"wish",
"I",
"could",
"lay",
"up",
"with",
"the",
"love",
"of",
"my",
"life",
"And",
"watch",
"cartoons",
"all",
"day."
],
17,
{
"love": 10
}
],
[
[
"I",
"hate",
"to",
"feel",
"lonely",
"at",
"times"
],
7,
{
"lonely": 1,
"hate": 1
}
]
]
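The capture in the pattern above can be checked directly against one of the sample lines (a quick sketch):

```python
import re

line = ('[40.748330000000003, -73.878609999999995] 6 2011-08-28 19:52:47 '
        'Sometimes I wish my life was a movie; #unreal I hate the fact '
        'I feel lonely surrounded by so many ppl')
# Anchor on the HH:MM:SS timestamp; group(1) captures the tweet text after it
pattern = r'\d{2}:\d{2}:\d{2}\s([a-zA-Z].+)'
m = re.search(pattern, line)
print(m.group(1))  # the tweet text, starting at 'Sometimes'
```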
If that pattern doesn't work for your file, alternative options for the regex:
r'[a-zA-Z].+'  # if you use this one, change find.group(1) to find.group()
r'(?

Comments:
You'd better remove the numbers, use nltk word tokenization, and count.
I think you might want to first turn the whole line into a string, then use a regex to keep only the part to the right of the timestamp. Secondly, I would store your keyword list as a dictionary.
Your code is incomplete and does not include everything the project requires. @jake, are these lines in a .txt file or something else?