Python: How to split a string on commas or periods in nltk


I want to split a string on commas and/or periods in nltk. I tried sent_tokenize(), but it only splits on periods.

I also tried this code:

from nltk.tokenize.punkt import PunktSentenceTokenizer, PunktLanguageVars

ex_sent = "This is an example showing sentence filtration.This is how it is done, in case of Python I want to learn more. So, that i can have some experience over it, by it I mean python."

class CommaPoint(PunktLanguageVars):
    sent_end_chars = ('.', '?', '!', ',')

tokenizer = PunktSentenceTokenizer(lang_vars=CommaPoint())
n_w = tokenizer.tokenize(ex_sent)
print(n_w)
The output of the code above is:

['This is an example showing sentence filtration.This is how it is done,', 'in case of Python I want to learn more.', 'So,', 'that i can have some experience over it,', 'by it I mean python.']
When a '.' has no space after it, the tokenizer treats the surrounding text as a single word, so 'filtration.This' is never split.

I want the output to be:

['This is an example showing sentence filtration.', 'This is how it is done,', 'in case of Python I want to learn more.', 'So,', 'that i can have some experience over it,', 'by it I mean python.']

How about doing something simple with re:

>>> import re
>>> sent = "This is an example showing sentence filtration.This is how it is done, in case of Python I want to learn more. So, that i can have some experience over it, by it I mean python."
>>> re.split(r'[.,]', sent)
['This is an example showing sentence filtration', 'This is how it is done', ' in case of Python I want to learn more', ' So', ' that i can have some experience over it', ' by it I mean python', '']
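If the leading spaces and the empty trailing element bother you, a small list comprehension cleans them up. A minimal sketch (the clean-up step is my addition, not part of the answer above):

```python
import re

sent = "This is an example showing sentence filtration.This is how it is done, in case of Python I want to learn more. So, that i can have some experience over it, by it I mean python."

# Split on '.' or ',', then strip surrounding whitespace and drop empty pieces
parts = [p.strip() for p in re.split(r'[.,]', sent) if p.strip()]
print(parts)
```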
To keep the separators, you can use a capturing group:

>>> re.split(r'([.,])', sent)
['This is an example showing sentence filtration', '.', 'This is how it is done', ',', ' in case of Python I want to learn more', '.', ' So', ',', ' that i can have some experience over it', ',', ' by it I mean python', '.', '']
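Because re.split with a capturing group alternates text and delimiter, you can zip the two halves back together to reattach each separator to the chunk before it. A sketch of this pairing step (my own follow-up, assuming the same sample string):

```python
import re

sent = "This is an example showing sentence filtration.This is how it is done, in case of Python I want to learn more. So, that i can have some experience over it, by it I mean python."

# The result alternates text, delimiter, text, delimiter, ..., ending with ''
parts = re.split(r'([.,])', sent)

# Pair every text chunk with the delimiter that follows it
sentences = [text.strip() + delim for text, delim in zip(parts[0::2], parts[1::2])]
print(sentences)
```

This produces exactly the output the question asks for, including the split at the glued 'filtration.This' boundary.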

In that case, you can replace every comma in the string with a period and then tokenize it:

from nltk.tokenize import sent_tokenize

ex_sent = "This is an example showing sentence filtration.This is how it is done, in case of Python I want to learn more. So, that i can have some experience over it, by it I mean python."

ex_sent = ex_sent.replace(",", ".")
n_w = sent_tokenize(ex_sent, 'english')
print(n_w)
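Note that this still misses the 'filtration.This' boundary, because Punkt-based tokenizers expect a space after the period. A preprocessing step can insert one; this re.sub pattern is my own suggestion, not part of the answer:

```python
import re

ex_sent = "This is an example showing sentence filtration.This is how it is done, in case of Python I want to learn more. So, that i can have some experience over it, by it I mean python."

# Insert a space after any '.' that is glued straight onto the next
# character, so a sentence tokenizer can see the boundary
spaced = re.sub(r'\.(?=\S)', '. ', ex_sent)
print(spaced)
```

You can then run the replace-and-tokenize snippet above on spaced instead of ex_sent.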

Can you be more specific with your question? Give some examples of input and expected output, and try to say what you have already tried. Take a look.

Hi, this is my first time on Stack Overflow. I tried to explain my problem and I hope you can answer it. Thanks.

Hi alvas, I hope you can help me this time.

Thank you, alvas. But as you can see, in this example it does split the sentences, yet it also strips out the commas and periods that I want to keep. Is there anything else I can do in nltk?

It's not hard to put the commas and periods back; I suggest you use native Python libraries as much as possible if they give you the output you need. If you really must use nltk, note that general NLP models usually work best on normally formatted news text.