Really fast word ngram vectorization in R
EDIT: The new text2vec package is excellent, and solves this problem (and many others) really well.

I have a fairly large text dataset in R, which I have imported as a character vector:
# Takes about 15 seconds
system.time({
  set.seed(1)
  # paste together a random, Poisson-distributed number of elements of x
  samplefun <- function(n, x, collapse){
    paste(sample(x, n, replace = TRUE), collapse = collapse)
  }
  # 10,000 random "words" built from letters, then 1,000,000 "sentences" built from those words
  words <- sapply(rpois(10000, 3) + 1, samplefun, letters, '')
  sents1 <- sapply(rpois(1000000, 5) + 1, samplefun, words, ' ')
})
I can feed this sparse matrix into other packages and do some really awesome quantitative analysis of the text data. Hooray!
Now, I'd like to extend this analysis to a bag-of-ngrams matrix rather than a bag-of-words matrix. So far, the fastest way I've found to do this is as follows (all of the ngram functions I could find on CRAN choked on this dataset):
In this case, my function for creating a bag-of-words matrix takes about 30 seconds, and my function for creating a bag-of-ngrams matrix takes about 500 seconds. Again, the existing n-gram vectorizers in R seem to choke on this dataset (though I'd love to be proven wrong!).
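For context on the text2vec package mentioned in the edit at the top, here is a minimal sketch of how a unigram-plus-bigram document-term matrix is typically built with it. This is my own illustration of the standard text2vec workflow, not code from the original post, and the object names are arbitrary:

library(text2vec)

# iterate over the documents, splitting on spaces as in the question
it <- itoken(sents1, tokenizer = space_tokenizer, progressbar = FALSE)

# build a vocabulary of unigrams and bigrams, then a sparse document-term matrix
vocab      <- create_vocabulary(it, ngram = c(1L, 2L))
vectorizer <- vocab_vectorizer(vocab)
dtm        <- create_dtm(it, vectorizer)
dim(dtm)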
EDIT 2
Timings vs. tau:
zach_t1 <- system.time(zach_ng1 <- text_to_ngrams(sents1))
tau_t1 <- system.time(tau_ng1 <- tau::textcnt(as.list(sents1), n = 2L, method = "string", recursive = TRUE))
tau_t1 / zach_t1 #1.598655
zach_t2 <- system.time(zach_ng2 <- text_to_ngrams(sents2))
tau_t2 <- system.time(tau_ng2 <- tau::textcnt(as.list(sents2), n = 2L, method = "string", recursive = TRUE))
tau_t2 / zach_t2 #1.9295619
This is a really interesting problem, and one I have spent a lot of time grappling with in the quanteda package. It involves three aspects, which I will comment on, although it is only the third that really addresses your question. The first two points explain why I have focused only on the ngram creation function, since, as you point out, that is where the speed gains can be made.
1. Tokenization. Here you are using stringi::stri_split_fixed() on the space character, which is the fastest but not the best method of tokenizing. We implemented almost exactly the same thing in quanteda::tokenize(x, what = "fastestword"). It is not the best because stringi can do much smarter implementations of whitespace delimiters. (Even the character class \\s is smarter, though slightly slower; that is implemented as what = "fasterword".) Your question was not about tokenization, though, so this point is just context.
2. Tabulating the document-feature matrix. Here we also use the Matrix package, index the documents and features (I call them features, not terms), and create a sparse matrix directly, just as you do in the code above. But your use of match() is a lot faster than the match/merge approach we were using through data.table. I am going to recode the quanteda::dfm() function, because your method is more elegant and faster. Really, really glad I saw this! (A minimal sketch of this match()-plus-sparse-matrix pattern is shown just after this list.)
3. ngram creation. Here is where I think I can actually help on performance. We implement this in quanteda through an argument to quanteda::tokenize() called ngrams = c(1), where the value can be any set of integers. Our match for unigrams and bigrams would be ngrams = 1:2, for instance. You can examine the code (see the internal function ngram()). I have reproduced it below and written a wrapper so that we can compare it directly to your find_ngrams() function.
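Regarding point 2: the question's original matrix-construction code is not reproduced in this copy of the post, so here is a minimal, hypothetical sketch of the match() plus Matrix::sparseMatrix() tabulation pattern being described. The toy docs object and variable names are mine, not the original code:

library(Matrix)

docs  <- list(c("the", "quick", "fox"), c("the", "lazy", "dog", "the"))
vocab <- unique(unlist(docs))                # feature (term) index
i <- rep(seq_along(docs), lengths(docs))     # document index of every token
j <- match(unlist(docs), vocab)              # feature index of every token
m <- sparseMatrix(i = i, j = j, x = 1L,
                  dims = c(length(docs), length(vocab)),
                  dimnames = list(NULL, vocab))
m  # duplicated (i, j) pairs are summed, giving per-document counts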
Code (reproduced in full further below):
(This already works, but it is slower than your overall result, because the way you create the final sparse matrix object is faster. I will be changing that soon, though.)

Here is a test using the dev version of tokenizers, which you can get with devtools::install_github("ropensci/tokenizers").
Using the definitions of sents1, sents2, and find_ngrams() above:
library(stringi)
library(magrittr)
library(tokenizers)
library(microbenchmark)
library(pbapply)
set.seed(198)
sents1_sample <- sample(sents1, 1000)
sents2_sample <- sample(sents2, 1000)
test_sents1 <- microbenchmark(
find_ngrams(stri_split_fixed(sents1_sample, ' '), n = 2),
tokenize_ngrams(sents1_sample, n = 2),
times = 25)
test_sents1
Unit: milliseconds
                                                      expr        min         lq       mean     median         uq        max neval cld
 find_ngrams(stri_split_fixed(sents1_sample, " "), n = 2)  79.855282  83.292816 102.564965  93.622532 109.398341 226.568870    25   b
                     tokenize_ngrams(sents1_sample, n = 2)   4.048635   5.147252   5.472604   5.479414   5.805586   6.595556    25  a
Testing on sents2:
test_sents2 <- microbenchmark(
find_ngrams(stri_split_fixed(sents2_sample, ' '), n = 2),
tokenize_ngrams(sents2_sample, n = 2),
times = 25)
test_sents2
Unit: milliseconds
                                                      expr      min       lq     mean   median       uq      max neval cld
 find_ngrams(stri_split_fixed(sents2_sample, " "), n = 2) 509.4257 521.7575 562.9227 529.4479 554.6749 844.6353    25   b
                     tokenize_ngrams(sents2_sample, n = 2) 288.6050 295.3262 306.6635 306.4858 310.6952 332.5479    25  a
Checking the timing directly:
timing <- system.time({find_ngrams(stri_split_fixed(sents1, ' '), n = 2)})
timing
user system elapsed
90.499 0.506 91.309
timing_tokenizers <- system.time({tokenize_ngrams(sents1, n = 2)})
timing_tokenizers
user system elapsed
6.940 0.022 6.964
timing <- system.time({find_ngrams(stri_split_fixed(sents2, ' '), n = 2)})
timing
user system elapsed
138.957 3.131 142.581
timing_tokenizers <- system.time({tokenize_ngrams(sents2, n = 2)})
timing_tokenizers
user system elapsed
65.22 1.57 66.91
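As a toy illustration of what tokenize_ngrams() returns (my own example, separate from the benchmarks above; the output shown assumes the default lowercase = TRUE):

library(tokenizers)
tokenize_ngrams("The quick brown fox jumps over the lazy dog", n = 2)
# [[1]]
# [1] "the quick"   "quick brown" "brown fox"   "fox jumps"   "jumps over"
# [6] "over the"    "the lazy"    "lazy dog"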
Comments:

Hmm, have you considered tau::textcnt(as.list(sents), n = 2L, method = "string", recursive = TRUE) instead of find_ngrams? It takes half the time, but only delivers bigrams (n = 2).

I haven't tried that one yet, but will. Bigrams would work, if it is faster than my code above on both datasets.

@lukeA On both datasets tau::textcnt is about 50% slower on my system. I'll update my question with timings and example code; please try it on your system and compare the results.

stringdist::qgrams does really fast character qgrams. The author is currently working on supporting words (ints).

@Zach Strange. Now I get tau_t1 / zach_t1 = 649.48 / 675.82. Not much of a difference anymore.

I'm really glad we could help each other out! Me too. The GitHub version of quanteda now incorporates the changes to tokenize() and dfm() using the methods from this post. The approach I describe at the end of my answer should now work fast for you. Will get to the rest of the GitHub issues soon. Thanks!

Compared to Zach's answer, his approach is still much faster than quanteda. How come? I thought this should have been fixed after your change, Ken Benoit.

@ambodi quanteda::ngrams() has changed a bit since this post, so I will review it soon and get back to you.

@KenBenoit Thanx. I would really like to use quanteda because I like the API, but since my text files are huge, I'm reverting for now and using Zach's solution.
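A tiny illustration of the stringdist::qgrams() call mentioned in the comments above; note that it counts character q-grams rather than word ngrams (the example is mine):

library(stringdist)
qgrams("hello world", q = 2)  # one row of character 2-gram counts, q-grams as column names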
# wrapper
find_ngrams2 <- function(x, ngrams = 1, concatenator = " ") {
  # if ngrams is a full sequence 1:k, ngram() can build all sizes in one pass
  if (sum(1:length(ngrams)) == sum(ngrams)) {
    result <- lapply(x, ngram, n = length(ngrams), concatenator = concatenator, include.all = TRUE)
  } else {
    result <- lapply(x, function(x) {
      xnew <- c()
      for (n in ngrams)
        xnew <- c(xnew, ngram(x, n, concatenator = concatenator, include.all = FALSE))
      xnew
    })
  }
  result
}
# does the work
ngram <- function(tokens, n = 2, concatenator = "_", include.all = FALSE) {
  if (length(tokens) < n)
    return(NULL)
  # start with lower ngrams, or just the specified size if include.all = FALSE
  start <- ifelse(include.all,
                  1,
                  ifelse(length(tokens) < n, 1, n))
  # set max size of ngram at max length of tokens
  end <- ifelse(length(tokens) < n, length(tokens), n)
  all_ngrams <- c()
  # outer loop for all ngrams down to 1
  for (width in start:end) {
    new_ngrams <- tokens[1:(length(tokens) - width + 1)]
    # inner loop for ngrams of width > 1
    if (width > 1) {
      for (i in 1:(width - 1))
        new_ngrams <- paste(new_ngrams,
                            tokens[(i + 1):(length(tokens) - width + 1 + i)],
                            sep = concatenator)
    }
    # paste onto previous results and continue
    all_ngrams <- c(all_ngrams, new_ngrams)
  }
  all_ngrams
}
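To make the include.all argument concrete, here is a quick call of the ngram() function above on a toy token vector (my example; the output follows from the code as written):

ngram(c("the", "quick", "brown", "fox"), n = 2, concatenator = "_", include.all = TRUE)
# [1] "the"         "quick"       "brown"       "fox"         "the_quick"   "quick_brown" "brown_fox"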
txt <- c("The quick brown fox named Seamus jumps over the lazy dog.",
         "The dog brings a newspaper from a boy named Seamus.")
tokens <- tokenize(toLower(txt), removePunct = TRUE)
tokens
# [[1]]
# [1] "the" "quick" "brown" "fox" "named" "seamus" "jumps" "over" "the" "lazy" "dog"
#
# [[2]]
# [1] "the" "dog" "brings" "a" "newspaper" "from" "a" "boy" "named" "seamus"
#
# attr(,"class")
# [1] "tokenizedTexts" "list"
microbenchmark::microbenchmark(zach_ng <- find_ngrams(tokens, 2),
                               ken_ng <- find_ngrams2(tokens, 1:2))
# Unit: microseconds
# expr min lq mean median uq max neval
# zach_ng <- find_ngrams(tokens, 2) 288.823 326.0925 433.5831 360.1815 542.9585 897.469 100
# ken_ng <- find_ngrams2(tokens, 1:2) 74.216 87.5150 130.0471 100.4610 146.3005 464.794 100
str(zach_ng)
# List of 2
# $ : chr [1:21] "the" "quick" "brown" "fox" ...
# $ : chr [1:19] "the" "dog" "brings" "a" ...
str(ken_ng)
# List of 2
# $ : chr [1:21] "the" "quick" "brown" "fox" ...
# $ : chr [1:19] "the" "dog" "brings" "a" ...
tokens <- stri_split_fixed(sents1, ' ')
zach_ng1_t1 <- system.time(zach_ng1 <- find_ngrams(tokens, 2))
ken_ng1_t1 <- system.time(ken_ng1 <- find_ngrams2(tokens, 1:2))
zach_ng1_t1
# user system elapsed
# 230.176 5.243 246.389
ken_ng1_t1
# user system elapsed
# 58.264 1.405 62.889
# quanteda can also build the unigram + bigram dfm directly:
dfm(sents1, ngrams = 1:2, what = "fastestword",
    toLower = FALSE, removePunct = FALSE, removeNumbers = FALSE, removeTwitter = TRUE)