parLapply and POS tagging

Tags: r, parallel-processing

I am trying to POS tag a corpus of ~600k documents using parLapply and the openNLP R package. While I was able to successfully POS tag a different set of ~90k documents, after about 25 minutes of running the same code on the ~600k documents I get a strange error:

Error in checkForRemoteErrors(val) : 10 nodes produced errors; first error: no word token annotations found
The documents are simply digitized newspaper articles, and I run the tagger over the body field (after cleaning). That field is just the raw text, which I save into a list of strings.
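
The "no word token annotations found" message typically means the tokenizer produced no word tokens for at least one document, e.g. one that ended up empty after cleaning. A quick pre-check (assuming contentCleaned is the character vector of cleaned article bodies) would be:

# Find documents that are NA, empty, or whitespace-only after cleaning --
# these give the tokenizer nothing to annotate
suspect <- which(is.na(contentCleaned) | !nzchar(trimws(contentCleaned)))
length(suspect)  # how many suspicious documents
head(suspect)    # their indices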

Here is my code:

# I set the Java heap size (memory) allocation - I experimented with different sizes
options(java.parameters = "-Xmx3g")
# Convert the corpus into a list of strings
myCorpus <- lapply(contentCleaned, function(x){x <- as.String(x)})

# tag Corpus Function
tagCorpus <- function(x, ...){
    s <- as.String(x) # This is a repeat and may not be required
    WTA <- Maxent_Word_Token_Annotator()
    a2 <- Annotation(1L, "sentence", 1L, nchar(s))
    a2 <- annotate(s, WTA, a2)
    a3 <- annotate(s, PTA, a2)
    word_subset <- a3[a3$type == "word"]
    POStags <- unlist(lapply(word_subset$features, `[[`, "POS"))
    POStagged <- paste(sprintf("%s/%s", s[word_subset], POStags), collapse = " ")
    list(text = s, POStagged = POStagged, POStags = POStags, words = s[word_subset])
}
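
Before going parallel, the function can be smoke-tested serially on a single document (a minimal sketch; it assumes the annotators are created in the current session, just as they are on the workers below):

# Serial sanity check on one document
library(NLP)
library(openNLP)
PTA <- Maxent_POS_Tag_Annotator()
one.tagged <- tagCorpus(myCorpus[[1]])
str(one.tagged, max.level = 1)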

# I have 12 cores in my box
library(parallel)
cl <- makeCluster(mc <- getOption("cl.cores", detectCores() - 2))

# I tried both with and without exporting the word token annotator
clusterEvalQ(cl, {
    library(openNLP);
    library(NLP);
    PTA <- Maxent_POS_Tag_Annotator();
    WTA <- Maxent_Word_Token_Annotator()
})

# Each cluster node has the following description:
[[1]]
An annotator inheriting from classes
    Simple_Word_Token_Annotator Annotator
    with description
    Computes word token annotations using the Apache OpenNLP Maxent tokenizer employing the default model for language 'en'.
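
Note that each worker starts its own JVM, and java.parameters is only honoured if it is set before rJava is loaded, so the heap size set at the top of the script does not automatically apply on the workers. A variant that sets it per worker (same 3 GB value as above) would be:

clusterEvalQ(cl, {
    # Must run before library(openNLP), which loads rJava and starts the JVM
    options(java.parameters = "-Xmx3g")
    library(openNLP)
    library(NLP)
    PTA <- Maxent_POS_Tag_Annotator()
    WTA <- Maxent_Word_Token_Annotator()
})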

clusterEvalQ(cl, sessionInfo())

# clusterEvalQ output for each worker:

[[1]]
R version 3.4.4 (2018-03-15)
Platform: x86_64-pc-linux-gnu (64-bit)
Running under: Ubuntu 16.04.5 LTS

Matrix products: default
BLAS: /usr/lib/libblas/libblas.so.3.6.0
LAPACK: /usr/lib/lapack/liblapack.so.3.6.0

locale:
  [1] LC_CTYPE=en_US.UTF-8          LC_NUMERIC=C                    LC_TIME=en_US.UTF-8           LC_COLLATE=en_US.UTF-8       
  [5] LC_MONETARY=en_US.UTF-8       LC_MESSAGES=en_US.UTF-8       LC_PAPER=en_US.UTF-8          LC_NAME=en_US.UTF-8          
  [9] LC_ADDRESS=en_US.UTF-8        LC_TELEPHONE=en_US.UTF-8      LC_MEASUREMENT=en_US.UTF-8    LC_IDENTIFICATION=en_US.UTF-8

attached base packages:
[1] stats     graphics  grDevices utils     datasets  methods   base     

other attached packages:
[1] NLP_0.1-11    openNLP_0.2-6

loaded via a namespace (and not attached):
[1] openNLPdata_1.5.3-4 compiler_3.4.4      parallel_3.4.4      rJava_0.9-10    

packageDescription('openNLP') # Version: 0.2-6
packageDescription('parallel') # Version: 3.4.4

startTime <- Sys.time()
print(startTime)
corpus.tagged <- parLapply(cl, myCorpus, tagCorpus)
endTime <- Sys.time()
print(endTime)
endTime - startTime
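
Since checkForRemoteErrors() only reports the first of the node errors, one way to find out which documents actually trigger the failure is to wrap the tagger in tryCatch() (a diagnostic sketch; safeTagCorpus is a name introduced here, not part of the original run):

clusterExport(cl, "tagCorpus")  # make the tagger visible by name on the workers
# Return the error message instead of failing, so the offending
# documents can be located afterwards
safeTagCorpus <- function(x, ...) {
    tryCatch(tagCorpus(x, ...), error = function(e) conditionMessage(e))
}
corpus.tagged <- parLapply(cl, myCorpus, safeTagCorpus)
failed <- which(vapply(corpus.tagged, is.character, logical(1)))
length(failed)   # how many documents produced errors
stopCluster(cl)  # release the workers when done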

I have managed to get this working by directly using Tyler Rinker's qdap package. It took around 20 hours to run on the full corpus. Here is how the pos function from the qdap package does this as a one-liner:

corpus.tagged <- qdap::pos(myCorpus, parallel = TRUE, cores = detectCores() - 2)
corpus.tagged