Python Stanford NER and POS taggers, multithreaded for a large volume of data

Tags: python, multithreading, nltk, stanford-nlp

I am trying to parse roughly 23,000 documents with the Stanford NER and Stanford POS taggers. I implemented it with the following pseudocode:

for each in document:
  eachSentences = PunktTokenize(each)
  #code to generate NER Tagger
  #code to generate POS Taggers on the above output
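
For reference, a minimal concrete version of that pseudocode using NLTK's Stanford wrappers might look like the sketch below; the jar and model paths, and the documents list, are placeholders rather than the actual setup:

from nltk.tokenize import sent_tokenize               # Punkt sentence tokenizer
from nltk.tag import StanfordNERTagger, StanfordPOSTagger

# placeholder paths for the Stanford 3.7.0 jars and models mentioned below
ner_tagger = StanfordNERTagger('classifiers/english.all.3class.distsim.crf.ser.gz',
                               'stanford-ner.jar')
pos_tagger = StanfordPOSTagger('models/english-bidirectional-distsim.tagger',
                               'stanford-postagger.jar')

for doc in documents:                                  # the ~23,000 documents
    for sentence in sent_tokenize(doc):                # PunktTokenize(each)
        tokens = sentence.split()
        ner_tags = ner_tagger.tag(tokens)              # one Java subprocess per call
        pos_tags = pos_tagger.tag(tokens)              # and another per call here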
For a 4-core machine with 15 GB of RAM, the run time for NER alone was about 945 hours. I tried to improve performance with the threading library, but got the following error:

Exception in thread Thread-2:
Traceback (most recent call last):
  File "/usr/lib/python2.7/threading.py", line 801, in __bootstrap_inner
    self.run()
  File "/usr/lib/python2.7/threading.py", line 754, in run
    self.__target(*self.__args, **self.__kwargs)
  File "removeStopWords.py", line 75, in partofspeechRecognition
    listOfRes_new = namedEntityRecognition(listRes[min:max])
  File "removeStopWords.py", line 63, in namedEntityRecognition
    listRes_ner.append(namedEntityRecognitionResume(eachResSentence))
  File "removeStopWords.py", line 50, in namedEntityRecognitionResume
    ner2Tags = ner2.tag(each.title().split())
  File "/home/datascience/pythonEnv/local/lib/python2.7/site-packages/nltk/tag/stanford.py", line 71, in tag
    return sum(self.tag_sents([tokens]), [])
  File "/home/datascience/pythonEnv/local/lib/python2.7/site-packages/nltk/tag/stanford.py", line 98, in tag_sents
    os.unlink(self._input_file_path)
OSError: [Errno 2] No such file or directory: '/tmp/tmpvMNqwB'
I am using NLTK version 3.2.1, the Stanford NER and POS 3.7.0 jar files, and the threading module. As far as I can tell, this may be caused by a thread lock on /tmp; please correct me if I am wrong. I would also like to know the best way to run the above with threads, or a better implementation altogether.


Please ignore the name of the Python file; I still haven't removed stop words or punctuation from the raw text.

Edit: Using cProfile and sorting by cumulative time, I got the following top 20 calls

600792 function calls (595912 primitive calls) in 60.795 seconds

Ordered by: cumulative time
List reduced from 3357 to 20 due to restriction <20>

ncalls  tottime  percall  cumtime  percall filename:lineno(function)
    1    0.000    0.000   60.811   60.811 removeStopWords.py:1(<module>)
    1    0.000    0.000   58.923   58.923 removeStopWords.py:76(partofspeechRecognition)
   28    0.001    0.000   58.883    2.103 /home/datascience/pythonEnv/local/lib/python2.7/site-packages/nltk/tag/stanford.py:69(tag)
   28    0.004    0.000   58.883    2.103 /home/datascience/pythonEnv/local/lib/python2.7/site-packages/nltk/tag/stanford.py:73(tag_sents)
   28    0.001    0.000   56.927    2.033 /home/datascience/pythonEnv/local/lib/python2.7/site-packages/nltk/internals.py:63(java)
  141    0.001    0.000   56.532    0.401 /usr/lib/python2.7/subprocess.py:769(communicate)
  140    0.002    0.000   56.530    0.404 /usr/lib/python2.7/subprocess.py:1408(_communicate)
  140    0.008    0.000   56.492    0.404 /usr/lib/python2.7/subprocess.py:1441(_communicate_with_poll)
  400   56.474    0.141   56.474    0.141 {built-in method poll}
    1    0.001    0.001   43.522   43.522 removeStopWords.py:69(partofspeechRecognitionRes)
    1    0.000    0.000   15.401   15.401 removeStopWords.py:62(namedEntityRecognition)
    1    0.001    0.001   15.367   15.367 removeStopWords.py:46(namedEntityRecognitionRes)
  141    0.004    0.000    2.302    0.016 /usr/lib/python2.7/subprocess.py:651(__init__)
  141    0.020    0.000    2.287    0.016 /usr/lib/python2.7/subprocess.py:1199(_execute_child)
   56    0.002    0.000    1.933    0.035 /home/datascience/pythonEnv/local/lib/python2.7/site-packages/nltk/internals.py:38(config_java)
   56    0.001    0.000    1.931    0.034 /home/datascience/pythonEnv/local/lib/python2.7/site-packages/nltk/internals.py:599(find_binary)
  112    0.002    0.000    1.930    0.017 /home/datascience/pythonEnv/local/lib/python2.7/site-packages/nltk/internals.py:582(find_binary_iter)
  118    0.009    0.000    1.928    0.016 /home/datascience/pythonEnv/local/lib/python2.7/site-packages/nltk/internals.py:453(find_file_iter)
    1    0.001    0.001    1.318    1.318 /usr/lib/python2.7/pickle.py:1383(load)
    1    0.046    0.046    1.317    1.317 /usr/lib/python2.7/pickle.py:851(load) 

The culprit here seems to be the Python wrapper. The Java implementation does not take nearly that much time; it takes roughly the time @Gabor Angeli mentioned. Give it a try.
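
If you stay with the NLTK wrapper, one way to cut that per-call overhead (my own suggestion, not necessarily what the answer above refers to) is to batch all sentences of a document into a single tag_sents call, so one Java process handles the whole batch instead of one process per sentence. A minimal sketch, assuming ner_tagger and pos_tagger are the StanfordNERTagger and StanfordPOSTagger instances from the earlier sketch:

from nltk.tokenize import sent_tokenize

# tag_sents writes the whole batch to one temp file and starts a single Java
# process for it, instead of one process per tag() call
sentences = [s.split() for s in sent_tokenize(doc)]
ner_tags_per_sentence = ner_tagger.tag_sents(sentences)
pos_tags_per_sentence = pos_tagger.tag_sents(sentences)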


Hope it helps.

This has probably been solved already, but for anyone trying to speed up Stanford NLP from Python, here is a proven answer.

Basically, it requires you to run an NER server in the background, call it through the sner library, and do all further Stanford-NLP-related work against it.

Found the answer; part of it is given below.

Start the Stanford NLP server in the background, from inside the unzipped Stanford NLP folder:

java -Djava.ext.dirs=./lib -cp stanford-ner.jar edu.stanford.nlp.ie.NERServer -port 9199 -loadClassifier ./classifiers/english.all.3class.distsim.crf.ser.gz
Then initialize the Stanford NLP server tagger in Python using the sner library:

from sner import Ner
tagger = Ner(host='localhost',port=9199)
Then run the tagger:

%%time
classified_text=tagger.get_entities(text)
print (classified_text)
Output:

    [('My', 'O'), ('name', 'O'), ('is', 'O'), ('John', 'PERSON'), ('Doe', 'PERSON')]
CPU times: user 4 ms, sys: 0 ns, total: 4 ms
Wall time: 18.2 ms
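
With the server running, the same tagger object can be reused across the entire corpus, so the JVM starts only once instead of once per call. A rough sketch, where documents stands in for the list of raw document strings:

from sner import Ner

tagger = Ner(host='localhost', port=9199)

ner_results = []
for doc in documents:                        # the ~23,000 documents
    # collapse newlines first; the NER server handles one line of text per request
    ner_results.append(tagger.get_entities(doc.replace('\n', ' ')))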

Is this about training a classifier, or applying one? 945 hours seems far longer than you would expect for tagging 2,300 documents (or for training a tagger on them), unless the documents are really large. I suspect there is a problem in your code (for example, creating a new tagger instance for every sentence); I would concentrate on fixing that rather than trying multithreading. Try profiling to see which part takes so long.

23,000 documents, each with roughly 20 to 25 sentences. I create one tagger instance at the beginning and use that same instance to classify every sentence. I am applying the NER classifier to the documents in order to tag them. I used tqdm to estimate the remaining time, and the best estimate was 600 hours, which still seems like a lot.

Ah OK, 23,000 rather than 2,300, my mistake. Still, that is far too long and you should do some profiling.

Could you elaborate on what you mean by profiling, in the context of NER and Python?

I am not familiar with the NLTK wrapper for CoreNLP, but for a collection this large it may be worth annotating with the raw Java code and saving the results. Those files may be of particular interest. You can use the -threads command-line flag to parallelize the computation. On 4 cores the annotation should not take more than a day; I would guess you can finish it in 6 to 12 hours.
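
For that raw Java route, an invocation of the CoreNLP pipeline over a list of files with that flag could look roughly like the following, run from inside the unpacked CoreNLP directory; the memory setting, annotator list, and file names are illustrative only:

java -mx8g -cp "*" edu.stanford.nlp.pipeline.StanfordCoreNLP -annotators tokenize,ssplit,pos,ner -threads 4 -filelist documents.txt -outputDirectory ner_output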