Python Spark NLP: 'JavaPackage' object is not callable
I am running Spark NLP text analysis in JupyterLab. Currently I am running this example code:
import sparknlp
from pyspark.sql import SparkSession
from sparknlp.pretrained import PretrainedPipeline
# create or get a Spark session
# spark = sparknlp.start()
spark = SparkSession.builder \
    .appName("ner") \
    .master("local[4]") \
    .config("spark.driver.memory", "8G") \
    .config("spark.driver.maxResultSize", "2G") \
    .config("spark.jars.packages", "com.johnsnowlabs.nlp:spark-nlp_2.11:2.6.5") \
    .config("spark.kryoserializer.buffer.max", "500m") \
    .getOrCreate()
print("sparknlp version", sparknlp.version(), "sparkversion", spark.version)
# download, load, and annotate a text with a pre-trained pipeline
pipeline = PretrainedPipeline('recognize_entities_dl', 'en')
result = pipeline.annotate('Harry Potter is a great movie')
I get the following error:
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
<ipython-input-4-bfd6884be04c> in <module>
15
16 #download, load, and annotate a text by pre-trained pipeline
---> 17 pipeline = PretrainedPipeline('recognize_entities_dl', 'en')
18 result = pipeline.annotate('Harry Potter is a great movie')
~/.pyenv/versions/3.7.9/lib/python3.7/site-packages/sparknlp/pretrained.py in __init__(self, name, lang, remote_loc, parse_embeddings, disk_location)
89 def __init__(self, name, lang='en', remote_loc=None, parse_embeddings=False, disk_location=None):
90 if not disk_location:
---> 91 self.model = ResourceDownloader().downloadPipeline(name, lang, remote_loc)
92 else:
93 self.model = PipelineModel.load(disk_location)
~/.pyenv/versions/3.7.9/lib/python3.7/site-packages/sparknlp/pretrained.py in downloadPipeline(name, language, remote_loc)
49 def downloadPipeline(name, language, remote_loc=None):
50 print(name + " download started this may take some time.")
---> 51 file_size = _internal._GetResourceSize(name, language, remote_loc).apply()
52 if file_size == "-1":
53 print("Can not find the model to download please check the name!")
~/.pyenv/versions/3.7.9/lib/python3.7/site-packages/sparknlp/internal.py in __init__(self, name, language, remote_loc)
190 def __init__(self, name, language, remote_loc):
191 super(_GetResourceSize, self).__init__(
--> 192 "com.johnsnowlabs.nlp.pretrained.PythonResourceDownloader.getDownloadSize", name, language, remote_loc)
193
194
~/.pyenv/versions/3.7.9/lib/python3.7/site-packages/sparknlp/internal.py in __init__(self, java_obj, *args)
127 super(ExtendedJavaWrapper, self).__init__(java_obj)
128 self.sc = SparkContext._active_spark_context
--> 129 self._java_obj = self.new_java_obj(java_obj, *args)
130 self.java_obj = self._java_obj
131
~/.pyenv/versions/3.7.9/lib/python3.7/site-packages/sparknlp/internal.py in new_java_obj(self, java_class, *args)
137
138 def new_java_obj(self, java_class, *args):
--> 139 return self._new_java_obj(java_class, *args)
140
141 def new_java_array(self, pylist, java_class):
~/.pyenv/versions/3.7.9/lib/python3.7/site-packages/pyspark/ml/wrapper.py in _new_java_obj(java_class, *args)
67 java_obj = getattr(java_obj, name)
68 java_args = [_py2java(sc, arg) for arg in args]
---> 69 return java_obj(*java_args)
70
71 @staticmethod
TypeError: 'JavaPackage' object is not callable
Jupyter:
jupyter core : 4.7.0
jupyter-notebook : 6.1.5
qtconsole : 5.0.1
ipython : 7.19.0
ipykernel : 5.4.2
jupyter client : 6.1.7
jupyter lab : 2.2.9
nbconvert : 6.0.7
ipywidgets : 7.5.1
nbformat : 5.0.8
traitlets : 5.0.5
I appreciate any help. Thanks.

Remove Spark 3.0.1 and leave only PySpark 2.4.x, because Spark NLP does not yet support Spark 3.x. Also use Java 8 instead of Java 11, since Spark 2.4 does not support Java 11.

I went down that path: I removed Spark from the classpath (SPARK_HOME no longer points to 3.0.1) and downgraded to Java 8, but then I hit some compatibility issues between PySpark 2.4.x and Python 3.7.9. Here is what finally worked:

1) Uninstall pyspark and spark-nlp
2) Remove the SPARK_HOME export from .bashrc
3) pip install pypandoc (resolves the compatibility issue above)
4) pip install pyspark==2.4.7
5) pip install spark-nlp

I hope some of these version requirements get documented on the Spark NLP site.
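The steps above can be sketched as shell commands. This is a minimal sketch, not a definitive recipe: the `sed` one-liner for editing `.bashrc` and the `spark-nlp==2.6.5` pin (chosen to match the jar coordinate used in the question) are my assumptions, not something stated in the answer.

```shell
# 1) Uninstall the conflicting packages.
pip uninstall -y pyspark spark-nlp

# 2) Remove the SPARK_HOME export from ~/.bashrc (a .bak backup is kept),
#    then reload the shell configuration. Assumed approach: delete any
#    line mentioning SPARK_HOME.
sed -i.bak '/SPARK_HOME/d' ~/.bashrc
source ~/.bashrc

# 3) Install pypandoc first to work around the PySpark 2.4.x setup issue
#    on Python 3.7.
pip install pypandoc

# 4) Install a Spark 2.4.x PySpark.
pip install pyspark==2.4.7

# 5) Install Spark NLP (version pin is an assumption matching the
#    spark-nlp_2.11:2.6.5 jar in the question).
pip install spark-nlp==2.6.5
```

After this, `sparknlp.start()` in the commented-out line of the question's code should be able to create a session against the matching jar, since the PyPI package version and the Scala artifact version now agree.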