
Python: How to serialize a PySpark Pipeline object?

Tags: python, apache-spark, serialization, pyspark, apache-spark-ml

I am trying to serialize a PySpark Pipeline object so that I can save it and retrieve it later. I tried both the native Python pickle library and PySpark's PickleSerializer, and the call itself fails.

Code snippet using the native pickle library:

import pickle
from pyspark.ml import Pipeline

# tokenizer, hashingTF and lr are ML stages defined earlier
pipeline = Pipeline(stages=[tokenizer, hashingTF, lr])
with open('myfile', 'wb') as f:
    pickle.dump(pipeline, f, 2)
with open('myfile', 'rb') as f:
    pipeline1 = pickle.load(f)
Running this produces the following error:

py4j.protocol.Py4JError: An error occurred while calling o32.__getnewargs__. Trace:
py4j.Py4JException: Method __getnewargs__([]) does not exist
    at py4j.reflection.ReflectionEngine.getMethod(ReflectionEngine.java:335)
    at py4j.reflection.ReflectionEngine.getMethod(ReflectionEngine.java:344)
    at py4j.Gateway.invoke(Gateway.java:252)
    at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:133)
    at py4j.commands.CallCommand.execute(CallCommand.java:79)
    at py4j.GatewayConnection.run(GatewayConnection.java:209)
    at java.lang.Thread.run(Thread.java:785)

Is it possible to serialize a PySpark Pipeline object at all?

Technically speaking, you can pickle a Pipeline object without any trouble:

from pyspark.ml.pipeline import Pipeline
import pickle

pickle.dumps(Pipeline(stages=[]))
## b'\x80\x03cpyspark.ml.pipeline\nPipeline\nq ...
What you cannot pickle are the Spark Transformers and Estimators, which are only thin wrappers around JVM objects. (Pickle protocol 2 looks up __getnewargs__ on the object; Py4J forwards that lookup to the JVM, where no such method exists, which is exactly the error above.) If you really need to, you can wrap the construction in a function, for example:

from pyspark.ml.feature import Tokenizer

def make_pipeline():
    return Pipeline(stages=[Tokenizer(inputCol="text", outputCol="words")])

pickle.dumps(make_pipeline)
## b'\x80\x03c__ ...
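A short usage sketch of the intended round trip (my illustration, not code from the answer; note that pickling a function stores only a reference, so make_pipeline must be importable wherever it is unpickled):

import pickle

payload = pickle.dumps(make_pipeline)   # stores a reference to the function
factory = pickle.loads(payload)         # restores the factory function
pipeline = factory()                    # builds a brand-new Pipeline with fresh stages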

However, since it is just a piece of code and does not store any persistent data, it does not look particularly useful.
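As an aside (my addition, not part of the original answer): if the goal is simply to persist a pipeline and load it back later, Spark 2.0+ ships native ML persistence, which writes the JVM-side stage metadata that pickle cannot reach:

from pyspark.ml import Pipeline

# assumes an active SparkSession and a `pipeline` built as above
pipeline.save("/tmp/my_pipeline")
restored = Pipeline.load("/tmp/my_pipeline")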

Comment: This works when I try it with an empty pipeline object, pickle.dumps(Pipeline(stages=[])), but it still fails as soon as the pipeline has stages. I also tried the format you suggested, but pickle.dumps(make_pipeline()) still fails with the same error. I will take another look :)

Reply: Take a second look at the code: mine is pickle.dumps(make_pipeline), yours is pickle.dumps(make_pipeline()). I only pickle an object that can be used to generate the pipeline, not the pipeline itself.
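A minimal sketch of that distinction (hypothetical session; the failure is the same Py4JError shown in the question):

import pickle
from pyspark.ml import Pipeline
from pyspark.ml.feature import Tokenizer

def make_pipeline():
    return Pipeline(stages=[Tokenizer(inputCol="text", outputCol="words")])

pickle.dumps(make_pipeline)     # OK: pickles only a reference to the function
pickle.dumps(make_pipeline())   # fails: the Tokenizer stage wraps a JVM object,
                                # so pickle hits py4j.protocol.Py4JError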