PySpark Cython ImportError: file too short


I'm following the documentation here.
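For reference, the pattern from that documentation looks roughly like this. This is a paraphrased sketch: the names spark_cython and wrapped are my reconstruction from the traceback below, not copied verbatim from my code.

    import pyximport

    def spark_cython(module, method):
        # Return a pickleable wrapper that lazily compiles and imports the
        # Cython module on each Spark executor, then calls into it.
        def wrapped(*args, **kwargs):
            # Install the .pyx import hook on the worker, then resolve the
            # compiled function by name. This is the line the traceback
            # below points at (util.py line 51).
            pyximport.install()
            cython_function_ = getattr(__import__(module), method)
            return cython_function_(*args, **kwargs)
        return wrapped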

I can get the short example code to work, but when it's incorporated into my longer code, I get this:

  File "./jobs.zip/jobs/util.py", line 51, in wrapped
    cython_function_ = getattr(__import__(module), method)
  File "/usr/local/lib64/python2.7/site-packages/pyximport/pyximport.py", line 458, in load_module
    language_level=self.language_level)
  File "/usr/local/lib64/python2.7/site-packages/pyximport/pyximport.py", line 233, in load_module
    exec("raise exc, None, tb", {'exc': exc, 'tb': tb})
  File "/usr/local/lib64/python2.7/site-packages/pyximport/pyximport.py", line 216, in load_module
    mod = imp.load_dynamic(name, so_path)
ImportError: Building module cython_util failed: ['ImportError: /home/.pyxbld/lib.linux-x86_64-2.7/cython_util.so: file too short\n']

    at org.apache.spark.api.python.PythonRunner$$anon$1.read(PythonRDD.scala:193)
    at org.apache.spark.api.python.PythonRunner$$anon$1.<init>(PythonRDD.scala:234)
    at org.apache.spark.api.python.PythonRunner.compute(PythonRDD.scala:152)
    at org.apache.spark.api.python.PythonRDD.compute(PythonRDD.scala:63)
    at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:323)
    at org.apache.spark.rdd.RDD.iterator(RDD.scala:287)
    at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:87)
    at org.apache.spark.scheduler.Task.run(Task.scala:108)
    at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:335)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    at java.lang.Thread.run(Thread.java:748)
However, after this error the Spark program seems to keep running. Does anyone know what is actually going on here? How can a .so file be "too short"? And since the program continues to run, can I just ignore the error and move on?
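One workaround I'm considering, on the guess that several executor processes on the same node race to build into the shared default ~/.pyxbld directory and one of them imports a half-written .so: give each process its own build directory via the build_dir argument to pyximport.install. The per-PID path below is my own invention, not something the docs prescribe.

    import os
    import pyximport

    # Guess at a fix: isolate each executor process's Cython build output so
    # that concurrent builds can't truncate each other's .so files.
    build_dir = os.path.join(os.path.expanduser("~"), ".pyxbld", str(os.getpid()))
    pyximport.install(build_dir=build_dir)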