Python can't pickle _thread.RLock objects (when using a web service)

Tags: python, python-multiprocessing, python-3.6

I am using Python 3.6.

I am trying to use multiprocessing inside a class method called SubmitJobsUsingMultiProcessing(), which in turn calls another class method.

I keep running into the following error: TypeError: can't pickle _thread.RLock objects.

I don't know what this means. I suspect the line below that tries to establish a connection to the web server API may be the cause, but I can't see why.

I am not a proper programmer (I write code as part of a portfolio modelling team), so if this is an obvious question please forgive my ignorance, and many thanks in advance.

import multiprocessing as mp
import functools
import pickle
import sys
import time
from collections import OrderedDict
# Client is the web-service client class used to reach the API
# (the exact import, e.g. from a SOAP library, is not shown in the question).

def SubmitJobsUsingMultiProcessing(self, PartitionsOfAnalysisDates, PickleTheJobIdsDict=True):
    if self.ExportSetResult == "SUCCESS":
        NumPools = mp.cpu_count()
        PoolObj = mp.Pool(NumPools)
        userId, clientId, password, expSetName = self.userId, self.clientId, self.password, self.expSetName
        # PartialFunctor is built here, but map() below is called with the bound method instead.
        PartialFunctor = functools.partial(self.SubmitJobsAsOfDate, userId=userId, clientId=clientId, password=password, expSetName=expSetName)
        Result = PoolObj.map(self.SubmitJobsAsOfDate, PartitionsOfAnalysisDates)
        # Merge the per-process dictionaries and pickle the combined job-id map.
        BatchJobIDs = OrderedDict((key, val) for Dct in Result for key, val in Dct.items())
        f_pickle = open(self.JobIdPickleFileName, 'wb')
        pickle.dump(BatchJobIDs, f_pickle, -1)
        f_pickle.close()


def SubmitJobsAsOfDate(self, ListOfDatesForBatchJobs, userId, clientId, password, expSetName):
    # Each worker process opens its own connection to the web-service API.
    client = Client(self.url, proxy=self.proxysettings)
    if self.ExportSetResult != "SUCCESS":
        print("The export set creation was not successful...exiting")
        sys.exit()

    BatchJobIDs = OrderedDict()
    NumJobsSubmitted = 0
    CurrentProcessID = mp.current_process()

    for AnalysisDate in ListOfDatesForBatchJobs:
        jobName = "Foo_" + str(AnalysisDate)
        print('Sending job from process : ', CurrentProcessID, ' : ', jobName)
        jobId = client.service.SubmitExportJob(userId, clientId, password, expSetName, AnalysisDate, jobName, False)
        BatchJobIDs[AnalysisDate] = jobId
        NumJobsSubmitted += 1

        # Sleep for 30 secs every 100 jobs
        if NumJobsSubmitted % 100 == 0:
            print('100 jobs have been submitted thus far from process : ', CurrentProcessID, '---Sleeping for 30 secs to avoid the SSL time out error')
            time.sleep(30)

    self.BatchJobIDs = BatchJobIDs
    return BatchJobIDs
Here is the traceback:

    Traceback (most recent call last):
  File "C:\Program Files\JetBrains\PyCharm 2017.2.3\helpers\pydev\pydevd.py", line 1599, in <module>
    globals = debugger.run(setup['file'], None, None, is_module)
  File "C:\Program Files\JetBrains\PyCharm 2017.2.3\helpers\pydev\pydevd.py", line 1026, in run
    pydev_imports.execfile(file, globals, locals)  # execute the script
  File "C:\Program Files\JetBrains\PyCharm 2017.2.3\helpers\pydev\_pydev_imps\_pydev_execfile.py", line 18, in execfile
    exec(compile(contents+"\n", file, 'exec'), glob, loc)
  File "C:/Users/trpff85/PycharmProjects/QuantEcon/BDTAPIMultiProcUsingPathos.py", line 289, in <module>
    BDTProcessObj.SubmitJobsUsingMultiProcessing(Partitions)
  File "C:/Users/trpff85/PycharmProjects/QuantEcon/BDTAPIMultiProcUsingPathos.py", line 190, in SubmitJobsUsingMultiProcessing
    Result = PoolObj.map(self.SubmitJobsAsOfDate, PartitionsOfAnalysisDates)
  File "C:\Users\trpff85\AppData\Local\Continuum\anaconda3\lib\multiprocessing\pool.py", line 266, in map
    return self._map_async(func, iterable, mapstar, chunksize).get()
  File "C:\Users\trpff85\AppData\Local\Continuum\anaconda3\lib\multiprocessing\pool.py", line 644, in get
    raise self._value
  File "C:\Users\trpff85\AppData\Local\Continuum\anaconda3\lib\multiprocessing\pool.py", line 424, in _handle_tasks
    put(task)
  File "C:\Users\trpff85\AppData\Local\Continuum\anaconda3\lib\multiprocessing\connection.py", line 206, in send
    self._send_bytes(_ForkingPickler.dumps(obj))
  File "C:\Users\trpff85\AppData\Local\Continuum\anaconda3\lib\multiprocessing\reduction.py", line 51, in dumps
    cls(buf, protocol).dump(obj)
TypeError: can't pickle _thread.RLock objects
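
For context, the error comes from the pickling step multiprocessing performs when shipping work to the pool: PoolObj.map(self.SubmitJobsAsOfDate, ...) has to pickle the bound method, which means pickling self, and that fails as soon as any attribute of self holds a threading.RLock (logging handlers do, and many web-service client objects do as well). A minimal sketch, with hypothetical names, that reproduces the same TypeError:

import pickle
import threading

class Submitter:
    """Hypothetical stand-in for the class in the question."""
    def __init__(self):
        # Anything that owns an RLock (a logging.Handler, many SOAP/HTTP
        # client objects, ...) makes the whole instance unpicklable.
        self.lock = threading.RLock()

    def work(self, dates):
        return dates

submitter = Submitter()
# Pool.map(submitter.work, ...) pickles the bound method and therefore
# `submitter` itself; pickling the instance directly shows the same failure:
pickle.dumps(submitter)  # TypeError: can't pickle _thread.RLock objects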

I ran into a similar problem. As pointed out in the bug link, there is a known bug around pickling logging handlers. I hit the same issue and, thanks to your link, I simply removed the logging handler and it worked. I can no longer log from my rq jobs, but I can live without that. If you use the pathos library (multiprocess instead of multiprocessing, dill instead of pickle), logging works. At least it did for me.
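
A minimal sketch of the pathos route mentioned above, with illustrative names rather than the ones from the question; pathos serialises with dill, which copes with many objects the standard pickle module rejects:

from pathos.multiprocessing import ProcessingPool

class Submitter:
    def submit(self, dates):
        # Stand-in for the real web-service call: map each date to a job id.
        return {d: "job-id-for-" + str(d) for d in dates}

if __name__ == "__main__":
    partitions = [["2021-01-01", "2021-01-02"], ["2021-01-03"]]
    pool = ProcessingPool(nodes=2)
    # dill serialises the bound method (and its instance) for the workers.
    results = pool.map(Submitter().submit, partitions)
    print(results)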