
Python threading not working


Setup: init.py

serienchecker.py

from threading import Thread
from themoviedb import *
from folderhelper import *
class serienchecker(Thread):
    ...
    def __init__(self, path,seriesname, blacklist, apikeytmdb='', language='eng'):
        ...
        self.startSearch()
    ...

    def startSearch(self):
        print("start")
        ...
Output:

2017-02-08 21:29:04.481536
start
2017-02-08 21:29:17.385611
start
2017-02-08 21:30:00.548471
start
But I want them all to run at the same time. Is there a way to queue all of the tasks and process them with N threads at once? [This is just a small example; the script will check hundreds of folders.] I would like to know what I am doing wrong.

I have tried several approaches and none of them worked, please help.

Thanks

Edit://

def job():
    while(jobs):
        tmp = jobs.pop()
        task(drive=tmp[0], serie=tmp[1])

def task(drive, serie):
    print("Serie[{0}]".format(serie))
    sc = serienchecker(drive, serie,blacklist,apikeyv3,language)
    sc.start()
    result = sc.result
    resultString=''
    for obj in result:
        resultString+=obj+"\n"
    print(resultString)

for drive in drives:
    series = folder.getFolders(drive)
    for serie in series:
        jobs.append([drive,serie])

while(jobs):
    job()
join()
Create a list at the start to store the threads in:

threads = []
Then add each thread to the list as you create it:

threads.append(t)
At the end of the program, join all of the threads:

for t in threads:
    t.join()
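
Putting those three pieces together, here is a minimal sketch of the pattern, assuming the serienchecker, folder.getFolders, drives, blacklist, apikeyv3 and language names from the question, and assuming the search work happens in the thread's run() method rather than in __init__ (otherwise start() has nothing left to do):

threads = []

for drive in drives:
    for serie in folder.getFolders(drive):
        t = serienchecker(drive, serie, blacklist, apikeyv3, language)
        t.start()           # launch the thread; do NOT join inside this loop
        threads.append(t)   # remember it so it can be joined later

# only after every thread has been started do we wait for them all
for t in threads:
    t.join()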

As noted above, you need to defer the joins until after all of the threads have been started. Consider using a ThreadPool to limit the number of concurrent threads; it handles thread startup, dispatch and joining for you, and if Python's GIL slows the processing down it can be re-implemented as a process pool.

import multiprocessing
import itertools
import platform

...

# helper functions for process pool
#
#     linux - worker process gets a view of parent memory at time pool
#     is created, including global variables that exist at that time.
#     
#     windows - a new process is created and all needed state must be
#     passed to the child. we could pass these values on every call,
#     but assuming blacklist is large, its more efficient to set it
#     up once

do_init = platform.system() == "Windows"

if do_init:

    def init_serienchecker_process(_blacklist, _apikeyv3, _language):
        """Call once when process pool worker created to set static config"""
        global blacklist, apikeyv3, language
        blacklist, apikeyv3, language = _blacklist, _apikeyv3, _language

# this is the worker in the child process. It is called with items iterated
# in the parent Pool.map function. In our case, the item is a (drive, serie)
# tuple. Unpack, combine w/ globals and call the real function.

def serienchecker_worker(drive_serie):
    """Calls serienchecker with global blacklist, apikeyv3, language set by
    init_serienchecker_process"""
    sc = serienchecker(drive_serie[0], drive_serie[1], blacklist,
                       apikeyv3, language)
    return sc.result  # return the result data, not the Thread object (it won't pickle)

def drive_serie_iter(folder, drives):
    """Yields (drive, serie) tuples"""
    for drive in drives:
        for serie in folder.getFolders(drive):
            yield drive, serie


# decide the number of workers. Here I just chose a random max value,
# but your number will depend on your desired workload.

max_workers = 24
items = list(drive_serie_iter(folder, drives))
num_workers = min(len(items), max_workers)

# setup a process pool. we need to initialize windows with the global
# variables but since linux already has a view of the globals, its 
# not needed

initializer = init_serienchecker_process if do_init else None
initargs = (blacklist, apikeyv3, language) if do_init else None
pool = multiprocessing.Pool(num_workers, initializer=initializer, 
    initargs=initargs)

# map calls serienchecker_worker in the subprocess for each (drive, serie)
# pair produced by drive_serie_iter

for result in pool.map(serienchecker_worker, items):
    print(result) # not sure you care what the results are

pool.close()  # no more work will be submitted
pool.join()
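
If the per-folder work is mostly I/O (API calls, directory scans), a plain thread pool may already be enough and avoids the pickling constraints of a process pool. A minimal sketch using multiprocessing.pool.ThreadPool, again assuming the serienchecker, folder.getFolders, drives, blacklist, apikeyv3 and language names from the question:

from multiprocessing.pool import ThreadPool

def check_one(drive_serie):
    """Run one serienchecker lookup in a worker thread."""
    drive, serie = drive_serie
    sc = serienchecker(drive, serie, blacklist, apikeyv3, language)
    return sc.result

# build the task list up front, then let a fixed number of threads drain it
tasks = [(drive, serie) for drive in drives for serie in folder.getFolders(drive)]
num_workers = max(1, min(len(tasks), 24))  # cap concurrency instead of one thread per task

pool = ThreadPool(num_workers)
for result in pool.map(check_one, tasks):
    print(result)
pool.close()
pool.join()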


Why are you joining each thread immediately after starting it? That waits for the thread to finish before the next one is started. Also, setting a thread's target to a subclass of Thread makes no sense.

I'm new to programming and Python; I've updated my text. Can you tell me what your code does? I get the pool = multipr.. part, but I can't understand the pool.map(lambda.. part.

I've updated the comments and fixed an obvious error to boot.

If I do it this way (single-threaded) my program works. How can I make it work with N threads, so that it doesn't try to use one thread per task? I tried your approach, but the program crashed after it started more than 500 threads.
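
For the "N threads draining one job list" question in the last comment, here is a minimal sketch built around the task() function from the edit (a hypothetical adaptation, not the poster's actual fix); it starts a fixed number of worker threads instead of one thread per folder:

import threading
import queue

NUM_WORKERS = 8  # arbitrary cap; tune it to your machine and workload

job_queue = queue.Queue()
for drive in drives:
    for serie in folder.getFolders(drive):
        job_queue.put((drive, serie))

def worker():
    while True:
        try:
            drive, serie = job_queue.get_nowait()
        except queue.Empty:
            return  # queue is drained, this worker is done
        task(drive=drive, serie=serie)  # task() as defined in the edit above

workers = [threading.Thread(target=worker) for _ in range(NUM_WORKERS)]
for w in workers:
    w.start()
for w in workers:
    w.join()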