Python: converting for-loop code into multithreaded code with a maximum number of threads


Background: I am trying to run 100 Dymola simulations using the Dymola-Python interface. I managed to run them in a for loop. Now I would like to run them multithreaded, so that I can run several models in parallel, which would be much faster. Since probably nobody here uses that interface, I wrote some simple code that illustrates my problem:

1: Converting a for loop into a function (def) that is then run inside another for loop, where the def and the for loop share the same variable 'i'.

2: Converting the for loop into a function and executing it with multithreading. The for loop runs the command one item at a time; I want to run the items in parallel with at most x threads at once. The result should be the same as when the for loop is executed.

Example code:

import os

nSim = 100
ndig = '{:01d}'  # format string: each directory is named after the loop index

for i in range(nSim):
    os.makedirs(ndig.format(i))
Note that it is important that the names of the created directories are just the numbers from the for loop. Now, instead of using a for loop, I would like to create the directories with multithreading. Note: it is probably not worth it for this short piece of code, but when 100 simulation models are called and executed, multithreading is definitely worth it.

So I started with something I thought was simple: converting the for loop into a function that is then run inside another for loop, hoping to get the same result as with the for-loop code above. However, I got the following error: AttributeError: 'NoneType' object has no attribute 'start'. Note: I am only starting with this because I have not used def statements before, and the threading package is also new to me. After this I will move on towards multithreading.

1:

After that failed, I tried to move on to multithreading: converting the for loop into the same function, but executing it with multithreading so that the code runs in parallel instead of one call at a time, with a maximum number of threads:

2:

Unfortunately that attempt failed as well, and now I get the error: NameError: name 'i' is not defined

Does anyone have a suggestion for problem 1 or 2?

You can't call the function and then start the result like this:

simulation(i=i).start
because the result is not a Thread object. Besides that, the threading module has to be imported.
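
For reference, a minimal corrected sketch (my illustration; the full answer below explains the details):

import threading

def simulation(i):
    print(i)  # placeholder body for illustration

for i in range(3):
    # pass the function object and its arguments to Thread,
    # then call the start() method (with parentheses)
    t = threading.Thread(target=simulation, args=(i,))
    t.start()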

It seems you forgot the 'for' keyword and the indentation of the code inside the loop:

for i in range(nSim):
    simulation_thread[i] = threading.Thread(target=simulation(i=i))
    simulation_thread[i].daemon = True
    simulation_thread[i].start()


Neither of the two examples is complete. Here is a complete example. Note that target is passed the name of the function, target=simulation, and a tuple of its arguments, args=(i,). Do not call the function, i.e. target=simulation(i=i), because that passes the result of calling the function, which in this case is equivalent to target=None.

import threading

nSim = 100

def simulation(i):
    print(f'{threading.current_thread().name}: {i}')

if __name__ == '__main__':
    # pass the function object and a tuple of its arguments; do not call it here
    threads = [threading.Thread(target=simulation, args=(i,)) for i in range(nSim)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()  # wait for every thread to finish
Output:

Thread-1: 0
Thread-2: 1
Thread-3: 2
 .
 .
Thread-98: 97
Thread-99: 98
Thread-100: 99

Note that you usually don't want more threads than CPUs, a number you can get from multiprocessing.cpu_count(). You can create a pool of threads and use a queue.Queue to post the work that the threads execute. An example that caps the pool at a maximum number of threads and works through all the items in the queue can be found in the queue documentation.
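
For instance (my sketch, not part of the original answer), the pool size can be taken from the machine instead of being hard-coded:

import multiprocessing

# use the CPU count as an upper bound for the number of worker threads
max_threads = multiprocessing.cpu_count()
print(max_threads)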

We can build on @Mark Tolonen's answer and do it like this:

import threading
import queue
import time

def main():
    size_of_threads_pool = 10
    num_of_tasks = 30
    task_seconds = 1
    
    q = queue.Queue()

    def worker():
        while True:
            item = q.get()
            print(my_st)  # worker threads can read variables from the enclosing scope
            print(f'{threading.current_thread().name}: Working on {item}')
            time.sleep(task_seconds)
            print(f'Finished {item}')
            q.task_done()

    my_st = "MY string"
    threads = [threading.Thread(target=worker, daemon=True) for i in range(size_of_threads_pool)]
    for t in threads:
        t.start()


    # send the task requests to the workers
    for item in range(num_of_tasks):
        q.put(item)

    # block until all tasks are done
    q.join()
    print('All work completed')

    # no need to join the threads here: the workers loop forever (while True),
    # so they would never finish; as daemon threads they exit when main does
    # for t in threads:
    #    t.join()


if __name__ == '__main__':
    main()
This runs 30 tasks of 1 second each, using 10 threads, so the total time is about 3 seconds:

$ time python3 q_test.py 
...
All work completed

real  0m3.064s
user  0m0.033s
sys   0m0.016s
Edit: I found another, higher-level interface for asynchronously executing callables: concurrent.futures.ThreadPoolExecutor. See this example from its documentation:

import concurrent.futures
import urllib.request

URLS = ['http://www.foxnews.com/',
        'http://www.cnn.com/',
        'http://europe.wsj.com/',
        'http://www.bbc.co.uk/',
        'http://some-made-up-domain.com/']

# Retrieve a single page and report the URL and contents
def load_url(url, timeout):
    with urllib.request.urlopen(url, timeout=timeout) as conn:
        return conn.read()

# We can use a with statement to ensure threads are cleaned up promptly
with concurrent.futures.ThreadPoolExecutor(max_workers=5) as executor:
    # Start the load operations and mark each future with its URL
    future_to_url = {executor.submit(load_url, url, 60): url for url in URLS}
    for future in concurrent.futures.as_completed(future_to_url):
        url = future_to_url[future]
        try:
            data = future.result()
        except Exception as exc:
            print('%r generated an exception: %s' % (url, exc))
        else:
            print('%r page is %d bytes' % (url, len(data)))
Note that max_workers=5 sets the maximum number of threads, and note the for loop over url in URLS that submits one task per URL.
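
To tie this back to the directory-creation loop from the question, a minimal sketch (my adaptation, not from the original answers; the choice of 10 workers is arbitrary):

import os
import concurrent.futures

nSim = 100
ndig = '{:01d}'

def make_dir(i):
    # same work as the original for-loop body
    os.makedirs(ndig.format(i))
    return i

if __name__ == '__main__':
    # max_workers caps how many threads run at the same time
    with concurrent.futures.ThreadPoolExecutor(max_workers=10) as executor:  # assumption: 10 workers
        for i in executor.map(make_dir, range(nSim)):
            print(f'created directory {i}')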


Sorry, when I run this code I get a syntax error because of the 'f' in the print statement under def simulation(i). Is this a Python 2 / Python 3 thing..? I am using Python 3. When I remove the 'f' or replace it with a plain string, the code runs, but I just get the literal text '{threading.current_thread().name}: {i}' printed 100 times, so not the output you stated above ;-)

@Matthi9000 Python 3.6 added f-strings. print('{}: {}'.format(threading.current_thread().name, i)) is the old way.
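
In other words, a version of the example function that also runs on Python versions before 3.6 (a sketch of the comment's suggestion):

import threading

def simulation(i):
    # str.format() predates f-strings, which were added in Python 3.6
    print('{}: {}'.format(threading.current_thread().name, i))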