
Python 3.x: How do I close tqdm_gui?


I'm new to parallel processing. In my software I want to run some parsing work in parallel to speed things up, and I want to show the user that the parsing is still running. I found tqdm for this job, and now I have the problem that I cannot close the figure that tqdm_gui opens.

Looking around, I found Dan Shiebler's post on a parallel progress bar (http://danshiebler.com/2016-09-14-parallel-progress-bar/).

I modified this code slightly to use tqdm's GUI. See the code snippet below.

If I call parallel_process_gui a second time, the (old) figure is still sitting behind the (new) one. How can I close both, or all, of these figures?

I have already tried changing the leave flag from True to False:

kwargs = {
            'total': len(futures),
            'unit': 'it',
            'unit_scale': True,
            'leave': True,
            'desc': desc
        }
and tried to close the figure with

tqdm.tqdm_gui.close()
No luck.
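
Note that close() in tqdm is an instance method, so calling it on the tqdm_gui class itself has no open bar to act on; it has to be called on a concrete bar object. A minimal sketch of that, with a purely illustrative pbar variable (whether the matplotlib window actually disappears still depends on the experimental tqdm_gui implementation and on the leave flag):

import tqdm

# keep a reference to the GUI bar so it can be closed explicitly later;
# 'pbar' is only an illustrative name
pbar = tqdm.tqdm_gui(total=100, unit='it', desc='demo')
try:
    for _ in range(100):
        pbar.update(1)
finally:
    # close() has to be called on the instance, not on the tqdm_gui class
    pbar.close()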

The code snippet:


import tqdm
from concurrent.futures import ProcessPoolExecutor, as_completed
from time import sleep

def parallel_process_gui(array,
                         function,
                         n_jobs=16,
                         desc='process',
                         use_kwargs=False,
                         front_num=3):
    """
        A parallel version of the map function with a progress bar.

        http://danshiebler.com/2016-09-14-parallel-progress-bar/

        Args:
            array (array-like): An array to iterate over.
            function (function): A python function to apply to the elements of array
            n_jobs (int, default=16): The number of cores to use
            desc (str, default='process'): Label shown on the GUI progress bar
            use_kwargs (boolean, default=False): Whether to consider the elements
                of array as dictionaries of keyword arguments to function
            front_num (int, default=3): The number of iterations to run serially
                before kicking off the parallel job. Useful for catching bugs

        Returns:
            [function(array[0]), function(array[1]), ...]
    """
    # We run the first few iterations serially to catch bugs
    if front_num > 0:
        front = [function(**a) if use_kwargs else function(a)
                 for a in array[:front_num]]
    else:
        front = []
    # If we set n_jobs to 1, just run a list comprehension. This is useful for
    # benchmarking and debugging.
    if n_jobs == 1:
        return front + [function(**a) if use_kwargs else function(a)
                        for a in tqdm.tqdm_gui(array[front_num:])]
    # Assemble the workers
    with ProcessPoolExecutor(max_workers=n_jobs) as pool:
        # Pass the elements of array into function
        if use_kwargs:
            futures = [pool.submit(function, **a) for a in array[front_num:]]
        else:
            futures = [pool.submit(function, a) for a in array[front_num:]]
        kwargs = {
            'total': len(futures),
            'unit': 'it',
            'unit_scale': True,
            'leave': True,
            'desc': desc
        }
        # Print out the progress as tasks complete
        for f in tqdm.tqdm_gui(as_completed(futures), **kwargs):
            pass

    out = []
    # Get the results from the futures.
    for i, future in tqdm.tqdm_gui(enumerate(futures)):
        try:
            out.append(future.result())
        except Exception as e:
            out.append(e)

    return front + out


def get_big_number(i, how_many):
    '''
    only for tests. Generates a big number
    :param i: factor
    :param how_many: iterations of additions
    '''
    # repeat the factor i how_many times; use '_' as the loop variable so it
    # does not shadow the parameter i
    return sum(100000 * 100000 * i for _ in range(how_many))


if __name__ == '__main__':
    '''
    build an array (arr) of dicts. Each dict holds all parameters (i, how_many)
    of the function (get_big_number) for parallel processing. In this example
    the first run submits 10 tasks and the second run 1000.
    '''
    arr = [{'i': i, 'how_many': 100000 if i % 2 else 220000}
           for i in range(10)]
    # show 1st 10 dicts
    for i in range(10):
        print (i, " ", arr[i])

    list_of_big = parallel_process_gui(
        arr,
        get_big_number,
        desc="progress 1",
        front_num=0,
        use_kwargs=True)

    arr = [{'i': i, 'how_many': 100000 if i % 2 else 220000}
           for i in range(1000)]

    # run it again; now the (old) window stays open in the background behind
    # the new progress bar
    list_of_big = parallel_process_gui(
        arr,
        get_big_number,
        desc="progress 2",
        front_num=0,
        use_kwargs=True)

    # show 1st 10 results
    for i in range(10):
        print (i, " ", list_of_big[i])

    # show last 10 results
    for i in range(990, 1000):
        print (i, " ", list_of_big[i])

    sleep(10)
I know tqdm_gui is still experimental/alpha, but I expected the progress bar to close once the parsing work is finished.

Any help would be greatly appreciated.

Thomas

I found my bug.

In parallel_process_gui, at the end, I changed the loop that collects the results from the futures. It changed from

    # Get the results from the futures.
    for i, future in tqdm.tqdm_gui(enumerate(futures)):
        try:
            out.append(future.result())
        except Exception as e:
            out.append(e)
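
to a plain loop over the futures, without the second tqdm_gui wrapper. The changed code itself is not shown above, but it presumably boils down to dropping that wrapper, roughly like this (a reconstructed sketch, not the original code):

    # Get the results from the futures without opening another GUI bar
    # (reconstructed sketch of the changed loop, not the original code)
    for i, future in enumerate(futures):
        try:
            out.append(future.result())
        except Exception as e:
            out.append(e)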

That did the trick! Wrapping the loop in

tqdm.tqdm_gui(enumerate(futures))
was what opened the unwanted figure.

Thomas
