
Python: string replacement over a dataset does not speed up with threads

Tags: python, multithreading, replace, dataset, threadpool

I recently started working on natural language processing for a university project. I am given a list of words and want to remove all of them from a dataset of strings. My dataset looks like this, but is much larger:

data_set = ['Human machine interface for lab abc computer applications',
         'A survey of user opinion of computer system response time',
         'The EPS user interface management system',
         'System and human system engineering testing of EPS',
         'Relation of user perceived response time to error measurement',
         'The generation of random binary unordered trees',
         'The intersection graph of paths in trees',
         'Graph minors IV Widths of trees and well quasi ordering',
         'Graph minors A survey']
The list of words to remove looks like this, but again it is much longer:

to_remove = ['abc', 'of', 'quasi', 'well']
Since I did not find a built-in Python function that removes words directly from a string, I used replace(). The program takes the dataset and, for every word to remove, calls replace() on each string of the dataset. I expected threads to speed this up, but unfortunately the threaded program takes almost the same time as the one without threads. Did I implement the threading correctly, or am I missing something?
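To make the removal step concrete, here is what that replace() pattern does to a single sentence from the dataset (an illustration only, using the same ' ' + word form as the code below):

example = 'A survey of user opinion of computer system response time'
for word in ['abc', 'of', 'quasi', 'well']:
    # ' ' + word only matches the word when it follows a space; note that it
    # would also match the start of longer words (e.g. ' of' inside ' often').
    example = example.replace(' ' + word, ' ')
# example is now 'A survey user opinion computer system response time'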

The code with threads looks like this:

from multiprocessing.dummy import Pool as ThreadPool  # thread pool with the Pool API

def remove_words(params):
    # params is a (sentence, words_to_remove) tuple
    changed_data_set = params[0]
    for elem in params[1]:
        changed_data_set = changed_data_set.replace(' ' + elem, ' ')
    return changed_data_set

def parallel_task(params, threads=2):
    pool = ThreadPool(threads)
    results = pool.map(remove_words, params)  # one worker call per sentence
    pool.close()
    pool.join()
    return results

parameters = []
for rows in data_set:
    parameters.append((rows, to_remove))
new_data_set = parallel_task(parameters, 8)  # 8 worker threads
The code without threads looks like this:

def remove_words(data_set, to_replace):
    # For every sentence in the dataset, strip each unwanted word in place.
    for i in range(len(data_set)):
        for word in to_replace:
            data_set[i] = data_set[i].replace(' ' + word, ' ')
    return data_set

changed_data_set = remove_words(data_set, to_remove)
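
A likely reason for the missing speedup, and a hedged sketch of an alternative (not from the original post): multiprocessing.dummy.Pool is a thread pool, and on CPython the GIL prevents CPU-bound string work such as replace() from running in parallel across threads. A process-based multiprocessing.Pool runs the same worker in separate interpreters; the sketch below reuses the remove_words(params) worker from the threaded version and assumes data_set and to_remove are defined as above:

from multiprocessing import Pool  # real processes, unlike multiprocessing.dummy (threads)

def remove_words(params):
    changed_data_set = params[0]
    for elem in params[1]:
        changed_data_set = changed_data_set.replace(' ' + elem, ' ')
    return changed_data_set

if __name__ == '__main__':
    parameters = [(row, to_remove) for row in data_set]
    # Each worker process has its own interpreter and GIL, so calls can overlap.
    with Pool(processes=4) as pool:
        new_data_set = pool.map(remove_words, parameters)

For a dataset this small, process start-up and pickling overhead can easily outweigh any gain, so the comparison is only meaningful on the real, much larger dataset.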

Thanks, I now understand what I did wrong and was able to correct my code.