
Efficient lookup using common words in Python


I have a list of names (strings), each split into words. There are 8 million names, and each name consists of up to 20 words (tokens). The number of unique tokens is 2.2 million. I need an efficient way to find all names that contain at least one word from a query (which may contain up to 20 words, but usually only a few).

My current approach uses Python pandas and looks as follows (referred to below as original):
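A rough sketch of that baseline, reconstructed from the filter_by_tokens reproduced later in the bitmask answer (df is assumed to hold the 8M names with one token per column; this is an approximation, not necessarily the exact original code):

import pandas as pd

def filter_by_tokens(df, tokens):
    # look in every token column with isin, then concatenate the partial
    # matches and drop names that matched in more than one column
    matches = [df[df[col].isin(tokens)] for col in df.columns]
    hits = pd.concat(matches)
    return hits[~hits.index.duplicated()]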

Currently such a lookup (on the full dataset) takes 5.75 s on my (fairly powerful) machine. I would like to speed this up by at least a factor of 10.

By compressing all columns into a single one and performing the lookup on it (referred to below as original, compressed), I can get it down to 5.29 s:
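A minimal sketch of such a single-column lookup (stack plus isin is my assumption, not necessarily the exact code behind the 5.29 s figure):

# collapse the 20 token columns into one long Series and filter it once
tokens = {'foo', 'zoo'}                               # example query
stacked = df.stack()                                  # (name_id, column) -> token
mask = stacked.isin(tokens).values
hit_ids = stacked.index.get_level_values(0)[mask].unique()
result = df.loc[hit_ids]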

But that is still not fast enough.

Another solution that seems easy to implement is Python multiprocessing (threading shouldn't help here because of the GIL, and there is no I/O, right?). But the problem is that the big dataframe needs to be copied to each process, which immediately takes up all the memory. Another problem is that I need to call filter_by_tokens many times in a loop, so the dataframe would be copied on every call, which is inefficient.

Note that words may occur in many names (e.g. the most popular word occurs in 600k names), so an inverted index would be huge.

What is a good way to write this efficiently? A Python solution is preferred, but I am also open to other languages and technologies (e.g. databases).


UPD: I have measured the execution time of my two solutions and of the 5 solutions suggested by @piRSquared in his answer. Here are the results (tl;dr the best gives a 2x improvement):

mul + any gives a MemoryError on d1 = pd.get_dummies(df.stack()).groupby(level=0).sum() (on a machine with 128 GB of RAM).

isin gives IndexError: Unalignable boolean Series key provided on s[d1.isin({'zoo', 'foo'}).unstack().any(1)], apparently because the shape of df.stack().isin(set(tokens)).unstack() is slightly smaller than the shape of the original dataframe (8.39M vs 8.41M rows); I don't know why, or how to fix this.

Note that the machine I use has 12 cores (though I mentioned some problems with parallelization above). All of the solutions use a single core.

Conclusion (as of now): zip (2.54 s) gives a 2.1x improvement over the original, compressed solution (5.29 s). That's nice, but I'm aiming for at least a 10x speedup if possible, so for now I'm not accepting the (still very good) answer of @piRSquared, to welcome more suggestions.

idea 0: zip

def pir(s, token):
    # keep the names whose token set has a non-empty intersection with the query set
    return s[[bool(p & token) for p in s]]

pir(s, {'foo', 'zoo'})
idea 1: merge

token = pd.DataFrame(dict(v=['foo', 'zoo']))
d1 = df.stack().reset_index('id', name='v')
s.ix[d1.merge(token).id.unique()]
idea 2: mul + any

d1 = pd.get_dummies(df.stack()).groupby(level=0).sum()
token = pd.Series(1, ['foo', 'zoo'])
s[d1.mul(token).any(1)]
idea 3: isin

d1 = df.stack()
s[d1.isin({'zoo', 'foo'}).unstack().any(1)]
idea 4: query

token = ('foo', 'zoo')
d1 = df.stack().to_frame('s')
s.ix[d1.query('s in @token').index.get_level_values(0).unique()]
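These snippets do not include their setup; a plausible sketch of what they assume (my reconstruction, not code from the answer) is a token dataframe df indexed by an 'id' level and a Series s of token sets over the same index:

import pandas as pd

data = [['foo', 'bar', 'joe'],
        ['foo', None, None],
        ['bar', 'joe', None],
        ['zoo', None, None]]

# df: one token per column, indexed by an 'id' level (stack() drops the Nones)
df = pd.DataFrame(data, index=pd.Index(['id0', 'id1', 'id2', 'id3'], name='id'))

# s: the same rows as sets of tokens, which idea 0 intersects with the query set
s = pd.Series([set(row) - {None} for row in data], index=df.index)

With that setup, pir(s, {'foo', 'zoo'}) returns the rows containing foo or zoo. Note that ideas 1 and 4 use the old .ix indexer, which newer pandas versions have removed in favour of .loc.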

I have done something similar with the following tools:

Hbase - key can have multiple columns (very fast)
ElasticSearch - nice, easy to scale; you just need to import your data as JSON
Apache Lucene - would work very well for 8 million records

You could use an inverted index; the code below, run under PyPy, builds the index in 57 s and needs 0.00018 s to execute a query of 20 words, using about 3.2 GB of memory. Python 2.7 builds the index in 158 s and queries in 0.0013 s, using about 3.41 GB of memory.

The fastest possible way to do this would be a bitmapped inverted index, compressed to save space.

"""
8m records with between 1 and 20 words each, selected at random from 100k words
Build dictionary of sets, keyed by word number, set contains nos of all records
with that word
query merges the sets for all query words
"""
import random
import time

records = 8000000
words = 100000
wordlists = {}
print "build wordlists"
starttime = time.time()
wordlimit = words - 1
total_words = 0
for recno in range(records):
    for x in range(random.randint(1,20)):
        wordno = random.randint(0,wordlimit)
        try:
           wordlists[wordno].add(recno)
        except KeyError:
           wordlists[wordno] = set([recno])
        total_words += 1
print "build time", time.time() - starttime, "total_words", total_words
querylist = set()
query = set()
for x in range(20):
    while 1:
        wordno = (random.randint(0,words))
        if  wordno in wordlists: # only query words that were used
            if  not wordno in query:
                query.add(wordno)
                break
print "query", query
starttime = time.time()
for wordno in query:
    querylist |= wordlists[wordno]  # in-place union; bare set.union() would discard its result
print "query time", time.time() - starttime
print "count = ", len(querylist)
for recno in querylist:
    print "record", recno, "matches"

Maybe my first answer was a bit abstract; in the absence of real data, it generated random data to get a feel for the query time. This code is practical:

data =[['foo', 'bar', 'joe'],
       ['foo'],
       ['bar', 'joe'],
       ['zoo']]

wordlists = {}
print "build wordlists"
for x, d in enumerate(data):
    for word in d:
        try:
           wordlists[word].add(x)
        except KeyError:
           wordlists[word] = set([x])
print "query"
query = [ "foo", "zoo" ]
results = set()
for q in query:
    wordlist = wordlists.get(q)
    if  wordlist:
        results = results.union(wordlist)
l = list(results)
l.sort()
for x in l:
    print data[x]
The time and memory cost is in building the wordlists (the inverted index); querying is almost free. You have a 12-core machine, so presumably it has plenty of memory. For repeatability, build the wordlists, pickle each wordlist, and write them to sqlite or any key/value database with the word as the key and the pickled set as a binary blob. Then all you need is:

initialise_database()
query = [ "foo", "zoo" ]
results = set()                             
for q in query:                             
    wordlist = get_wordlist_from_database(q) # get binary blob and unpickle
    if  wordlist:                        
        results = results.union(wordlist)
l = list(results)
l.sort()   
for x in l:      
    print data[x]
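initialise_database and get_wordlist_from_database are placeholders in the snippet above; a minimal sketch of how they could be implemented with sqlite3 and pickle (the table name wordlists and the path wordlists.db are illustrative choices of mine, and the code assumes the wordlists dict built earlier):

import pickle
import sqlite3

DB_PATH = 'wordlists.db'  # illustrative path, not from the original answer

def initialise_database():
    # store each wordlist as (word, pickled set of record numbers)
    conn = sqlite3.connect(DB_PATH)
    conn.execute('CREATE TABLE IF NOT EXISTS wordlists (word TEXT PRIMARY KEY, recnos BLOB)')
    for word, recnos in wordlists.items():
        conn.execute('INSERT OR REPLACE INTO wordlists VALUES (?, ?)',
                     (word, sqlite3.Binary(pickle.dumps(recnos, pickle.HIGHEST_PROTOCOL))))
    conn.commit()
    return conn

def get_wordlist_from_database(word, conn=None):
    # fetch the binary blob for one word and unpickle it back into a set
    conn = conn or sqlite3.connect(DB_PATH)
    row = conn.execute('SELECT recnos FROM wordlists WHERE word = ?', (word,)).fetchone()
    return pickle.loads(row[0]) if row else None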
Or, more memory-efficient and probably faster to build the index: use arrays. PyPy is 10x faster than 2.7 here.

import array

data =[['foo', 'bar', 'joe'],
       ['foo'],
       ['bar', 'joe'],
       ['zoo']]

wordlists = {}
print "build wordlists"
for x, d in enumerate(data):
    for word in d:
        try:
           wordlists[word].append(x)
        except KeyError:
           wordlists[word] = array.array("i",[x])
print "query"
query = [ "foo", "zoo" ]
results = set()
for q in query:
    wordlist = wordlists.get(q)
    if  wordlist:
        for i in wordlist:
            results.add(i)
l = list(results)
l.sort()
for x in l:
    print data[x]

If you know that the number of unique tokens you will see is relatively small, you can quite easily build an efficient bitmask to query for matches.

The naive approach (in the original post) allows for up to 64 distinct tokens.

The improved code below uses the bitmask like a bloom filter (the modular arithmetic in setting the bits wraps around 64). If there are more than 64 unique tokens, there will be some false positives, which the code below automatically verifies (using the original code).

Now the worst-case performance degrades if the number of unique tokens is (much) larger than 64, or if you are particularly unlucky. Hashing could mitigate this.

Performance-wise, using the benchmark dataset below, I get:

original code: 4.67 seconds

bitmask code: 0.30 seconds

But when the number of unique tokens increases, the bitmask code stays efficient while the original code slows down considerably. With about 70 unique tokens I get something like:

original code: ~15 seconds

bitmask code: 0.80 seconds

Note: for the latter case, building the bitmask array from the provided lists takes about as long as building the dataframe. There is probably no real reason to build the dataframe at all; it is kept mainly for easy comparison with the original code.

import time
import numpy as np
import pandas as pd

class WordLookerUpper(object):
    def __init__(self, token_lists):
        tic = time.time()
        self.df = pd.DataFrame(token_lists,
                    index=pd.Index(
                        data=['id%d' % i for i in range(len(token_lists))],
                        name='index'))
        print('took %d seconds to build dataframe' % (time.time() - tic))
        tic = time.time()
        dii = {}
        iid = 0
        self.bits = np.zeros(len(token_lists), np.uint64)
        for i in range(len(token_lists)):
            for t in token_lists[i]:
                if t not in dii:
                    dii[t] = iid
                    iid += 1
                # set the bit; note that b = dii[t] % 64
                # this 'wrap around' behavior lets us use this
                # bitmask as a probabilistic filter
                b = dii[t] % 64
                self.bits[i] |= np.uint64(1 << b)
        self.string_to_iid = dii
        print('took %d seconds to build bitmask' % (time.time() - tic))

    def filter_by_tokens(self, tokens, df=None):
        if df is None:
            df = self.df
        tic = time.time()
        # search within each column and then concatenate and dedup results    
        results = [df.loc[lambda df: df[i].isin(tokens)] for i in range(df.shape[1])]
        results = pd.concat(results).reset_index().drop_duplicates().set_index('index')
        print('took %0.2f seconds to find %d matches using original code' % (
                time.time()-tic, len(results)))
        return results

    def filter_by_tokens_with_bitmask(self, search_tokens):
        tic = time.time()
        bitmask = np.zeros(len(self.bits), np.uint64)
        verify = np.zeros(len(self.bits), np.uint64)
        verification_needed = False
        for t in search_tokens:
            # OR in the rows whose bit for this token is set; token ids >= 64 wrap
            # around, so their matches may be false positives and need verification
            bitmask |= (self.bits & np.uint64(1 << (self.string_to_iid[t] % 64)))
            if self.string_to_iid[t] >= 64:
                verification_needed = True
                verify |= (self.bits & np.uint64(1 << (self.string_to_iid[t] % 64)))
        if verification_needed:
            results = self.df[(bitmask > 0) & ~verify.astype(bool)]
            results = pd.concat([results,
                                 self.filter_by_tokens(search_tokens,
                                    self.df[(bitmask > 0) & verify.astype(bool)])])
        else:
            results = self.df[bitmask > 0]
        print('took %0.2f seconds to find %d matches using bitmask code' % (
                time.time()-tic, len(results)))
        return results
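The benchmark dataset behind these numbers is 8 million rows, built by repeating eight small token lists a million times:

unique_token_lists = [
    ['foo', 'bar', 'joe'],
    ['foo'],
    ['bar', 'joe'],
    ['zoo'],
    ['ziz', 'zaz', 'zuz'],
    ['joe'],
    ['joey', 'joe'],
    ['joey', 'joe', 'joe', 'shabadoo']
]

token_lists = []
for n in range(1000000):
    token_lists.extend(unique_token_lists)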
Running the original code and the bitmask code:

>>> wlook = WordLookerUpper(token_lists)
took 5 seconds to build dataframe
took 10 seconds to build bitmask

>>> wlook.filter_by_tokens(['foo','zoo']).tail(n=1)
took 4.67 seconds to find 3000000 matches using original code
id7999995   zoo None    None    None

>>> wlook.filter_by_tokens_with_bitmask(['foo','zoo']).tail(n=1)
took 0.30 seconds to find 3000000 matches using bitmask code
id7999995   zoo None    None    None

The multiprocessing module is there to get around the GIL limitation on multi-core machines. You should consider scaling your current algorithm out of core / onto a cluster and keep using dataframes. I've updated my post; idea 0 should be an improvement for you. @sisanared that's what I meant in