
Python 3.x: get dict keys whose values match list items, in list order

Tags: python-3.x, list, dictionary, key

I have the following code:

d = {'h' : 11111111, 't' : 1010101, 'e' : 10101111, 'n' : 1}

my_list = [1010101, 11111111, 10101111, 1]

get_keys = [k for k, v in d.items() if v in my_list]

print(get_keys)
The result I get is:

['h', 't', 'e', 'n']
However, I want it in the same order as my_list, so that I get:

['t', 'h', 'e', 'n']
How can I do this? Thanks, everyone!

Given (where all the values are also unique):

d = {'h' : 11111111, 't' : 1010101, 'e' : 10101111, 'n' : 1}

my_list = [1010101, 11111111, 10101111, 1]

new_list = []

for i in my_list:
    for key, value in d.items():
        if value == i:
            new_list.append(key)

print(new_list)
您可以反转该命令:

>>> d_inverted={v:k for k,v in d.items()}
Then index it in the desired order:

>>> [d_inverted[e] for e in my_list]
['t', 'h', 'e', 'n']
This works on any recent version of Python.
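As a side note, not part of the original answer: if some values in my_list might be missing from d, indexing the inverted dict directly raises a KeyError. A minimal sketch, assuming the extra value 999 has no matching key, that simply skips unmatched values:

d = {'h': 11111111, 't': 1010101, 'e': 10101111, 'n': 1}
my_list = [1010101, 11111111, 10101111, 1, 999]  # 999 has no matching key

# Invert the dict once, then keep only the values that actually have a key.
d_inverted = {v: k for k, v in d.items()}
keys_in_order = [d_inverted[v] for v in my_list if v in d_inverted]
print(keys_in_order)  # ['t', 'h', 'e', 'n']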


Note that the method you posted has O(n^2) complexity. That means the time it takes to run grows with the square of the number of elements: double the elements and the execution time quadruples. Not good.

By contrast, the method I posted is O(n), i.e. proportional to the number of elements: double the data and you double the execution time. Much better. (Though not as good as O(1), where the execution time is independent of the data size.)

To compare the two:

def bad(d,l):
    new_list = []

    for i in l:
        for key, value in d.items():
            if value == i:
                new_list.append(key)
    return new_list

def better(d,l):
    d_inverted={v:k for k,v in d.items()}
    return [d_inverted[e] for e in l]

if __name__=='__main__':
    import timeit  
    import random 

    for tgt in (5,10,20,40,80,160,320,640,1280):
        d={chr(i):i for i in range(100,100+tgt)}
        my_list=list(d.values())
        random.shuffle(my_list)
        print("Case of {} elements:".format(len(my_list)))
        for f in (bad, better):
            print("\t{:10s}{:.4f} secs".format(f.__name__, timeit.timeit("f(d,my_list)", setup="from __main__ import f, d, my_list", number=100)))
This prints:

Case of 5 elements:
    bad       0.0003 secs
    better    0.0001 secs
Case of 10 elements:
    bad       0.0006 secs
    better    0.0002 secs
Case of 20 elements:
    bad       0.0022 secs
    better    0.0003 secs
Case of 40 elements:
    bad       0.0071 secs
    better    0.0004 secs
Case of 80 elements:
    bad       0.0240 secs
    better    0.0008 secs
Case of 160 elements:
    bad       0.0912 secs
    better    0.0018 secs
Case of 320 elements:
    bad       0.3571 secs
    better    0.0032 secs
Case of 640 elements:
    bad       1.3704 secs
    better    0.0053 secs
Case of 1280 elements:
    bad       5.4443 secs
    better    0.0107 secs

You can see that the nested-loop approach starts out about 3x slower and grows to roughly 500x slower as the data size increases. The growth in time tracks the big-O prediction closely. You can imagine what happens with millions of elements.

Python dictionaries are not ordered. You may want to use …

Are the values all unique? Otherwise some keys could be ambiguous.

All keys are unique. Yes, I've got it sorted now, cheers anyway!

For each item in my_list you are looping over every item in the dict d. In other words, this has O(n^2) complexity. Please don't use this....
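On the ambiguity point raised in the comments: if the values were not unique, the single inverted dict would silently keep only one key per value. A minimal sketch (the extra key 'x' and the helper name keys_by_value are hypothetical, not from the original posts) that collects every matching key, still in my_list order:

from collections import defaultdict

d = {'h': 11111111, 't': 1010101, 'e': 10101111, 'n': 1, 'x': 1}  # 'n' and 'x' share a value
my_list = [1010101, 11111111, 10101111, 1]

# Map each value to all keys that hold it, then expand in list order.
keys_by_value = defaultdict(list)
for key, value in d.items():
    keys_by_value[value].append(key)

result = [key for value in my_list for key in keys_by_value.get(value, [])]
print(result)  # ['t', 'h', 'e', 'n', 'x']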