
Python: When I measure the running time of an algorithm that should be O(n) = n, I get O(n) = 1. What am I doing wrong?

Tags: python, arrays, algorithm, data-structures, big-o

I'm teaching myself data structures and trying to measure the difference in time complexity between an efficient and an inefficient way of implementing an append method for an array data structure. According to some math I did on paper, the inefficient method should be O(n) = n^2 and the efficient one should be O(n) = n.

The problem is that when I run the simulation and plot both cases on one chart, the inefficient method behaves as expected, but the efficient method comes out looking like O(n) = 1. Am I doing something wrong?

import datetime
import time
import random
import matplotlib.pyplot as plt
import numpy as np

# Inefficient append
class PyListInef:
   
   
   def __init__(self):
       self.items = []
   
   def append(self, item):
       # Inefficient append -> appending n items to the list causes a O(n) = n^2, since for each i for i in 1, 2, 3...n 
       # we need i * k operations in order to append every element to the new list. Then, by weak induction we prove 
       # that the number of required operations is n(n+1)/2 which implies O(n) = n^2
       self.items = self.items + [item]
       
   # Using magic method for our PyList to be an iterable object.
   def __iter__(self):
       for c in self.items:
           yield c
           
# Efficient append:

class PyList:
   def __init__(self):
       self.items = []
   
   def append(self, item):
       self.items.append(item)
   
   def __iter__(self):
       for c in self.items:
           yield c

# The inefficient append running time

lst = PyListInef()

time_dict_inef = dict()
time_dict_ef = dict()

series = np.linspace(1, 301, 300)

   
time.sleep(2)
   
for i in range(300):
   starttime = time.time()
   for j in range(i):
       lst.append(series[j])
       
   elapsed_time = time.time() - starttime
   time_dict_inef[i] = elapsed_time * 100000
   
# The efficient append running time

lst = PyList()

time.sleep(2)

for i in range(300):
   starttime = time.time()
   for j in range(i):
       lst.append(series[j])
       
   elapsed_time = time.time() - starttime
   time_dict_ef[i] = elapsed_time * 100000

plt.figure(figsize = (14,7))
plt.plot(time_dict_inef.keys(), time_dict_inef.values())
plt.plot(time_dict_ef.keys(), time_dict_ef.values())
plt.xlabel('Number of elements to append')
plt.ylabel('Elapsed time (microseconds)')
plt.title('Comparison between efficient appending vs inefficient appending in a list data structure')
plt.show()


Can you help me spot what I'm doing wrong?

time.time() has limited resolution. Your "efficient append" timings are fast enough that they usually finish before a single tick of time.time()'s resolution has passed. Notice that only two distinct times show up in the yellow plot: 0 ticks and 1 tick. The 1-tick results become more frequent toward the right of the plot because, even though the run is still shorter than one tick, a longer run means a higher probability that a tick occurs during it. If you ran with larger inputs, you would eventually see 2 ticks and more. (Also note that 100000 doesn't have enough zeros in it, so your timings are off by a factor of 10.)
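One way around the resolution issue, sketched below under the assumption that the higher-resolution time.perf_counter() clock is acceptable, is to time many repetitions of each append sequence and report the average. The helper name time_appends and the repeat count of 1000 are made up for illustration, not part of the original code.

import time

def time_appends(list_cls, n, repeats=1000):
    # Average time, in microseconds, of appending n items to a fresh
    # instance of list_cls, measured over `repeats` runs.
    total = 0.0
    for _ in range(repeats):
        lst = list_cls()
        start = time.perf_counter()
        for j in range(n):
            lst.append(j)
        total += time.perf_counter() - start
    return total / repeats * 1_000_000  # seconds -> microseconds (six zeros)

# For example: compare time_appends(PyList, 300) with time_appends(PyListInef, 300).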


Thank you so much, user2357112 supports Monica! I just noticed the missing 0, haha. It's 2 a.m. in my time zone and I'm a bit tired.. I guess it's time for bed. Do you know of any way around the resolution limit of time.time(), so that I could measure the efficient function and get something that looks like a straight line?

Instead of timing a single run, try timing 100/1000 runs?

Use higher values of n, like 1, 10, 100, 1000, 10000, 100000, ...; timings of only a few microseconds are basically useless because of system overhead, setup time, and so on...

I'll try that!! Thanks, guys.
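Following up on those comments, a minimal sketch of that approach might look like the code below: it times many repetitions of the whole append loop over a range of sizes using timeit. The particular sizes, the repeat=5 / number=100 settings, and the reuse of the question's PyList class are illustrative assumptions, not something given in the thread.

import timeit

sizes = [1, 10, 100, 1000, 10000, 100000]  # assumed example sizes

for n in sizes:
    # Run the whole append loop 100 times per repeat and keep the best
    # of 5 repeats, which averages out clock resolution and system noise.
    best = min(timeit.repeat(
        stmt="lst = PyList()\nfor j in range(n): lst.append(j)",
        globals={"PyList": PyList, "n": n},
        repeat=5,
        number=100,
    ))
    print(f"n={n}: {best / 100 * 1e6:.2f} microseconds per loop")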