Python: why does pickle load numpy arrays so much faster than numpy?
I had already checked a related question before posting this one. From the answers there, one would expect numpy to be able to handle ndarrays faster. But look at these experiments.

The functions we tested:
import numpy as np
import pickle as pkl

a = np.random.randn(1000, 5)

with open("test.npy", "wb") as f:
    np.save(f, a)
with open("test.pkl", "wb") as f:
    pkl.dump(a, f)

def load_with_numpy(name):
    for i in range(1000):
        with open(name, "rb") as f:
            np.load(f)

def load_with_pickle(name):
    for i in range(1000):
        with open(name, "rb") as f:
            pkl.load(f)
Experimental results:

%timeit load_with_numpy("test.npy")
296 ms ± 1.6 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)

%timeit load_with_pickle("test.pkl")
28.2 ms ± 994 µs per loop (mean ± std. dev. of 7 runs, 10 loops each)
Why is this the case?

Comments:

We can't reproduce this without access to the files you're using, and you haven't shown how the files are created. Besides, even the code you have shown us doesn't actually work: for example, `load_with_pickle` tried to use an unqualified `load` function that was never imported or defined.

@user2357112 Fixed.

Increasing the size of `a`, once it has 100,000 elements I see better performance from numpy.

@user2699 Yes, now I see it. In the thread I linked, someone said that pickle uses numpy to save and load numpy arrays, but that doesn't seem to be true.
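The size effect mentioned in the comments can be checked with a self-contained sketch. This variant uses the standard-library `timeit` module instead of the IPython `%timeit` magic, so it runs as a plain script; the element counts and repeat count are illustrative, not part of the original benchmark:

```python
import pickle as pkl
import timeit

import numpy as np

def roundtrip_times(n_elements, repeats=200):
    """Save an array of n_elements floats both ways, then time reloading it."""
    a = np.random.randn(n_elements)
    with open("test.npy", "wb") as f:
        np.save(f, a)
    with open("test.pkl", "wb") as f:
        pkl.dump(a, f)

    def load_npy():
        with open("test.npy", "rb") as f:
            np.load(f)

    def load_pkl():
        with open("test.pkl", "rb") as f:
            pkl.load(f)

    t_np = timeit.timeit(load_npy, number=repeats)
    t_pkl = timeit.timeit(load_pkl, number=repeats)
    return t_np, t_pkl

for n in (5_000, 1_000_000):
    t_np, t_pkl = roundtrip_times(n)
    print(f"{n:>9} elements: np.load {t_np:.3f}s, pickle.load {t_pkl:.3f}s")
```

For tiny arrays, the fixed per-call cost (parsing the `.npy` header, attribute lookups, file handling) dominates; as the array grows, the raw byte copy dominates and the gap should narrow or reverse, which is consistent with the 100,000-element observation above.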