Python: random CIFAR-10 training and test sets


I would like to randomly regroup the 60,000 observations of the CIFAR-10 dataset that ships with the keras.datasets library. I know it probably doesn't matter much for building a neural network, but I'm new to Python and I want to learn how to handle data in this language.

So, to import the dataset, I run

from keras.datasets import cifar10
(X_train, Y_train), (X_test, Y_test) = cifar10.load_data()
This automatically gives me the default split into training and test sets, but I would like to mix them up. The steps I have in mind are:

  • Concatenate the training and test sets into a dataset X (60000, 32, 32, 3) and a dataset Y (60000, 1)
  • Generate some random indices to subset the X and Y datasets into, for example, a training set of 50,000 observations and a test set of 10,000 observations
  • Create new datasets X_train, X_test, Y_train, Y_test (as ndarrays) with the same shapes as the originals, so that I can start training a convolutional neural network
But perhaps there is a quicker way?

I have been trying different approaches for hours without getting anywhere. Can anyone help me? Thanks a lot.

You can use scikit-learn's train_test_split to split the data. If you want the same random index selection every time you run the code, you can set the random_state value and get the same test/train split on every run.

from keras.datasets import cifar10
(X_train, Y_train), (X_test, Y_test) = cifar10.load_data()

# View first image
import matplotlib.pyplot as plt
plt.imshow(X_train[0])
plt.show()
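
A minimal sketch of that approach, assuming scikit-learn is installed (the test_size and random_state values below are just example choices):

from keras.datasets import cifar10
from sklearn.model_selection import train_test_split
import numpy as np

(X_train, Y_train), (X_test, Y_test) = cifar10.load_data()

# Pool the default 50K/10K split back into a single 60K dataset
X = np.concatenate((X_train, X_test))
Y = np.concatenate((Y_train, Y_test))

# Re-split: 10,000 test observations, reproducible thanks to random_state
X_train, X_test, Y_train, Y_test = train_test_split(
    X, Y, test_size=10000, random_state=42)

print(X_train.shape, Y_train.shape)  # (50000, 32, 32, 3) (50000, 1)
print(X_test.shape, Y_test.shape)    # (10000, 32, 32, 3) (10000, 1)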


Here is the full demo of what you asked for. First we download the data and shuffle it once; then we take the first 50K as the training set and the remaining 10K as the validation set.

In [21]: import tensorflow  
In [22]: import tensorflow.keras.datasets as datasets    
In [23]: cifar10 = datasets.cifar10.load_data() 
In [24]: (X_train, Y_train), (X_test, Y_test) = datasets.cifar10.load_data() 

In [25]: X_train.shape, Y_train.shape 
Out[25]: ((50000, 32, 32, 3), (50000, 1))

In [26]: X_test.shape, Y_test.shape 
Out[26]: ((10000, 32, 32, 3), (10000, 1)) 

In [27]: import numpy as np
In [28]: X, Y = np.vstack((X_train, X_test)), np.vstack((Y_train, Y_test))  

In [29]: X.shape, Y.shape 
Out[29]: ((60000, 32, 32, 3), (60000, 1)) 

In [30]: # Shuffle only the training data along axis 0 
    ...: def shuffle_train_data(X_train, Y_train): 
    ...:     """called after each epoch""" 
    ...:     perm = np.random.permutation(len(Y_train)) 
    ...:     Xtr_shuf = X_train[perm] 
    ...:     Ytr_shuf = Y_train[perm] 
    ...:      
    ...:     return Xtr_shuf, Ytr_shuf 


In [31]: X_shuffled, Y_shuffled = shuffle_train_data(X, Y) 

In [32]: (X_train_new, Y_train_new) = X_shuffled[:50000, ...], Y_shuffled[:50000, ...] 

In [33]: (X_test_new, Y_test_new) = X_shuffled[50000:, ...], Y_shuffled[50000:, ...] 

In [34]: X_train_new.shape, Y_train_new.shape 
Out[34]: ((50000, 32, 32, 3), (50000, 1))

In [35]: X_test_new.shape, Y_test_new.shape 
Out[35]: ((10000, 32, 32, 3), (10000, 1))


We have a function shuffle_train_data that shuffles the data consistently, keeping each example and its label paired in the same order.
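
If you want to convince yourself that the pairing really survives the shuffle, here is a small sanity check added purely for illustration; it reuses X, Y and shuffle_train_data from the session above and shuffles the row indices alongside the images:

import numpy as np

# Shuffle the row indices (shaped like Y) together with the images, then
# verify that each shuffled image still maps back to the row it came from,
# i.e. images and labels are permuted with the same permutation.
indices = np.arange(len(Y)).reshape(-1, 1)          # 0 .. 59999
X_check, idx_check = shuffle_train_data(X, indices)
assert np.array_equal(X_check, X[idx_check.ravel()])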

Thank you so much, this is very helpful for a newcomer like me!