Batch training on large datasets with a multi-input model in Python


I have a Keras model with 50 inputs (x1 through x50) and 1 output. A common problem I face is training on multiple large files in Keras: combined, they are too large to fit in GPU memory.

At first, I tried:

import numpy as np
import pandas as pd

x1 = np.load('x1_train.npy')
x2 = np.load('x2_train.npy')
x3 = np.load('x3_train.npy')
x4 = np.load('x4_train.npy')
x5 = np.load('x5_train.npy')
x6 = np.load('x6_train.npy')

y_train = pd.read_csv("train_labels.csv")
and then fit the model with:

model.fit([x1,x2,x3,x4,x5,x6], y_train, validation_data = ([x1_val,x2_val,x3_val,x4_val,x5_val,x6_val],y_validate), epochs = 15, batch_size = 20, verbose = 2)
But there was not enough RAM available to hold the data, so it crashed.
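One way to avoid pulling an entire array into RAM (a sketch, not from the question; the file name and shape below are small stand-ins) is to open the .npy files as memory-mapped arrays, so only the slices you index are actually read from disk:

```python
import numpy as np

# Create a small stand-in file (the question's real arrays are
# shaped (77156, 30, 50, 1)).
np.save('x1_train.npy', np.zeros((100, 30, 50, 1), dtype=np.float32))

# mmap_mode='r' maps the file instead of reading it into RAM;
# only the indexed window is loaded from disk.
x1 = np.load('x1_train.npy', mmap_mode='r')

batch = np.asarray(x1[0:20])  # materialise just one batch in memory
print(batch.shape)            # (20, 30, 50, 1)
```

Indexing the memory-mapped array stays cheap regardless of the file's total size, which is what makes per-batch loading feasible.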

Now I am doing this:

def generate_batches(batch_size):
  while True:
    x1 = np.load('x1_train.npy')
    x2 = np.load('x2_train.npy')
    x3 = np.load('x3_train.npy')
    x4 = np.load('x4_train.npy')
    x5 = np.load('x5_train.npy')
    x6 = np.load('x6_train.npy')

    y_train = pd.read_csv("train_labels.csv")

    for cbatch in range(0, x1.shape[0], batch_size):
      i = cbatch + batch_size
      yield ([x1[cbatch:i,:,:], x2[cbatch:i,:,:], x3[cbatch:i,:,:],
              x4[cbatch:i,:,:], x5[cbatch:i,:,:], x6[cbatch:i,:,:]],
             y_train[cbatch:i])
I plan to use fit_generator to fit the model, but the code above still crashes.
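A likely culprit is that the generator still loads every full array into RAM inside the loop. Here is a sketch of a version that memory-maps the files once and only materialises one batch at a time (six inputs as in the snippet; tiny stand-in files are created so the sketch runs end to end):

```python
import numpy as np
import pandas as pd

# Stand-in files so the sketch is runnable; the real arrays are
# shaped (77156, 30, 50, 1) per the question.
for k in range(1, 7):
    np.save('x%d_train.npy' % k, np.zeros((100, 30, 50, 1), dtype=np.float32))
pd.DataFrame({'label': np.zeros(100)}).to_csv('train_labels.csv', index=False)

def generate_batches(batch_size):
    # Memory-map the inputs once, outside the epoch loop; only the
    # slice indexed for each batch is actually read from disk.
    xs = [np.load('x%d_train.npy' % k, mmap_mode='r') for k in range(1, 7)]
    y_train = pd.read_csv('train_labels.csv').values
    while True:
        for start in range(0, xs[0].shape[0], batch_size):
            end = start + batch_size
            # np.asarray copies just this batch into RAM.
            yield ([np.asarray(x[start:end]) for x in xs],
                   y_train[start:end])

gen = generate_batches(20)
inputs, labels = next(gen)
print(len(inputs), inputs[0].shape, labels.shape)
```

Extending this to 50 inputs only changes the `range(1, 7)` bound; the memory used per step stays at one batch per input.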

x1 through x50 all have shape (77156, 30, 50, 1).
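That shape makes the crash plausible: assuming float32 (4 bytes per element; the dtype isn't stated in the question), each input array is about 0.43 GB, so all 50 together need roughly 21.6 GB:

```python
# Size of one input array of shape (77156, 30, 50, 1), assuming float32.
elements = 77156 * 30 * 50 * 1
bytes_per_array = elements * 4           # 4 bytes per float32 element
gb_per_array = bytes_per_array / 1024**3
gb_all_50 = gb_per_array * 50
print(round(gb_per_array, 2), round(gb_all_50, 1))  # 0.43 21.6
```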