Python Keras model can't be parallelized across multiple GPUs

I'm trying to build a VAE to encode movie names and then train it on 8 GPUs. The model compiles and fits as expected on a single GPU, but it crashes when I try to run it on multiple GPUs. Here is the basic code for the autoencoder:

from keras.layers import Input, GRU, RepeatVector, Conv1D, Dense, TimeDistributed, Dropout, MaxPooling1D
from keras.models import Model
from keras.utils import to_categorical, plot_model
from keras.callbacks import ModelCheckpoint
import numpy as np
from keras import backend as K
from keras import metrics
from keras.layers import Lambda, Flatten, Layer
from keras import losses
import tensorflow as tf
import random

# Open file with 20k movie names from imdb
movies = open('/home/ubuntu/MovieNames/data/movies.dat')

data = []

# read data
for line in movies:
    data += [line.split("\t")]
names = [x[1] for x in data]

# get rid of the header
movie_names = names[1:]


chars = list('abcdefghijklmnopqrstuvwxyz ') + ['<END>', '<NULL>']
indices_for_chars = {c: i for i, c in enumerate(chars)}

NAME_MAX_LEN = 35 # include the <END> char

def name_to_vec(name, maxlen=NAME_MAX_LEN):
    name_lowercase = name.lower()
    v = np.zeros(maxlen, dtype=int)
    null_idx = indices_for_chars['<NULL>']
    v.fill(null_idx)
    # ignore cases
    for i, c in enumerate(name_lowercase):
        if i >= maxlen: break
        n = indices_for_chars.get(c, null_idx)
        v[i] = n
    v[min(len(name_lowercase), maxlen-1)] = indices_for_chars['<END>']
    return v
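As a quick sanity check of the encoding (a hypothetical example, not part of the original question; the movie name `'Up'` is arbitrary), `name_to_vec` pads with `<NULL>`, maps each character to its index, and writes `<END>` after the last character:

```python
import numpy as np

chars = list('abcdefghijklmnopqrstuvwxyz ') + ['<END>', '<NULL>']
indices_for_chars = {c: i for i, c in enumerate(chars)}
NAME_MAX_LEN = 35  # include the <END> char

def name_to_vec(name, maxlen=NAME_MAX_LEN):
    name_lowercase = name.lower()
    v = np.zeros(maxlen, dtype=int)
    null_idx = indices_for_chars['<NULL>']
    v.fill(null_idx)                          # pad everything with <NULL>
    for i, c in enumerate(name_lowercase):
        if i >= maxlen:
            break
        v[i] = indices_for_chars.get(c, null_idx)
    # terminate with <END>; for over-long names this overwrites the last slot
    v[min(len(name_lowercase), maxlen - 1)] = indices_for_chars['<END>']
    return v

v = name_to_vec('Up')
# 'u' -> 20, 'p' -> 15, then <END> (27), then <NULL> (28) padding
print(v[:4])  # -> [20 15 27 28]
```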

# convert to Keras-compatible form
names = np.array([to_categorical(name_to_vec(name),num_classes=len(chars)) for name in movie_names])

# Global parameters
NAME_LENGTH = names.shape[1]
ALPHABET = names.shape[2]
latent_dim = 10 * 8
intermediate_dim = 24 * 8
batch_size = 100 * 8
epochs = 20 
epsilon_std = 0.01

i = Input(shape=(NAME_LENGTH, ALPHABET))
x = Conv1D(256, 9)(i)
x = Dropout(0.2)(x)
x = Conv1D(256, 7)(x)
x = MaxPooling1D(pool_size=3)(x)
x = Dropout(0.2)(x)
x = Conv1D(256, 3)(x)
x = Dropout(0.2)(x)
x = Flatten()(x)
x = Dense(intermediate_dim, activation='relu')(x)
x = Dropout(0.2)(x)
z_mean = Dense(latent_dim)(x)
z_log_var = Dense(latent_dim)(x)

def sampling(args):
    z_mean, z_log_var = args
    epsilon = K.random_normal(shape=(batch_size, latent_dim),
                              mean=0., stddev=epsilon_std)
    return z_mean + K.exp(z_log_var) * epsilon

z = Lambda(sampling, output_shape=(latent_dim,))([z_mean, z_log_var])

h = Dense(intermediate_dim, activation='relu')(z)
h = RepeatVector(NAME_LENGTH)(h)
h = GRU(256, return_sequences=True)(h)
h = Dropout(0.2)(h)
h = GRU(256, return_sequences=True)(h)
h = TimeDistributed(Dense(ALPHABET, activation='softmax'), name='decoded_mean')(h)

autoencoder = Model(i, h)

def vae_objective(y_true, y_pred):
    recon = K.sum(K.categorical_crossentropy(y_true, y_pred), axis=1)
    kl = 0.5 * K.sum(K.exp(z_log_var) + K.square(z_mean) - 1. - z_log_var,axis=1)
    return recon + kl
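For reference, the KL term above is the closed-form KL divergence between the approximate posterior and a standard normal, under the convention that `z_log_var` holds the log of the variance. This NumPy sketch (added here purely as an illustration, not part of the original question) checks that the penalty vanishes for a standard-normal posterior and grows with any deviation:

```python
import numpy as np

def kl_term(z_mean, z_log_var):
    # 0.5 * sum(exp(log_var) + mean^2 - 1 - log_var), summed over latent dims
    return 0.5 * np.sum(np.exp(z_log_var) + np.square(z_mean) - 1.0 - z_log_var,
                        axis=1)

# Standard-normal posterior (mean 0, log-variance 0): zero penalty
print(kl_term(np.zeros((4, 10)), np.zeros((4, 10))))  # -> [0. 0. 0. 0.]

# Shifting the mean to 1 adds 0.5 per latent dimension
print(kl_term(np.ones((1, 10)), np.zeros((1, 10))))   # -> [5.]
```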
It's when I try to parallelize it that I run into problems:

model = to_multi_gpu(autoencoder, n_gpus=8)
model.compile(loss=vae_objective, optimizer='adam', metrics=["accuracy"])
model.fit(names[:8000], names[:8000], batch_size=batch_size)
gives me the following error:

InvalidArgumentError: You must feed a value for placeholder tensor 'input_4' with dtype float
     [[Node: input_4 = Placeholder[dtype=DT_FLOAT, shape=[], _device="/job:localhost/replica:0/task:0/gpu:0"]()]]
Note that all the parameters are evenly divisible by the number of GPUs, so I don't think that is the problem.

Use:

model = to_multi_gpu(autoencoder, n_gpus=8)
model.compile(loss=vae_objective, optimizer='adam', metrics=["accuracy"])
model.fit(names[:8000], names[:8000], batch_size=batch_size*8)
i.e. code the VAE with `batch_size`, but run `fit` with `batch_size * n_gpus`.

Make sure the sample size is divisible by `batch_size * n_gpus`.
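One further hedged note: the `sampling` Lambda hard-codes the global `batch_size`, which is a plausible cause of the `Incompatible shapes: [100,80] vs. [800,80]` error reported in the comments, since a multi-GPU wrapper typically feeds each of the 8 replicas a slice of 100 samples while `epsilon` keeps 800 rows. A batch-size-agnostic version infers the batch dimension from the tensor itself (in Keras, `K.shape(z_mean)[0]` instead of `batch_size`). The shape logic, sketched in plain NumPy for illustration only:

```python
import numpy as np

rng = np.random.default_rng(0)
latent_dim = 80
epsilon_std = 0.01

def sampling_fixed(z_mean, z_log_var, batch_size):
    # epsilon shape pinned to the *global* batch size, as in the question
    epsilon = rng.normal(0.0, epsilon_std, size=(batch_size, latent_dim))
    return z_mean + np.exp(z_log_var) * epsilon

def sampling_dynamic(z_mean, z_log_var):
    # epsilon shape follows the tensor it is added to
    # (in Keras: K.random_normal(shape=(K.shape(z_mean)[0], latent_dim), ...))
    epsilon = rng.normal(0.0, epsilon_std, size=z_mean.shape)
    return z_mean + np.exp(z_log_var) * epsilon

# One replica receives 1/8 of an 800-sample batch:
replica_mean = np.zeros((100, latent_dim))
replica_log_var = np.zeros((100, latent_dim))

try:
    sampling_fixed(replica_mean, replica_log_var, batch_size=800)
except ValueError as e:
    print('fixed epsilon fails:', e)  # (100,80) vs (800,80) do not broadcast

print(sampling_dynamic(replica_mean, replica_log_var).shape)  # -> (100, 80)
```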

I'm pretty sure the error is in the loss function `vae_objective`. Could you try the model with a standard loss function?

@WilmarvanOmmeren Switching to a standard categorical crossentropy loss now produces `InvalidArgumentError (see above for traceback): Incompatible shapes: [100,80] vs. [800,80]`. Thanks for the help!

@Benjaminley Did you ever find a solution to this problem? I run into the same "Incompatible shapes" issue. The error goes away if I change the batch size to `n_gpus`, but that is obviously not a solution.

It's been a while, but as far as I remember, no, I never solved it.