Python Tensorflow/Keras GPU memory page fault - CUDA_ERROR_ILLEGAL_ADDRESS and Embedding layer

Tags: python, tensorflow, keras, gpu

A one-hot encoded array of the same data never causes a problem, but creating an embedded (integer-index) array leads to GPU memory page faults and a CUDA_ERROR_ILLEGAL_ADDRESS error while training the model.
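
For contrast, here is a minimal sketch of the one-hot variant that reportedly trains without problems; char2onehot is a hypothetical helper name, and it assumes the same char_to_int mapping and (sentence, next-character) windows that char2vec below builds:

import numpy as np

def char2onehot(sentences, next_chars, char_to_int, seq_length):
    """Hypothetical one-hot counterpart of char2vec (illustration only)."""
    features = len(char_to_int)
    x = np.zeros((len(sentences), seq_length, features), dtype=bool)  # (samples, timesteps, features)
    y = np.zeros((len(sentences), features), dtype=bool)
    for i, sentence in enumerate(sentences):
        for t, char in enumerate(sentence):
            x[i, t, char_to_int[char]] = 1
        y[i, char_to_int[next_chars[i]]] = 1
    return x, y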

Sometimes training runs for several epochs before the error is thrown.

The full log is at

The full python file is at

The function used to create the embedded array:

import numpy as np

def char2vec(dataset):
    """Convert dataset into an integer array for an Embedding layer

    x: Embedded array
    y: one hot encoding array

    :param dataset:
    :return: x, y, samples, timesteps, features, char_to_int, int_to_char
    """

    try:
        raw_text = open(dataset, 'r').read().lower()
        print('[*]', dataset)
    except:
        raise

    chars = sorted(list(set(raw_text)))
    char_to_int = dict((c, i) for i, c in enumerate(chars))
    int_to_char = dict((i, c) for i, c in enumerate(chars))

    nb_chars = len(raw_text)
    features = len(chars)
    timesteps = seq_length  # seq_length is a module-level constant defined elsewhere in the script

    # cut the text in semi-redundant sequences of seq_length

    step = 3
    X = []
    Y = []
    for i in range(0, nb_chars - seq_length, step):
        X.append(raw_text[i: i + seq_length])
        Y.append(raw_text[i + seq_length])

    samples = len(X)

    print('[*] Corpus Length:', nb_chars)   # 163817
    print('[*] Features:', features)        # 61
    print('[*] Samples:', samples)          # 163761
    print('[*] Timestep:', seq_length)      # 56

    # https://github.com/minimaxir/char-embeddings/blob/master/text_generator_keras.py#L48
    # x = np.zeros((len(sentences), maxlen), dtype=np.int)
    # y = np.zeros((len(sentences), len(chars)), dtype=np.bool)
    # for i, sentence in enumerate(sentences):
    #     for t, char in enumerate(sentence):
    #         X[i, t] = char_indices[char]
    #     y[i, char_indices[next_chars[i]]] = 1

    print('[*] Vectorization...')
    x = np.zeros((samples, seq_length), dtype=np.int32)
    y = np.zeros((samples, features), dtype=bool)  # np.bool is deprecated; plain bool works
    for i, sentence in enumerate(X):
        for t, char in enumerate(sentence):
            x[i, t] = char_to_int[char]
        y[i, char_to_int[Y[i]]] = 1

    return x, y, samples, timesteps, features, char_to_int, int_to_char
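
For context, this is roughly how char2vec would be called before building the model (the path is a placeholder); the printed maximum index is only an illustrative sanity check, since a Keras Embedding layer requires every input index to be smaller than its input_dim:

x, y, samples, timesteps, features, char_to_int, int_to_char = char2vec('input.txt')  # placeholder path

print('[*] x shape:', x.shape)                    # (samples, seq_length) integer indices
print('[*] y shape:', y.shape)                    # (samples, features) one-hot targets
print('[*] max index:', x.max(), '<', features)   # indices must be < Embedding input_dim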
The model:

from keras.models import Sequential
from keras.layers import Embedding, Dropout, LSTM, Dense, Activation  # or the tensorflow.keras equivalents

model = Sequential()
model.add(Embedding(output_dim=64, input_dim=features))
model.add(Dropout(0.2))
model.add(LSTM(128, return_sequences=True))
model.add(Dropout(0.2))
model.add(LSTM(64))
model.add(Dropout(0.2))
model.add(Dense(features))
model.add(Activation('relu'))
model.compile(loss='categorical_crossentropy',
            optimizer='adam', metrics=['accuracy'])
model.fit(x, y,
        batch_size=batch_size,
        epochs=epochs,
        verbose=1,
        callbacks=callbacks_list,
        # validation_data=(x_val, y_val),
        # validation_split=0.33,
        shuffle=False,
        initial_epoch=initial_epoch)
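
For completeness, the snippets above also rely on several module-level names (seq_length, batch_size, epochs, callbacks_list, initial_epoch) defined elsewhere in the full script; the values below are placeholders, not taken from the original file:

from keras.callbacks import ModelCheckpoint  # or tensorflow.keras.callbacks

seq_length = 56      # matches the Timestep printed by char2vec
batch_size = 128     # placeholder value
epochs = 60          # placeholder value
initial_epoch = 0
callbacks_list = [ModelCheckpoint('weights-{epoch:02d}.hdf5')]  # placeholder checkpoint callback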