
How to remedy a "Segmentation fault (core dumped)" error when trying to fit a Keras model in Python (Anaconda) on Ubuntu 18.04


I have a new PC (running Ubuntu 18.04) with a 2080 Ti GPU. I am trying to train a neural network in Python using Keras (in an Anaconda environment), but I get a "Segmentation fault (core dumped)" error when I try to fit the model.

The code I am using runs perfectly fine on my Windows PC (which has a 1080 Ti GPU). The error seems to be related to GPU memory, and I can see something odd when running 'nvidia-smi' before fitting the model: about 800 MB of the available 11 GB of GPU memory is already taken, but as soon as I compile the model, all of the available memory is used up. In the Processes section I can see that this usage belongs to the Anaconda environment (i.e. …ics link/anaconda3/envs/py35/bin/python = 9677MiB).
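For reference, the nvidia-smi readings described above can also be captured programmatically. The snippet below is a small sketch that was not part of the original question; it assumes nvidia-smi is on the PATH and simply shells out to it, so it can be called before and after model.compile() to log the change in memory usage.

import subprocess

def gpu_memory_mb():
    """Return (used, total) GPU memory in MiB as reported by nvidia-smi."""
    out = subprocess.check_output(
        ["nvidia-smi",
         "--query-gpu=memory.used,memory.total",
         "--format=csv,noheader,nounits"]).decode()
    used, total = out.strip().splitlines()[0].split(", ")
    return int(used), int(total)

print("GPU memory (used/total MiB):", gpu_memory_mb())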

Here is the code, for reference:

from __future__ import print_function
import keras
from keras.datasets import cifar10
from keras.models import Sequential
from keras.layers import Dense, Dropout, Flatten
from keras.layers import Conv2D, MaxPooling2D, Activation, BatchNormalization
from keras.callbacks import ModelCheckpoint, CSVLogger
from keras import backend as K
import numpy as np

batch_size = 64
num_classes = 10
epochs = 10

# input image dimensions
img_rows, img_cols = 32, 32

# the data, shuffled and split between train and test sets
(x_train, y_train), (x_test, y_test) = cifar10.load_data()

if K.image_data_format() == 'channels_first':
    x_train = x_train.reshape(x_train.shape[0], 3, img_rows, img_cols)
    x_test = x_test.reshape(x_test.shape[0], 3, img_rows, img_cols)
    input_shape = (3, img_rows, img_cols)  # CIFAR-10 images have 3 colour channels
else:
    x_train = x_train.reshape(x_train.shape[0], img_rows, img_cols, 3)
    x_test = x_test.reshape(x_test.shape[0], img_rows, img_cols, 3)
    input_shape = (img_rows, img_cols, 3)

x_train = x_train.astype('float32')
x_test = x_test.astype('float32')

# normalise pixel values
mean = np.mean(x_train,axis=(0,1,2,3))
std = np.std(x_train,axis=(0,1,2,3))
x_train = (x_train-mean)/(std+1e-7)
x_test = (x_test-mean)/(std+1e-7)

print('x_train shape:', x_train.shape)
print(x_train.shape[0], 'train samples')
print(x_test.shape[0], 'test samples')

# convert class vectors to binary class matrices
y_train = keras.utils.to_categorical(y_train, num_classes)
y_test = keras.utils.to_categorical(y_test, num_classes)

model = Sequential()
model.add(Conv2D(32, kernel_size=(3, 3), activation='relu', input_shape=input_shape))

model.add(Conv2D(64, (3, 3)))
#model.add(BatchNormalization())
model.add(Activation("relu"))
model.add(MaxPooling2D(pool_size=(2, 2)))

model.add(Conv2D(128, (3, 3)))
#model.add(BatchNormalization())
model.add(Activation("relu"))
model.add(MaxPooling2D(pool_size=(2, 2)))

model.add(Conv2D(256, (3, 3)))
#model.add(BatchNormalization())
model.add(Activation("relu"))
model.add(MaxPooling2D(pool_size=(2, 2)))

model.add(Flatten())

model.add(Dense(1024))
model.add(Activation("relu"))
model.add(Dropout(0.25))

model.add(Dense(1024))
model.add(Activation("relu"))
model.add(Dropout(0.25))

model.add(Dense(1024))
model.add(Activation("relu"))
model.add(Dropout(0.25))

model.add(Dense(num_classes, activation='softmax'))

model.compile(loss=keras.losses.categorical_crossentropy,
              optimizer=keras.optimizers.Adadelta(),
              metrics=['accuracy'])

#load weights from previous run
#model.load_weights('model07_weights_best.hdf5')

from keras.preprocessing.image import ImageDataGenerator

datagen = ImageDataGenerator(
        featurewise_center=False,  # set input mean to 0 over the dataset
        samplewise_center=False,  # set each sample mean to 0
        featurewise_std_normalization=False,  # divide inputs by std of the dataset
        samplewise_std_normalization=False,  # divide each input by its std
        zca_whitening=False,  # apply ZCA whitening
        rotation_range=0.1,  # randomly rotate images in the range (degrees, 0 to 180)
        width_shift_range=0.1,  # randomly shift images horizontally (fraction of total width)
        height_shift_range=0.1,  # randomly shift images vertically (fraction of total height)
        horizontal_flip=True,  # randomly flip images
        vertical_flip=False)  # randomly flip images

# Compute quantities required for feature-wise normalization
# (std, mean, and principal components if ZCA whitening is applied).
datagen.fit(x_train)


#save weights and log
checkpoint = ModelCheckpoint("model14_weights_best.hdf5", monitor='val_acc', verbose=1, save_best_only=True, mode='max')
csv_logger = CSVLogger('model14_loss_log.csv', append=True, separator=';')
callbacks_list = [checkpoint,csv_logger]

# Fit the model on the batches generated by datagen.flow().
model.fit_generator(datagen.flow(x_train, y_train, batch_size=batch_size),
                    epochs=epochs,
                    validation_data=(x_test, y_test),
                    callbacks=callbacks_list)
I wouldn't expect anything to take up a huge amount of space on the GPU, but it seems to be saturated. As I mentioned, the code works on my Windows PC.


Any ideas on what might be causing this?

If it is a memory issue, then you can train with a smaller batch size. Try reducing the batch size to 32; if that doesn't work, keep reducing it all the way down to a batch size of 1 and watch the GPU usage.
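With the generator-based training loop from the question, this only means changing the value passed to datagen.flow(). A minimal sketch, reusing the variables defined in the question:

# Try a smaller batch size to see whether the crash is memory related;
# if 32 still fails, keep halving it (16, 8, ... down to 1) and watch nvidia-smi.
batch_size = 32

model.fit_generator(datagen.flow(x_train, y_train, batch_size=batch_size),
                    epochs=epochs,
                    validation_data=(x_test, y_test),
                    callbacks=callbacks_list)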

Also, add the following code at the top of your script. It allocates GPU memory dynamically, so you will be able to see how much GPU memory is actually used/needed at the smaller batch sizes.

import tensorflow as tf
from keras.backend.tensorflow_backend import set_session
config = tf.ConfigProto()
config.gpu_options.allow_growth = True  # dynamically grow the memory used on the GPU
config.log_device_placement = True  # to log device placement (on which device the operation ran)
                                    # (nothing gets printed in Jupyter, only if you run it standalone)
sess = tf.Session(config=config)
set_session(sess)  # set this TensorFlow session as the default session for Keras
Source:
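If dynamic growth alone is not enough, a related option in the same TF 1.x ConfigProto API (not mentioned in the answer above, so treat it as an optional extra) is to cap the fraction of GPU memory the process is allowed to allocate:

import tensorflow as tf
from keras.backend.tensorflow_backend import set_session

config = tf.ConfigProto()
# Let the process use at most ~50% of the card instead of grabbing
# (almost) all 11 GB at session creation.
config.gpu_options.per_process_gpu_memory_fraction = 0.5
sess = tf.Session(config=config)
set_session(sess)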


I hope this helps.

I don't think this has to do with memory size. I have been dealing with this issue recently. A segmentation fault error indicates that parallelizing the training process on the GPU has failed. If the process ran sequentially, you would not get this error no matter how large the dataset is. Also, there is no need to worry about your deep-learning setup itself.

Since you are setting up a new machine, I believe there are two likely reasons for the segmentation fault in your context.

First, I would check whether the GPU is installed correctly. However, based on the details you provided, I think the problem is more likely related to the module (Keras in your case), which is the second reason:

  • In that case, you may have run into some strange problem during the installation of the module or its dependencies. I recommend removing it, cleaning everything up, and reinstalling.

  • Are you sure tensorflow-gpu was installed correctly? What about CUDA and cuDNN? (See the sketch after this list for a quick way to check.)
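A quick way to check that second point from Python is sketched below; it assumes a TF 1.x installation, matching the rest of this answer.

import tensorflow as tf

# True only if this TensorFlow build was compiled with CUDA support,
# i.e. you really have tensorflow-gpu rather than the CPU-only package.
print("Built with CUDA:", tf.test.is_built_with_cuda())

# True only if a GPU is actually usable at runtime, which requires the
# driver, CUDA and cuDNN to be working together.
print("GPU available:", tf.test.is_gpu_available())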

If you believe Keras is installed correctly, try this test code:

from tensorflow.python.client import device_lib
print(device_lib.list_local_devices())
This will print whether your TensorFlow is using the CPU or the GPU backend.
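To narrow that output down to GPU entries only, you could filter on the device_type field, for instance:

from tensorflow.python.client import device_lib

# Keep only GPU devices; an empty list means TensorFlow fell back to the CPU backend.
gpus = [d for d in device_lib.list_local_devices() if d.device_type == 'GPU']
print("GPUs visible to TensorFlow:", [d.name for d in gpus])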

If all of the above goes smoothly, I doubt you will see the segmentation fault again.


Also check the TensorFlow test on the GPU.

I would add that I am installing tensorflow-gpu and keras through Anaconda, so CUDA and cuDNN are installed automatically.