Python: How can I train a Keras model on volumes of different sizes?



I am working on a project that segments the liver from CT image volumes. The CT volumes have different numbers of slices, so each volume has a different shape, for example (512, 512, 183), (512, 512, 64), (512, 512, 335), and so on.

As suggested in another post, I tried using None for the input dimensions together with GlobalMaxPooling3D(), but I still get the same error:

Traceback (most recent call last):
  File "E:\Liver_Seg_Project\LiveSegtraining.py", line 123, in train
    H = model.fit_generator(aug.flow(data, label, batch_size=100),
  File "C:\python3.6.1\Python\lib\site-packages\keras_preprocessing\image\image_data_generator.py", line 430, in flow
    subset=subset
  File "C:\python3.6.1\Python\lib\site-packages\keras_preprocessing\image\numpy_array_iterator.py", line 72, in __init__
    (len(x), len(xx))
ValueError: All arrays in x should have the same length. Found a pair with: len(x[0]) = 183, len(x[?]) = 64
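The traceback points inside Keras's NumpyArrayIterator, which stacks all samples into one numpy array, so ImageDataGenerator.flow cannot accept volumes with different depths. A fully-convolutional model with None spatial dimensions does accept variable shapes when fed one volume at a time; a minimal sketch (the layer sizes here are hypothetical, not the question's model):

```python
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, Model

# None for every spatial dimension; GlobalMaxPooling3D removes the
# dependence on the input size, so any volume shape is accepted.
inp = layers.Input(shape=(None, None, None, 1))
h = layers.Conv3D(4, 3, padding="same", activation="relu")(inp)
h = layers.GlobalMaxPooling3D()(h)
out = layers.Dense(1, activation="sigmoid")(h)
model = Model(inp, out)

# One volume per call works, even with different depths (8 vs. 12)...
a = model.predict(np.zeros((1, 8, 16, 16, 1), dtype="float32"))
b = model.predict(np.zeros((1, 12, 16, 16, 1), dtype="float32"))
# ...but stacking ragged volumes into a single array, which is what
# ImageDataGenerator.flow does internally, is what raises the ValueError.
```

So the model itself is not the obstacle; the data pipeline that batches the volumes is.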

Here is my model:

class ModelNw2:
    @staticmethod
    def build(depth, height, width):
        # None for every spatial dimension so volumes of any size fit
        input_size = (None, None, None, 1)
        x = Input(input_size)
        # layer 1
        x1 = Conv3D(32, 7, padding="same", data_format="channels_last")(x)
        x1 = Activation("relu")(x1)
        x1 = MaxPooling3D(pool_size=(2, 2, 2), strides=(2, 2, 2))(x1)
        # layer 2
        x2 = Conv3D(64, 5, padding="same")(x1)
        x2 = Activation("relu")(x2)
        x2 = MaxPooling3D(pool_size=(2, 2, 2), strides=(2, 2, 2))(x2)
        # layer 3
        x3 = Conv3D(128, 5, padding="same")(x2)
        x3 = Activation("relu")(x3)
        # layer 4
        x4 = Conv3D(128, 3, padding="same")(x3)
        x4 = Activation("relu")(x4)
        # concatenate layers 3 and 4
        concat34 = concatenate([x3, x4], axis=-1)
        # ... (the rest of the build method is not shown in the question)

aug = ImageDataGenerator(rotation_range=45, width_shift_range=0.1,
                         height_shift_range=0.1, shear_range=0.2, zoom_range=0.2,
                         horizontal_flip=True, fill_mode="nearest")


# initialize the model
print("[INFO] compiling model...")
model = ModelNw2.build(depth=183,width=512, height=512)
# train the network
print("[INFO] training network...")
weight_saver = ModelCheckpoint('weights1.h1', monitor='val_dice_coef', save_best_only=True, save_weights_only=True)
annealer = LearningRateScheduler(lambda x: 1e-3 * 0.8 ** x)
stop_here = EarlyStopping(patience=5)
start = timeit.default_timer()

H = model.fit_generator(aug.flow(data, label, batch_size=100),
    validation_data=(testX, testY), steps_per_epoch=50,
    epochs=EPOCHS, verbose=2, callbacks = [weight_saver, annealer])

end = timeit.default_timer()
The above is part of the training code.

 # initialize the data and labels
print("[INFO] loading images...")
data=[]
label=[]

for i in range(10):
    if i>5:
        j=i+10 
        filename ='TrainingF/image/liver-orig0' + str(j+1) + '.mhd'
    else:
        j=i
        filename ='TrainingF/image/liver-orig00' + str(j+1) + '.mhd'

    image = sitk.ReadImage(filename)
    image = sitk.GetArrayFromImage(image) 
    image=Norma(image)
    image = img_to_array(image)
    data.append(image)
print("loaded images = " + str(len(data)))


print("[INFO] loading masks...")
for i in range(10):

    if i>5:
        j=i+10 
       # print("label "+ str(j+1))
        filename ='TrainingF/label/liver-seg0' + str(j+1) + '.mhd'
    else:
        j=i
        filename ='TrainingF/label/liver-seg00' + str(j+1) + '.mhd'
    image = sitk.ReadImage(filename)
    mask = sitk.GetArrayFromImage(image)
    mask = img_to_array(mask)
    label.append(mask)
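Since the ValueError comes from stacking ragged arrays, one workaround (a sketch, not the author's code) is a hand-written generator that yields one (volume, mask) pair per batch, which can be passed to model.fit_generator in place of aug.flow:

```python
import numpy as np

def volume_generator(volumes, masks):
    """Yield one (volume, mask) pair per batch, looping forever, so
    volumes of different depths never get stacked into one array."""
    while True:
        for vol, mask in zip(volumes, masks):
            # add the batch dimension: (1, depth, height, width, channels)
            yield vol[np.newaxis, ...], mask[np.newaxis, ...]

# hypothetical ragged data: two volumes with depths 8 and 12
data = [np.zeros((8, 16, 16, 1), dtype="float32"),
        np.zeros((12, 16, 16, 1), dtype="float32")]
label = [np.zeros((8, 16, 16, 1), dtype="float32"),
         np.zeros((12, 16, 16, 1), dtype="float32")]

gen = volume_generator(data, label)
x, y = next(gen)   # first pair: shape (1, 8, 16, 16, 1)
```

The price is an effective batch size of 1, and ImageDataGenerator's augmentations would have to be reimplemented by hand inside the generator.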
Finally, I fit the data and mask numpy arrays to the model with the training code shown above.

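An alternative to variable-size inputs is to resample every volume to a common number of slices before training, so ordinary batching works again. A minimal sketch using scipy.ndimage.zoom (the target depth of 64 is a hypothetical choice):

```python
import numpy as np
from scipy.ndimage import zoom

def resample_depth(volume, target_depth):
    """Resample a (depth, height, width) volume to a fixed number of
    slices by linear interpolation along the depth axis only."""
    factor = target_depth / volume.shape[0]
    return zoom(volume, (factor, 1, 1), order=1)

vol = np.random.rand(183, 32, 32).astype("float32")  # hypothetical CT volume
fixed = resample_depth(vol, 64)                       # now 64 slices deep
```

Note that the segmentation masks must be resampled the same way (with order=0, nearest-neighbour, to keep the labels binary), and predictions then live in the resampled geometry rather than the original one.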