Fixing incorrect dimensions in a PyTorch neural network


I am trying to train my neural network, which is written in PyTorch, but because of incorrect dimensions I get the following traceback:

Traceback (most recent call last):
  File "plot_parametric_pytorch.py", line 139, in <module>
    ops = opfun(X_train[smpl])
  File "plot_parametric_pytorch.py", line 92, in <lambda>
    opfun = lambda X: model.forward(Variable(torch.from_numpy(X)))
  File "/mnt_home/klee/LBSBGenGapSharpnessResearch/deepnet.py", line 77, in forward
    x = self.features(x)
  File "/home/klee/anaconda3/envs/sharpenv/lib/python3.7/site-packages/torch/nn/modules/module.py", line 550, in __call__
    result = self.forward(*input, **kwargs)
  File "/home/klee/anaconda3/envs/sharpenv/lib/python3.7/site-packages/torch/nn/modules/container.py", line 100, in forward
    input = module(input)
  File "/home/klee/anaconda3/envs/sharpenv/lib/python3.7/site-packages/torch/nn/modules/module.py", line 550, in __call__
    result = self.forward(*input, **kwargs)
  File "/home/klee/anaconda3/envs/sharpenv/lib/python3.7/site-packages/torch/nn/modules/pooling.py", line 141, in forward
    self.return_indices)
  File "/home/klee/anaconda3/envs/sharpenv/lib/python3.7/site-packages/torch/_jit_internal.py", line 209, in fn
    return if_false(*args, **kwargs)
  File "/home/klee/anaconda3/envs/sharpenv/lib/python3.7/site-packages/torch/nn/functional.py", line 539, in _max_pool2d
    input, kernel_size, stride, padding, dilation, ceil_mode)
RuntimeError: Given input size: (512x1x1). Calculated output size: (512x0x0). Output size is too small
I am trying to run the following code:

cudnn.benchmark = True
(X_train, y_train), (X_test, y_test) = cifar10.load_data()
X_train = X_train.astype('float32')
X_train = np.transpose(X_train, axes=(0, 3, 1, 2))
X_test = X_test.astype('float32')
X_test = np.transpose(X_test, axes=(0, 3, 1, 2))
X_train /= 255
X_test /= 255
device = torch.device('cuda:0')

# This is where you can load any model of your choice.
# I stole PyTorch Vision's VGG network and modified it to work on CIFAR-10.
# You can take this line out and add any other network and the code
# should run just fine.
model = cifar_shallow.cifar10_shallow()
#model.to(device)

# Forward pass
opfun = lambda X: model.forward(Variable(torch.from_numpy(X)))

# Forward pass through the network given the input
predsfun = lambda op: np.argmax(op.data.numpy(), 1)

# Do the forward pass, then compute the accuracy
accfun = lambda op, y: np.mean(np.equal(predsfun(op), y.squeeze()))*100

# Initial point
x0 = deepcopy(model.state_dict())

# Number of epochs to train for
# Choose a large value since LB training needs higher values
# Changed from 150 to 30
nb_epochs = 30 
batch_range = [25, 40, 50, 64, 80, 128, 256, 512, 625, 1024, 1250, 1750, 2048, 2500, 3125, 4096, 4500, 5000]

# parametric plot (i.e., don't train the network if set to True)
hotstart = False

if not hotstart:
    for batch_size in batch_range:
        optimizer = torch.optim.Adam(model.parameters())
        model.load_state_dict(x0)
        #model.to(device)
        average_loss_over_epoch = '-'
        print('Optimizing the network with batch size %d' % batch_size)
        np.random.seed(1337) #So that both networks see same sequence of batches
        for e in range(nb_epochs):
            model.eval()
            print('Epoch:', e, ' of ', nb_epochs, 'Average loss:', average_loss_over_epoch)
            average_loss_over_epoch = 0

            # Checkpoint the model every epoch
            torch.save(model.state_dict(), "./models/ShallowNetCIFAR10BatchSize" + str(batch_size) + ".pth")
            array = np.random.permutation(range(X_train.shape[0]))
            slices = X_train.shape[0] // batch_size
            beginning = 0
            end = 1

            # Training loop!
            for _ in range(slices):
                start_index = batch_size * beginning 
                end_index = batch_size * end
                smpl = array[start_index:end_index]
                model.train()
                optimizer.zero_grad()
                ops = opfun(X_train[smpl])  # <<----- error in this line
                tgts = Variable(torch.from_numpy(y_train[smpl]).long().squeeze())
                loss_fn = F.nll_loss(ops, tgts)
                average_loss_over_epoch += loss_fn.data.numpy() / (X_train.shape[0] // batch_size)
                loss_fn.backward()
                optimizer.step()
                beginning += 1
                end += 1

Please let me know whether there is a problem with the way I converted the neural network model from PyTorch to Keras. As far as I understand, the padding in PyTorch should always be 1 to match the padding='same' setting in Keras.
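As a quick sanity check of that correspondence, here is a minimal sketch (the layer sizes are only illustrative): for a 3x3 kernel with stride 1, padding=1 in PyTorch keeps the spatial size, which is what padding='same' does in Keras, while the unpadded convolution shrinks it by 2.

import torch
import torch.nn as nn

x = torch.randn(1, 3, 32, 32)                      # dummy CIFAR-10-sized input
same = nn.Conv2d(3, 64, kernel_size=3, padding=1)  # behaves like Keras padding='same' for stride 1
valid = nn.Conv2d(3, 64, kernel_size=3)            # behaves like Keras padding='valid' (the default)
print(same(x).shape)                               # torch.Size([1, 64, 32, 32])
print(valid(x).shape)                              # torch.Size([1, 64, 30, 30])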

The first convolution does not use any padding:

nn.Conv2d(3, 64, kernel_size=3, bias=False)
so the spatial dimensions are reduced by 2. For CIFAR-10 the input has size [batch_size, 3, 32, 32], and the output of this convolution has size [batch_size, 64, 30, 30]. All the other convolutions keep the spatial dimensions unchanged, but each max pooling halves them (with integer division). Since there are 5 pooling layers in total, the height/width change as follows:

30 -> 15 -> 7 -> 3 -> 1 -> 0 (error)
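The same progression can be reproduced with a minimal sketch (the channel count is kept at 64 for brevity; in the real network it grows to 512, which does not affect the spatial sizes):

import torch
import torch.nn as nn

x = torch.randn(1, 64, 30, 30)               # spatial size after the unpadded first convolution
pool = nn.MaxPool2d(kernel_size=2, stride=2)
try:
    for _ in range(5):                       # the network has 5 pooling stages
        x = pool(x)
        print(x.shape[-1])                   # 15, 7, 3, 1 -- then the 5th pool fails
except RuntimeError as err:
    print(err)                               # "Output size is too small", as in the traceback above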
In the Keras version you also use padding in the max pooling layers, which presumably only takes effect when the input is not evenly divisible by 2. If you wanted to replicate that behaviour in PyTorch, you would have to set the padding manually for the max pooling layers that receive an input with odd height/width (a rough sketch follows the next paragraph).

I don't think padding a max pooling with kernel size 2 is beneficial, especially since a ReLU comes right before it, which means the padded max pooling merely keeps the boundary values (this would be different for larger kernel sizes).
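For completeness, a minimal sketch of that manual replication (the helper name and the odd/even check are assumptions for illustration; this matches the output size of Keras padding='same', not necessarily its exact window alignment):

import torch
import torch.nn.functional as F

def same_maxpool(x):
    # Halve height/width like Keras MaxPooling2D(pool_size=2, strides=2, padding='same'):
    # pad by 1 only when the spatial size is odd, otherwise pool without padding.
    pad = 1 if x.shape[-1] % 2 == 1 else 0
    return F.max_pool2d(x, kernel_size=2, stride=2, padding=pad)

x = torch.randn(1, 64, 7, 7)
print(same_maxpool(x).shape)                 # torch.Size([1, 64, 4, 4]) instead of 3x3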

The simplest solution is to use padding in the first convolution so that the spatial dimensions stay unchanged:

nn.Conv2d(3, 64, kernel_size=3, padding=1, bias=False)

Another option is to remove the last max pooling layer, since the height/width are already 1 at that point. But that also means the last three convolutions are only ever applied to a single value, because their input size would be [batch_size, 512, 1, 1], which somewhat defeats the purpose of using convolutions.
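With the padded first convolution, the height/width go 32 -> 16 -> 8 -> 4 -> 2 -> 1 through the five pooling stages, so the last pooling no longer fails. A minimal check (channel count again simplified to 64):

import torch
import torch.nn as nn

x = torch.randn(1, 3, 32, 32)
x = nn.Conv2d(3, 64, kernel_size=3, padding=1, bias=False)(x)   # 32 -> 32
pool = nn.MaxPool2d(kernel_size=2, stride=2)
for _ in range(5):
    x = pool(x)                                                 # 32 -> 16 -> 8 -> 4 -> 2 -> 1
print(x.shape)                                                  # torch.Size([1, 64, 1, 1])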

Hi, I think that works well. I wonder if you could cross-check the Keras model (the correct one) against the PyTorch model to make sure they are equivalent. I am running into the following strange issue when validating the test accuracy (the Keras model is listed below; a rough cross-check sketch follows it):

def deepnet(nb_classes):
    global img_size
    model = Sequential()
    model.add(Conv2D(64, (3, 3), input_shape=img_size))
    model.add(BatchNormalization(axis=1))
    model.add(Activation('relu'))
    model.add(Dropout(0.3))
    model.add(Conv2D(64, (3, 3), padding='same'))
    model.add(BatchNormalization(axis=1))
    model.add(Activation('relu'))
    model.add(MaxPooling2D(pool_size=(2,2), strides=(2,2), padding='same'))




    model.add(Conv2D(128, (3, 3), padding='same'))
    model.add(BatchNormalization(axis=1))
    model.add(Activation('relu')); model.add(Dropout(0.4))
    model.add(Conv2D(128, (3, 3), padding='same'))
    model.add(BatchNormalization(axis=1))
    model.add(Activation('relu'))
    model.add(MaxPooling2D(pool_size=(2,2), strides=(2,2), padding='same'))



    model.add(Conv2D(256, (3, 3), padding='same'))
    model.add(BatchNormalization(axis=1))
    model.add(Activation('relu')); model.add(Dropout(0.4))
    model.add(Conv2D(256, (3, 3), padding='same'))
    model.add(BatchNormalization(axis=1))
    model.add(Activation('relu')); model.add(Dropout(0.4))
    model.add(Conv2D(256, (3, 3), padding='same'))
    model.add(BatchNormalization(axis=1))
    model.add(Activation('relu'))
    model.add(MaxPooling2D(pool_size=(2,2), strides=(2,2), padding='same'))



    model.add(Conv2D(512, (3, 3), padding='same'))
    model.add(BatchNormalization(axis=1))
    model.add(Activation('relu')); model.add(Dropout(0.4))
    model.add(Conv2D(512, (3, 3), padding='same'))
    model.add(BatchNormalization(axis=1))
    model.add(Activation('relu')); model.add(Dropout(0.4))
    model.add(Conv2D(512, (3, 3), padding='same'))
    model.add(BatchNormalization(axis=1))
    model.add(Activation('relu'))
    model.add(MaxPooling2D(pool_size=(2,2), strides=(2,2), padding='same'))



    model.add(Conv2D(512, (3, 3), padding='same'))
    model.add(BatchNormalization(axis=1))
    model.add(Activation('relu')); model.add(Dropout(0.4))
    model.add(Conv2D(512, (3, 3), padding='same'))
    model.add(BatchNormalization(axis=1))
    model.add(Activation('relu')); model.add(Dropout(0.4))
    model.add(Conv2D(512, (3, 3), padding='same'))
    model.add(BatchNormalization(axis=1))
    model.add(Activation('relu'))
    model.add(MaxPooling2D(pool_size=(2,2), strides=(2,2), padding='same'))


    model.add(Flatten()); model.add(Dropout(0.5))
    model.add(Dense(512))
    model.add(BatchNormalization())
    model.add(Activation('relu')); model.add(Dropout(0.5))
    model.add(Dense(nb_classes, activation='softmax'))
    return model
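One rough way to do that cross-check, assuming the PyTorch model is still available as cifar_shallow.cifar10_shallow() (as in the training script above) and that the global img_size used by deepnet() is set to the CIFAR-10 input shape, is to compare trainable parameter counts. Matching counts do not prove the models are equivalent, but a mismatch pinpoints a structural difference quickly:

from keras import backend as K
import cifar_shallow                      # module assumed from the training script above

img_size = (32, 32, 3)                    # CIFAR-10 input shape expected by deepnet()
keras_model = deepnet(nb_classes=10)
torch_model = cifar_shallow.cifar10_shallow()

# Compare trainable parameters only: Keras reports BatchNorm moving statistics as
# non-trainable weights, while PyTorch keeps them as buffers outside parameters().
keras_trainable = sum(K.count_params(w) for w in keras_model.trainable_weights)
torch_trainable = sum(p.numel() for p in torch_model.parameters() if p.requires_grad)

print('Keras trainable parameters  :', keras_trainable)
print('PyTorch trainable parameters:', torch_trainable)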