Python: Keras and PyTorch NN code has subtle differences, clarification needed

Tags: python, tensorflow, keras, pytorch

I have Keras and PyTorch code for the same neural network, but some lines are swapped between the two. I would like to know why, in the PyTorch version, max pooling appears before batch normalization and the ReLU activation, while in Keras it comes after those two layers. I am also confused about the flattening: how does PyTorch arrive at 64 * 7 * 7 (where does the 7 come from)?

Here is the Keras version of the shallow AlexNet:

from keras.models import Sequential
from keras.layers import (Activation, BatchNormalization, Conv2D, Dense,
                          Dropout, Flatten, MaxPooling2D)


def shallownet(nb_classes):
    global img_size
    model = Sequential()
    # Block 1: conv -> BN -> ReLU -> max-pool
    model.add(Conv2D(64, (5, 5), input_shape=img_size, data_format='channels_first'))
    model.add(BatchNormalization(axis=1))
    model.add(Activation('relu'))
    model.add(MaxPooling2D(pool_size=(3,3), strides=(2,2), padding='same', data_format='channels_first'))

    # Block 2: conv -> BN -> ReLU -> max-pool
    model.add(Conv2D(64, (5, 5), padding='same', data_format='channels_first'))
    model.add(BatchNormalization(axis=1))
    model.add(Activation('relu'))
    model.add(MaxPooling2D(pool_size=(3,3), strides=(2,2), padding='same', data_format='channels_first'))

    # Classifier: two hidden FC layers with BN, ReLU and dropout, then softmax
    model.add(Flatten())
    model.add(Dense(384))
    model.add(BatchNormalization())
    model.add(Activation('relu'))
    model.add(Dropout(0.5))
    model.add(Dense(192))
    model.add(BatchNormalization())
    model.add(Activation('relu'))
    model.add(Dropout(0.5))
    model.add(Dense(nb_classes, activation='softmax'))
    return model
And the PyTorch version:

import torch.nn as nn
import torch.nn.functional as F


class AlexNet(nn.Module):

    def __init__(self, num_classes=10):
        super(AlexNet, self).__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 64, kernel_size=5, padding=2,
                      bias=False),
            nn.MaxPool2d(kernel_size=3, stride=2),  # pooling placed before BN and ReLU here
            nn.BatchNorm2d(64),
            nn.ReLU(inplace=True),
            nn.Conv2d(64, 64, kernel_size=5, padding=2, bias=False),
            nn.MaxPool2d(kernel_size=3, stride=2),
            nn.BatchNorm2d(64),
            nn.ReLU(inplace=True),
        )
        self.classifier = nn.Sequential(
            nn.Linear(64 * 7 * 7, 384, bias=False),
            nn.BatchNorm1d(384),
            nn.ReLU(inplace=True),
            nn.Dropout(0.5),
            nn.Linear(384, 192, bias=False),
            nn.BatchNorm1d(192),
            nn.ReLU(inplace=True),
            nn.Dropout(0.5),
            nn.Linear(192, num_classes)
        )
        self.regime = {  # training schedule (presumably consumed by an external training loop)
            0: {'optimizer': 'SGD', 'lr': 1e-3,
                'weight_decay': 5e-4, 'momentum': 0.9},
            60: {'lr': 1e-2},
            120: {'lr': 1e-3},
            180: {'lr': 1e-4}
        }

    def forward(self, x):
        x = self.features(x)
        x = x.view(-1, 64 * 7 * 7)
        x = self.classifier(x)
        return F.log_softmax(x, dim=1)  # dim must be given explicitly in recent PyTorch


def cifar10_shallow(**kwargs):
    # kwargs is a dict, so use .get(); getattr() on a dict would always return the default
    num_classes = kwargs.get('num_classes', 10)
    return AlexNet(num_classes)


def cifar100_shallow(**kwargs):
    num_classes = kwargs.get('num_classes', 100)
    return AlexNet(num_classes)
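
A quick smoke test (my own sketch, assuming 32x32 CIFAR inputs, which is consistent with the cifar10_shallow/cifar100_shallow wrappers) confirms the network runs end to end:

import torch

model = cifar10_shallow(num_classes=10)
model.eval()  # eval mode so BatchNorm uses its running statistics
with torch.no_grad():
    out = model(torch.randn(2, 3, 32, 32))
print(out.shape)  # torch.Size([2, 10]) -- log-probabilities over 10 classes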

Max pooling downsamples the data by taking the maximum value within each pooling window. Which value wins those comparisons is unaffected by batch normalization and by the ReLU activation, because both are monotonically non-decreasing functions (BN, with a positive sigma, is in fact strictly increasing):

relu(x) = max(0, x)
bn(x) = (x - mu) / sigma
Therefore it does not matter whether max pooling is applied after or before these two layers (applying it before them is likely more efficient, since BN and ReLU then operate on an already-downsampled tensor).
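
To see this concretely, here is a minimal numerical check (my own sketch, not from the original post) that max pooling commutes with ReLU and with a BN-style affine map:

import torch
import torch.nn.functional as F

x = torch.randn(1, 64, 15, 15)  # an arbitrary feature map

# ReLU is monotonically non-decreasing, so it commutes with max pooling
a = F.max_pool2d(F.relu(x), kernel_size=3, stride=2)
b = F.relu(F.max_pool2d(x, kernel_size=3, stride=2))
print(torch.equal(a, b))  # True

# The same holds for a BN-style normalization with positive per-channel sigma
mu = x.mean(dim=(0, 2, 3), keepdim=True)
sigma = x.std(dim=(0, 2, 3), keepdim=True)
a = F.max_pool2d((x - mu) / sigma, kernel_size=3, stride=2)
b = (F.max_pool2d(x, kernel_size=3, stride=2) - mu) / sigma
print(torch.allclose(a, b))  # True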

Regarding the flattening, I believe the 7s are the spatial dimensions of the layer feeding the flatten step (the view call), i.e. H = W = 7. The total number of values is therefore the spatial dimensions times the channel count, which is 64 * 7 * 7.
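
For completeness, the 7 can be derived from the standard output-size formula floor((n + 2p - k) / s) + 1, again assuming 32x32 CIFAR inputs:

def out_size(n, k, p=0, s=1):
    # standard conv/pool output-size formula: floor((n + 2p - k) / s) + 1
    return (n + 2 * p - k) // s + 1

n = 32                     # CIFAR images are 32x32 (assumed input size)
n = out_size(n, k=5, p=2)  # conv1 (padding=2): 32 -> 32
n = out_size(n, k=3, s=2)  # pool1: 32 -> 15
n = out_size(n, k=5, p=2)  # conv2 (padding=2): 15 -> 15
n = out_size(n, k=3, s=2)  # pool2: 15 -> 7
print(n, 64 * n * n)       # 7 3136 -> matches nn.Linear(64 * 7 * 7, 384)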