PyTorch expected 4-dimensional input, but got 3-dimensional input

Tags: pytorch, conv-neural-network, artificial-intelligence, tensor, generative-adversarial-network

There are two discriminators, d_l1 and d_l2, and two generators, g_l1 and g_l2. The logic flow in the original framework is as follows:

To pretrain the discriminators we have positive data, but the negative data has to be generated by the generators, so a function generate_samples() is called:

import numpy as np  # used below; get_sample() is defined elsewhere in the project

def generate_samples(model_dict, negative_file, batch_size,
                     use_cuda=False, temperature=1.0):
    neg_data = []
    for i in range(batch_size):
        sample = get_sample(model_dict, use_cuda, temperature)

        if i < 25:
            print("Generated: %s" % sample)
        elif i == 25:
            sample = sample.cpu()
            neg_data.append(sample.data.numpy())
    neg_data = np.concatenate(neg_data, axis=0)

    print("Saving generated samples for reuse.")
    np.save(negative_file, neg_data)
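For completeness, here is a minimal usage sketch of how the saved negatives could be reloaded for discriminator pretraining; the file name and batch size are placeholders I made up, not values from the project:

# Hypothetical usage: generate negatives once, then reload them later.
generate_samples(model_dict, "negative_l2.npy", batch_size=64)
neg_data = np.load("negative_l2.npy")
print(neg_data.shape)  # first axis: number of collected samples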
Now, there are three possible factors. First, the second-level discriminator does not embed its input data into nn.Embedding vectors, because that data already comes in floating-point format, unlike the l1 discriminator, which does embed its input. In addition, in the line torch.zeros(batch_size, seq_len, vector_size, vocab_size) an extra dimension, vector_size, has been added to the zeros tensor. Finally, the variable vocab_size should not really matter here, because there is no such thing as counting the vocabulary of the second-level discriminator's possible inputs; it is essentially uncountably infinite. I have not changed or removed vocab_size, though, since I do not know what I would put in its place; it is simply left over from the first-level discriminator.
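To make the shape difference concrete, here is a small illustration of the two input paths. Every size below is an assumption chosen to line up with the traceback at the end of the question (batch 10, sequence length 30, vector size 2000); none of it is taken from the actual configuration:

import torch
import torch.nn as nn

batch_size, seq_len, vector_size, vocab_size, emb_dim = 10, 30, 2000, 5000, 10

# l1 path: integer token ids are looked up in an nn.Embedding,
# producing a float tensor of shape [batch, seq_len, emb_dim].
tokens = torch.randint(0, vocab_size, (batch_size, seq_len))
x_l1 = nn.Embedding(vocab_size, emb_dim)(tokens)      # [10, 30, 10]

# l2 path: the data is already floating point, e.g. a buffer like
# torch.zeros(batch_size, seq_len, vector_size) -- still only 3-D.
x_l2 = torch.zeros(batch_size, seq_len, vector_size)  # [10, 30, 2000]

print(x_l1.shape, x_l2.shape)  # both lack the channel dim that Conv2d needs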

The code of the second-level discriminator was cloned from the first-level discriminator and slightly modified, but its constructor still initializes its list of Conv2d modules in exactly the same way as the first-level discriminator.

The way the first-level (and, for now, the second-level) discriminator initializes its Conv2d modules is:

self.convs = nn.ModuleList([
    nn.Conv2d(1, num_f, (f_size, self.dis_emb_dim)) for f_size, num_f in zip(self.filter_sizes, self.num_filters)
])
where self.filter_sizes and self.num_filters are both lists of numbers describing a set of filter sizes and, for each size, how many filters of that size to create. Both the l1 and l2 discriminators should have appropriate values for these.
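As an illustration of how that list comprehension plays out, here it is with made-up values; the real filter_sizes, num_filters and dis_emb_dim of the project are not shown in the question, so these are assumptions:

import torch.nn as nn

# Illustrative values only -- not the project's configuration.
filter_sizes = [1, 2, 3]
num_filters = [100, 200, 200]
dis_emb_dim = 10

convs = nn.ModuleList([
    nn.Conv2d(1, num_f, (f_size, dis_emb_dim))
    for f_size, num_f in zip(filter_sizes, num_filters)
])

# The first entry then has a weight of shape [100, 1, 1, 10]; the values above
# were chosen so it matches the 4-dimensional weight reported in the traceback below.
print(convs[0].weight.shape)  # torch.Size([100, 1, 1, 10])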

How do I change the initialization of the ModuleList of nn.Conv2d modules so that the module list in the second-level discriminator takes a three-dimensional tensor as input, rather than the four-dimensional input it currently expects?
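For context, nn.Conv2d only ever accepts 4-D input of shape [N, C, H, W]. The sketch below reproduces the mismatch with sizes matching the traceback at the end and shows the two usual adaptations; both are generic PyTorch idioms offered as assumptions about what could work here, not code from the project:

import torch
import torch.nn as nn

x = torch.randn(10, 30, 2000)            # 3-D input, as in the traceback

# Option A: keep Conv2d and add the missing channel dimension on the fly.
conv2d = nn.Conv2d(1, 100, (1, 2000))    # kernel spans the full last dim
out_a = conv2d(x.unsqueeze(1))           # [10, 1, 30, 2000] -> [10, 100, 30, 1]

# Option B: switch to Conv1d, which natively takes 3-D input [N, C, L].
conv1d = nn.Conv1d(2000, 100, kernel_size=1)
out_b = conv1d(x.permute(0, 2, 1))       # [10, 2000, 30] -> [10, 100, 30]

print(out_a.shape, out_b.shape)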

The PyTorch documentation I found for nn.Conv2d was not much help.

Thank you for your time.

Traceback (most recent call last):
  File "/mnt/tmp/ReLeakGan/./aux.py", line 48, in <module>
    model_dict_l2, optimizer_dict_l2, scheduler_dict_l2 = pretrain_discriminator_l2(model_dict_l2,
  File "/mnt/tmp/ReLeakGan/main.py", line 367, in pretrain_discriminator_l2
    generate_samples(model_dict_l2, neg_l2_fd, batch_size, "l2", use_cuda, temperature)
  File "/mnt/tmp/ReLeakGan/main.py", line 279, in generate_samples
    sample = model_dict["discriminator"].get_sample(model_dict, use_cuda, temperature)
  File "/mnt/tmp/ReLeakGan/Discriminator.py", line 117, in get_sample
    f_t = discriminator(cur_sen)["feature"]
  File "/usr/lib64/python3.9/site-packages/torch/nn/modules/module.py", line 727, in _call_impl
    result = self.forward(*input, **kwargs)
  File "/mnt/tmp/ReLeakGan/Discriminator.py", line 71, in forward
    convs = [F.relu(conv(x)).squeeze(3) for conv in self.convs] # [batch_size * num_filter * seq_len] --> seq_length: Number of sentences in padded paragraph.
  File "/mnt/tmp/ReLeakGan/Discriminator.py", line 71, in <listcomp>
    convs = [F.relu(conv(x)).squeeze(3) for conv in self.convs] # [batch_size * num_filter * seq_len] --> seq_length: Number of sentences in padded paragraph.
  File "/usr/lib64/python3.9/site-packages/torch/nn/modules/module.py", line 727, in _call_impl
    result = self.forward(*input, **kwargs)
  File "/usr/lib64/python3.9/site-packages/torch/nn/modules/conv.py", line 423, in forward
    return self._conv_forward(input, self.weight)
  File "/usr/lib64/python3.9/site-packages/torch/nn/modules/conv.py", line 419, in _conv_forward
    return F.conv2d(input, weight, self.bias, self.stride,
RuntimeError: Expected 4-dimensional input for 4-dimensional weight [100, 1, 1, 10], but got 3-dimensional input of size [10, 30, 2000] instead