Python - How do I convert this TensorFlow code to PyTorch?
I have this simple TensorFlow code block; what is the equivalent in PyTorch? I'm stuck trying to port it, and I keep hitting runtime errors because of dimension mismatches. Here is the TensorFlow code:
x = tf.placeholder(tf.float32, [708, 256, 3])
y = tf.placeholder(tf.float32, [708, 4])
f1 = tf.Variable(tf.random_normal([5, 3, 8*3]))
f2 = tf.Variable(tf.random_normal([5, 8*3, 4*3]))
n1 = tf.Variable(tf.random_normal([61*12, 12]))
n2 = tf.Variable(tf.random_normal([12, 4]))
b1 = tf.Variable(tf.random_normal([8*3]))
b2 = tf.Variable(tf.random_normal([4*3]))
b3 = tf.Variable(tf.random_normal([12]))
b4 = tf.Variable(tf.random_normal([4]))
conv1 = tf.nn.conv1d(x,f1,stride=1,padding="VALID")
conv1 = tf.nn.bias_add(conv1, b1)
conv1 = tf.nn.sigmoid(conv1)
p1 = tf.layers.average_pooling1d(conv1, pool_size=2, strides=2, padding='VALID')
conv2 = tf.nn.conv1d(p1,f2,stride=1,padding="VALID")
conv2 = tf.nn.bias_add(conv2, b2)
conv2 = tf.nn.sigmoid(conv2)
p2 = tf.layers.average_pooling1d(conv2, pool_size=2, strides=2, padding='VALID')
nn = tf.layers.Flatten()(p2)
fc1 = tf.add(tf.matmul(nn, n1), b3)
fc1 = tf.nn.sigmoid(fc1)
out = tf.add(tf.matmul(fc1, n2), b4)
out = tf.nn.softmax(out)
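Before porting, it helps to trace the shapes through the TF graph. With "VALID" padding and stride 1, a conv shrinks the length by `kernel_size - 1`, and each pool (size 2, stride 2) halves it. A quick sketch (the `conv_out` helper is just defined here for the arithmetic):

```python
# Shape trace for the TF graph above.
# "VALID" conv, stride 1: L_out = L_in - kernel + 1
# avg pool, size 2, stride 2: L_out = L_in // 2
def conv_out(length, kernel):
    return length - kernel + 1

L = 256
L = conv_out(L, 5)   # conv1: 256 -> 252, channels 3 -> 24
L = L // 2           # pool1: 252 -> 126
L = conv_out(L, 5)   # conv2: 126 -> 122, channels 24 -> 12
L = L // 2           # pool2: 122 -> 61
flat = L * 12        # flatten: 61 * 12 = 732 features per sample
print(L, flat)       # 61 732
```

This is why `n1` has shape `[61*12, 12]`: the flattened feature vector after the second pool is 732 wide.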
How can I implement the same network in PyTorch? The code below is what I tried, but I think the dimensions get mixed up because of the channel ordering of the input.
class TwoLayerNet(torch.nn.Module):
    def __init__(self):
        super(TwoLayerNet, self).__init__()
        # nn.Conv1d(in_channels, out_channels, kernel_size); input must be (N, C, L)
        self.conv1 = nn.Sequential(
            nn.Conv1d(3, 8*3, kernel_size=5, stride=1),     # 3 -> 24 channels, length 256 -> 252
            nn.Sigmoid(),
            nn.AvgPool1d(kernel_size=2, stride=2))          # length 252 -> 126
        self.conv2 = nn.Sequential(
            nn.Conv1d(8*3, 4*3, kernel_size=5, stride=1),   # 24 -> 12 channels, length 126 -> 122
            nn.Sigmoid(),
            nn.AvgPool1d(kernel_size=2, stride=2))          # length 122 -> 61
        #self.drop_out = nn.Dropout()
        self.fc1 = nn.Linear(61*12, 12)                     # flattened 61 * 12 = 732 features
        self.fc2 = nn.Linear(12, 4)

    def forward(self, x):
        # x: (N, 3, 256) -- permute from (N, 256, 3) with x.permute(0, 2, 1) if needed
        out = self.conv1(x)
        out = self.conv2(out)
        out = out.reshape(out.size(0), -1)                  # flatten to (N, 732)
        out = torch.sigmoid(self.fc1(out))
        out = self.fc2(out)
        return torch.softmax(out, dim=1)
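For comparison, here is a minimal self-contained sketch of the same graph as an `nn.Sequential`, with a dummy forward pass to check the shapes. The key assumption is the layout conversion: TF's `conv1d` takes `(batch, length, channels)`, while PyTorch's `Conv1d` takes `(batch, channels, length)`, so the input is permuted first.

```python
import torch
import torch.nn as nn

# Channels-first equivalent of the TF graph: input (708, 3, 256)
model = nn.Sequential(
    nn.Conv1d(3, 8*3, kernel_size=5),        # f1: [5, 3, 24], length 256 -> 252
    nn.Sigmoid(),
    nn.AvgPool1d(kernel_size=2, stride=2),   # 252 -> 126
    nn.Conv1d(8*3, 4*3, kernel_size=5),      # f2: [5, 24, 12], length 126 -> 122
    nn.Sigmoid(),
    nn.AvgPool1d(kernel_size=2, stride=2),   # 122 -> 61
    nn.Flatten(),                            # 61 * 12 = 732 features
    nn.Linear(61*12, 12),                    # n1 + b3
    nn.Sigmoid(),
    nn.Linear(12, 4),                        # n2 + b4
    nn.Softmax(dim=1),
)

x = torch.randn(708, 256, 3)        # TF layout: (batch, length, channels)
x = x.permute(0, 2, 1)              # PyTorch layout: (batch, channels, length)
out = model(x)
print(out.shape)                    # torch.Size([708, 4])
```

Note that if you train with `nn.CrossEntropyLoss`, drop the final `Softmax` and feed it raw logits, since that loss applies log-softmax internally.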