TensorFlow: transposed convolution to a specific length
I am building a GAN and am facing a silly problem. My features are 1D with shape [174, 1]. The discriminator works, but I am having trouble upsampling my latent variable to (174, 1) in the generator. I adapted some MNIST code that used Conv2DTranspose, so maybe there is something equivalent for 1D signals:
import tensorflow as tf
from tensorflow.keras.layers import (Input, Dense, BatchNormalization,
                                     LeakyReLU, Reshape, Conv1DTranspose)

z_size = 100
output_size = (174, 1)
n_filters = 128
n_blocks = 2
size_factor = 2**n_blocks
hidden_size = output_size[0] // size_factor

model = tf.keras.Sequential()
model.add(Input(shape=(z_size,)))
model.add(Dense(units=n_filters * hidden_size))
model.add(BatchNormalization())
model.add(LeakyReLU())
model.add(Reshape((hidden_size, n_filters)))
# now we upsample the feature space
model.add(Conv1DTranspose(filters=n_filters,
                          kernel_size=4,
                          strides=1,
                          padding='same',
                          use_bias=False))
model.add(BatchNormalization())
model.add(LeakyReLU())
nf = n_filters
for i in range(n_blocks):
    nf = nf // 2
    model.add(Conv1DTranspose(filters=nf,
                              kernel_size=5,
                              strides=2,
                              padding='same',
                              use_bias=False))
    model.add(BatchNormalization())
    model.add(LeakyReLU())
model.add(Conv1DTranspose(filters=output_size[1],
                          kernel_size=5,
                          strides=1,
                          padding='same',
                          use_bias=False,
                          activation='tanh'))
model.summary()
But the sizes come out wrong:
Model: "sequential"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
dense (Dense) (None, 5504) 555904
_________________________________________________________________
batch_normalization (BatchNo (None, 5504) 22016
_________________________________________________________________
leaky_re_lu (LeakyReLU) (None, 5504) 0
_________________________________________________________________
reshape (Reshape) (None, 43, 128) 0
_________________________________________________________________
conv1d_transpose (Conv1DTran (None, 43, 128) 65536
_________________________________________________________________
batch_normalization_1 (Batch (None, 43, 128) 512
_________________________________________________________________
leaky_re_lu_1 (LeakyReLU) (None, 43, 128) 0
_________________________________________________________________
conv1d_transpose_1 (Conv1DTr (None, 86, 64) 40960
_________________________________________________________________
batch_normalization_2 (Batch (None, 86, 64) 256
_________________________________________________________________
leaky_re_lu_2 (LeakyReLU) (None, 86, 64) 0
_________________________________________________________________
conv1d_transpose_2 (Conv1DTr (None, 172, 32) 10240
_________________________________________________________________
batch_normalization_3 (Batch (None, 172, 32) 128
_________________________________________________________________
leaky_re_lu_3 (LeakyReLU) (None, 172, 32) 0
_________________________________________________________________
conv1d_transpose_3 (Conv1DTr (None, 172, 1) 160
=================================================================
Total params: 695,712
Trainable params: 684,256
Non-trainable params: 11,456
_________________________________________________________________
This stems from hidden_size = output_size[0] // size_factor, which is 174 // 4 → 43.5 floored to 43, so the upsampling only reaches 172.
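The length bookkeeping can be checked without building the model: with padding='same', each stride-s Conv1DTranspose maps a length L to L * s, so the final length is fixed by the seed length alone. A small sketch (the helper name is mine, not from the post):

```python
def upsampled_length(hidden_size, n_blocks, stride=2):
    """Final length after n_blocks stride-`stride` transposed convs
    with padding='same' (each block multiplies the length by `stride`)."""
    length = hidden_size
    for _ in range(n_blocks):
        length *= stride
    return length

hidden_size = 174 // 2**2                          # integer division floors 43.5 to 43
print(upsampled_length(hidden_size, n_blocks=2))   # 43 -> 86 -> 172, not 174
```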
So should I compute the correct sizes by hand, or do the upsampling in a single step?
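One common workaround (my suggestion, not from the post) is to round the seed length up instead of down, so the stack overshoots the target, then trim the excess with a Cropping1D layer after the last Conv1DTranspose. A hypothetical helper that computes the two numbers:

```python
import math

def generator_sizes(target_len, n_blocks, stride=2):
    """Round the seed length up so the upsampled length covers target_len,
    and report how much Cropping1D must trim from each end."""
    factor = stride ** n_blocks
    hidden_size = math.ceil(target_len / factor)   # 174 -> 44 instead of 43
    upsampled = hidden_size * factor               # 44 * 4 = 176
    excess = upsampled - target_len                # 2 samples to crop
    # split the trim between both ends for Cropping1D(cropping=(left, right))
    return hidden_size, (excess // 2, excess - excess // 2)

print(generator_sizes(174, n_blocks=2))  # (44, (1, 1))
```

With hidden_size = 44 the stack produces 44 → 88 → 176, and appending `model.add(Cropping1D(cropping=(1, 1)))` brings it back to 174. Alternatively, since the generator above already ends on a stride-1 layer, keeping hidden_size = 43 and switching that final Conv1DTranspose to padding='valid' with kernel_size=3 would also stretch 172 to 174, at the cost of edge effects at the boundaries.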