Python 3.x "'NoneType' object is not callable" - MDN with Keras and backpropagation
I have edited this code in many ways. I am trying to implement a mixture density network (MDN) that takes a one-dimensional input and returns a two-dimensional output.
I am new to TensorFlow and would appreciate any help. Every time I try this, something goes wrong. Right now it gives me this error:
TypeError: in converted code:
    <ipython-input-127-593edcacdfbd>:6 train_step  *
        mdn_loss = mdn_loss_func(output_dim, num_mixes, x_true, y_true)
    <ipython-input-124-785c7bda58fa>:3 mdn_loss_func  *
        y_pred = mdn_model(x_true)
    C:\ProgramData\Anaconda3\lib\site-packages\tensorflow_core\python\autograph\impl\api.py:396 converted_call
        return py_builtins.overload_of(f)(*args)

    TypeError: 'NoneType' object is not callable
The loss function is defined as follows:
def mdn_loss_func(output_dim, num_mixes, x_true, y_true):
    y_pred = mdn_model(x_true)
    print('y_pred shape is {}'.format(y_pred.shape))
    y_pred = tf.reshape(y_pred, [-1, (2 * num_mixes * output_dim) + num_mixes], name='reshape_ypreds')
    y_true = tf.reshape(y_true, [-1, output_dim], name='reshape_ytrue')
    out_mu, out_sigma, out_pi = tf.split(y_pred,
                                         num_or_size_splits=[num_mixes * output_dim,
                                                             num_mixes * output_dim,
                                                             num_mixes],
                                         axis=-1, name='mdn_coef_split')
    # Construct the mixture model
    cat = tfd.Categorical(logits=out_pi)
    component_splits = [output_dim] * num_mixes
    mus = tf.split(out_mu, num_or_size_splits=component_splits, axis=1)
    sigs = tf.split(out_sigma, num_or_size_splits=component_splits, axis=1)
    coll = [tfd.MultivariateNormalDiag(loc=loc, scale_diag=scale)
            for loc, scale in zip(mus, sigs)]
    mixture = tfd.Mixture(cat=cat, components=coll)
    loss = mixture.log_prob(y_true)
    loss = tf.negative(loss)
    loss = tf.reduce_mean(loss)
    return loss
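As a side note, the sizes passed to tf.split must add up to the width y_pred is reshaped to, or tf.split itself will fail. A plain-Python sanity check of that arithmetic (using output_dim = 2 and num_mixes = 5 purely as illustrative values, not taken from the question):

```python
# Hypothetical example values for illustration only.
output_dim = 2   # dimensionality of each Gaussian component
num_mixes = 5    # number of mixture components

# Width of the flattened y_pred expected by mdn_loss_func:
# num_mixes means + num_mixes diagonal sigmas (each output_dim wide)
# + num_mixes mixing logits.
total_width = (2 * num_mixes * output_dim) + num_mixes

# The three chunks handed to tf.split for mu, sigma and pi.
split_sizes = [num_mixes * output_dim, num_mixes * output_dim, num_mixes]

print(total_width)       # 25
print(sum(split_sizes))  # 25 -- must equal total_width
```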
I used Keras's Adam optimizer:
mdn_optimizer = tf.keras.optimizers.Adam(1e-4)
Then I train it like this:
def train(dataset, output_dim, num_mixes, epochs):
    for epoch in range(epochs):
        start = time.time()
        for x_true, y_true in dataset:
            train_step(x_true, y_true, output_dim, num_mixes)
        print('Time for epoch {} is {} sec'.format(epoch + 1, time.time() - start))
        display.clear_output(wait=True)
Comment: It's hard to tell because of the broken formatting (please reformat the code above), but it looks like your mdn_model function is not returning anything.
Reply: @IanQuah, yes, that was it. Thanks for your help. Here is the code:
@tf.function
def train_step(x_true, y_true, output_dim, num_mixes):
    # A single tape is enough here; the second (disc_tape) was unused.
    with tf.GradientTape() as gen_tape:
        mdn_loss = mdn_loss_func(output_dim, num_mixes, x_true, y_true)
    gradients_of_mdn = gen_tape.gradient(mdn_loss, mdn_model.trainable_variables)
    mdn_optimizer.apply_gradients(zip(gradients_of_mdn, mdn_model.trainable_variables))
def train(dataset, output_dim, num_mixes, epochs):
    for epoch in range(epochs):
        start = time.time()
        for x_true, y_true in dataset:
            train_step(x_true, y_true, output_dim, num_mixes)
        print('Time for epoch {} is {} sec'.format(epoch + 1, time.time() - start))
        display.clear_output(wait=True)
%%time
train(train_dataset, OUTPUT_DIMS, N_MIXES, EPOCHS)
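The root cause identified in the comments, mdn_model being None because the function that built it never returned the model, can be reproduced without TensorFlow. The names build_model_broken and build_model_fixed below are hypothetical, for illustration only:

```python
def build_model_broken():
    model = lambda x: x * 2  # stand-in for building a Keras model
    # Bug: no return statement, so the function implicitly returns None.

def build_model_fixed():
    model = lambda x: x * 2
    return model  # the fix: actually return the model

mdn_model = build_model_broken()  # mdn_model is now None
try:
    mdn_model(3)
except TypeError as e:
    print(e)  # 'NoneType' object is not callable -- same error as above

mdn_model = build_model_fixed()
print(mdn_model(3))  # 6
```

The same symptom appears whenever mdn_model is assigned from any function whose return statement is missing, so checking `mdn_model is not None` right after construction is a quick way to catch it.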