Custom metric in TensorFlow 2: Jensen-Shannon divergence of a distribution at a 50% true positive rate (Python)
I am new to TensorFlow and to coding in general. I am trying to use a custom metric: the Jensen-Shannon divergence of the probability distribution of one of the input variables, at a true positive rate (recall) of 50%. I am struggling to make it work. I also use a custom loss function, which I did manage to get working (for simplicity I have kept a standard loss in the code below). When running the code, I get the following error:
ValueError: in user code:
<ipython-input-79-2185a89bc166>:39 jsd *
js = (tf.keras.losses.KLDivergence(prob_a,m) + tf.keras.losses.KLDivergence(prob_b,m))/2
/usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/losses.py:1091 __init__ **
kl_divergence, name=name, reduction=reduction)
/usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/losses.py:235 __init__
super(LossFunctionWrapper, self).__init__(reduction=reduction, name=name)
/usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/losses.py:99 __init__
losses_utils.ReductionV2.validate(reduction)
/usr/local/lib/python3.6/dist-packages/tensorflow/python/ops/losses/loss_reduction.py:68 validate
raise ValueError('Invalid Reduction Key %s.' % key)
ValueError: Invalid Reduction Key Tensor("metrics_2/jsd/truediv:0", shape=(50,), dtype=float64).
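The traceback points at the `KLDivergence.__init__` call chain: `tf.keras.losses.KLDivergence` is a loss *class*, so writing `KLDivergence(prob_a, m)` passes `prob_a` and `m` as the constructor's `reduction` and `name` arguments, and the reduction validator then rejects the tensor, producing exactly this "Invalid Reduction Key Tensor(...)" error. A likely fix (a sketch, since the full metric code isn't shown in the question) is to use the functional form `tf.keras.losses.kl_divergence(y_true, y_pred)` instead:

```python
import tensorflow as tf

def js_divergence(prob_a, prob_b):
    """Jensen-Shannon divergence between two probability distributions.

    Uses the functional tf.keras.losses.kl_divergence, which takes the two
    distributions directly, rather than the KLDivergence class, whose
    constructor expects (reduction, name) arguments.
    """
    m = (prob_a + prob_b) / 2.0
    return (tf.keras.losses.kl_divergence(prob_a, m)
            + tf.keras.losses.kl_divergence(prob_b, m)) / 2.0

p = tf.constant([[0.5, 0.5]], dtype=tf.float64)
q = tf.constant([[0.9, 0.1]], dtype=tf.float64)
print(float(js_divergence(p, p)))  # identical distributions -> 0.0
print(float(js_divergence(p, q)))  # positive, and symmetric in p and q
```

Alternatively, instantiate the class once (`kld = tf.keras.losses.KLDivergence()`) and then call it as `kld(prob_a, m)`; both routes avoid feeding tensors into the constructor.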
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras.layers import Input, Dense
from tensorflow.keras.models import Model
from tensorflow.keras.regularizers import l2
from tensorflow.keras.callbacks import History

tf.compat.v1.disable_eager_execution()

initializer = keras.initializers.Orthogonal()

# L2 regularization strengths (all zero here, i.e. effectively disabled)
l2_layer1 = 0.0
l2_layer2 = 0.0
l2_layer3 = 0.0
l2_layer4 = 0.0

def neural_network():
    # create model
    # n_cols, trainX/trainY and valX/valY come from the data preparation (not shown)
    i = Input(shape=(n_cols,))
    x1 = Dense(32, activation='relu', kernel_regularizer=l2(l2_layer1), kernel_initializer=initializer)(i)
    x2 = Dense(32, activation='relu', kernel_regularizer=l2(l2_layer2), kernel_initializer=initializer)(x1)
    x3 = Dense(32, activation='relu', kernel_regularizer=l2(l2_layer3), kernel_initializer=initializer)(x2)
    x4 = Dense(32, activation='relu', kernel_regularizer=l2(l2_layer3), kernel_initializer=initializer)(x3)  # note: reuses l2_layer3
    o = Dense(2, activation='softmax', kernel_regularizer=l2(l2_layer4), kernel_initializer=initializer)(x4)
    model = Model(i, o)
    opt = tf.keras.optimizers.Adam(learning_rate=0.001, beta_1=0.9, beta_2=0.999)
    # custom_loss and custom_metric (the jsd metric) are defined elsewhere in the question
    model.compile(loss=custom_loss(i, 10), optimizer=opt, metrics=['accuracy', custom_metric(i)])
    return model

model = neural_network()
history = History()

# fit the model
# history = model.fit(trainX, trainY, epochs=5, verbose=1, batch_size=2048, shuffle=True)
history = model.fit(trainX, trainY, validation_data=(valX, valY), epochs=50, verbose=0, batch_size=2048, shuffle=True)
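For reference, a minimal, self-contained sketch of wiring a JSD-based custom metric into `model.compile` is shown below. It is a simplified stand-in, not the question's exact metric (which involves an input variable's distribution at 50% recall): it assumes the standard Keras metric signature `(y_true, y_pred)` and compares the true and predicted class distributions directly.

```python
import numpy as np
import tensorflow as tf
from tensorflow.keras.layers import Input, Dense
from tensorflow.keras.models import Model

def jsd_metric(y_true, y_pred):
    """JS divergence between true and predicted class distributions.

    Uses the functional kl_divergence, not the KLDivergence class,
    so no tensor ever reaches a constructor's `reduction` argument.
    """
    m = (y_true + y_pred) / 2.0
    return (tf.keras.losses.kl_divergence(y_true, m)
            + tf.keras.losses.kl_divergence(y_pred, m)) / 2.0

# Tiny toy model and data, just to show the metric compiling and running
i = Input(shape=(4,))
h = Dense(8, activation='relu')(i)
o = Dense(2, activation='softmax')(h)
model = Model(i, o)
model.compile(loss='categorical_crossentropy', optimizer='adam',
              metrics=['accuracy', jsd_metric])

X = np.random.rand(16, 4).astype('float32')
Y = tf.keras.utils.to_categorical(np.random.randint(0, 2, 16), 2)
history = model.fit(X, Y, epochs=1, verbose=0)
print(sorted(history.history))  # includes 'jsd_metric' alongside 'accuracy' and 'loss'
```

Keras names the logged metric after the function, so it appears in `history.history` under `'jsd_metric'`. A metric that additionally depends on an input tensor (as `custom_metric(i)` in the question does) would need a closure over `i`, but the calling convention of the inner function stays `(y_true, y_pred)`.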