Python: How to use extra inputs with a custom loss in TensorFlow 2.0

Tags: python, tensorflow-datasets, tensorflow2.0, tf.keras

I'm having a lot of trouble getting a custom loss function that takes an extra argument to work in TF 2.0 with tf.keras and Datasets.

In the example below, the extra argument is the model's input data, which is contained in the Dataset. In 1.14 I would call .make_one_shot_iterator().get_next() on the Dataset and then pass the resulting tensor into the loss function. The same approach doesn't work in 2.0.
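(For concreteness, this is roughly the 1.14 pattern I mean; the dataset contents and the loss body below are just placeholders, not my real code:)

import numpy as np
import tensorflow as tf   # assuming TF 1.14, graph mode
from tensorflow import keras

data_x = np.random.rand(5, 4, 1).astype(np.float32)
data_y = np.random.rand(5, 4, 1).astype(np.float32)

# Pull a graph tensor for the extra input straight out of the Dataset.
noisy_signal = (tf.data.Dataset.from_tensor_slices(data_x)
                .batch(5).repeat()
                .make_one_shot_iterator()
                .get_next())

def make_loss(extra_input):
    # Close over the extra tensor so Keras only ever sees (y_true, y_pred).
    def loss(y_true, y_pred):
        # Placeholder body: any expression combining the extra tensor
        # with y_true / y_pred stands in for the real loss.
        return tf.reduce_mean(tf.square(y_pred - y_true) * tf.square(extra_input))
    return loss

x = keras.layers.Input([4, 1])
y = keras.layers.Activation('tanh')(x)
model = keras.models.Model(inputs=x, outputs=y)
model.compile(loss=make_loss(noisy_signal), optimizer='Adam')
# model.fit(data_x, data_y, batch_size=5)  # training then pulls the extra input from the iterator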

import numpy as np
import tensorflow as tf
from tensorflow import keras

class WeightedSDRLoss(keras.losses.Loss):

    def __init__(self, noisy_signal, *args, **kwargs):
        super().__init__(*args, **kwargs)
        self.noisy_signal = noisy_signal  # extra input needed by the loss

    def sdr_loss(self, sig_true, sig_pred):
        return (-tf.reduce_mean(sig_true * sig_pred) /
                tf.reduce_mean(tf.norm(tensor=sig_pred) * tf.norm(tensor=sig_true)))

    def call(self, y_true, y_pred):
        noise_true = self.noisy_signal - y_true
        noise_pred = self.noisy_signal - y_pred
        alpha = (tf.reduce_mean(tf.square(y_true)) /
                 tf.reduce_mean(tf.square(y_true) + tf.square(self.noisy_signal - y_pred)))
        return alpha * self.sdr_loss(y_true, y_pred) + (1 - alpha) * self.sdr_loss(noise_true, noise_pred)

data_x = np.random.rand(5, 4, 1)
data_y = np.random.rand(5, 4, 1)
x = keras.layers.Input([4, 1])
y = keras.layers.Activation('tanh')(x)
model = keras.models.Model(inputs=x, outputs=y)
train_dataset = tf.data.Dataset.from_tensor_slices((data_x, data_y))
x_dataset = train_dataset.map(lambda x, y: x)  # dataset of just the inputs, used as the extra loss argument
model.compile(loss=WeightedSDRLoss(x_dataset), optimizer='Adam')
model.fit(train_dataset)
But I get the following error from TensorFlow:

_______________________________________________________________________________
../../anaconda3/envs/../lib/python3.6/site-packages/tensorflow_core/python/training/tracking/base.py:457: in _method_wrapper
    result = method(self, *args, **kwargs)
../../anaconda3/envs/../lib/python3.6/site-packages/tensorflow_core/python/keras/engine/training.py:377: in compile
    self._compile_weights_loss_and_weighted_metrics()
../../anaconda3/envs/../lib/python3.6/site-packages/tensorflow_core/python/training/tracking/base.py:457: in _method_wrapper
    result = method(self, *args, **kwargs)
../../anaconda3/envs/../lib/python3.6/site-packages/tensorflow_core/python/keras/engine/training.py:1618: in _compile_weights_loss_and_weighted_metrics
    self.total_loss = self._prepare_total_loss(masks)
../../anaconda3/envs/../lib/python3.6/site-packages/tensorflow_core/python/keras/engine/training.py:1678: in _prepare_total_loss
    per_sample_losses = loss_fn.call(y_true, y_pred)
...:144: in call
    noise_true = self.noisy_signal - y_true
../../anaconda3/envs/../lib/python3.6/site-packages/tensorflow_core/python/ops/math_ops.py:924: in r_binary_op_wrapper
    x = ops.convert_to_tensor(x, dtype=y.dtype.base_dtype, name="x")
../../anaconda3/envs/../lib/python3.6/site-packages/tensorflow_core/python/framework/ops.py:1184: in convert_to_tensor
    return convert_to_tensor_v2(value, dtype, preferred_dtype, name)
../../anaconda3/envs/../lib/python3.6/site-packages/tensorflow_core/python/framework/ops.py:1242: in convert_to_tensor_v2
    as_ref=False)
../../anaconda3/envs/../lib/python3.6/site-packages/tensorflow_core/python/framework/ops.py:1296: in internal_convert_to_tensor
    ret = conversion_func(value, dtype=dtype, name=name, as_ref=as_ref)
../../anaconda3/envs/../lib/python3.6/site-packages/tensorflow_core/python/framework/constant_op.py:286: in _constant_tensor_conversion_function
    return constant(v, dtype=dtype, name=name)
../../anaconda3/envs/../lib/python3.6/site-packages/tensorflow_core/python/framework/constant_op.py:227: in constant
    allow_broadcast=True)
../../anaconda3/envs/../lib/python3.6/site-packages/tensorflow_core/python/framework/constant_op.py:265: in _constant_impl
    allow_broadcast=allow_broadcast)
../../anaconda3/envs/../lib/python3.6/site-packages/tensorflow_core/python/framework/tensor_util.py:449: in make_tensor_proto
    _AssertCompatible(values, dtype)
_______________________________________________________________________________

values = ...
dtype = tf.float32

    def _AssertCompatible(values, dtype):
      if dtype is None:
        fn = _check_not_tensor
      else:
        try:
          fn = _TF_TO_IS_OK[dtype]
        except KeyError:
          # There isn't a specific fn, so we try to do the best possible.
          if dtype.is_integer:
            fn = _check_int
          elif dtype.is_floating:
            fn = _check_float
          elif dtype.is_complex:
            fn = _check_complex
          elif dtype.is_quantized:
            fn = _check_quantized
          else:
            fn = _check_not_tensor

      try:
        fn(values)
      except ValueError as e:
        [mismatch] = e.args
        if dtype is None:
          raise TypeError("List of Tensors when single Tensor expected")
        else:
          raise TypeError("Expected %s, got %s of type '%s' instead." %
>                         (dtype.name, repr(mismatch), type(mismatch).__name__))
E       TypeError: Expected float32, got ... of type 'MapDataset' instead.
The problem seems to be that I'm passing a Dataset into the loss function, but it expects an eagerly evaluated tensor.
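(For what it's worth, the same conversion path from the traceback can be triggered directly by converting a dataset to a float32 tensor; the dataset below is just a stand-in, not my real data:)

import numpy as np
import tensorflow as tf

ds = tf.data.Dataset.from_tensor_slices(np.random.rand(5, 4, 1)).map(lambda v: v)

try:
    # Same path as `self.noisy_signal - y_true` in the traceback above:
    # a Dataset is not tensor-like, so the constant-conversion fallback
    # reaches _AssertCompatible and raises TypeError.
    tf.convert_to_tensor(ds, dtype=tf.float32)
except TypeError as err:
    print(err)  # Expected float32, got ... of type 'MapDataset' instead.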

So instead I tried passing the input layer into the custom loss, but that doesn't work either:

data_x = np.random.rand(5, 4, 1)
data_y = np.random.rand(5, 4, 1)
x = keras.layers.Input(shape=[4, 1])
y = keras.layers.Activation('tanh')(x)
model = keras.models.Model(inputs=x, outputs=y)
train_dataset = tf.data.Dataset.from_tensor_slices((data_x, data_y)).batch(1)
model.compile(loss=WeightedSDRLoss(x), optimizer='Adam')
model.fit(train_dataset)
This time I get a different error:

op_name='__inference_distributed_function_169', num_outputs=2