
Python TensorFlow v2: alternatives to sequence loss


I'm exploring the following TensorFlow example: it is clearly written for TF v1, so I upgraded it with the v2 upgrade script, and three main incompatibilities remain:

ERROR: Using member tf.contrib.rnn.DropoutWrapper in deprecated module tf.contrib. tf.contrib.rnn.DropoutWrapper cannot be converted automatically. tf.contrib will not be distributed with TensorFlow 2.0, please consider an alternative in non-contrib TensorFlow, a community-maintained repository such as tensorflow/addons, or fork the required code.
ERROR: Using member tf.contrib.legacy_seq2seq.sequence_loss_by_example in deprecated module tf.contrib. tf.contrib.legacy_seq2seq.sequence_loss_by_example cannot be converted automatically. tf.contrib will not be distributed with TensorFlow 2.0, please consider an alternative in non-contrib TensorFlow, a community-maintained repository such as tensorflow/addons, or fork the required code.
ERROR: Using member tf.contrib.framework.get_or_create_global_step in deprecated module tf.contrib. tf.contrib.framework.get_or_create_global_step cannot be converted automatically. tf.contrib will not be distributed with TensorFlow 2.0, please consider an alternative in non-contrib TensorFlow, a community-maintained repository such as tensorflow/addons, or fork the required code.
So for compatibility I manually replaced
framework.get_or_create_global_step
with
tf.compat.v1.train.get_or_create_global_step
, and also replaced
rnn.DropoutWrapper
with
tf.compat.v1.nn.rnn_cell.DropoutWrapper
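
For reference, a minimal sketch of those two drop-in replacements might look like the following (assuming the surrounding code still runs as v1 graph-mode code through the compat layer; size and keep_prob are placeholder values, not taken from the original example):

import tensorflow as tf

tf.compat.v1.disable_eager_execution()  # the original example is v1 graph-mode code
size, keep_prob = 200, 0.5              # placeholder hyperparameters for illustration

# tf.contrib.framework.get_or_create_global_step -> compat.v1 equivalent
global_step = tf.compat.v1.train.get_or_create_global_step()

# tf.contrib.rnn.DropoutWrapper -> compat.v1 equivalent
cell = tf.compat.v1.nn.rnn_cell.LSTMCell(size)
cell = tf.compat.v1.nn.rnn_cell.DropoutWrapper(cell, output_keep_prob=keep_prob)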

But I could not find a solution for the
tf.contrib.legacy_seq2seq.sequence_loss_by_example
method, because I could not find a backward-compatible replacement for it. I tried installing TensorFlow Addons and using it instead, but could not figure out how to adapt it to the rest of the code.

I stumbled into errors such as
"Consider casting elements to a supported type."
and
"Logits must be a [batch_size x sequence_length x logits] tensor"
, probably because I haven't implemented something correctly.

So my questions are: do you know of a third-party plugin/library I could use that supports
legacy_seq2seq.sequence_loss_by_example
, and, more importantly, can someone show me how to implement a supported TensorFlow v2 alternative to this loss function, similar to the code below?

output = tf.reshape(tf.concat(axis=1, values=outputs), [-1, size])
softmax_w = tf.compat.v1.get_variable("softmax_w", [size, len(TARGETS)], dtype=tf.float32)
softmax_b = tf.compat.v1.get_variable("softmax_b", [len(TARGETS)], dtype=tf.float32)
logits = tf.matmul(output, softmax_w) + softmax_b
self._prediction = tf.argmax(input=logits, axis=1)
self._targets = tf.reshape(input_.targets, [-1])
loss = tfa.seq2seq.sequence_loss(
    [logits],
    [tf.reshape(input_.targets, [-1])],
    [tf.ones([batch_size * num_steps], dtype=tf.float32)])
self._cost = cost = tf.reduce_sum(input_tensor=loss) / batch_size
self._final_state = state

Full code.

In TensorFlow 2.x,
tf.contrib.legacy_seq2seq
has been moved to TensorFlow Addons. The
tfa.seq2seq.sequence_loss
function is the only alternative to
tf.contrib.legacy_seq2seq
. For more details, please refer to the linked page. Thanks.
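
As a minimal sketch (assuming logits is built exactly as in the question, with shape [batch_size * num_steps, len(TARGETS)], that batch_size and num_steps are known, and that TensorFlow Addons is imported as tfa): tfa.seq2seq.sequence_loss expects a single 3-D logits tensor and 2-D targets/weights rather than the Python lists used by legacy_seq2seq, which is what triggers the "Logits must be a [batch_size x sequence_length x logits] tensor" error. The loss lines of the snippet could then be adapted roughly like this:

# Reshape to the [batch_size, sequence_length, num_classes] layout that
# tfa.seq2seq.sequence_loss expects, instead of the lists of 2-D tensors
# taken by tf.contrib.legacy_seq2seq.sequence_loss_by_example.
loss = tfa.seq2seq.sequence_loss(
    tf.reshape(logits, [batch_size, num_steps, len(TARGETS)]),
    tf.reshape(input_.targets, [batch_size, num_steps]),
    tf.ones([batch_size, num_steps], dtype=tf.float32),
    average_across_timesteps=False,
    average_across_batch=True)
# Summing the per-timestep, batch-averaged losses reproduces the original
# reduce_sum(per_example_loss) / batch_size of sequence_loss_by_example.
self._cost = cost = tf.reduce_sum(loss)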