Python tf.train.import_meta_graph('model.meta') fails on a seq2seq model with attention?

Environment: Ubuntu 16.04; TensorFlow v1.0.0 (CPU)

When I try to import a saved graph with tf.train.import_meta_graph('model.meta'), I hit the following error:

Traceback (most recent call last):
  File "test_load.py", line 19, in <module>
    new_saver = tf.train.import_meta_graph('model.meta')
  File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/training/saver.py", line 1577, in import_meta_graph
    **kwargs)
  File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/framework/meta_graph.py", line 498, in import_scoped_meta_graph
    producer_op_list=producer_op_list)
  File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/framework/importer.py", line 259, in import_graph_def
    raise ValueError('No op named %s in defined operations.' % node.op)
ValueError: No op named attn_add_fun_f32f32f32 in defined operations.
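
To confirm what the importer is choking on, the op types referenced by the saved graph can be listed straight from the .meta file. A minimal sketch (the proto module path is the TF 1.x one):

from tensorflow.core.protobuf import meta_graph_pb2

# Parse the serialized MetaGraphDef that import_meta_graph would read.
meta = meta_graph_pb2.MetaGraphDef()
with open('model.meta', 'rb') as f:
    meta.ParseFromString(f.read())

# Every op type the graph references; for a model trained with attention,
# the fused op attn_add_fun_f32f32f32 shows up in this list.
print(sorted({node.op for node in meta.graph_def.node}))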
This error is not raised when I retrain my model without attention and import the graph with the same line of code. Is loading models trained with attention currently unsupported? Here is what my attention implementation looks like:

attention_states = tf.transpose(self.encoder_outputs, [1, 0, 2])

(attention_keys,
 attention_values,
 attention_score_fn,
 attention_construct_fn) = seq2seq.prepare_attention(
            attention_states = attention_states,
            attention_option = "bahdanau",
            num_units        = self.decoder_cell.output_size)

decoder_fn_train = seq2seq.attention_decoder_fn_train(
            encoder_state          = self.encoder_state,
            attention_keys         = attention_keys,
            attention_values       = attention_values,
            attention_score_fn     = attention_score_fn,
            attention_construct_fn = attention_construct_fn,
            name                   = 'attention_decoder')

decoder_fn_inference = seq2seq.attention_decoder_fn_inference(
            output_fn              = output_fn,
            encoder_state          = self.encoder_state,
            attention_keys         = attention_keys,
            attention_values       = attention_values,
            attention_score_fn     = attention_score_fn,
            attention_construct_fn = attention_construct_fn,
            embeddings             = self.embedding_matrix,
            start_of_sequence_id   = self.EOS,
            end_of_sequence_id     = self.EOS,
            maximum_length         = tf.reduce_max(self.encoder_inputs_length) + 3,
            num_decoder_symbols    = self.vocab_size,)
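
For context, these two decoder_fns are consumed by seq2seq.dynamic_rnn_decoder in TF 1.0's contrib API. A sketch of the usual wiring (attribute names such as self.decoder_train_inputs_embedded and self.decoder_train_length follow the same naming style as above and are illustrative):

with tf.variable_scope('decoder') as scope:
    # Training pass: feeds the ground-truth decoder inputs.
    (self.decoder_outputs_train,
     self.decoder_state_train,
     _) = seq2seq.dynamic_rnn_decoder(
                cell            = self.decoder_cell,
                decoder_fn      = decoder_fn_train,
                inputs          = self.decoder_train_inputs_embedded,
                sequence_length = self.decoder_train_length,
                time_major      = True,
                scope           = scope)

    # Inference pass: reuses the same decoder variables, no inputs fed.
    scope.reuse_variables()
    (self.decoder_logits_inference,
     self.decoder_state_inference,
     _) = seq2seq.dynamic_rnn_decoder(
                cell       = self.decoder_cell,
                decoder_fn = decoder_fn_inference,
                time_major = True,
                scope      = scope)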

Thanks

Was the original saved graph (the one with attention) saved with an older TensorFlow version? It looks like things have changed, which is not surprising for contrib.
I installed TF v1.0.0 and ran the training; training does not take long.
Sorry you ran into this. Can you check whether a fixes it for you? If you want to try it right away, you will need to build TensorFlow from source (the change was committed today).
It worked! Thank you very much for the quick and excellent reply.
Has attention_decoder_fn disappeared from master? The patch above is in master, while attention_decoder is only in r1.0. Does anyone know what happened?
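
For anyone who cannot build TensorFlow from source right away, a workaround sketch: skip import_meta_graph entirely, re-run the same graph-construction code so the attention functions exist in the current process, and restore only the variable values. Here build_model is a hypothetical stand-in for the question's model-construction code, and 'model' is the checkpoint prefix:

import tensorflow as tf

# Hypothetical helper: re-runs the exact graph construction used at training
# time (encoder, prepare_attention, attention_decoder_fn_*), so no op lookup
# against the serialized MetaGraphDef is needed.
build_model()

saver = tf.train.Saver()  # covers all variables in the rebuilt graph

with tf.Session() as sess:
    # Restores variable values only; avoids the import_graph_def path that
    # raises "No op named attn_add_fun_f32f32f32".
    saver.restore(sess, 'model')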