Python encoder-decoder model AttributeError: 'NoneType' object has no attribute '_inbound_nodes'

Tags: python, tensorflow, keras, deep-learning

I am trying to implement additive attention. Below is the code for my encoder-decoder model.
When I build the model I get the following error, and I don't understand why it occurs.
I searched Google for this error but could not find a solution.

# Imports as used in the question (per the comments below: model layers come
# from standalone keras, while the attention math uses raw tf.* ops)
import tensorflow as tf
import keras
from keras.layers import Input, Embedding, LSTM, Dense, TimeDistributed, concatenate
from keras.models import Model

#Encoder inputs 
encoder_inputs = Input(shape=(None,))
encoder_embedding = Embedding(vocab_size, 1024, mask_zero=True)(encoder_inputs)
encoder_outputs , state_h , state_c = LSTM(1024, return_sequences=True, return_state=True)(encoder_embedding)
# We discard `encoder_outputs` and only keep the states.
encoder_states = [state_h, state_c] 
# Set up the decoder, using `encoder_states` as initial state.
decoder_inputs = Input(shape=(None,))
# We set up our decoder to return full output sequences,
# and to return internal states as well. We don't use the 
# return states in the training model, but we will use them in inference.
decoder_embedding = Embedding(vocab_size, 1024, mask_zero=True)(decoder_inputs)
decoder_lstm = LSTM(1024, return_state=True, return_sequences=True)
#https://www.tensorflow.org/tutorials/text/nmt_with_attention#define_the_optimizer_and_the_loss_function
# preparing data for attention layer 
d0 = Dense(1024)
d1 = Dense(1024)
d2 = Dense(1024)
#encoder hidden state 1
hidden_with_time_axis_1 = state_h
#encoder hidden state 2 
hidden_with_time_axis_2 = state_c
#score = FC(tanh(FC(EO) + FC(H)))
score = d0(keras.activations.tanh(encoder_outputs) + d1(hidden_with_time_axis_1) + d2(hidden_with_time_axis_2))
#attention weights = softmax(score, axis = 1)
attention_weights = keras.activations.softmax(score, axis=1)
#context vector = sum(attention weights * EO, axis = 1)
context_vector = attention_weights * encoder_outputs 
context_vector = tf.reduce_sum(context_vector, axis=1)
context_vector = tf.expand_dims(context_vector, 1)
context_vector = tf.reshape(context_vector,[-1,-1,1024])
#merged vector = concat(embedding output, context vector)
cl = concatenate([context_vector,decoder_embedding], axis=-1)
#This merged vector is then given input to the decoder LSTM
decoder_outputs, _, _ = decoder_lstm(cl, initial_state=encoder_states)
decoder_dense = TimeDistributed(Dense(vocab_size, activation='softmax'))
output = decoder_dense(decoder_outputs)
# `encoder_input_data` & `decoder_input_data` into `decoder_target_data`
model = Model([encoder_inputs, decoder_inputs], output)
#compiling the model 
model.compile(optimizer='adam', loss='categorical_crossentropy')
#model summary
model.summary()
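Aside from the error being asked about, `tf.reshape(context_vector, [-1, -1, 1024])` is invalid on its own: `tf.reshape` allows at most one `-1` (inferred) dimension. Since the decoder length is dynamic, one way to broadcast the single context vector across the decoder's time axis is a `Lambda`-wrapped dynamic tile. This is a hedged sketch with dummy shapes (batch=2, T=5, units=8), not the asker's exact code:

```python
import tensorflow as tf
from tensorflow.keras.layers import Lambda

def tile_context(args):
    """Tile a (batch, units) context vector across the decoder's time axis."""
    context, dec = args            # context: (batch, units), dec: (batch, T, units)
    t = tf.shape(dec)[1]           # dynamic decoder length
    return tf.tile(tf.expand_dims(context, 1), [1, t, 1])

# Wrapped as a Keras layer so it can sit between layers in a functional model:
tile_layer = Lambda(tile_context)

# Eager sanity check with dummy tensors
context = tf.zeros([2, 8])
dec_emb = tf.zeros([2, 5, 8])
print(tile_layer([context, dec_emb]).shape)  # (2, 5, 8)
```

Wrapping the op in `Lambda` also matters for standalone Keras, where raw `tf.*` calls between layers are exactly what produces the `_inbound_nodes` error below.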
Below is the error; I don't understand why it occurs:

---------------------------------------------------------------------------
AttributeError                            Traceback (most recent call last)
<ipython-input-324-553ce04010c1> in <module>()
     32 output = decoder_dense(decoder_outputs)
     33 # `encoder_input_data` & `decoder_input_data` into `decoder_target_data`
---> 34 model = Model([encoder_inputs, decoder_inputs], output)
     35 #compiling the model
     36 model.compile(optimizer='adam', loss='categorical_crossentropy')

7 frames
/usr/local/lib/python3.6/dist-packages/keras/engine/network.py in build_map(tensor, finished_nodes, nodes_in_progress, layer, node_index, tensor_index)
   1378             ValueError: if a cycle is detected.
   1379         """
-> 1380         node = layer._inbound_nodes[node_index]
   1381 
   1382         # Prevent cycles.

AttributeError: 'NoneType' object has no attribute '_inbound_nodes'
---------------------------------------------------------------------------

Comments:

thushv89: Which TF version are you using? Are you using `import keras` or `import tensorflow.keras`?

OP: For the model I used keras; only for the attention part did I use tf.keras.

OP: Hey thushv89, I see what you mean. I changed everything over to the tf.keras format and it works fine now. Thank you for your answer.