TensorFlow: cannot set feed_previous to a tf.placeholder(tf.bool) in embedding_attention_seq2seq

I am using the tf.contrib.legacy_seq2seq.embedding_attention_seq2seq() function and trying to set its feed_previous argument to a tf.placeholder(tf.bool). However, this does not work. The error message:

Traceback (most recent call last):
  File "D:/beamSearch/main.py", line 6, in <module>
    parser.main()
  File "D:\beamSearch\neuralParser\neuralParser.py", line 107, in main
    self.model = Model(self.args, self.textData)
  File "D:\beamSearch\neuralParser\model.py", line 26, in __init__
    self.buildNetwork()
  File "D:\beamSearch\neuralParser\model.py", line 62, in buildNetwork
    feed_previous=self.test
  File "C:\Python35\lib\site-packages\tensorflow\contrib\legacy_seq2seq\python\ops\seq2seq.py", line 849, in embedding_attention_seq2seq
    encoder_cell = copy.deepcopy(cell)
  File "C:\Python35\lib\copy.py", line 182, in deepcopy
    y = _reconstruct(x, rv, 1, memo)
  File "C:\Python35\lib\copy.py", line 297, in _reconstruct
    state = deepcopy(state, memo)
  File "C:\Python35\lib\copy.py", line 155, in deepcopy
    y = copier(x, memo)
  File "C:\Python35\lib\copy.py", line 243, in _deepcopy_dict
    y[deepcopy(key, memo)] = deepcopy(value, memo)
  File "C:\Python35\lib\copy.py", line 155, in deepcopy
    y = copier(x, memo)
  File "C:\Python35\lib\copy.py", line 218, in _deepcopy_list
    y.append(deepcopy(a, memo))
  File "C:\Python35\lib\copy.py", line 182, in deepcopy
    y = _reconstruct(x, rv, 1, memo)
  File "C:\Python35\lib\copy.py", line 297, in _reconstruct
    state = deepcopy(state, memo)
  File "C:\Python35\lib\copy.py", line 155, in deepcopy
    y = copier(x, memo)
  File "C:\Python35\lib\copy.py", line 243, in _deepcopy_dict
    y[deepcopy(key, memo)] = deepcopy(value, memo)
  File "C:\Python35\lib\copy.py", line 182, in deepcopy
    y = _reconstruct(x, rv, 1, memo)
  File "C:\Python35\lib\copy.py", line 297, in _reconstruct
    state = deepcopy(state, memo)
  File "C:\Python35\lib\copy.py", line 155, in deepcopy
    y = copier(x, memo)
  File "C:\Python35\lib\copy.py", line 243, in _deepcopy_dict
    y[deepcopy(key, memo)] = deepcopy(value, memo)
  File "C:\Python35\lib\copy.py", line 182, in deepcopy
    y = _reconstruct(x, rv, 1, memo)
  File "C:\Python35\lib\copy.py", line 297, in _reconstruct
    state = deepcopy(state, memo)
  File "C:\Python35\lib\copy.py", line 155, in deepcopy
    y = copier(x, memo)
  File "C:\Python35\lib\copy.py", line 243, in _deepcopy_dict
    y[deepcopy(key, memo)] = deepcopy(value, memo)
  File "C:\Python35\lib\copy.py", line 182, in deepcopy
    y = _reconstruct(x, rv, 1, memo)
  File "C:\Python35\lib\copy.py", line 297, in _reconstruct
    state = deepcopy(state, memo)
  File "C:\Python35\lib\copy.py", line 155, in deepcopy
    y = copier(x, memo)
  File "C:\Python35\lib\copy.py", line 243, in _deepcopy_dict
    y[deepcopy(key, memo)] = deepcopy(value, memo)
  File "C:\Python35\lib\copy.py", line 174, in deepcopy
    rv = reductor(4)
TypeError: can't pickle _thread.lock objects

It looks like the copy.deepcopy(cell) call does not support placeholders. This call is used to duplicate the RNN cell passed in as the cell argument so that the encoder and decoder share the same architecture, and no alternative is offered to specify them separately. If you really need to use a placeholder, the only option may be to fork the seq2seq module yourself (copy the source code and make minor changes). In your own implementation you would simply create two separate cells, one for the encoder and one for the decoder.
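The failure above can be reproduced without TensorFlow at all. The sketch below is a hypothetical stand-in: once a cell (indirectly) references an object holding a _thread.lock, as a TF graph does internally, copy.deepcopy raises the same TypeError seen in the traceback.

```python
import copy
import threading

class CellWithPlaceholder:
    """Stand-in for an RNN cell that captured a graph placeholder;
    the lock plays the role of the TF graph's internal _thread.lock."""
    def __init__(self):
        self.graph_lock = threading.Lock()  # unpicklable, so deepcopy fails

cell = CellWithPlaceholder()
try:
    copy.deepcopy(cell)
except TypeError as err:
    print("deepcopy failed:", err)
```

deepcopy falls back on the pickle protocol for objects without a __deepcopy__ hook, and _thread.lock refuses to be pickled, which is exactly what the last frames of the traceback show.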

TypeError: can't pickle _thread.lock objects
Isn't this a TensorFlow bug? I hit the same error when using the legacy_seq2seq package in TensorFlow 1.1.0. Did you find a solution?

Actually, in the sample code the problem is not caused by the feed_previous placeholder but by the dropout placeholder, because it sits inside the creation of the cell that has to be deep-copied. This means that if you only want to use feed_previous, you can use it without any problem.
self.dropOut = tf.placeholder(dtype=tf.float32, shape=(), name='dropOut')
self.test = tf.placeholder(tf.bool, name='test')

with tf.variable_scope("cell"):  # TODO: How to make this appear on the graph ?
    encoDecoCell = tf.contrib.rnn.BasicLSTMCell(self.args.hiddenSize,
                                                state_is_tuple=True)  # Or GRUCell, LSTMCell(args.hiddenSize)
    encoDecoCell = tf.contrib.rnn.DropoutWrapper(encoDecoCell, input_keep_prob=self.dropOut,
                                                 output_keep_prob=self.dropOut)
    encoDecoCell = tf.contrib.rnn.MultiRNNCell([encoDecoCell] * self.args.numLayers, state_is_tuple=True)

# Network input (placeholders)

with tf.name_scope('placeholder_encoder'):
    # encoderInputs are integers, representing the index of words in the sentence
    self.encoderInputs = tf.placeholder(tf.int32, [None, self.args.maxLength])

with tf.name_scope('placeholder_decoder'):
    self.decoderInputs = tf.placeholder(tf.int32, [None, self.args.maxLength+2], name='decoderInputs')
    self.decoderTargets = tf.placeholder(tf.int32, [None, self.args.maxLength+2], name='decoderTargets')
    self.decoderWeights = tf.placeholder(tf.int32, [None, self.args.maxLength+2], name='decoderWeights')

decoderOutputs, states = tf.contrib.legacy_seq2seq.embedding_attention_seq2seq(
    self.encoderInputs,
    self.decoderInputs,
    encoDecoCell,
    self.textData.getInVocabularySize(),
    self.textData.getOutVocabularySize(),
    embedding_size=self.args.embeddingSize,
    feed_previous=self.test
)
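The fix implied by the answer above can be sketched with a stdlib analogy (hypothetical names, no TensorFlow): keep the unpicklable object, the dropout placeholder in the real code, outside whatever embedding_attention_seq2seq will deepcopy, and the copy succeeds.

```python
import copy
import threading

class PlainCell:
    """Stand-in for an RNN cell that stores only deepcopy-safe state."""
    def __init__(self, keep_prob):
        self.keep_prob = keep_prob  # plain float, not a placeholder

# Unpicklable state (here a lock, standing in for the graph-bound
# placeholder) lives outside the cell, so it is never deep-copied.
shared_lock = threading.Lock()

cell = PlainCell(keep_prob=0.8)
clone = copy.deepcopy(cell)  # succeeds: no _thread.lock inside the cell
print(clone is not cell, clone.keep_prob)
```

This mirrors the observation that feed_previous alone is harmless: the bool placeholder is not stored inside the cell, only the dropout placeholder is.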