Machine learning: TensorFlow, switching from BasicRNNCell to LSTMCell


I have built an RNN using BasicRNNCell, and now I want to use an LSTMCell instead, but the switch does not look trivial. What do I need to change?

First I define all the placeholders and variables:

X_placeholder = tf.placeholder(tf.float32, [batch_size, truncated_backprop_length, embedding_size])
Y_placeholder = tf.placeholder(tf.int32, [batch_size, truncated_backprop_length])

init_state = tf.placeholder(tf.float32, [batch_size, state_size])

W = tf.Variable(np.random.rand(state_size, num_classes),dtype=tf.float32)
b = tf.Variable(np.zeros((batch_size, num_classes)), dtype=tf.float32)

W2 = tf.Variable(np.random.rand(state_size, num_classes),dtype=tf.float32)
b2 = tf.Variable(np.zeros((batch_size, num_classes)), dtype=tf.float32)
Then I unstack the labels:

labels_series = tf.unstack(Y_placeholder, axis=1)  # unstacking along axis 1 already gives the transposed per-step labels
inputs_series = X_placeholder
Then I define my RNN:

cell = tf.contrib.rnn.BasicLSTMCell(state_size, state_is_tuple = False)
states_series, current_state = tf.nn.dynamic_rnn(cell, inputs_series, initial_state = init_state)
The error I get is:

InvalidArgumentError                      Traceback (most recent call last)
/home/deepnlp2017/.local/lib/python3.5/site-packages/tensorflow/python/framework/common_shapes.py in _call_cpp_shape_fn_impl(op, input_tensors_needed, input_tensors_as_shapes_needed, debug_python_shape_fn, require_shape_fn)
    669           node_def_str, input_shapes, input_tensors, input_tensors_as_shapes,

--> 670           status)
    671   except errors.InvalidArgumentError as err:

/home/deepnlp2017/anaconda3/lib/python3.5/contextlib.py in __exit__(self, type, value, traceback)
     65             try:
---> 66                 next(self.gen)
     67             except StopIteration:

/home/deepnlp2017/.local/lib/python3.5/site-packages/tensorflow/python/framework/errors_impl.py in raise_exception_on_not_ok_status()
    468           compat.as_text(pywrap_tensorflow.TF_Message(status)),
--> 469           pywrap_tensorflow.TF_GetCode(status))
    470   finally:

InvalidArgumentError: Dimensions must be equal, but are 50 and 100 for 'rnn/while/basic_lstm_cell/mul' (op: 'Mul') with input shapes: [32,50], [32,100].

During handling of the above exception, another exception occurred:

ValueError                                Traceback (most recent call last)
<ipython-input-19-2ac617f4dde4> in <module>()
      4 #cell = tf.contrib.rnn.BasicRNNCell(state_size)
      5 cell = tf.contrib.rnn.BasicLSTMCell(state_size, state_is_tuple = False)
----> 6 states_series, current_state = tf.nn.dynamic_rnn(cell, inputs_series, initial_state = init_state)

/home/deepnlp2017/.local/lib/python3.5/site-packages/tensorflow/python/ops/rnn.py in dynamic_rnn(cell, inputs, sequence_length, initial_state, dtype, parallel_iterations, swap_memory, time_major, scope)
    543         swap_memory=swap_memory,
    544         sequence_length=sequence_length,
--> 545         dtype=dtype)
    546 
    547     # Outputs of _dynamic_rnn_loop are always shaped [time, batch, depth].

/home/deepnlp2017/.local/lib/python3.5/site-packages/tensorflow/python/ops/rnn.py in _dynamic_rnn_loop(cell, inputs, initial_state, parallel_iterations, swap_memory, sequence_length, dtype)
    710       loop_vars=(time, output_ta, state),
    711       parallel_iterations=parallel_iterations,
--> 712       swap_memory=swap_memory)
    713 
    714   # Unpack final output if not using output tuples.

/home/deepnlp2017/.local/lib/python3.5/site-packages/tensorflow/python/ops/control_flow_ops.py in while_loop(cond, body, loop_vars, shape_invariants, parallel_iterations, back_prop, swap_memory, name)
   2624     context = WhileContext(parallel_iterations, back_prop, swap_memory, name)
   2625     ops.add_to_collection(ops.GraphKeys.WHILE_CONTEXT, context)
-> 2626     result = context.BuildLoop(cond, body, loop_vars, shape_invariants)
   2627     return result
   2628 

/home/deepnlp2017/.local/lib/python3.5/site-packages/tensorflow/python/ops/control_flow_ops.py in BuildLoop(self, pred, body, loop_vars, shape_invariants)
   2457       self.Enter()
   2458       original_body_result, exit_vars = self._BuildLoop(
-> 2459           pred, body, original_loop_vars, loop_vars, shape_invariants)
   2460     finally:
   2461       self.Exit()

/home/deepnlp2017/.local/lib/python3.5/site-packages/tensorflow/python/ops/control_flow_ops.py in _BuildLoop(self, pred, body, original_loop_vars, loop_vars, shape_invariants)
   2407         structure=original_loop_vars,
   2408         flat_sequence=vars_for_body_with_tensor_arrays)
-> 2409     body_result = body(*packed_vars_for_body)
   2410     if not nest.is_sequence(body_result):
   2411       body_result = [body_result]

/home/deepnlp2017/.local/lib/python3.5/site-packages/tensorflow/python/ops/rnn.py in _time_step(time, output_ta_t, state)
    695           skip_conditionals=True)
    696     else:
--> 697       (output, new_state) = call_cell()
    698 
    699     # Pack state if using state tuples

/home/deepnlp2017/.local/lib/python3.5/site-packages/tensorflow/python/ops/rnn.py in <lambda>()
    681 
    682     input_t = nest.pack_sequence_as(structure=inputs, flat_sequence=input_t)
--> 683     call_cell = lambda: cell(input_t, state)
    684 
    685     if sequence_length is not None:

/home/deepnlp2017/.local/lib/python3.5/site-packages/tensorflow/contrib/rnn/python/ops/core_rnn_cell_impl.py in __call__(self, inputs, state, scope)
    182       i, j, f, o = array_ops.split(value=concat, num_or_size_splits=4, axis=1)
    183 
--> 184       new_c = (c * sigmoid(f + self._forget_bias) + sigmoid(i) *
    185                self._activation(j))
    186       new_h = self._activation(new_c) * sigmoid(o)

/home/deepnlp2017/.local/lib/python3.5/site-packages/tensorflow/python/ops/math_ops.py in binary_op_wrapper(x, y)
    882       if not isinstance(y, sparse_tensor.SparseTensor):
    883         y = ops.convert_to_tensor(y, dtype=x.dtype.base_dtype, name="y")
--> 884       return func(x, y, name=name)
    885 
    886   def binary_op_wrapper_sparse(sp_x, y):

/home/deepnlp2017/.local/lib/python3.5/site-packages/tensorflow/python/ops/math_ops.py in _mul_dispatch(x, y, name)
   1103   is_tensor_y = isinstance(y, ops.Tensor)
   1104   if is_tensor_y:
-> 1105     return gen_math_ops._mul(x, y, name=name)
   1106   else:
   1107     assert isinstance(y, sparse_tensor.SparseTensor)  # Case: Dense * Sparse.

/home/deepnlp2017/.local/lib/python3.5/site-packages/tensorflow/python/ops/gen_math_ops.py in _mul(x, y, name)
   1623     A `Tensor`. Has the same type as `x`.
   1624   """
-> 1625   result = _op_def_lib.apply_op("Mul", x=x, y=y, name=name)
   1626   return result
   1627 

/home/deepnlp2017/.local/lib/python3.5/site-packages/tensorflow/python/framework/op_def_library.py in apply_op(self, op_type_name, name, **keywords)
    761         op = g.create_op(op_type_name, inputs, output_types, name=scope,
    762                          input_types=input_types, attrs=attr_protos,
--> 763                          op_def=op_def)
    764         if output_structure:
    765           outputs = op.outputs

/home/deepnlp2017/.local/lib/python3.5/site-packages/tensorflow/python/framework/ops.py in create_op(self, op_type, inputs, dtypes, input_types, name, attrs, op_def, compute_shapes, compute_device)
   2395                     original_op=self._default_original_op, op_def=op_def)
   2396     if compute_shapes:
-> 2397       set_shapes_for_outputs(ret)
   2398     self._add_op(ret)
   2399     self._record_op_seen_by_control_dependencies(ret)

/home/deepnlp2017/.local/lib/python3.5/site-packages/tensorflow/python/framework/ops.py in set_shapes_for_outputs(op)
   1755       shape_func = _call_cpp_shape_fn_and_require_op
   1756 
-> 1757   shapes = shape_func(op)
   1758   if shapes is None:
   1759     raise RuntimeError(

/home/deepnlp2017/.local/lib/python3.5/site-packages/tensorflow/python/framework/ops.py in call_with_requiring(op)
   1705 
   1706   def call_with_requiring(op):
-> 1707     return call_cpp_shape_fn(op, require_shape_fn=True)
   1708 
   1709   _call_cpp_shape_fn_and_require_op = call_with_requiring

/home/deepnlp2017/.local/lib/python3.5/site-packages/tensorflow/python/framework/common_shapes.py in call_cpp_shape_fn(op, input_tensors_needed, input_tensors_as_shapes_needed, debug_python_shape_fn, require_shape_fn)
    608     res = _call_cpp_shape_fn_impl(op, input_tensors_needed,
    609                                   input_tensors_as_shapes_needed,
--> 610                                   debug_python_shape_fn, require_shape_fn)
    611     if not isinstance(res, dict):
    612       # Handles the case where _call_cpp_shape_fn_impl calls unknown_shape(op).

/home/deepnlp2017/.local/lib/python3.5/site-packages/tensorflow/python/framework/common_shapes.py in _call_cpp_shape_fn_impl(op, input_tensors_needed, input_tensors_as_shapes_needed, debug_python_shape_fn, require_shape_fn)
    673       missing_shape_fn = True
    674     else:
--> 675       raise ValueError(err.message)
    676 
    677   if missing_shape_fn:

ValueError: Dimensions must be equal, but are 50 and 100 for 'rnn/while/basic_lstm_cell/mul' (op: 'Mul') with input shapes: [32,50], [32,100].
The problem is the shape of init_state. With state_is_tuple=False, a BasicLSTMCell carries its cell state c and hidden state h concatenated along the last axis, so its state has shape [batch_size, 2 * state_size], not [batch_size, state_size]. That is exactly the mismatch the error reports: init_state is [32, 50] while the cell expects [32, 100]. The simplest fix is to let the cell build a correctly shaped zero state instead of feeding your own placeholder:

cell = tf.contrib.rnn.BasicLSTMCell(state_size, state_is_tuple=False)
init_state = cell.zero_state(batch_size, tf.float32)

Note that state_is_tuple=False is deprecated; with the default state_is_tuple=True, zero_state returns an LSTMStateTuple of (c, h), each of shape [batch_size, state_size], and dynamic_rnn accepts that tuple directly as initial_state.
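To see why the state doubles, here is a minimal NumPy sketch of a single LSTM step following the gate equations visible in the traceback (i, j, f, o split, forget bias of 1.0 as in TensorFlow's default). This is an illustration, not TensorFlow's actual implementation; the weight shapes and names are assumptions for the sketch. With state_is_tuple=False the state passed between steps is concat(c, h), which is why it must be [batch_size, 2 * state_size]:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x, state, W, b, state_size):
    # state_is_tuple=False: state is [batch, 2*state_size] = concat(c, h)
    c, h = state[:, :state_size], state[:, state_size:]
    # One matmul produces all four gates at once: [batch, 4*state_size]
    concat = np.concatenate([x, h], axis=1) @ W + b
    i, j, f, o = np.split(concat, 4, axis=1)
    new_c = c * sigmoid(f + 1.0) + sigmoid(i) * np.tanh(j)  # forget bias 1.0
    new_h = np.tanh(new_c) * sigmoid(o)
    return new_h, np.concatenate([new_c, new_h], axis=1)

batch_size, embedding_size, state_size = 32, 8, 50
W = np.random.rand(embedding_size + state_size, 4 * state_size)
b = np.zeros(4 * state_size)
x = np.random.rand(batch_size, embedding_size)

# The state must be [batch, 2*state_size]; a [batch, state_size] state
# would make the elementwise c * sigmoid(f) multiply fail, exactly as
# in the [32,50] vs [32,100] error above.
state = np.zeros((batch_size, 2 * state_size))
out, new_state = lstm_step(x, state, W, b, state_size)
print(out.shape, new_state.shape)  # (32, 50) (32, 100)
```

The output per step is h alone ([batch, state_size]), matching what dynamic_rnn returns as states_series, while the carried state is twice that size.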