
Python: Low accuracy with a deep neural network in TensorFlow


I am following along with the third Jupyter notebook from the linked course.

While working on Problem 4, I tried to implement a function that builds any number of hidden layers automatically, so that each layer's configuration does not have to be hand-coded.

However, running the model yields very low accuracy (10%), so I suspect that a function like this may not be compatible with TensorFlow's graph builder.

My code is as follows:

import tensorflow as tf

# image_size, num_labels, and the train/valid/test datasets are defined
# earlier in the notebook.

def hlayers(n_layers, n_nodes, i_size, a, r=0, keep_p=1):
  # Stack n_layers fully connected ReLU layers with dropout on top of
  # input a, accumulating an L2 regularization term for each weight in r.
  for i in range(n_layers):
    if i > 0:
      i_size = n_nodes
    w = tf.Variable(tf.truncated_normal([i_size, n_nodes]), name=f'W{i}')
    b = tf.Variable(tf.zeros([n_nodes]), name=f'b{i}')
    pa = tf.nn.relu(tf.add(tf.matmul(a, w), b))
    a = tf.nn.dropout(pa, keep_prob=keep_p, name=f'a{i}')
    r += tf.nn.l2_loss(w, name=f'r{i}')

  return a, r

batch_size = 128
num_nodes = 1024
beta = 0.01

graph = tf.Graph()
with graph.as_default():

  # Input data. For the training data, we use a placeholder that will be fed
  # at run time with a training minibatch.
  tf_train_dataset = tf.placeholder(
    tf.float32,
    shape=(batch_size, image_size * image_size),
    name='Dataset')
  tf_train_labels = tf.placeholder(
    tf.float32,
    shape=(batch_size, num_labels),
    name='Labels')
  tf_valid_dataset = tf.constant(valid_dataset)
  tf_test_dataset = tf.constant(test_dataset)

  keep_p = tf.placeholder(tf.float32, name='KeepProb')

  # Hidden layers.
  a, r = hlayers(
    n_layers=3,
    n_nodes=num_nodes,
    i_size=image_size * image_size,
    a=tf_train_dataset,
    keep_p=keep_p)

  # Output layer.
  wo = tf.Variable(tf.truncated_normal([num_nodes, num_labels]), name='Wo')
  bo = tf.Variable(tf.zeros([num_labels]), name='bo')
  logits = tf.add(tf.matmul(a, wo), bo, name='Logits')
  loss = tf.reduce_mean(
    tf.nn.softmax_cross_entropy_with_logits(
      labels=tf_train_labels, logits=logits))

  # Regularizer.
  regularizers = tf.add(r, tf.nn.l2_loss(wo))
  loss = tf.reduce_mean(loss + beta * regularizers, name='Loss')

  # Optimizer.
  optimizer = tf.train.GradientDescentOptimizer(0.5).minimize(loss)

  # Predictions for the training, validation, and test data.
  train_prediction = tf.nn.softmax(logits)

  # Note: each call to hlayers creates a fresh set of tf.Variable objects,
  # so the validation and test paths below build new (untrained) weights
  # rather than reusing the trained hidden-layer weights above.
  a, _ = hlayers(
    n_layers=3,
    n_nodes=num_nodes,
    i_size=image_size * image_size,
    a=tf_valid_dataset)
  valid_prediction = tf.nn.softmax(tf.add(tf.matmul(a, wo), bo))

  a, _ = hlayers(
    n_layers=3,
    n_nodes=num_nodes,
    i_size=image_size * image_size,
    a=tf_test_dataset)
  test_prediction = tf.nn.softmax(tf.add(tf.matmul(a, wo), bo))

num_steps = 3001

with tf.Session(graph=graph) as session:
  tf.global_variables_initializer().run()
  print("Initialized")
  for step in range(num_steps):
    # Pick an offset within the training data, which has been randomized.
    # Note: we could use better randomization across epochs.
    offset = (step * batch_size) % (train_labels.shape[0] - batch_size)
    # Generate a minibatch.
    batch_data = train_dataset[offset:(offset + batch_size), :]
    batch_labels = train_labels[offset:(offset + batch_size), :]
    # Prepare a dictionary telling the session where to feed the minibatch.
    # The key of the dictionary is the placeholder node of the graph to be fed,
    # and the value is the numpy array to feed to it.
    feed_dict = {
      tf_train_dataset : batch_data,
      tf_train_labels : batch_labels,
      keep_p : 0.5}
    _, l, predictions = session.run(
      [optimizer, loss, train_prediction], feed_dict=feed_dict)
    if (step % 500 == 0):
      print("Minibatch loss at step %d: %f" % (step, l))
      print("Minibatch accuracy: %.1f%%" % accuracy(predictions, batch_labels))
      print("Validation accuracy: %.1f%%" % accuracy(
        valid_prediction.eval(), valid_labels))
  print("Test accuracy: %.1f%%" % accuracy(test_prediction.eval(), test_labels))

The more layers you have, the stronger the weight regularization, since r accumulates one tf.nn.l2_loss term per hidden layer. You could therefore try reducing the regularization and see whether accuracy improves.
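For example (a sketch; this particular scaling rule is my assumption, not part of the answer), beta could be shrunk in proportion to the number of regularized weight tensors:

# Sketch: scale the regularization strength down as layers are added.
# The graph has n_layers hidden-layer l2_loss terms plus one for the output layer.
n_layers = 3
beta_base = 0.01
beta = beta_base / (n_layers + 1)  # tune beta_base against validation accuracy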

The problem was caused by nan values in the loss function and in the weights, as described in the linked answer.
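A quick way to confirm this diagnosis (a sketch I am adding; it is not part of the original answer) is to test the loss value returned by session.run for nan inside the training loop:

import numpy as np

# After session.run(...) inside the training loop above:
if np.isnan(l):
  print("Loss became nan at step %d; stopping." % step)
  break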

I was able to train the network successfully by introducing a different standard deviation for each weight tensor, based on its dimensions, as described by He et al. [1].
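Concretely, tf.truncated_normal defaults to stddev=1.0, which is far too large for a layer with 784 inputs; the pre-activations then blow up and the loss turns into nan. A minimal sketch of the change inside hlayers (the formula stddev = sqrt(2 / fan_in) is the ReLU-specific rule from He et al.; wiring it in at this spot is my reading of the fix):

import math

# Inside hlayers, replace the fixed-stddev weight initializer:
w = tf.Variable(
  tf.truncated_normal(
    [i_size, n_nodes],
    stddev=math.sqrt(2.0 / i_size)),  # He et al. (2015): sqrt(2 / fan_in)
  name=f'W{i}')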

[1] He et al. (2015), "Delving Deep into Rectifiers: Surpassing Human-Level Performance on ImageNet Classification".