
Python TensorFlow: How to add regularization to a model


I want to add regularization to my optimizer, which I create like this:

tf.train.AdadeltaOptimizer(learning_rate=1).minimize(loss)
But I don't know how to design the `loss` function in the code below.

The page I was referencing is:

The modified code originally comes from Google's Machine Learning Crash Course:

Could anyone give me some suggestions or discuss this with me?



Add the regularization in the loss function. Your optimizer, AdadeltaOptimizer, does not support regularization parameters. If you want regularization built into the optimizer itself, use tf.train.ProximalAdagradOptimizer instead, since it has l2_regularization_strength and l1_regularization_strength parameters you can set. These parameters are part of the original algorithm.
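As a configuration sketch (TF 1.x API; the learning rate and strength values here are placeholders, not tuned recommendations), constructing such an optimizer could look like this:

```python
import tensorflow as tf

# TF 1.x sketch: an optimizer with built-in L1/L2 regularization.
# The numeric values are illustrative placeholders only.
my_optimizer = tf.train.ProximalAdagradOptimizer(
    learning_rate=0.1,
    l1_regularization_strength=0.001,
    l2_regularization_strength=0.001)
```

The resulting optimizer can then be passed to DNNClassifier in exactly the same way as the AdadeltaOptimizer in the code below.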

Alternatively, you can apply regularization inside a custom loss function, but DNNClassifier does not accept a custom loss function. You would have to build the network manually for that, and add the regularization term to the loss yourself.
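To make the "add regularization to the loss" idea concrete, here is a minimal pure-Python sketch (the helper names and toy values are made up for illustration): the total loss is the data loss (log loss) plus an L2 penalty on the weights.

```python
import math

def log_loss(y_true, p_pred):
    # Binary cross-entropy, averaged over examples.
    eps = 1e-12
    return -sum(y * math.log(max(p, eps)) + (1 - y) * math.log(max(1 - p, eps))
                for y, p in zip(y_true, p_pred)) / len(y_true)

def l2_penalty(weights, strength):
    # 0.5 * strength * sum of squared weights
    # (the same convention as TensorFlow's tf.nn.l2_loss).
    return 0.5 * strength * sum(w * w for w in weights)

# Toy data, purely for illustration.
y_true = [1, 0, 1]
p_pred = [0.9, 0.2, 0.8]
weights = [0.5, -1.0, 2.0]

data_loss = log_loss(y_true, p_pred)
total_loss = data_loss + l2_penalty(weights, strength=0.01)
```

Minimizing `total_loss` instead of `data_loss` is what "adding L2 regularization to the loss" means; the penalty pushes the optimizer toward smaller weights.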

# Assumed imports for this snippet (my_input_fn and construct_feature_columns
# are defined elsewhere in the course code).
import numpy as np
import tensorflow as tf
from matplotlib import pyplot as plt
from sklearn import metrics

def train_nn_classifier_model_new(
    my_optimizer,
    steps,
    batch_size,
    hidden_units,
    training_examples,
    training_targets,
    validation_examples,
    validation_targets):

  periods = 10
  steps_per_period = steps / periods

  # Create a DNNClassifier object.

  my_optimizer = tf.contrib.estimator.clip_gradients_by_norm(my_optimizer, 5.0)
  dnn_classifier = tf.estimator.DNNClassifier(
      feature_columns=construct_feature_columns(training_examples),
      hidden_units=hidden_units,
      optimizer=my_optimizer
      )

  # Create input functions.
  training_input_fn = lambda: my_input_fn(training_examples, 
                                          training_targets["deal_or_not"], 
                                          batch_size=batch_size)
  predict_training_input_fn = lambda: my_input_fn(training_examples,        
                                         training_targets["deal_or_not"], 
                                         num_epochs=1, 
                                         shuffle=False)
  predict_validation_input_fn = lambda: my_input_fn(validation_examples, 
                                         validation_targets["deal_or_not"], 
                                         num_epochs=1, 
                                         shuffle=False)
  # Train the model, but do so inside a loop so that we can periodically assess
  # loss metrics.
  print("Training model...")
  print("LogLoss (on training data):")
  training_log_losses = []
  validation_log_losses = []
  for period in range (0, periods):
    # Train the model, starting from the prior state.
    dnn_classifier.train(
        input_fn=training_input_fn,
        steps=steps_per_period
    )
    # Take a break and compute predictions.    
    training_probabilities = dnn_classifier.predict(input_fn=predict_training_input_fn)
    training_probabilities = np.array([item['probabilities'] for item in training_probabilities])
    print(training_probabilities)

    validation_probabilities = dnn_classifier.predict(input_fn=predict_validation_input_fn)
    validation_probabilities = np.array([item['probabilities'] for item in validation_probabilities])

    training_log_loss = metrics.log_loss(training_targets, training_probabilities)
    validation_log_loss = metrics.log_loss(validation_targets, validation_probabilities)
    # Occasionally print the current loss.
    print("  period %02d : %0.2f" % (period, training_log_loss))
    # Add the loss metrics from this period to our list.
    training_log_losses.append(training_log_loss)
    validation_log_losses.append(validation_log_loss)
  print("Model training finished.")

  # Output a graph of loss metrics over periods.
  plt.ylabel("LogLoss")
  plt.xlabel("Periods")
  plt.title("LogLoss vs. Periods")
  plt.tight_layout()
  plt.plot(training_log_losses, label="training")
  plt.plot(validation_log_losses, label="validation")
  plt.legend()

  return dnn_classifier




result = train_nn_classifier_model_new(
    my_optimizer=tf.train.AdadeltaOptimizer(learning_rate=1),
    steps=30000,
    batch_size=250,
    hidden_units=[150, 150, 150, 150],
    training_examples=training_examples,
    training_targets=training_targets,
    validation_examples=validation_examples,
    validation_targets=validation_targets
    )