
Different results inside and outside the train function

I'm playing with TensorFlow 2. I built my own model, similar to how it is done.

Then I wrote my own fit function. And now I'm getting the strangest thing ever. Here is the copy/paste output from the notebook where I ran the test:

def fit(x_train, y_train, learning_rate=0.01, epochs=10, batch_size=100, normal=True, verbose=True, display_freq=100):
    if normal:
        x_train = normalize(x_train)  # TODO: This normalize could be a bit different for each and be bad.

    num_tr_iter = int(len(y_train) / batch_size)  # Number of training iterations in each epoch
    if verbose:
        print("Starting training...")
    for epoch in range(epochs):
        # Randomly shuffle the training data at the beginning of each epoch
        x_train, y_train = randomize(x_train, y_train)
        for iteration in range(num_tr_iter):
            # Get the batch
            start = iteration * batch_size
            end = (iteration + 1) * batch_size
            x_batch, y_batch = get_next_batch(x_train, y_train, start, end)
            # Run optimization op (backpropagation)
            # import pdb; pdb.set_trace()
            if verbose and (epoch * batch_size + iteration) % display_freq == 0:
                current_loss = _apply_loss(y_train, model(x_train, training=True))
                current_acc = evaluate_accuracy(x_train, y_train)
                print("Epoch: {0}/{1}; batch {2}/{3}; loss: {4:.4f}; accuracy: {5:.2f} %"
                      .format(epoch, epochs, iteration, num_tr_iter, current_loss, current_acc*100))
            train_step(x_batch, y_batch, learning_rate)

    current_loss = _apply_loss(y_train, model(x_train, training=True))
    current_acc = evaluate_accuracy(x_train, y_train)
    print("End: loss: {0:.4f}; accuracy: {1:.2f} %".format(current_loss, current_acc*100))
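The helpers `randomize` and `get_next_batch` aren't shown in the question; assuming the usual conventions (a paired shuffle and a slice-based batch), they might look like the sketch below. These are hypothetical reconstructions, not the asker's actual code:

```python
import numpy as np

def randomize(x, y):
    # Hypothetical: shuffle x and y with the same permutation so rows stay paired.
    perm = np.random.permutation(len(y))
    return x[perm], y[perm]

def get_next_batch(x, y, start, end):
    # Hypothetical: plain slice, matching the start/end indices computed in fit().
    return x[start:end], y[start:end]

x = np.arange(10).reshape(5, 2)   # rows: [0,1], [2,3], [4,5], [6,7], [8,9]
y = np.arange(5)
xs, ys = randomize(x, y)
assert all(xs[i, 0] == 2 * ys[i] for i in range(5))  # pairing preserved
xb, yb = get_next_batch(x, y, 2, 4)
assert xb.shape == (2, 2) and list(yb) == [2, 3]
```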

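`train_step` isn't shown either; given that it takes `(x_batch, y_batch, learning_rate)`, a plausible sketch is a manual gradient-descent step with `tf.GradientTape`. The model and loss below are placeholders, not the asker's:

```python
import tensorflow as tf

model = tf.keras.Sequential([tf.keras.layers.Dense(3, activation="softmax")])
loss_fn = tf.keras.losses.SparseCategoricalCrossentropy()

def train_step(x_batch, y_batch, learning_rate):
    # One manual SGD step: forward pass, gradients, in-place variable update.
    with tf.GradientTape() as tape:
        loss = loss_fn(y_batch, model(x_batch, training=True))
    grads = tape.gradient(loss, model.trainable_variables)
    for var, grad in zip(model.trainable_variables, grads):
        var.assign_sub(learning_rate * grad)
    return loss

x = tf.random.normal([8, 4])
y = tf.constant([0, 1, 2, 0, 1, 2, 0, 1])
loss = train_step(x, y, 0.01)
```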
import logging
logging.getLogger('tensorflow').disabled = True
fit(x_train, y_train)

current_loss = _apply_loss(y_train, model(x_train, training=True))
current_acc = evaluate_accuracy(x_train, y_train)
print("End: loss: {0:.4f}; accuracy: {1:.2f} %".format(current_loss, current_acc*100))
This part outputs:

Starting training...
Epoch: 0/10; batch 0/80; loss: 0.9533; accuracy: 59.67 %
Epoch: 1/10; batch 0/80; loss: 0.9386; accuracy: 60.15 %
Epoch: 2/10; batch 0/80; loss: 0.9259; accuracy: 60.50 %
Epoch: 3/10; batch 0/80; loss: 0.9148; accuracy: 61.05 %
Epoch: 4/10; batch 0/80; loss: 0.9051; accuracy: 61.15 %
Epoch: 5/10; batch 0/80; loss: 0.8968; accuracy: 61.35 %
Epoch: 6/10; batch 0/80; loss: 0.8896; accuracy: 61.27 %
Epoch: 7/10; batch 0/80; loss: 0.8833; accuracy: 61.51 %
Epoch: 8/10; batch 0/80; loss: 0.8780; accuracy: 61.52 %
Epoch: 9/10; batch 0/80; loss: 0.8733; accuracy: 61.54 %
End: loss: 0.8733; accuracy: 61.54 %
End: loss: 0.4671; accuracy: 77.08 %

Now my question is: how can I get different values on those last two lines!? I'm doing the same thing, right? I'm completely confused here. I don't even know how to google this problem.

So the issue was a silly one. It was because I did the normalization at the start of the train function! Removing it made everything work properly.

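The mechanism is easy to reproduce: inside `fit`, the local name `x_train` is rebound to a normalized copy, so the final evaluation inside the function and the one outside see different tensors. A minimal NumPy sketch, with a toy min-max `normalize` and an identity "model" standing in as placeholders:

```python
import numpy as np

def normalize(x):
    # Toy min-max normalization, standing in for the one used in fit().
    return (x - x.min()) / (x.max() - x.min())

def mse(y, y_pred):
    return float(np.mean((y - y_pred) ** 2))

model = lambda x: x  # identity "model", just to compare the two evaluations

x_train = np.array([0.0, 5.0, 10.0])
y_train = np.array([0.0, 0.5, 1.0])

# Inside fit(): the local x_train is rebound to the normalized copy...
x_inside = normalize(x_train)                 # [0.0, 0.5, 1.0]
loss_inside = mse(y_train, model(x_inside))   # 0.0

# ...but the caller's x_train is untouched, so the evaluation after
# fit() returns runs on the raw values.
loss_outside = mse(y_train, model(x_train))   # 33.75

assert loss_inside != loss_outside
```

Normalizing once before calling `fit` (or consistently in both places) removes the discrepancy.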