Is there any difference between these two pieces of code?


I have two pieces of code written with TensorFlow. One of them is:

import tensorflow as tf

class myCallback(tf.keras.callbacks.Callback):
    def on_epoch_end(self, epoch, logs={}):
      if(logs.get('accuracy')>0.99):
        print("\nReached 99% accuracy so cancelling training!")
      self.model.stop_training = True

mnist = tf.keras.datasets.mnist

(x_train, y_train),(x_test, y_test) = mnist.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0

callbacks = myCallback()

model = tf.keras.models.Sequential([
    tf.keras.layers.Flatten(input_shape=(28, 28)),
    tf.keras.layers.Dense(512, activation=tf.nn.relu),
    tf.keras.layers.Dense(10, activation=tf.nn.softmax)])
model.compile(optimizer='adam',
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])

model.fit(x_train, y_train, epochs=10, callbacks=[callbacks])
The other one is:

import tensorflow as tf

def train_mnist():

    class myCallback(tf.keras.callbacks.Callback):
        def on_epoch_end(self, epoch, logs={}):
            if(logs.get('accuracy')>99):
                print("\n Se incheie antrenamentul")
                self.model.stop_training = True

    mnist = tf.keras.datasets.mnist

    (x_train, y_train),(x_test, y_test) = mnist.load_data()

    x_train, x_test = x_train / 255.0, x_test / 255.0

    callbacks = myCallback()

    model = tf.keras.Sequential([
        tf.keras.layers.Flatten(input_shape=(28, 28)),
        tf.keras.layers.Dense(512, activation=tf.nn.relu),
        tf.keras.layers.Dense(10, activation=tf.nn.softmax)])

    model.compile(optimizer='adam',
                  loss='sparse_categorical_crossentropy',
                  metrics=['accuracy'])
    
    # model fitting
    history = model.fit(x_train, y_train, epochs = 10, callbacks=[callbacks])
    
    # model fitting
    return history.epoch, history.history['acc'][-1]

train_mnist()

The first one reaches an accuracy of 0.99 after 3 or 4 epochs. The second one only reaches an accuracy of 0.91 after 10 epochs. Why? They look identical to me. Any ideas?

They are almost exactly the same. I just checked the accuracy of both approaches. The only reason your accuracy shows up differently is that in the second code you returned

history.history['acc'][-1]
instead of

history.history['accuracy'][-1]
You also need to save the history in the first code so that you can compare them, like this:

history = model.fit(x_train, y_train, epochs=10, callbacks=[callbacks])
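
Once the history is saved, you can also print which metric keys were actually recorded before indexing into them, since older Keras versions log the metric as 'acc' while newer ones use 'accuracy'. A small sketch of my own (not from the original answer), assuming the history object from the line above:

# Inspect which metric keys this TensorFlow version recorded.
print(history.history.keys())            # e.g. dict_keys(['loss', 'accuracy'])
print(history.history['accuracy'][-1])   # final training accuracy of this run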
I also noticed that in the first code you stop the model training outside the if condition:

class myCallback(tf.keras.callbacks.Callback):
    def on_epoch_end(self, epoch, logs={}):
      if(logs.get('accuracy')>0.99):
        print("\nReached 99% accuracy so cancelling training!")
      self.model.stop_training = True
It should be like this:

class myCallback(tf.keras.callbacks.Callback):
    def on_epoch_end(self, epoch, logs={}):
      if(logs.get('accuracy')>0.99):
        print("\nReached 99% accuracy so cancelling training!")
        self.model.stop_training = True
With these fixes, both pieces of code should reach an accuracy of around 0.99.
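
As a side note, the callback can also be written a bit more defensively so it does not crash when the metric key is missing and the threshold is easy to change. This is a variation of my own, not part of the original answer:

class AccuracyThresholdCallback(tf.keras.callbacks.Callback):
    # Stops training once the training accuracy crosses the given threshold.
    def __init__(self, threshold=0.99):
        super().__init__()
        self.threshold = threshold

    def on_epoch_end(self, epoch, logs=None):
        logs = logs or {}
        accuracy = logs.get('accuracy', 0.0)   # older Keras versions log 'acc'
        if accuracy > self.threshold:
            print(f"\nReached {self.threshold:.0%} accuracy, stopping training.")
            self.model.stop_training = True

It is used the same way: model.fit(x_train, y_train, epochs=10, callbacks=[AccuracyThresholdCallback()]).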

Since your second code was not showing the accuracy, I am posting the entire modified version of your second code:

import tensorflow as tf
from os import path, getcwd, chdir
path = f"{getcwd()}/./tmp2/mnist.npz"

def train_mnist():

    class myCallback(tf.keras.callbacks.Callback):
        def on_epoch_end(self, epoch, logs={}):
            if(logs.get('accuracy')>99):
                print("\n Se incheie antrenamentul")
                self.model.stop_training = True

    mnist = tf.keras.datasets.mnist

    (x_train, y_train),(x_test, y_test) = mnist.load_data()

    x_train, x_test = x_train / 255.0, x_test / 255.0

    callbacks = myCallback()

    model = tf.keras.Sequential([
        tf.keras.layers.Flatten(input_shape=(28, 28)),
        tf.keras.layers.Dense(512, activation=tf.nn.relu),
        tf.keras.layers.Dense(10, activation=tf.nn.softmax)])

    model.compile(optimizer='adam',
                  loss='sparse_categorical_crossentropy',
                  metrics=['accuracy'])

    # model fitting
    history = model.fit(x_train, y_train, epochs=10, callbacks=[callbacks])

    return history.epoch, history.history['accuracy'][-1]

train_mnist()
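
One small usage note of my own (not from the original answer): train_mnist() returns the epochs and the final accuracy, so capturing those values makes the result visible when the script is run outside a notebook:

# Capture the returned values instead of calling train_mnist() bare.
epochs_run, final_accuracy = train_mnist()
print(f"trained for {len(epochs_run)} epochs, final accuracy = {final_accuracy:.4f}")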

Since you have re-run the code, please post the exact results returned by the two different approaches (from your post it is not clear whether they really differ).
Sooo.. how do I get my second code to also show 99% accuracy? I tried using 'accuracy' instead of 'acc', but it changed nothing.
See my updated post. It is the if condition in the first code that I modified; you had stopped the model training outside it.
It still did nothing. The problem is in the second code. That is the one I want to reach 99%.
Hi, I have updated the post again with the complete code for your second code. I checked the accuracy in Google Colab itself. They are both the same. The output may differ from run to run because of randomness.
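
About the run-to-run variation mentioned in the last comment: pinning the random seeds before building the model makes repeated runs more comparable. A small sketch of my own; exact reproducibility still depends on the TensorFlow version and hardware:

import random
import numpy as np
import tensorflow as tf

# Seed Python, NumPy and TensorFlow so repeated runs start from the same
# initial weights and the same data shuffling order.
random.seed(42)
np.random.seed(42)
tf.random.set_seed(42)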