Unable to complete this exercise because of a syntax error in the TensorFlow Python code?


The `return` is outside the function, and I need to return the values as a tuple. Basically there are two errors here: first, the `return` sits outside the function; second, the result is not returned as a tuple.
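For reference, a minimal standalone sketch of what "return inside the function, as a tuple" means (toy stand-in values, no TensorFlow involved):

```python
def train():
    epochs = [0, 1, 2]   # stand-in for history.epoch
    final_acc = 0.99     # stand-in for the final accuracy value
    # The return statement sits inside the function body and
    # packs both results into a single tuple.
    return (epochs, final_acc)

epochs, acc = train()
```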

def train_mnist():

class myCallback(tf.keras.callbacks.Callback):

    def on_epoch_end(self, epoch, logs={}):
        if logs.get('acc') > 0.99:
            print ('\nReached 99% accuracy so cancelling training!')
        self.model.stop_training = True

mnist = tf.keras.datasets.mnist

((x_train, y_train), (x_test, y_test)) = mnist.load_data(path=path)
(x_train, x_test) = (x_train / 255.0, x_test / 255.0)

callbacks = myCallback()

model = \
    tf.keras.models.Sequential([tf.keras.layers.Flatten(input_shape=(28,
                               28)), tf.keras.layers.Dense(512,
                               activation=tf.nn.relu),
                               tf.keras.layers.Dense(10,
                               activation=tf.nn.softmax)])
model.compile(optimizer='adam',
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])

history = model.fit(x_train, y_train, epochs=10,
                    callbacks=[callbacks])


return (history.epoch, history.history['acc'][-1])

The problems are the indentation and how the accuracy is read from the model's logs.

I have modified your code as follows and got the expected output:

import tensorflow as tf

def train_mnist():

  class myCallback(tf.keras.callbacks.Callback):

      def on_epoch_end(self, epoch, logs):
          if logs["accuracy"] > 0.99:
              print ('\nReached 99% accuracy so cancelling training!')
              self.model.stop_training = True

  mnist = tf.keras.datasets.mnist

  ((x_train, y_train), (x_test, y_test)) = mnist.load_data()
  (x_train, x_test) = (x_train / 255.0, x_test / 255.0)

  callbacks = myCallback()

  model = tf.keras.models.Sequential([
      tf.keras.layers.Flatten(input_shape=(28, 28)),
      tf.keras.layers.Dense(512, activation=tf.nn.relu),
      tf.keras.layers.Dense(10, activation=tf.nn.softmax)])
  model.compile(optimizer='adam',
                loss='sparse_categorical_crossentropy',
                metrics=['accuracy'])

  history = model.fit(x_train, y_train, epochs=10,
                      callbacks=[callbacks])


  return (history.epoch, history.history['accuracy'][-1])

Output:

Epoch 1/10
1875/1875 [==============================] - 6s 3ms/step - loss: 0.2026 - accuracy: 0.9392
Epoch 2/10
1875/1875 [==============================] - 6s 3ms/step - loss: 0.0799 - accuracy: 0.9755
Epoch 3/10
1875/1875 [==============================] - 6s 3ms/step - loss: 0.0521 - accuracy: 0.9839
Epoch 4/10
1875/1875 [==============================] - 6s 3ms/step - loss: 0.0353 - accuracy: 0.9894
Epoch 5/10
1867/1875 [============================>.] - ETA: 0s - loss: 0.0278 - accuracy: 0.9910
Reached 99% accuracy so cancelling training!
1875/1875 [==============================] - 6s 3ms/step - loss: 0.0278 - accuracy: 0.9910
([0, 1, 2, 3, 4], 0.9909833073616028)
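A side note on the metric key: depending on the TensorFlow/Keras version, the metric may appear in `logs` under `'acc'` (TF 1.x style) or `'accuracy'` (TF 2.x). A small defensive helper (a sketch, not part of the original answer) tolerates either key:

```python
def should_stop(logs, threshold=0.99):
    # TF 1.x logs the metric as 'acc'; TF 2.x uses 'accuracy'.
    # Fall back to 0.0 if neither key is present yet.
    acc = logs.get("accuracy", logs.get("acc", 0.0))
    return acc > threshold

print(should_stop({"accuracy": 0.995}))  # True
print(should_stop({"acc": 0.95}))        # False
```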


A lot of indentation is missing, so we can't tell what the code really looks like in the original. Why use a `return` statement when your model is not inside a function?

@a.Khan – if your problem is solved by the answer above, please accept and upvote it.