Python Keras: low training loss but high evaluation loss

Tags: python, tensorflow, keras, conv-neural-network

I'm new to Keras. This code classifies brain MRI images as tumor vs. no tumor. When I run
model.evaluate()
to check the accuracy, I get a very high loss value, even though the loss was low while the model was training (typically less than 1), and I also get the following warning:

WARNING:tensorflow:6 out of the last 11 calls to <function Model.make_test_function.<locals>.test_function at 0x00000221AC143AF0> triggered tf.function retracing. Tracing is expensive and the excessive number of tracings could be due to (1) creating @tf.function repeatedly in a loop, (2) passing tensors with different shapes, (3) passing Python objects instead of tensors. For (1), please define your @tf.function outside of the loop. For (2), @tf.function has experimental_relax_shapes=True option that relaxes argument shapes that can avoid unnecessary retracing. For (3), please refer to https://www.tensorflow.org/guide/function#controlling_retracing and https://www.tensorflow.org/api_docs/python/tf/function for  more details.
You can ignore that warning: it is about tf.function retracing, which affects speed, not the computed loss or accuracy.


Your low training loss combined with a high evaluation loss means your model is overfitting. Stop training once the validation loss stops improving (early stopping), as in the sketch below.
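As a minimal sketch (not the asker's code): the standard Keras EarlyStopping callback can do this automatically, assuming part of the training data is held out for validation. The patience value below is just an illustrative default:

from tensorflow.keras.callbacks import EarlyStopping

early_stop = EarlyStopping(monitor='val_loss',          # watch the validation loss
                           patience=3,                  # allow 3 epochs without improvement
                           restore_best_weights=True)   # roll back to the best epoch

model.fit(X_train, y_train,
          batch_size=10,
          epochs=50,                 # an upper bound; the callback stops training earlier
          validation_split=0.2,      # hold out 20% of the training data for validation
          callbacks=[early_stop])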

Reducing the number of hidden layers, or the number of nodes per layer, may also help: the more trainable parameters you have, the easier it is to overfit. Adding a Dropout layer in the dense part of the network is another option, as in the sketch below.
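A sketch under that assumption: this is the model from the code below with a Dropout layer added between Flatten and the output Dense layer (the 0.5 rate is a common default, not tuned for this data):

from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, Dropout, Activation, Flatten, Conv2D, MaxPooling2D

model = Sequential()
model.add(Conv2D(32, (3, 3), input_shape=(128, 128, 1)))
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Flatten())
model.add(Dropout(0.5))   # randomly silence half of the units during training
model.add(Dense(1))
model.add(Activation('sigmoid'))

For reference, here is the full code from the question: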
import numpy as np
import matplotlib.pyplot as plt
import os
import cv2

import tensorflow as tf
from tensorflow.keras.datasets import cifar10
from tensorflow.keras.preprocessing.image import ImageDataGenerator
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, Dropout, Activation, Flatten
from tensorflow.keras.layers import Conv2D, MaxPooling2D

def load_data( DATADIR, IMG_SIZE, CATEGORIES ):
    data = []
    for category in CATEGORIES:  # iterate over the two classes

        path = os.path.join(DATADIR, category)  # path to the class folder
        class_num = CATEGORIES.index(category)  # numeric label: 0 = "no" (no tumor), 1 = "yes" (tumor)

        for img in os.listdir(path):  # iterate over each image in the class folder
            try:
                img_array = cv2.imread(os.path.join(path, img), cv2.IMREAD_GRAYSCALE)  # read as a grayscale array

                img_array = cv2.medianBlur(img_array, 5)  # remove noise with a 5x5 median filter

                img_array = cv2.adaptiveThreshold(img_array, 255, cv2.ADAPTIVE_THRESH_GAUSSIAN_C, cv2.THRESH_BINARY, 11, 2)  # binarize with a Gaussian-weighted adaptive threshold

                new_array = cv2.resize(img_array, (IMG_SIZE, IMG_SIZE))  # resize to a uniform size

                data.append([new_array, class_num])  # store the image with its label
            except Exception:  # skip unreadable files to keep the output clean
                pass
            #except OSError as e:
            #    print("OSErrroBad img most likely", e, os.path.join(path,img))
            #except Exception as e:
            #    print("general exception", e, os.path.join(path,img))
    return data

TRAIN_DATADIR = r"F:\Train"  # raw strings avoid accidental backslash escapes in Windows paths
TEST_DATADIR = r"F:\Test"

CATEGORIES = ["no", "yes"]
IMG_SIZE = 128
training_data = load_data(TRAIN_DATADIR, IMG_SIZE, CATEGORIES)
testing_data = load_data(TEST_DATADIR, IMG_SIZE, CATEGORIES)

print(len(training_data))

import random
random.shuffle(training_data)
random.shuffle(testing_data)

X_train = []
y_train = []

for features,label in training_data:
    X_train.append(features)
    y_train.append(label)

X_train = np.array(X_train).reshape(-1, IMG_SIZE, IMG_SIZE, 1)  # add the channel dimension expected by Conv2D
y_train = np.asarray(y_train)


X_test = []
y_test = []

for features,label in testing_data:
    X_test.append(features)
    y_test.append(label)

    
X_test = np.array(X_test).reshape(-1, IMG_SIZE, IMG_SIZE, 1)  # same shape handling as the training set
y_test = np.asarray(y_test)

# Scale pixel values to [0, 1]. The test set must be scaled the same way as the
# training set; evaluating on unscaled test data inflates the evaluation loss.
X_train = X_train/255.0
X_test = X_test/255.0


model = Sequential()

model.add(Conv2D(32, (3, 3), input_shape = X_train.shape[1:]))  # input shape is (128, 128, 1)
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))

model.add(Flatten())
model.add(Dense(1))  # single sigmoid unit for binary classification
model.add(Activation('sigmoid'))

model.compile(loss='binary_crossentropy',
              optimizer='adam',
              metrics=['accuracy'])

model.fit(X_train, y_train, batch_size=10, epochs=15)

score = model.evaluate(X_test, y_test, verbose=1)
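Because the model was compiled with metrics=['accuracy'], evaluate() returns the loss followed by the accuracy, so the result can be inspected with, for example:

print('Test loss:', score[0])
print('Test accuracy:', score[1])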