
Python Keras error: logits and labels must have the same shape ((None, 17, 17, 1) vs (None, 1))

Tags: python, python-3.x, tensorflow, keras, binary

As a beginner, I wrote a horse-or-human classifier:

# dependencies
import os
import zipfile
import numpy as np
import tensorflow as tf
from tensorflow.keras.preprocessing.image import ImageDataGenerator
from tensorflow.keras.optimizers import RMSprop
import matplotlib.pyplot as plt

# Extracting ZipFiles
zip_path1 = r'C:/Users/91736/Documents/DEEP LEARNING PRACTICE/Week 1/files/horse-or-human.zip'
zip_ref1 = zipfile.ZipFile(zip_path1 , 'r')
zip_ref1.extractall(r'C:/Users/91736/Documents/DEEP LEARNING PRACTICE/Week 1/horse-or-human')
zip_ref1.close()

zip_path2 = r'C:/Users/91736/Documents/DEEP LEARNING PRACTICE/Week 1/files/validation-horse-or-human.zip'
zip_ref2 = zipfile.ZipFile(zip_path2 , 'r')
zip_ref2.extractall(r'C:/Users/91736/Documents/DEEP LEARNING PRACTICE/Week 1/validation-horse-or-human')
zip_ref2.close()


# setting up local dir
train_base_dir = r'C:/Users/91736/Documents/DEEP LEARNING PRACTICE/Week 1/horse-or-human'
valid_base_dir = r'C:/Users/91736/Documents/DEEP LEARNING PRACTICE/Week 1/validation-horse-or-human'

# setting up train and test dir
train_horse_dir = os.path.join(train_base_dir , 'horses')
train_human_dir = os.path.join(train_base_dir , 'humans')


valid_horse_dir = os.path.join(valid_base_dir ,'horses')
valid_human_dir = os.path.join(valid_base_dir , 'humans')


# defining model

model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(filters = 32 ,
                           kernel_size= (3,3) ,
                           input_shape = (150, 150,3),
                           activation = 'relu'),
    tf.keras.layers.MaxPooling2D(2,2),
    tf.keras.layers.Conv2D(64 , (3,3) , activation = 'relu'),
    tf.keras.layers.MaxPooling2D(2,2),
    tf.keras.layers.Conv2D(128 , (3,3) , activation = 'relu'),
    tf.keras.layers.MaxPool2D(2,2),
    tf.keras.layers.Dropout(0.5),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(512 , activation ='relu'),
    tf.keras.layers.Dense(1 , activation = 'sigmoid')    
    ])

model.compile(loss = 'binary_crossentropy' , optimizer=RMSprop(lr = 0.001) , metrics = ['accuracy'])
model.summary()

# defining augmentation
train_datgen = ImageDataGenerator(rescale = 1./255 ,
                                  rotation_range=40,
                                  width_shift_range= 0.2,
                                  height_shift_range= 0.2,
                                  shear_range= 0.2,
                                  zoom_range = 0.2,
                                  horizontal_flip = True,
                                  fill_mode= 'nearest')

valid_datagen = ImageDataGenerator(rescale = 1./255)




# calling generators
train_gen = train_datgen.flow_from_directory(train_base_dir,
                                       target_size = (150, 150),
                                       batch_size = 20,
                                       class_mode = 'binary')

valid_gen = valid_datagen.flow_from_directory(valid_base_dir,
                                        target_size = (150, 150),
                                        batch_size = 20,
                                        class_mode = 'binary')
                                        

history = model.fit_generator(train_gen,
                    validation_data= valid_gen,
                    epochs = 100 ,
                    steps_per_epoch= 10, 
                    validation_steps = 10,
                    verbose = 1)
But when it is executed, it runs with these warnings:

2020-10-26 15:36:58.620164: W tensorflow/core/kernels/data/generator_dataset_op.cc:103] Error occurred when finalizing GeneratorDataset iterator: Cancelled: Operation was cancelled

Figures now render in the Plots pane by default. To make them also appear inline in the console, uncheck "Mute inline plotting" under the options menu of the Plots pane.


As one can see, there are no accuracy, loss, val_loss or val_accuracy values, and the message above is logged for all 100 epochs. Why is this happening?

I think you forgot to flatten the output tensor of the last tf.keras.layers.MaxPool2D(2,2) layer. Just add a Flatten layer and hopefully it will work. A sketch of the resulting shapes is shown below.
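To make the shape arithmetic concrete, here is a minimal sketch assuming the same 150x150x3 input and the conv/pool stack from the question (the helper conv_stack and the names broken/fixed are illustrative, not from the original post):

import tensorflow as tf

def conv_stack():
    # Same conv/pool stack as in the question; the final feature map is (None, 17, 17, 128).
    return [
        tf.keras.layers.Conv2D(32, (3, 3), activation='relu', input_shape=(150, 150, 3)),
        tf.keras.layers.MaxPooling2D(2, 2),   # 148x148x32 -> 74x74x32
        tf.keras.layers.Conv2D(64, (3, 3), activation='relu'),
        tf.keras.layers.MaxPooling2D(2, 2),   # 72x72x64 -> 36x36x64
        tf.keras.layers.Conv2D(128, (3, 3), activation='relu'),
        tf.keras.layers.MaxPooling2D(2, 2),   # 34x34x128 -> 17x17x128
    ]

# Without Flatten, Dense(1) acts only on the last axis of the 4-D feature map,
# so the model outputs (None, 17, 17, 1) logits while the labels are (None, 1).
broken = tf.keras.Sequential(conv_stack() + [tf.keras.layers.Dense(1, activation='sigmoid')])
print(broken.output_shape)   # (None, 17, 17, 1)

# With Flatten, the feature map collapses to (None, 17 * 17 * 128) = (None, 36992)
# before the Dense layers, and the model ends in the expected (None, 1) shape.
fixed = tf.keras.Sequential(conv_stack() + [
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(512, activation='relu'),
    tf.keras.layers.Dense(1, activation='sigmoid'),
])
print(fixed.output_shape)    # (None, 1)

Once the output is (None, 1), binary_crossentropy can compare it against the (None, 1) labels produced by class_mode='binary'.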

You need to do some flattening between the last Conv2D and the final Dense layer.
It doesn't work.
Yes, the shape issue is fixed, but when I run it on the GPU it reports a strange error, while there is no error when it is executed on the CPU: 2020-10-26 15:36:58.620164: W tensorflow/core/kernels/data/generator_dataset_op.cc:103] Error occurred when finalizing GeneratorDataset iterator: Cancelled: Operation was cancelled