TensorFlow error during training: tensorflow.python.framework.errors_impl.InvalidArgumentError

Tags: python, tensorflow, machine-learning, deep-learning, tensorflow2.0

I have been trying to train a model that uses a CNN to detect cats in photos. I have been using these datasets:

I keep getting this error:

    tensorflow.python.framework.errors_impl.InvalidArgumentError:  Input size should match (header_size + row_size * abs_height) but they differ by 2
         [[{{node decode_image/DecodeImage}}]]
         [[IteratorGetNext]] [Op:__inference_train_function_962]
    
    Function call stack:
    train_function
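
From searching around, the failing node is decode_image/DecodeImage, so I suspect one of the image files in the dataset is corrupt or not actually a JPEG (this PetImages dataset is reportedly known to contain a few broken files). Below is a minimal sketch of a scan that could locate such files; this is a diagnostic idea rather than code from the project, and the path is the local one used further down:

    import os
    import tensorflow as tf

    data_dir = "D://datasets and work//archive(2)//PetImages"
    bad_files = []
    for root, _, files in os.walk(data_dir):
        for name in files:
            path = os.path.join(root, name)
            try:
                # decode_image raises InvalidArgumentError on undecodable files
                tf.io.decode_image(tf.io.read_file(path))
            except tf.errors.InvalidArgumentError:
                bad_files.append(path)

    print(f"{len(bad_files)} undecodable file(s) found")
    for path in bad_files:
        print(path)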
    
Here is the code (I reused and adapted code from an earlier project):

(I am using TensorFlow version 2.4.0 and Python version 3.8.7.)

    import matplotlib.pyplot as plt
    
    import numpy as np
    import os
    import PIL 
    import tensorflow as tf
    
    from tensorflow import keras
    from tensorflow.keras import layers
    from tensorflow.keras.models import Sequential
    
    
    batch_size = 64
    img_height = 180
    img_width = 180

    train_ds = tf.keras.preprocessing.image_dataset_from_directory(
      "D://datasets and work//archive(2)//PetImages",
      validation_split=0.2,
      subset="training",
      seed=123,
      image_size=(img_height, img_width),
      batch_size=batch_size)

    val_ds = tf.keras.preprocessing.image_dataset_from_directory(
      "D://datasets and work//archive(2)//PetImages",
      validation_split=0.2,
      subset="validation",
      seed=123,
      image_size=(img_height, img_width),
      batch_size=batch_size)

    class_names = train_ds.class_names
    print(class_names)
    AUTOTUNE = tf.data.AUTOTUNE
    
    train_ds = train_ds.cache().shuffle(1000).prefetch(buffer_size=AUTOTUNE)
    val_ds = val_ds.cache().prefetch(buffer_size=AUTOTUNE)
    
    num_classes = 2
    # random zoom augmentation (currently defined but not applied in the model below)
    data_augmentation = tf.keras.Sequential([
      layers.experimental.preprocessing.RandomZoom(0.1)
    ])
    
    # horizontal image flipping is OK (CheXNet did it); implement it later, Sahal
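    # e.g., a sketch of the augmentation with flipping added (an assumption
    # based on the note above; not part of the current run):
    # data_augmentation = tf.keras.Sequential([
    #   layers.experimental.preprocessing.RandomFlip("horizontal"),
    #   layers.experimental.preprocessing.RandomZoom(0.1),
    # ])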
    model = Sequential([
      layers.experimental.preprocessing.Rescaling(1./255),
      layers.Conv2D(32, 2, padding='same', activation='relu'),
      layers.MaxPooling2D(pool_size=(2, 2), strides=(1, 1), padding='valid'),
      layers.Conv2D(32, 2, padding='same', activation='relu'),
      layers.MaxPooling2D(pool_size=(2, 2), strides=(1, 1), padding='valid'),
      layers.Conv2D(64, 2, padding='same', activation='relu'),
      layers.MaxPooling2D(pool_size=(2, 2), strides=(1, 1), padding='valid'),
      layers.Conv2D(8, 2, padding='same', activation='relu'),
    #  layers.Dropout(0.1),
      layers.Flatten(),
      layers.Dense(128, activation='relu'),
      layers.Dense(num_classes)
    ])
    
    model.compile(optimizer=tf.optimizers.Adam(learning_rate=0.01),
                  loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
                  metrics=['accuracy'])
    
    epochs=10
    history = model.fit(
      train_ds,
      validation_data=val_ds,
      epochs=epochs
    )
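
If the problem is a handful of bad files, one workaround that gets suggested (untested by me on this dataset) is to drop undecodable records from the tf.data pipeline instead of deleting files by hand. It would go right after the image_dataset_from_directory calls, before cache()/shuffle(); note that since the decoding error surfaces when a batch is produced, this can skip the whole offending batch rather than a single image:

    # Possible workaround (sketch): silently skip elements that fail to decode.
    train_ds = train_ds.apply(tf.data.experimental.ignore_errors())
    val_ds = val_ds.apply(tf.data.experimental.ignore_errors())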