Python: How do I change the batch size of an ImageDataGenerator after instantiation? [python, tensorflow, keras, deep-learning]


The approach I know is this:

from tensorflow.keras.preprocessing.image import ImageDataGenerator

train_datagen = ImageDataGenerator(rescale=1./255)
val_datagen = ImageDataGenerator(rescale=1./255)

train_generator = train_datagen.flow_from_directory(
        train_dir,
        target_size=(150, 150),
        batch_size=20,             # <-- the batch size fixed at creation
        class_mode='binary')
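As an aside, the `rescale=1./255` argument simply multiplies every pixel value by that factor, mapping uint8 pixels into [0, 1]. A minimal NumPy sketch of that scaling (the pixel values here are made up for illustration):

```python
import numpy as np

# A fake 1x3 "image" with uint8 pixel values
img = np.array([[0, 128, 255]], dtype=np.uint8)

# What ImageDataGenerator(rescale=1./255) applies to each batch
scaled = img * (1.0 / 255)

print(scaled.min(), scaled.max())  # pixels now lie in [0.0, 1.0]
```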
You can change the batch size after the generator object has been created:

train_generator.batch_size = 2

The batch size will now be 2.
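This works because the iterator returned by `flow_from_directory` reads its `batch_size` attribute each time it builds a batch. A toy stand-in (not the Keras class itself, just a sketch of the mechanism) shows the effect of reassigning the attribute mid-stream:

```python
class ToyIterator:
    """Toy stand-in for a Keras DirectoryIterator: it re-reads
    self.batch_size on every call, so reassigning the attribute
    after construction changes the size of the next batch."""
    def __init__(self, data, batch_size):
        self.data = data
        self.batch_size = batch_size
        self._pos = 0

    def __next__(self):
        batch = self.data[self._pos:self._pos + self.batch_size]
        self._pos += self.batch_size
        return batch

it = ToyIterator(list(range(100)), batch_size=20)
first = next(it)       # a batch of 20 samples
it.batch_size = 2      # same reassignment as train_generator.batch_size = 2
second = next(it)      # the next batch has 2 samples
```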

Nicolas's answer is correct: you can easily change the batch size after the generator has been created. Another piece of information worth knowing concerns the batch_size argument of model.fit. According to the documentation:

batch_size: ... Do not specify the batch_size if your data is in the form of datasets, generators, or keras.utils.Sequence instances (since they generate batches).

So, per the documentation, if we use a generator to produce batches for training, we should not specify batch_size in model.fit.
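Since the generator owns the batch size in that setup, the value you typically pass to fit instead is steps_per_epoch. A quick sketch of how the two are related (the sample count here is hypothetical):

```python
import math

num_samples = 2000   # hypothetical number of training images
batch_size = 20      # the batch_size given to flow_from_directory

# One epoch must cover every sample once, so:
steps_per_epoch = math.ceil(num_samples / batch_size)
print(steps_per_epoch)  # 100
```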
I copied the code from keras.datasets.cifar10 and used the link for the cats_vs_dogs dataset:

import os

import numpy as np
from tensorflow.python.keras import backend
from tensorflow.python.keras.datasets.cifar import load_batch
from tensorflow.python.keras.utils.data_utils import get_file

dirname = 'cifar-10-batches-py'
origin = 'https://storage.googleapis.com/mledu-datasets/cats_and_dogs_filtered.zip'
path = get_file(
    dirname,
    origin=origin,
    untar=True,
    file_hash=
    '6d958be074577803d12ecdefd02955f39262c83c16fe9348329d7fe0b5c001ce')

num_train_samples = 50000

x_train = np.empty((num_train_samples, 3, 32, 32), dtype='uint8')
y_train = np.empty((num_train_samples,), dtype='uint8')

for i in range(1, 6):
  fpath = os.path.join(path, 'data_batch_' + str(i))
  (x_train[(i - 1) * 10000:i * 10000, :, :, :],
    y_train[(i - 1) * 10000:i * 10000]) = load_batch(fpath)

fpath = os.path.join(path, 'test_batch')
x_test, y_test = load_batch(fpath)

y_train = np.reshape(y_train, (len(y_train), 1))
y_test = np.reshape(y_test, (len(y_test), 1))

if backend.image_data_format() == 'channels_last':
  x_train = x_train.transpose(0, 2, 3, 1)
  x_test = x_test.transpose(0, 2, 3, 1)

x_test = x_test.astype(x_train.dtype)
y_test = y_test.astype(y_train.dtype)
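The transpose at the end converts the arrays from channels-first (N, C, H, W) to channels-last (N, H, W, C), which is what `flow` and most TensorFlow models expect. A minimal shape check of that same axis permutation:

```python
import numpy as np

# Dummy batch in channels-first layout: 5 RGB images of 32x32
x = np.zeros((5, 3, 32, 32), dtype='uint8')

# Same axis permutation as in the snippet above
x_last = x.transpose(0, 2, 3, 1)
print(x_last.shape)  # (5, 32, 32, 3)
```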

What about setting batch_size to None in train_generator? If batch_size is set to None, it defaults to 32, but then you are no longer using ImageDataGenerator! ... Or just use tfds.load('cats_vs_dogs').