Python: How do I change the batch size of an ImageDataGenerator after it has been instantiated?

The approach I know is this:
from tensorflow.keras.preprocessing.image import ImageDataGenerator
train_datagen = ImageDataGenerator(rescale=1./255)
val_datagen = ImageDataGenerator(rescale=1./255)
train_generator = train_datagen.flow_from_directory(
    train_dir,
    target_size=(150, 150),
    batch_size=20,  # <---------------------------
    class_mode='binary')
You can change the batch size after the ImageDataGenerator iterator has been created:
train_generator.batch_size = 2
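This works because the Keras iterator reads its batch_size attribute afresh for every batch it yields. A minimal stand-in class (pure NumPy, not the real Keras Iterator) sketches the mechanics:

```python
import numpy as np

class TinyIterator:
    """Stand-in for Keras's Iterator: batch_size is read on every draw."""
    def __init__(self, x, batch_size):
        self.x = x
        self.batch_size = batch_size
        self._pos = 0

    def __next__(self):
        # Each call slices the next self.batch_size samples, so
        # mutating batch_size takes effect on the following batch.
        batch = self.x[self._pos:self._pos + self.batch_size]
        self._pos += self.batch_size
        return batch

x = np.zeros((100, 150, 150, 3))
it = TinyIterator(x, batch_size=20)
print(next(it).shape[0])   # 20
it.batch_size = 2          # same trick as train_generator.batch_size = 2
print(next(it).shape[0])   # 2
```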
The batch size will then be 2.

Nicolas's answer is correct: you can easily change the batch size after the generator has been created. Another useful piece of information concerns the batch size passed to the model itself. According to the Keras documentation:

    batch_size: ... Do not specify the batch_size if your data is in the
    form of datasets, generators, or keras.utils.Sequence instances
    (since they generate batches).

So, per the documentation, if we use a generator to produce the training batches, we should not also specify a batch_size when fitting the model.
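The reason is that a generator already bakes the batch size into the arrays it yields, so fit only ever sees ready-made batches. A hedged illustration with a plain Python generator (no Keras involved, toy shapes):

```python
import numpy as np

def batch_generator(x, y, batch_size):
    """Yield (x_batch, y_batch) tuples; the batch size is fixed by the data."""
    for start in range(0, len(x), batch_size):
        yield x[start:start + batch_size], y[start:start + batch_size]

x = np.random.rand(100, 150, 150, 3)
y = np.random.randint(0, 2, size=100)

gen = batch_generator(x, y, batch_size=20)
xb, yb = next(gen)
print(xb.shape, yb.shape)   # (20, 150, 150, 3) (20,)
# Each yielded tuple is already a complete batch, which is why
# model.fit(gen) is called WITHOUT a batch_size argument.
```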
I copied the code from keras.datasets.cifar10 and swapped in the URL of the cats_vs_dogs dataset:
import os
import numpy as np
from tensorflow.python.keras import backend
from tensorflow.python.keras.datasets.cifar import load_batch
from tensorflow.python.keras.utils.data_utils import get_file

dirname = 'cifar-10-batches-py'
origin = 'https://storage.googleapis.com/mledu-datasets/cats_and_dogs_filtered.zip'
path = get_file(
    dirname,
    origin=origin,
    untar=True,
    file_hash=
    '6d958be074577803d12ecdefd02955f39262c83c16fe9348329d7fe0b5c001ce')

num_train_samples = 50000
x_train = np.empty((num_train_samples, 3, 32, 32), dtype='uint8')
y_train = np.empty((num_train_samples,), dtype='uint8')

for i in range(1, 6):
    fpath = os.path.join(path, 'data_batch_' + str(i))
    (x_train[(i - 1) * 10000:i * 10000, :, :, :],
     y_train[(i - 1) * 10000:i * 10000]) = load_batch(fpath)

fpath = os.path.join(path, 'test_batch')
x_test, y_test = load_batch(fpath)

y_train = np.reshape(y_train, (len(y_train), 1))
y_test = np.reshape(y_test, (len(y_test), 1))

if backend.image_data_format() == 'channels_last':
    x_train = x_train.transpose(0, 2, 3, 1)
    x_test = x_test.transpose(0, 2, 3, 1)

x_test = x_test.astype(x_train.dtype)
y_test = y_test.astype(y_train.dtype)
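The loop in that snippet uses a common NumPy pattern: preallocate the full training array, then fill it one 10,000-sample slice per CIFAR batch file. A small self-contained sketch of the same pattern (toy shapes and fake data in place of load_batch, which is not available here):

```python
import numpy as np

num_train_samples = 50
chunk = 10  # stands in for the 10,000 samples per CIFAR data_batch file

x_train = np.empty((num_train_samples, 3, 32, 32), dtype='uint8')
y_train = np.empty((num_train_samples,), dtype='uint8')

for i in range(1, 6):
    # load_batch(fpath) would return (x_chunk, y_chunk); fake it here
    x_chunk = np.full((chunk, 3, 32, 32), i, dtype='uint8')
    y_chunk = np.full((chunk,), i, dtype='uint8')
    # write each chunk into its slice of the preallocated arrays
    x_train[(i - 1) * chunk:i * chunk] = x_chunk
    y_train[(i - 1) * chunk:i * chunk] = y_chunk

print(x_train.shape)            # (50, 3, 32, 32)
print(y_train[0], y_train[-1])  # 1 5
```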
Set batch_size to None in train_generator. If batch_size is set to None it defaults to 32, but at that point you are no longer using the ImageDataGenerator! ...or just use tfds.load('cats_vs_dogs').