Python: can't find a suitable CNN

I'm using Keras/TensorFlow in Colab and working on the oxford_flowers102 dataset. The task is image classification. There are quite a few classes (102) and not that many images per class. I have tried building different neural networks, from simple to more complex, with and without image augmentation, dropout, hyperparameter tuning, batch size tuning, optimizer tuning, image size tuning... However, I haven't managed to find a CNN that gives me an acceptable val_accuracy and, in the end, a good test accuracy. The maximum val_accuracy I have reached so far is 0.3x. I'm fairly sure it's possible to get better results; somehow I just haven't found the right CNN setup. My code so far:
import tensorflow as tf
from keras.models import Model
import tensorflow_datasets as tfds
import tensorflow_hub as hub
# update colab tensorflow_datasets to current version 3.2.0,
# otherwise tfds.load will lead to error when trying to load oxford_flowers102 dataset
!pip install tensorflow_datasets --upgrade
# restart runtime
oxford, info = tfds.load("oxford_flowers102", with_info=True, as_supervised=True)
train_data=oxford['train']
test_data=oxford['test']
validation_data=oxford['validation']
IMG_SIZE = 224
def format_example(image, label):
    # Cast to float and scale pixel values to [0, 1]
    image = tf.cast(image, tf.float32)
    image = image / 255.0
    image = tf.image.resize(image, (IMG_SIZE, IMG_SIZE))
    return image, label
train = train_data.map(format_example)
validation = validation_data.map(format_example)
test = test_data.map(format_example)
BATCH_SIZE = 32
SHUFFLE_BUFFER_SIZE = 1000
train_batches = train.shuffle(SHUFFLE_BUFFER_SIZE).batch(BATCH_SIZE)
test_batches = test.batch(BATCH_SIZE)
validation_batches = validation.batch(BATCH_SIZE)
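The question mentions trying image augmentation but shows no code for it. A minimal sketch of how random flips and brightness jitter could be added to the training pipeline after format_example (the augment function and its parameters are assumptions, not taken from the original code):

```python
import tensorflow as tf

def augment(image, label):
    # Random horizontal flip and a small brightness jitter; intended
    # for the training split only, after format_example has scaled
    # the image to [0, 1].
    image = tf.image.random_flip_left_right(image)
    image = tf.image.random_brightness(image, max_delta=0.1)
    # Brightness jitter can push values slightly outside [0, 1]
    image = tf.clip_by_value(image, 0.0, 1.0)
    return image, label

# Usage (hypothetical): train = train_data.map(format_example).map(augment)
```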
The first model I tried:
model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(32, 3, activation='relu', input_shape=(IMG_SIZE, IMG_SIZE, 3)),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(64, activation='relu'),
    tf.keras.layers.Dense(102)
])
model.compile(optimizer='adam',
              loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
              metrics=['accuracy'])
history = model.fit(train_batches, validation_data=validation_batches, epochs=20)
Epoch 20/20
32/32 [==============================] - 4s 127ms/step - loss: 2.9830 - accuracy: 0.2686 - val_loss: 4.8426 - val_accuracy: 0.0637

When I run it for more epochs it overfits: val_loss goes up while val_accuracy does not.
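One standard way to stop training once val_loss starts climbing is an EarlyStopping callback; a minimal sketch (the patience value of 5 is an assumption):

```python
import tensorflow as tf

# Stop training when val_loss has not improved for 5 epochs, and
# restore the weights from the best epoch seen so far.
early_stop = tf.keras.callbacks.EarlyStopping(
    monitor='val_loss',
    patience=5,
    restore_best_weights=True,
)

# Usage (hypothetical):
# history = model.fit(train_batches, validation_data=validation_batches,
#                     epochs=250, callbacks=[early_stop])
```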
The second model (very simple):
model = tf.keras.models.Sequential([
    tf.keras.layers.Flatten(input_shape=(IMG_SIZE, IMG_SIZE, 3)),
    tf.keras.layers.Dense(128, activation='relu'),
    tf.keras.layers.Dense(102)
])
model.compile(optimizer='adam',
              loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
              metrics=['accuracy'])
history = model.fit(train_batches, validation_data=validation_batches, epochs=20)
Doesn't work at all; the loss stays at 4.6250.
The third model:
model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(64, (3,3), activation='relu', input_shape=(IMG_SIZE, IMG_SIZE, 3)),
    tf.keras.layers.MaxPooling2D(2,2),
    tf.keras.layers.Conv2D(64, (3,3), activation='relu'),
    tf.keras.layers.MaxPooling2D(2,2),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(128, activation='relu'),
    tf.keras.layers.Dense(102)
])
base_learning_rate = 0.0001
model.compile(optimizer=tf.optimizers.RMSprop(learning_rate=base_learning_rate),
              loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
              metrics=['accuracy'])
history = model.fit(train_batches, validation_data=validation_batches, epochs=20)
The model overfits; val_accuracy does not get above 0.15. I added dropout layers to this model (trying different rates) and adjusted the kernel sizes, but there was no real improvement. I also tried the adam optimizer.
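Another standard regularizer in the same family as the dropout attempts described is L2 weight decay. A minimal sketch of the third model with it added (the 1e-4 factor is an assumption, not something the asker tried):

```python
import tensorflow as tf

IMG_SIZE = 224

# Same two-conv layout with L2 weight decay on each trainable layer;
# 1e-4 is an arbitrary starting point, not a tuned value.
reg = tf.keras.regularizers.l2(1e-4)
model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(64, (3, 3), activation='relu',
                           kernel_regularizer=reg,
                           input_shape=(IMG_SIZE, IMG_SIZE, 3)),
    tf.keras.layers.MaxPooling2D(2, 2),
    tf.keras.layers.Conv2D(64, (3, 3), activation='relu',
                           kernel_regularizer=reg),
    tf.keras.layers.MaxPooling2D(2, 2),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(128, activation='relu', kernel_regularizer=reg),
    tf.keras.layers.Dense(102),
])
```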
The fourth model:
model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(128, (3,3), activation='relu', input_shape=(IMG_SIZE, IMG_SIZE, 3)),
    tf.keras.layers.Dropout(0.4),
    tf.keras.layers.MaxPooling2D(2,2),
    tf.keras.layers.Conv2D(256, (3,3), activation='relu'),
    tf.keras.layers.Dropout(0.4),
    tf.keras.layers.MaxPooling2D(2,2),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(512, activation='relu'),
    tf.keras.layers.Dropout(0.4),
    tf.keras.layers.Dense(102)
])
model.compile(optimizer='adam',
              loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
              metrics=['accuracy'])
history = model.fit(train_batches, validation_data=validation_batches, epochs=20)
Same problem again: no good validation accuracy. I also tried it with the RMSprop optimizer but couldn't get val_accuracy above 0.2.
The fifth model:
model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(64, (3,3), activation='relu', input_shape=(IMG_SIZE, IMG_SIZE, 3)),
    tf.keras.layers.Dropout(0.2),
    tf.keras.layers.MaxPooling2D(2,2),
    tf.keras.layers.Conv2D(64, (2,2), activation='relu'),
    tf.keras.layers.Dropout(0.2),
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(256, activation='relu'),
    tf.keras.layers.Dropout(0.2),
    tf.keras.layers.Dense(128, activation='relu'),
    tf.keras.layers.Dropout(0.2),
    tf.keras.layers.Dense(102)
])
base_learning_rate = 0.001
model.compile(optimizer=tf.optimizers.RMSprop(learning_rate=base_learning_rate),
              loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
              metrics=['accuracy'])
history = model.fit(train_batches, validation_data=validation_batches, epochs=250)
The highest val_accuracy is around 0.3x. Also tried with adam.
When I try transfer learning with MobileNet, I immediately get to 0.7x within 10 epochs. So I wonder why I can't get anywhere close to that with a self-built CNN. I don't expect 0.8 or to beat MobileNet, but where is my mistake? What would a self-built CNN look like with which I could reach 0.6x-0.7x val_accuracy?

Answer: Your question isn't entirely clear: are you concerned that your model architecture is inferior to, say, MobileNet's, or that your performance is not comparable to transfer learning with MobileNet?

In response to the first point: in general, popular architectures such as ResNet, MobileNet and AlexNet are very carefully constructed networks, so unless you do something quite clever yourself, they are likely to represent the data better than a hand-defined network.

In response to the second: the more complex a model is, the more data it needs to train well without underfitting or overfitting. That is a problem on a dataset like yours (a few thousand images), because it is hard for a complex CNN to learn meaningful rules (kernels) for extracting information from the images without instead learning rules that memorize the limited set of training inputs. In short, you need a larger model to make more accurate predictions, but that in turn requires more data, which you sometimes don't have. I suspect that if you used an untrained MobileNet on the oxford_flowers102 dataset instead of a pretrained one, you would see the same poor performance.

Enter transfer learning. By pretraining a relatively large model on a relatively large dataset (most are pretrained on ImageNet, with its millions of images), the model becomes able to extract relevant information from arbitrary images far better than it could learn from a small dataset. Those general feature-extraction rules carry over to the smaller dataset too, so with only a little fine-tuning a transfer-learning model will likely far outperform any model trained on your dataset alone.

Comments:
- Why do you think you need higher accuracy? Also, what do you observe for the training and validation losses?
- My models are not the best; with a better model it should be possible to reach higher accuracy. I am looking for someone who can show me a better model architecture. In many of my runs I can see overfitting caused by the very low number of images per class: training loss decreases while validation loss increases.
- There are plenty of model architectures out there that achieve >99% accuracy on this dataset. Many are ResNet-based, and implementing a ResNet from scratch is quite straightforward if you want to go that route. As noted in their short descriptions, most (but not all) use transfer learning to reach those results.
- I don't want to use transfer learning, and I don't want to use those complex models. I also didn't ask for a model with 0.99 accuracy. I asked for a better architecture for my model to reach 0.6x-0.7x, because I am sure someone could easily show me one with which I could get there.
- Several of those models achieve good performance (>95%) without transfer learning in their accompanying papers.
- No, my point is that my CNN is not optimal, yet I haven't managed to build a better one. That is why I am explicitly looking for someone who can show me a specific model architecture for this dataset that leads to better performance. I am sure a better one exists; I just cannot "find" it.
- Without transfer learning you are unlikely to do much better. You could, however, manually replicate the MobileNet architecture if you intend to define one by hand. There is also a related research area called "few-shot learning", in which models are trained to generalize well from a very small set of examples; you could try those architectures.
- No, I don't want to manually replicate the MobileNet architecture. My point is that I am looking for someone who can show me a better model architecture, because I am sure that is possible without transfer learning.
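For reference, the kind of transfer-learning setup discussed above can be sketched with tf.keras.applications (the asker's actual MobileNet code is not shown, so this is an assumption; weights=None is used here only to avoid the pretrained-weight download in this sketch, whereas real transfer learning would pass weights='imagenet'):

```python
import tensorflow as tf

IMG_SIZE = 224

# MobileNetV2 backbone without its classification head; pooling='avg'
# gives a flat feature vector per image. For actual transfer learning,
# use weights='imagenet' to load the pretrained ImageNet weights.
base = tf.keras.applications.MobileNetV2(
    input_shape=(IMG_SIZE, IMG_SIZE, 3),
    include_top=False,
    weights=None,
    pooling='avg',
)
base.trainable = False  # freeze the backbone; train only the new head

model = tf.keras.Sequential([
    base,
    tf.keras.layers.Dense(102),  # logits for the 102 flower classes
])
model.compile(optimizer='adam',
              loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
              metrics=['accuracy'])
```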