Python: Eager Few-Shot Object Detection Colab for CenterNet


I am using the TensorFlow Object Detection API, which was recently updated to TensorFlow 2. The authors made a great Colab for it: they fine-tune RetinaNet on a new dataset, but I don't understand how to use it to fine-tune CenterNet (and EfficientDet).

They have the following code to initialize the RetinaNet model:

tf.keras.backend.clear_session()

print('Building model and restoring weights for fine-tuning...', flush=True)
num_classes = 1
pipeline_config = 'models/research/object_detection/configs/tf2/ssd_resnet50_v1_fpn_640x640_coco17_tpu-8.config'
checkpoint_path = 'models/research/object_detection/test_data/checkpoint/ckpt-0'

# Load pipeline config and build a detection model.
#
# Since we are working off of a COCO architecture which predicts 90
# class slots by default, we override the `num_classes` field here to be just
# one (for our new rubber ducky class).
configs = config_util.get_configs_from_pipeline_file(pipeline_config)
model_config = configs['model']
model_config.ssd.num_classes = num_classes
model_config.ssd.freeze_batchnorm = True
detection_model = model_builder.build(
      model_config=model_config, is_training=True)

# Set up object-based checkpoint restore --- RetinaNet has two prediction
# `heads` --- one for classification, the other for box regression.  We will
# restore the box regression head but initialize the classification head
# from scratch (we show the omission below by commenting out the line that
# we would add if we wanted to restore both heads)
fake_box_predictor = tf.compat.v2.train.Checkpoint(
    _base_tower_layers_for_heads=detection_model._box_predictor._base_tower_layers_for_heads,
    # _prediction_heads=detection_model._box_predictor._prediction_heads,
    #    (i.e., the classification head that we *will not* restore)
    _box_prediction_head=detection_model._box_predictor._box_prediction_head,
    )
fake_model = tf.compat.v2.train.Checkpoint(
          _feature_extractor=detection_model._feature_extractor,
          _box_predictor=fake_box_predictor)
ckpt = tf.compat.v2.train.Checkpoint(model=fake_model)
ckpt.restore(checkpoint_path).expect_partial()

# Run model through a dummy image so that variables are created
image, shapes = detection_model.preprocess(tf.zeros([1, 640, 640, 3]))
prediction_dict = detection_model.predict(image, shapes)
_ = detection_model.postprocess(prediction_dict, shapes)
print('Weights restored!')
I tried to do something similar with the CenterNet model (the one used for inference in this Colab tutorial).
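My setup looked roughly like the following (a sketch rather than my exact code; the CenterNet config and checkpoint paths are just placeholders for whichever CenterNet model is being fine-tuned):

tf.keras.backend.clear_session()

num_classes = 1
# Placeholder paths -- whichever CenterNet config/checkpoint is being fine-tuned.
pipeline_config = 'models/research/object_detection/configs/tf2/centernet_hourglass104_512x512_coco17_tpu-8.config'
checkpoint_path = 'centernet_hg104_512x512_coco17_tpu-8/checkpoint/ckpt-0'

# Load pipeline config and build a detection model, overriding num_classes.
configs = config_util.get_configs_from_pipeline_file(pipeline_config)
model_config = configs['model']
# CenterNet keeps num_classes under `center_net` instead of `ssd`.
model_config.center_net.num_classes = num_classes
detection_model = model_builder.build(
    model_config=model_config, is_training=True)

# Restore the full checkpoint directly, without the "fake model" trick.
ckpt = tf.compat.v2.train.Checkpoint(model=detection_model)
ckpt.restore(checkpoint_path).expect_partial()

# Run a dummy image through the model so that variables are created.
image, shapes = detection_model.preprocess(tf.zeros([1, 512, 512, 3]))
prediction_dict = detection_model.predict(image, shapes)
_ = detection_model.postprocess(prediction_dict, shapes)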

However, this raises an exception because of incompatible shapes (since I changed the number of classes). In the RetinaNet example, this trick (as far as I understand it) is what makes the tensors the right shape:

fake_box_predictor = tf.compat.v2.train.Checkpoint(
    _base_tower_layers_for_heads=detection_model._box_predictor._base_tower_layers_for_heads,
    # _prediction_heads=detection_model._box_predictor._prediction_heads,
    #    (i.e., the classification head that we *will not* restore)
    _box_prediction_head=detection_model._box_predictor._box_prediction_head,
    )
fake_model = tf.compat.v2.train.Checkpoint(
          _feature_extractor=detection_model._feature_extractor,
          _box_predictor=fake_box_predictor)

But how can I figure out what I should pass to the Checkpoint constructor? For example:

_base_tower_layers_for_heads=detection_model._box_predictor._base_tower_layers_for_heads
_box_prediction_head=detection_model._box_predictor._box_prediction_head
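One way I can imagine approaching this (I am not sure it is the intended way) is to compare what is stored in the checkpoint with the attributes of the freshly built model:

# Names and shapes of the variables saved in the checkpoint; the
# slash-separated prefixes show which sub-objects the checkpoint was
# written with.
for name, shape in tf.train.list_variables(checkpoint_path):
    print(name, shape)

# Attributes of the freshly built detection model; these names are the
# candidates for the keyword arguments of tf.compat.v2.train.Checkpoint(...).
print(list(vars(detection_model).keys()))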

Could you tell me how you solved this problem?