Faster R-CNN and FPN with the TensorFlow Object Detection API (Python)

I want to use Faster R-CNN ResNet-50 with FPN (Feature Pyramid Network) to get better accuracy on small objects (with SSD the results are worse than with Faster R-CNN), which is why I want to try Faster R-CNN with FPN. But I am a bit stuck and don't know how to add it to the config file. I am currently using the TensorFlow Object Detection API.

Do I have to replace the first_stage_anchor_generator (currently a grid_anchor_generator) with a multiscale_anchor_generator?

Here is the configuration for FPN, if I understand the protos correctly:

message MultiscaleAnchorGenerator {
  // minimum level in feature pyramid
  optional int32 min_level = 1 [default = 3];

  // maximum level in feature pyramid
  optional int32 max_level = 2 [default = 7];

  // Scale of anchor to feature stride
  optional float anchor_scale = 3 [default = 4.0];

  // Aspect ratios for anchors at each grid point.
  repeated float aspect_ratios = 4;

  // Number of intermediate scale each scale octave
  optional int32 scales_per_octave = 5 [default = 2];

  // Whether to produce anchors in normalized coordinates.
  optional bool normalize_coordinates = 6 [default = true];
}
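
If that is the right proto, I assume the first-stage block in the pipeline config would look roughly like this (only a sketch, assuming the first-stage builder accepts a multiscale_anchor_generator; the level range and aspect ratios below are illustrative, not values from a reference FPN config):

    first_stage_anchor_generator {
      multiscale_anchor_generator {
        min_level: 2            # illustrative; should match the pyramid levels the backbone produces
        max_level: 6
        anchor_scale: 4.0
        aspect_ratios: 0.5
        aspect_ratios: 1.0
        aspect_ratios: 2.0
        scales_per_octave: 2
      }
    }
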
And here is my current Faster R-CNN config file:

model {
  faster_rcnn {
    num_classes: 4
    image_resizer {
      keep_aspect_ratio_resizer {
        min_dimension: 1920
        max_dimension: 1920
      }
    }
    feature_extractor {
      type: "faster_rcnn_resnet50"
      first_stage_features_stride: 16
    }
    first_stage_anchor_generator {
      grid_anchor_generator {
        height_stride: 16
        width_stride: 16
        scales: 0.10000000149011612
        scales: 0.25
        scales: 0.5
        scales: 1.0
        scales: 2.0
        aspect_ratios: 0.25
        aspect_ratios: 0.5
        aspect_ratios: 1.0
        aspect_ratios: 2.0
      }
    }
    first_stage_box_predictor_conv_hyperparams {
      op: CONV
      regularizer {
        l2_regularizer {
          weight: 0.0
        }
      }
      initializer {
        truncated_normal_initializer {
          stddev: 0.009999999776482582
        }
      }
    }
    first_stage_nms_score_threshold: 0.0
    first_stage_nms_iou_threshold: 0.699999988079071
    first_stage_max_proposals: 300
    first_stage_localization_loss_weight: 2.0
    first_stage_objectness_loss_weight: 1.0
    initial_crop_size: 14
    maxpool_kernel_size: 2
    maxpool_stride: 2
    second_stage_box_predictor {
      mask_rcnn_box_predictor {
        fc_hyperparams {
          op: FC
          regularizer {
            l2_regularizer {
              weight: 0.0
            }
          }
          initializer {
            variance_scaling_initializer {
              factor: 1.0
              uniform: true
              mode: FAN_AVG
            }
          }
        }
        use_dropout: false
        dropout_keep_probability: 1.0
      }
    }
    second_stage_post_processing {
      batch_non_max_suppression {
        score_threshold: 0.0
        iou_threshold: 0.6000000238418579
        max_detections_per_class: 100
        max_total_detections: 300
      }
      score_converter: SOFTMAX
    }
    second_stage_localization_loss_weight: 2.0
    second_stage_classification_loss_weight: 1.0
  }
}
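
From what I have read so far, swapping the anchor generator alone would not give me FPN; the feature extractor itself also has to build the feature pyramid. Below is just my guess at what that part could look like; the type string is an assumption on my side (I still need to check which names are registered in object_detection/builders/model_builder.py for my version):

    feature_extractor {
      type: "faster_rcnn_resnet50_fpn_keras"   # assumed name, to be verified in model_builder.py
      # with an FPN extractor the RPN runs on several pyramid levels,
      # so a single first_stage_features_stride: 16 presumably no longer applies
    }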