Python model_main.py fails to train MobileNet SSD v2 - TensorFlow Object Detection API


I am using TensorFlow 1.15 and trying to fine-tune MobileNet SSD v2 on my own dataset with the TensorFlow Object Detection API.

I created my TF records the way described in the TF repo, and I read the images like this:

with tf.gfile.GFile(folder_path+"temp.jpeg", 'rb') as fid:
    encoded_image_data = fid.read()
I divided my points by the required width and height, and I adjusted the config to match my number of classes, but when I run the training process I still get the error below (I have tried a lot of things to make it work, with no success).
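
Since the assertion that fails below prints normalized box coordinates, it is worth validating every box before it is written. Here is a minimal sketch of such a check, assuming the per-image dictionary layout used by the conversion code below (the helper name check_boxes is mine, not from the original code):

def check_boxes(image_prop_dict):
    # Each normalized coordinate should lie in [0, 1] and min should be
    # strictly below max; this is the kind of constraint the failing
    # assertion below appears to enforce.
    for xmin, xmax, ymin, ymax in zip(image_prop_dict['x_mins'],
                                      image_prop_dict['x_maxs'],
                                      image_prop_dict['y_mins'],
                                      image_prop_dict['y_maxs']):
        assert 0.0 <= xmin < xmax <= 1.0, (xmin, xmax)
        assert 0.0 <= ymin < ymax <= 1.0, (ymin, ymax)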

Edit: this is the conversion-to-TF-record code

def create_tf_example(image_prop_dict):
    height = image_prop_dict['im_height']
    width = image_prop_dict['im_width']
    filename = image_prop_dict['im_name']  # Filename of the image. Empty if the image is not from a file
    encoded_image_data = image_prop_dict['encoded_image']  # Encoded image bytes
    image_format = bytes('jpeg', 'utf-8')  # b'jpeg' or b'png'
    xmins = image_prop_dict['x_mins']  # List of normalized left x coordinates of the bounding boxes (1 per box)
    xmaxs = image_prop_dict['x_maxs']  # List of normalized right x coordinates of the bounding boxes
                                       # (1 per box)
    ymins = image_prop_dict['x_mins']  # List of normalized top y coordinates of the bounding boxes (1 per box)
                                       # NOTE: this reads 'x_mins', not 'y_mins'
    ymaxs = image_prop_dict['y_maxs']  # List of normalized bottom y coordinates of the bounding boxes
                                       # (1 per box)
    classes_text = image_prop_dict['classes_labels']  # List of string class names of the bounding boxes (1 per box)
    classes = image_prop_dict['classes_ints']  # List of integer class ids of the bounding boxes (1 per box)

    tf_example = tf.train.Example(features=tf.train.Features(feature={
        'image/height': dataset_util.int64_feature(height),
        'image/width': dataset_util.int64_feature(width),
        'image/filename': dataset_util.bytes_feature(filename),
        'image/source_id': dataset_util.bytes_feature(filename),
        'image/encoded': dataset_util.bytes_feature(encoded_image_data),
        'image/format': dataset_util.bytes_feature(image_format),
        'image/object/bbox/xmin': dataset_util.float_list_feature(xmins),
        'image/object/bbox/xmax': dataset_util.float_list_feature(xmaxs),
        'image/object/bbox/ymin': dataset_util.float_list_feature(ymins),
        'image/object/bbox/ymax': dataset_util.float_list_feature(ymaxs),
        'image/object/class/text': dataset_util.bytes_list_feature(classes_text),
        'image/object/class/label': dataset_util.int64_list_feature(classes),
    }))
    return tf_example


def convert_jsons_in_folder(folder_path, classes_dict):
    """Loops over the folder of json label files and builds one image
    properties dictionary per json.
    :param folder_path: str, path to the folder containing the json files
    :param classes_dict: dict[class name] = class number
    """
    json_name_list = []
    image_dictionaries = []
    for file_name in os.listdir(folder_path):
        if file_name.endswith(".json"):
            json_name_list.append(file_name)
    for json_file_name in tqdm(json_name_list):
        # Read the json file, get the list of boxes and labels,
        # then fill a dictionary and save it into the list of dictionaries
        json_path = os.path.join(folder_path, json_file_name)
        # NOTE: this reads the same folder_path + "temp.jpeg" for every json
        with tf.gfile.GFile(folder_path + "temp.jpeg", 'rb') as fid:
            encoded_image_data = fid.read()
        with open(json_path) as json_file:
            json_data = json.load(json_file)
            im_width = json_data["imageWidth"]
            im_height = json_data["imageHeight"]
            image_dictionary = {'im_height': im_height,
                                'im_width': im_width,
                                'im_name': bytes(json_file_name.replace(".json", ".jpg"), 'utf-8'),
                                'encoded_image': encoded_image_data,  # image.tostring(),
                                'x_mins': [],
                                'x_maxs': [],
                                'y_mins': [],
                                'y_maxs': [],
                                'classes_labels': [],
                                'classes_ints': []}
            for labelme_detection in json_data["shapes"]:
                points = labelme_detection["points"]
                if len(points) > 0:
                    class_label = labelme_detection["label"]
                    # Calculate relative points using the original width and height
                    # (the boxes are on the original image)
                    image_dictionary['x_mins'].append(min(points[0][0], points[1][0]) / im_width)
                    image_dictionary['x_maxs'].append(max(points[0][0], points[1][0]) / im_width)
                    image_dictionary['y_mins'].append(min(points[0][1], points[1][1]) / im_height)
                    image_dictionary['y_maxs'].append(max(points[0][1], points[1][1]) / im_height)
                    bytes_label = bytes(class_label, 'utf-8')
                    image_dictionary['classes_labels'].append(bytes_label)
                    image_dictionary['classes_ints'].append(classes_dict[class_label] + 1)
            image_dictionaries.append(image_dictionary)
    return image_dictionaries

# ..
# ..
# main
examples = convert_jsons_in_folder(args.source, classes_dict)
# ..
# ..
# ..
for i in range(len(examples)):
    # for example in examples:
    tf_example = create_tf_example(examples[i])
    eval_writer.write(tf_example.SerializeToString())
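
Independently of the conversion code, the finished record can be read back and audited. A minimal sketch for TF 1.15, using the training record path from the config below (tf.python_io.tf_record_iterator is deprecated in 1.15 but still available):

import tensorflow as tf

for serialized in tf.python_io.tf_record_iterator("pathto/train_608.record"):
    example = tf.train.Example.FromString(serialized)
    feature = example.features.feature
    xmins = feature['image/object/bbox/xmin'].float_list.value
    xmaxs = feature['image/object/bbox/xmax'].float_list.value
    ymins = feature['image/object/bbox/ymin'].float_list.value
    ymaxs = feature['image/object/bbox/ymax'].float_list.value
    for xmin, xmax, ymin, ymax in zip(xmins, xmaxs, ymins, ymaxs):
        if not (0.0 <= xmin < xmax <= 1.0 and 0.0 <= ymin < ymax <= 1.0):
            # Print the offending file name and box so it can be traced
            # back to its labelme json
            print(feature['image/filename'].bytes_list.value, xmin, xmax, ymin, ymax)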

It is indeed the data; the code above is what I used to convert it to TF records in order to fix the error. The full error:
    ...
    
    ...
    
    tensorflow.python.framework.errors_impl.InvalidArgumentError: {{function_node Dataset_map_transform_and_pad_input_data_fn_423}} assertion failed: [[0.576413691][0.335303724][0.766369045]...] [[0.155026451][0.439418][0.299206346]...]     [[{{node Assert/AssertGuard/Assert}}]]      [[IteratorGetNext]]
    
    During handling of the above exception, another exception occurred:
    
    Traceback (most recent call last):
      File "./object_detection/model_main.py", line 108, in <module>
        tf.app.run()
      File "/home/mai/anaconda3/envs/tf/lib/python3.6/site-packages/tensorflow_core/python/platform/app.py", line 40, in run
        _run(main=main, argv=argv, flags_parser=_parse_flags_tolerate_undef)
      File "/home/mai/anaconda3/envs/tf/lib/python3.6/site-packages/absl/app.py", line 299, in run
        _run_main(main, args)
      File "/home/mai/anaconda3/envs/tf/lib/python3.6/site-packages/absl/app.py", line 250, in _run_main
        sys.exit(main(argv))
      File "./object_detection/model_main.py", line 104, in main
        tf.estimator.train_and_evaluate(estimator, train_spec, eval_specs[0])
      File "/home/mai/anaconda3/envs/tf/lib/python3.6/site-packages/tensorflow_estimator/python/estimator/training.py", line 473, in train_and_evaluate
        return executor.run()
      File "/home/mai/anaconda3/envs/tf/lib/python3.6/site-packages/tensorflow_estimator/python/estimator/training.py", line 613, in run
        return self.run_local()
      File "/home/mai/anaconda3/envs/tf/lib/python3.6/site-packages/tensorflow_estimator/python/estimator/training.py", line 714, in run_local
        saving_listeners=saving_listeners)
      File "/home/mai/anaconda3/envs/tf/lib/python3.6/site-packages/tensorflow_estimator/python/estimator/estimator.py", line 370, in train
        loss = self._train_model(input_fn, hooks, saving_listeners)
      File "/home/mai/anaconda3/envs/tf/lib/python3.6/site-packages/tensorflow_estimator/python/estimator/estimator.py", line 1161, in _train_model
        return self._train_model_default(input_fn, hooks, saving_listeners)
      File "/home/mai/anaconda3/envs/tf/lib/python3.6/site-packages/tensorflow_estimator/python/estimator/estimator.py", line 1195, in _train_model_default
        saving_listeners)
      File "/home/mai/anaconda3/envs/tf/lib/python3.6/site-packages/tensorflow_estimator/python/estimator/estimator.py", line 1494, in _train_with_estimator_spec
        _, loss = mon_sess.run([estimator_spec.train_op, estimator_spec.loss])
      File "/home/mai/anaconda3/envs/tf/lib/python3.6/site-packages/tensorflow_core/python/training/monitored_session.py", line 754, in run
        run_metadata=run_metadata)
      File "/home/mai/anaconda3/envs/tf/lib/python3.6/site-packages/tensorflow_core/python/training/monitored_session.py", line 1259, in run
        run_metadata=run_metadata)
      File "/home/mai/anaconda3/envs/tf/lib/python3.6/site-packages/tensorflow_core/python/training/monitored_session.py", line 1360, in run
        raise six.reraise(*original_exc_info)
      File "/home/mai/anaconda3/envs/tf/lib/python3.6/site-packages/six.py", line 703, in reraise
        raise value
      File "/home/mai/anaconda3/envs/tf/lib/python3.6/site-packages/tensorflow_core/python/training/monitored_session.py", line 1345, in run
        return self._sess.run(*args, **kwargs)
      File "/home/mai/anaconda3/envs/tf/lib/python3.6/site-packages/tensorflow_core/python/training/monitored_session.py", line 1418, in run
        run_metadata=run_metadata)
      File "/home/mai/anaconda3/envs/tf/lib/python3.6/site-packages/tensorflow_core/python/training/monitored_session.py", line 1176, in run
        return self._sess.run(*args, **kwargs)
      File "/home/mai/anaconda3/envs/tf/lib/python3.6/site-packages/tensorflow_core/python/client/session.py", line 956, in run
        run_metadata_ptr)
      File "/home/mai/anaconda3/envs/tf/lib/python3.6/site-packages/tensorflow_core/python/client/session.py", line 1180, in _run
        feed_dict_tensor, options, run_metadata)
      File "/home/mai/anaconda3/envs/tf/lib/python3.6/site-packages/tensorflow_core/python/client/session.py", line 1359, in _do_run
        run_metadata)
      File "/home/mai/anaconda3/envs/tf/lib/python3.6/site-packages/tensorflow_core/python/client/session.py", line 1384, in _do_call
        raise type(e)(node_def, op, message)
tensorflow.python.framework.errors_impl.InvalidArgumentError:  assertion failed: [[0.576413691][0.335303724][0.766369045]...] [[0.155026451][0.439418][0.299206346]...]      [[{{node Assert/AssertGuard/Assert}}]]      [[IteratorGetNext]]
This is the pipeline config:

# SSD with Mobilenet v2 configuration for MSCOCO Dataset.
# Users should configure the fine_tune_checkpoint field in the train config as
# well as the label_map_path and input_path fields in the train_input_reader and
# eval_input_reader. Search for "PATH_TO_BE_CONFIGURED" to find the fields that
# should be configured.

model {
  ssd {
    num_classes: 5
    box_coder {
      faster_rcnn_box_coder {
        y_scale: 10.0
        x_scale: 10.0
        height_scale: 5.0
        width_scale: 5.0
      }
    }
    matcher {
      argmax_matcher {
        matched_threshold: 0.5
        unmatched_threshold: 0.5
        ignore_thresholds: false
        negatives_lower_than_unmatched: true
        force_match_for_each_row: true
      }
    }
    similarity_calculator {
      iou_similarity {
      }
    }
    anchor_generator {
      ssd_anchor_generator {
        num_layers: 6
        min_scale: 0.2
        max_scale: 0.95
        aspect_ratios: 1.0
        aspect_ratios: 2.0
        aspect_ratios: 0.5
        aspect_ratios: 3.0
        aspect_ratios: 0.3333
      }
    }
    image_resizer {
      fixed_shape_resizer {
        height: 300
        width: 300
      }
    }
    box_predictor {
      convolutional_box_predictor {
        min_depth: 0
        max_depth: 0
        num_layers_before_predictor: 0
        use_dropout: false
        dropout_keep_probability: 0.8
        kernel_size: 1
        box_code_size: 4
        apply_sigmoid_to_scores: false
        conv_hyperparams {
          activation: RELU_6,
          regularizer {
            l2_regularizer {
              weight: 0.00004
            }
          }
          initializer {
            truncated_normal_initializer {
              stddev: 0.03
              mean: 0.0
            }
          }
          batch_norm {
            train: true,
            scale: true,
            center: true,
            decay: 0.9997,
            epsilon: 0.001,
          }
        }
      }
    }
    feature_extractor {
      type: 'ssd_mobilenet_v2'
      min_depth: 16
      depth_multiplier: 1.0
      conv_hyperparams {
        activation: RELU_6,
        regularizer {
          l2_regularizer {
            weight: 0.00004
          }
        }
        initializer {
          truncated_normal_initializer {
            stddev: 0.03
            mean: 0.0
          }
        }
        batch_norm {
          train: true,
          scale: true,
          center: true,
          decay: 0.9997,
          epsilon: 0.001,
        }
      }
    }
    loss {
      classification_loss {
        weighted_sigmoid {
        }
      }
      localization_loss {
        weighted_smooth_l1 {
        }
      }
      hard_example_miner {
        num_hard_examples: 3000
        iou_threshold: 0.99
        loss_type: CLASSIFICATION
        max_negatives_per_positive: 3
        min_negatives_per_image: 3
      }
      classification_weight: 1.0
      localization_weight: 1.0
    }
    normalize_loss_by_num_matches: true
    post_processing {
      batch_non_max_suppression {
        score_threshold: 1e-8
        iou_threshold: 0.6
        max_detections_per_class: 100
        max_total_detections: 100
      }
      score_converter: SIGMOID
    }
  }
}

train_config: {
  batch_size: 32
  optimizer {
    rms_prop_optimizer: {
      learning_rate: {
        exponential_decay_learning_rate {
          initial_learning_rate: 0.004
          decay_steps: 800720
          decay_factor: 0.95
        }
      }
      momentum_optimizer_value: 0.9
      decay: 0.9
      epsilon: 1.0
    }
  }
  fine_tune_checkpoint: "/home/mai/Downloads/ssdlite_mobilenet_v2_coco_2018_05_09/checkpoints/model.ckpt"
  from_detection_checkpoint: true # added 
  fine_tune_checkpoint_type:  "detection"
  # Note: The below line limits the training process to 10K steps. This
  # effectively bypasses the learning rate schedule (the learning rate will
  # never decay). Remove the below line to train indefinitely.
  num_steps: 10000
  data_augmentation_options {
    random_horizontal_flip {
    }
  }
  data_augmentation_options {
    ssd_random_crop {
    }
  }
}

train_input_reader: {
  tf_record_input_reader {
    input_path: "pathto/train_608.record"
  }
  label_map_path: "pathto/vehicle_label_map.pbtxt"
}

eval_config: {
  num_examples: 100
  # Note: The below line limits the evaluation process to 10 evaluations.
  # Remove the below line to evaluate indefinitely.
  max_evals: 10
  metrics_set: "coco_detection_metrics"
}

eval_input_reader: {
  tf_record_input_reader {
    input_path: "pathto/frames/eval_608.record"
  }
  label_map_path: "pathto/vehicle_label_map.pbtxt"
  shuffle: false
  num_readers: 1
}
And the given label map pbtxt:

item {
  name: "car"
  id: 1
  display_name: "car"
}
item {
  name: "motorbike"
  id: 2
  display_name: "motorbike"
}
item {
  name: "bus"
  id: 3
  display_name: "bus"
}
item {
  name: "truck"
  id: 4
  display_name: "truck"
}
item {
  name: "van"
  id: 5
  display_name: "van"
}
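
One detail worth checking against this label map: the conversion code stores classes_dict[class_label] + 1, so for the ids 1-5 above to line up, classes_dict has to be zero-based. A sketch of that assumption (the actual dict is not shown in the post):

classes_dict = {
    "car": 0,
    "motorbike": 1,
    "bus": 2,
    "truck": 3,
    "van": 4,
}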