Car Damage Detection for iOS 11.0

Prepare Training Data

inputs: raw image files, annotation files
outputs: train.txt

  1. Generate your annotation and image files:
    Use the graphical image annotation tool LabelImg v1.7.0 (https://github.com/tzutalin/labelImg/releases/tag/v1.7.0).
    Annotations are saved as XML files in PASCAL VOC format.

path to raw images: ./car_dataset_web/JPEGImages
path to annotation files: ./car_dataset_web/Annotations
All damage types are labeled as 'crack'.

  2. Generate your training dataset.
    Use ./car_dataset_web/preprocessing.ipynb to generate train.txt, which contains the path of each raw image together with its annotation information. We will use train.txt to train our model.
    Here is the format of train.txt:
    Here is the format of train.txt:

One row per image;
Row format: image_file_path box1 box2 ... boxN;
Box format: x_min,y_min,x_max,y_max,class_id (no spaces within a box).

Here is an example:

  path/to/img1.jpg 50,100,150,200,0 30,50,200,120,3
  path/to/img2.jpg 120,300,250,600,2
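The preprocessing notebook itself is not shown here, but the conversion it performs can be sketched as follows (a minimal sketch; the function name and the single-class list are assumptions, matching the note above that all damage is labeled 'crack'):

```python
import xml.etree.ElementTree as ET

# Hypothetical class list for this project; index = class_id in train.txt.
CLASSES = ["crack"]

def voc_to_line(image_path, xml_text):
    """Convert one PASCAL VOC annotation (given as an XML string) into a
    train.txt row: image_file_path box1 box2 ... boxN, where each box is
    x_min,y_min,x_max,y_max,class_id with no internal spaces."""
    root = ET.fromstring(xml_text)
    boxes = []
    for obj in root.iter("object"):
        name = obj.find("name").text
        if name not in CLASSES:
            continue  # skip labels we do not train on
        cls_id = CLASSES.index(name)
        bb = obj.find("bndbox")
        coords = [int(float(bb.find(tag).text))
                  for tag in ("xmin", "ymin", "xmax", "ymax")]
        boxes.append(",".join(map(str, coords)) + "," + str(cls_id))
    return " ".join([image_path] + boxes)
```

Running this over every XML file in ./car_dataset_web/Annotations and writing one returned row per image yields a train.txt in the format shown above.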

Start Training

inputs: train.txt
outputs: Keras model weights

  1. Open folder ./keras-yolo3.

  2. Modify train.py: annotation_path, classes_path, anchors_path, and log_dir. The model weights files are saved under ./logs.

  3. Start training: python train.py

  4. The final model weights file is saved by line 86; you can modify the save path if you want:

  model.save_weights(log_dir + 'trained_weights_tiny_final.h5')
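The four variables from step 2 sit near the top of train.py; a sketch of how they might be set for this single-class project (the variable names follow the upstream keras-yolo3 repo, but the file names under model_data/ are assumptions):

```python
# Near the top of keras-yolo3's train.py:
annotation_path = 'train.txt'                      # produced by preprocessing.ipynb
log_dir = 'logs/'                                  # weights are written here
classes_path = 'model_data/car_classes.txt'        # one class name per line, e.g. "crack"
anchors_path = 'model_data/tiny_yolo_anchors.txt'  # tiny-YOLO anchor boxes
```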

Test (Optional)

Test all the images located at ./car_dataset_web/JPEGImages; the detection results are saved to ./car_dataset_web/PredictedTinyImages/. You can modify these paths in yolo_video.py (lines 8 and 22).
Usage: python yolo_video.py --images

Convert Keras model into CoreML model

inputs: Keras model weights file: ./result/trained_weights_tiny_final.h5
outputs: CoreML model file: your_model_name.mlmodel

  1. Open ./keras2ml.ipynb and run it step by step.
  2. You can change the save path at:
  coreml_model.save('your_model_name.mlmodel')
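The notebook's conversion step presumably relies on the Keras converter in coremltools 3.x; a hedged sketch of that cell (the input names and scale are assumptions for a tiny-YOLO image model, and `model` stands for the network the notebook rebuilds, since train.py saves weights only):

```python
import coremltools

# keras2ml.ipynb is assumed to rebuild the tiny-YOLO Keras model and load
# the trained weights before this step, e.g.:
#   model.load_weights('result/trained_weights_tiny_final.h5')
coreml_model = coremltools.converters.keras.convert(
    model,
    input_names='image',
    image_input_names='image',  # mark the input as an image so Vision can feed camera frames
    image_scale=1 / 255.0)      # YOLO normalizes pixels to [0, 1]

coreml_model.save('your_model_name.mlmodel')
```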

Load the model into our App

  1. Open ./YOLOv3-CoreML/YOLOv3-CoreML/YOLOv3-CoreML.xcodeproj
  2. Add the .mlmodel model file into YOLOv3-CoreML.xcodeproj
  3. Open YOLO.swift, modify the model class at line 23:
  let model = your_model_name()
  4. Adjust the confidence threshold and IoU threshold (optional).
    Open YOLO.swift, lines 11 and 12:
  let confidenceThreshold: Float = 0.2
  let iouThreshold: Float = 0.2
  5. Connect your iPhone to your MacBook and follow this reference blog to test the App: https://blog.csdn.net/zhenggaoxing/article/details/79042382
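For reference, the IoU threshold above is compared against the intersection-over-union of two candidate boxes during non-max suppression; a minimal sketch of that quantity in Python (the Swift code in YOLO.swift computes the same value):

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x_min, y_min, x_max, y_max)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])  # intersection top-left
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])  # intersection bottom-right
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    union = ((a[2] - a[0]) * (a[3] - a[1])
             + (b[2] - b[0]) * (b[3] - b[1]) - inter)
    return inter / union if union > 0 else 0.0
```

Typically, a detection is suppressed when its IoU with a higher-confidence detection of the same class exceeds the threshold (0.2 here), so a lower iouThreshold removes overlapping boxes more aggressively.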