
Introduction

This directory contains Python software and an iOS app developed by Ultralytics LLC, and is freely available for redistribution under the GPL-3.0 license. For more information please visit https://www.ultralytics.com.

Description

The https://github.com/ultralytics/yolov3 repo contains inference and training code for YOLOv3 in PyTorch. The code works on Linux, macOS and Windows. Training is done on the COCO dataset by default: https://cocodataset.org/#home. Credit to Joseph Redmon for YOLO (https://pjreddie.com/darknet/yolo/) and to Erik Lindernoren for the PyTorch implementation this work is based on (https://github.com/eriklindernoren/PyTorch-YOLOv3).

Requirements

Python 3.7 or later with the following packages, installable via pip3 install -U -r requirements.txt (a quick environment check is sketched after the list):

  • numpy
  • torch >= 1.0.0
  • opencv-python
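
As a quick sanity check that the environment is ready (a minimal sketch; exact version strings will vary with your install):

# Verify the core dependencies from requirements.txt import correctly.
import cv2
import numpy as np
import torch

print('numpy ', np.__version__)
print('opencv', cv2.__version__)
print('torch ', torch.__version__, '| CUDA available:', torch.cuda.is_available())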

Tutorials

Training

Start Training: Run train.py to begin training after downloading COCO data with data/get_coco_dataset.sh.

Resume Training: Run train.py --resume to resume training from the latest checkpoint, weights/latest.pt.

Each epoch trains on 120,000 images from the COCO train and validation sets, and tests on 5,000 images from the COCO validation set. Default training settings produce the loss plots below, with a training speed of 0.6 s/batch on a 1080 Ti (18 epochs/day) or 0.45 s/batch on a 2080 Ti.
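
As a rough check of those throughput figures (a back-of-the-envelope sketch only; the batch size of 16 is an assumption, and per-epoch test time is ignored):

# Estimate epochs/day from the quoted 0.6 s/batch on a 1080 Ti.
images_per_epoch = 120000
batch_size = 16          # assumed default; adjust to your --batch-size
seconds_per_batch = 0.6  # quoted 1080 Ti training speed

seconds_per_epoch = images_per_epoch / batch_size * seconds_per_batch
print('%.1f h/epoch, ~%.0f epochs/day' % (seconds_per_epoch / 3600, 86400 / seconds_per_epoch))
# roughly 1.2 h/epoch and ~19 epochs/day before test time, consistent with the ~18 epochs/day quoted above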

Training loss plots can be reproduced with: from utils import utils; utils.plot_results()

Image Augmentation

datasets.py applies random OpenCV-powered (https://opencv.org/) augmentation to the input images in accordance with the specifications below. Augmentation is applied only during training, not during inference. Bounding boxes are automatically tracked and updated with the images. 416 x 416 examples are pictured below; an illustrative sketch of this style of augmentation follows the table.

Augmentation     Description
Translation      +/- 10% (vertical and horizontal)
Rotation         +/- 5 degrees
Shear            +/- 2 degrees (vertical and horizontal)
Scale            +/- 10%
Reflection       50% probability (horizontal-only)
HSV Saturation   +/- 50%
HSV Intensity    +/- 50%
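
The snippet below is an illustrative sketch of this style of augmentation, not the actual datasets.py implementation: the function name is made up, the ranges are taken from the table above, and the corresponding bounding-box updates are omitted for brevity.

import math
import random
import cv2
import numpy as np

def random_affine_hsv(img):
    # Illustrative augmentation: random affine warp, horizontal flip and HSV jitter.
    h, w = img.shape[:2]

    # Rotation +/- 5 degrees and scale +/- 10%, built into one affine matrix
    M = cv2.getRotationMatrix2D((w / 2, h / 2), random.uniform(-5, 5), random.uniform(0.9, 1.1))
    M[0, 2] += random.uniform(-0.1, 0.1) * w                  # horizontal translation +/- 10%
    M[1, 2] += random.uniform(-0.1, 0.1) * h                  # vertical translation +/- 10%
    M[0, 1] += math.tan(math.radians(random.uniform(-2, 2)))  # shear +/- 2 degrees
    img = cv2.warpAffine(img, M, (w, h), borderValue=(127, 127, 127))

    # Reflection: horizontal flip with 50% probability
    if random.random() < 0.5:
        img = cv2.flip(img, 1)

    # HSV saturation and intensity (value) jitter of +/- 50%
    hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV).astype(np.float32)
    hsv[..., 1] *= random.uniform(0.5, 1.5)
    hsv[..., 2] *= random.uniform(0.5, 1.5)
    return cv2.cvtColor(np.clip(hsv, 0, 255).astype(np.uint8), cv2.COLOR_HSV2BGR)

# Usage: augmented = random_affine_hsv(cv2.imread('data/samples/zidane.jpg'))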

Inference

Run detect.py to apply trained weights to an image, such as zidane.jpg from the data/samples folder:

YOLOv3: detect.py --cfg cfg/yolov3.cfg --weights weights/yolov3.pt

YOLOv3-tiny: detect.py --cfg cfg/yolov3-tiny.cfg --weights weights/yolov3-tiny.pt

Webcam

Run detect.py with webcam=True to show a live webcam feed.
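
A minimal sketch of the kind of loop a live webcam mode runs (not the actual detect.py implementation; the detection call is left as a placeholder):

import cv2

cap = cv2.VideoCapture(0)                    # open the default webcam
while cap.isOpened():
    ok, frame = cap.read()                   # grab a frame
    if not ok:
        break
    # ... run YOLOv3 inference on `frame` and draw the detected boxes here ...
    cv2.imshow('YOLOv3', frame)              # display the (annotated) frame
    if cv2.waitKey(1) & 0xFF == ord('q'):    # press q to quit
        break
cap.release()
cv2.destroyAllWindows()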

Pretrained Weights

Darknet format:

PyTorch format:

mAP

Run test.py --save-json --conf-thres 0.005 to test the official YOLOv3 weights weights/yolov3.weights against the 5,000 validation images. Compare to the 0.579 mAP at 608 x 608 reported in the darknet paper (https://arxiv.org/abs/1804.02767).

Run test.py --weights weights/latest.pt to validate against the latest training results. Hyperparameter settings and loss-equation changes affect these results significantly, and additional trade studies may be needed to improve them further.

# Clone the repo (and optionally download the COCO dataset)
sudo rm -rf yolov3 && git clone https://github.com/ultralytics/yolov3
# bash yolov3/data/get_coco_dataset.sh
# Build the COCO API and copy pycocotools into the repo for --save-json evaluation
sudo rm -rf cocoapi && git clone https://github.com/cocodataset/cocoapi && cd cocoapi/PythonAPI && make && cd ../.. && cp -r cocoapi/PythonAPI/pycocotools yolov3
# Evaluate the official weights
cd yolov3 && python3 test.py --save-json --conf-thres 0.005

...

Namespace(batch_size=32, cfg='cfg/yolov3.cfg', conf_thres=0.005, data_cfg='cfg/coco.data', img_size=416, iou_thres=0.5, nms_thres=0.45, save_json=True, weights='weights/yolov3.weights')

loading annotations into memory...
Done (t=4.17s)
creating index...
index created!
Loading and preparing results...
DONE (t=1.75s)
creating index...
index created!
Running per image evaluation...
Evaluate annotation type *bbox*
DONE (t=39.30s).
Accumulating evaluation results...
DONE (t=4.63s).
 Average Precision  (AP) @[ IoU=0.50:0.95 | area=   all | maxDets=100 ] = 0.307
 Average Precision  (AP) @[ IoU=0.50      | area=   all | maxDets=100 ] = 0.545
 Average Precision  (AP) @[ IoU=0.75      | area=   all | maxDets=100 ] = 0.309
 Average Precision  (AP) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.140
 Average Precision  (AP) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.333
 Average Precision  (AP) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.453
 Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets=  1 ] = 0.266
 Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets= 10 ] = 0.396
 Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets=100 ] = 0.415
 Average Recall     (AR) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.222
 Average Recall     (AR) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.449
 Average Recall     (AR) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.575

Contact

For questions or comments please contact Glenn Jocher at glenn.jocher@ultralytics.com or visit us at https://contact.ultralytics.com.