
Introduction

This directory contains PyTorch YOLOv3 software developed by Ultralytics LLC. It is freely available for redistribution under the GPL-3.0 license. For more information please visit https://www.ultralytics.com.

Description

The https://github.com/ultralytics/yolov3 repo contains inference and training code for YOLOv3 in PyTorch. The code works on Linux, macOS and Windows. Training is done on the COCO dataset by default: https://cocodataset.org/#home. Credit to Joseph Redmon for YOLO: https://pjreddie.com/darknet/yolo/.

Requirements

Python 3.7 or later, with all packages from requirements.txt installed (pip install -U -r requirements.txt), including:

  • torch >= 1.4
  • opencv-python
  • Pillow

All dependencies are included in the associated docker images. Docker requirements are:

  • Nvidia Driver >= 440.44
  • Docker Engine - CE >= 19.03
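
As a quick environment sanity check, the following minimal sketch (not part of the repository) confirms the installed PyTorch version and whether a CUDA-capable GPU is visible:

# Quick environment sanity check: PyTorch version and GPU visibility
import torch

print(torch.__version__)             # should report 1.4 or later
print(torch.cuda.is_available())     # True if the Nvidia driver/CUDA stack is working
if torch.cuda.is_available():
    print(torch.cuda.get_device_name(0))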

Tutorials

Jupyter Notebook

Our Jupyter notebook provides quick training, inference and testing examples.

Training

Start Training: run python3 train.py to begin training after downloading COCO data with data/get_coco_dataset.sh. Each epoch trains on 117,263 images from the COCO train and validation sets, and tests on 5,000 images from the COCO validation set.

Resume Training: python3 train.py --resume to resume training from weights/last.pt.
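
Resuming restores the training state saved in weights/last.pt. As a rough illustration, a minimal, generic PyTorch checkpoint/resume sketch looks like the following (the dictionary keys and the toy model are illustrative, not the repository's exact checkpoint format):

# Generic checkpoint save/resume sketch with a stand-in model
import torch

model = torch.nn.Linear(10, 1)                       # stand-in for the detector
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
epoch = 4                                            # last completed epoch

# Save model/optimizer state plus training progress
torch.save({'epoch': epoch,
            'model': model.state_dict(),
            'optimizer': optimizer.state_dict()}, 'last.pt')

# Resume: restore both states and continue from the next epoch
ckpt = torch.load('last.pt', map_location='cpu')
model.load_state_dict(ckpt['model'])
optimizer.load_state_dict(ckpt['optimizer'])
start_epoch = ckpt['epoch'] + 1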

Plot Training: from utils import utils; utils.plot_results() plots training results from coco_16img.data and coco_64img.data, two example datasets available in the data/ folder that train and test on the first 16 and 64 images of the COCO2014-trainval dataset.

Image Augmentation

datasets.py applies random OpenCV-powered (https://opencv.org/) augmentation to the input images according to the specifications in the table below. Augmentation is applied only during training, not during inference. Bounding boxes are automatically tracked and updated with the images. 416 x 416 examples pictured below. A simplified sketch of this style of augmentation follows the table.

Augmentation     Description
Translation      +/- 10% (vertical and horizontal)
Rotation         +/- 5 degrees
Shear            +/- 2 degrees (vertical and horizontal)
Scale            +/- 10%
Reflection       50% probability (horizontal-only)
HSV Saturation   +/- 50%
HSV Intensity    +/- 50%
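
The following is a simplified, self-contained sketch of this style of OpenCV augmentation (random affine plus HSV jitter and a horizontal flip). It is illustrative only, not the repository's datasets.py code; bounding-box updates are omitted and the sample image path is just an example.

# Illustrative augmentation sketch: random affine + HSV jitter + horizontal flip
import math
import random
import cv2
import numpy as np

def random_affine(img, degrees=5, translate=0.10, scale=0.10, shear=2):
    # Combine rotation/scale, shear and translation into one affine warp
    h, w = img.shape[:2]
    a = random.uniform(-degrees, degrees)                    # rotation angle
    s = random.uniform(1 - scale, 1 + scale)                 # scale factor
    R = cv2.getRotationMatrix2D((w / 2, h / 2), a, s)        # 2x3 rotation + scale

    S = np.eye(3)                                            # shear matrix
    S[0, 1] = math.tan(random.uniform(-shear, shear) * math.pi / 180)
    S[1, 0] = math.tan(random.uniform(-shear, shear) * math.pi / 180)

    T = np.eye(3)                                            # translation matrix
    T[0, 2] = random.uniform(-translate, translate) * w
    T[1, 2] = random.uniform(-translate, translate) * h

    M = (T @ S @ np.vstack([R, [0, 0, 1]]))[:2]              # combined 2x3 matrix
    return cv2.warpAffine(img, M, (w, h), borderValue=(114, 114, 114))

def augment_hsv(img, sgain=0.5, vgain=0.5):
    # Jitter saturation and value (intensity) by up to +/- 50%
    hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV).astype(np.float32)
    hsv[..., 1] *= random.uniform(1 - sgain, 1 + sgain)
    hsv[..., 2] *= random.uniform(1 - vgain, 1 + vgain)
    return cv2.cvtColor(np.clip(hsv, 0, 255).astype(np.uint8), cv2.COLOR_HSV2BGR)

img = cv2.imread('data/samples/bus.jpg')                     # any sample image
img = augment_hsv(random_affine(img))
if random.random() < 0.5:                                    # 50% horizontal flip
    img = cv2.flip(img, 1)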

Speed

https://cloud.google.com/deep-learning-vm/
Machine type: preemptible n1-standard-16 (16 vCPUs, 60 GB memory)
CPU platform: Intel Skylake
GPUs: K80 ($0.20/hr), T4 ($0.35/hr), V100 ($0.83/hr), with CUDA and Nvidia Apex FP16/32 mixed precision
HDD: 1 TB SSD
Dataset: COCO train 2014 (117,263 images)
Model: yolov3-spp.cfg
Command: python3 train.py --img 416 --batch 32 --accum 2

GPU      n   --batch --accum   img/s   epoch time   epoch cost
K80      1   32 x 2            11      175 min      $0.58
T4       1   32 x 2            41      48 min       $0.28
T4       2   64 x 1            61      32 min       $0.36
V100     1   32 x 2            122     16 min       $0.23
V100     2   64 x 1            178     11 min       $0.31
2080Ti   1   32 x 2            81      24 min       -
2080Ti   2   64 x 1            140     14 min       -
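
In the table, --batch and --accum multiply to the effective batch size: gradients from accum consecutive mini-batches are summed before each optimizer step, so 32 x 2 and 64 x 1 both update on 64 images. A minimal PyTorch sketch of this gradient-accumulation pattern (toy model and data, not the repository's training loop):

# Gradient accumulation: step the optimizer once every `accum` mini-batches
import torch

model = torch.nn.Linear(10, 1)                      # stand-in model
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
loss_fn = torch.nn.MSELoss()
accum = 2                                           # accumulate over 2 mini-batches

for i in range(8):                                  # dummy mini-batches of size 32
    x, y = torch.randn(32, 10), torch.randn(32, 1)
    loss = loss_fn(model(x), y)
    loss.backward()                                 # gradients add up in .grad
    if (i + 1) % accum == 0:
        optimizer.step()                            # effective batch size 32 x 2 = 64
        optimizer.zero_grad()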

Inference

detect.py runs inference on a variety of sources (a brief sketch of how such inputs can be read follows the list):

python3 detect.py --source ...
  • Image: --source file.jpg
  • Video: --source file.mp4
  • Directory: --source dir/
  • Webcam: --source 0
  • RTSP stream: --source rtsp://170.93.143.139/rtplive/470011e600ef003a004ee33696235daa
  • HTTP stream: --source http://wmccpinetop.axiscam.net/mjpg/video.mjpg
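
For reference, the video-style sources above (files, webcams, RTSP and HTTP streams) can all be opened with OpenCV's VideoCapture. The sketch below only reads frames and leaves detection out:

# Minimal frame-reading sketch; detection on each frame is omitted
import cv2

source = 0    # or 'file.mp4', an rtsp:// URL, an http:// stream, etc.
cap = cv2.VideoCapture(source)
while cap.isOpened():
    ok, frame = cap.read()        # frame is a BGR numpy array
    if not ok:
        break
    # ... run the detector on `frame` here ...
cap.release()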

To run a specific model:

YOLOv3: python3 detect.py --cfg cfg/yolov3.cfg --weights yolov3.weights

YOLOv3-tiny: python3 detect.py --cfg cfg/yolov3-tiny.cfg --weights yolov3-tiny.weights

YOLOv3-SPP: python3 detect.py --cfg cfg/yolov3-spp.cfg --weights yolov3-spp.weights

Pretrained Weights

Download from: https://drive.google.com/open?id=1LezFG5g3BCW6iYaV89B2i64cqEUZD7e0

Darknet Conversion

$ git clone https://github.com/ultralytics/yolov3 && cd yolov3

# convert darknet cfg/weights to pytorch model
$ python3 -c "from models import *; convert('cfg/yolov3-spp.cfg', 'weights/yolov3-spp.weights')"
Success: converted 'weights/yolov3-spp.weights' to 'converted.pt'

# convert cfg/pytorch model to darknet weights
$ python3 -c "from models import *; convert('cfg/yolov3-spp.cfg', 'weights/yolov3-spp.pt')"
Success: converted 'weights/yolov3-spp.pt' to 'converted.weights'

mAP

$ python3 test.py --cfg yolov3-spp.cfg --weights yolov3-spp-ultralytics.pt

Model                    Size   COCO mAP @0.5...0.95   COCO mAP @0.5
YOLOv3-tiny              320    14.0                   29.1
YOLOv3                   320    28.7                   51.8
YOLOv3-SPP               320    30.5                   52.3
YOLOv3-SPP-ultralytics   320    36.6                   56.0
YOLOv3-tiny              416    16.0                   33.0
YOLOv3                   416    31.2                   55.4
YOLOv3-SPP               416    33.9                   56.9
YOLOv3-SPP-ultralytics   416    40.4                   60.2
YOLOv3-tiny              512    16.6                   34.9
YOLOv3                   512    32.7                   57.7
YOLOv3-SPP               512    35.6                   59.5
YOLOv3-SPP-ultralytics   512    41.6                   61.7
YOLOv3-tiny              608    16.6                   35.4
YOLOv3                   608    33.1                   58.2
YOLOv3-SPP               608    37.0                   60.7
YOLOv3-SPP-ultralytics   608    42.1                   61.7

$ python3 test.py --cfg yolov3-spp.cfg --weights yolov3-spp-ultralytics.pt --img 608

Namespace(batch_size=32, cfg='yolov3-spp', conf_thres=0.001, data='data/coco2014.data', device='', img_size=608, iou_thres=0.6, save_json=True, single_cls=False, task='test', weights='last82.pt')
Using CUDA device0 _CudaDeviceProperties(name='Tesla P100-PCIE-16GB', total_memory=16280MB)

               Class    Images   Targets         P         R   mAP@0.5        F1: 100% 157/157 [03:12<00:00,  1.50it/s]
                 all     5e+03  3.51e+04    0.0573     0.871     0.611     0.106
 Average Precision  (AP) @[ IoU=0.50:0.95 | area=   all | maxDets=100 ] = 0.419
 Average Precision  (AP) @[ IoU=0.50      | area=   all | maxDets=100 ] = 0.618
 Average Precision  (AP) @[ IoU=0.75      | area=   all | maxDets=100 ] = 0.448
 Average Precision  (AP) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.247
 Average Precision  (AP) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.462
 Average Precision  (AP) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.534
 Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets=  1 ] = 0.341
 Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets= 10 ] = 0.557
 Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets=100 ] = 0.606
 Average Recall     (AR) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.440
 Average Recall     (AR) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.649
 Average Recall     (AR) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.735
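
The Average Precision / Average Recall block above is the standard pycocotools summary. As a hedged sketch, a summary like this can be computed from a COCO-format detections JSON as follows (the annotation and results file paths are illustrative):

# pycocotools COCO-style bbox evaluation sketch; file paths are examples only
from pycocotools.coco import COCO
from pycocotools.cocoeval import COCOeval

coco_gt = COCO('annotations/instances_val2014.json')   # ground-truth annotations
coco_dt = coco_gt.loadRes('results.json')              # detections in COCO JSON format
coco_eval = COCOeval(coco_gt, coco_dt, 'bbox')
coco_eval.evaluate()
coco_eval.accumulate()
coco_eval.summarize()                                  # prints the AP/AR table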

Reproduce Our Results

This command trains yolov3-spp.cfg from scratch to our mAP above. Training takes about one week on a 2080Ti.

$ python3 train.py --weights '' --cfg yolov3-spp.cfg --epochs 273 --batch 16 --accum 4 --multi
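
The --multi flag enables multi-scale training, where the input resolution is varied randomly between batches rather than fixed. A minimal sketch of the general idea (toy tensor; sizes kept to multiples of the 32-pixel network stride):

# Multi-scale idea: randomly resize the batch to a multiple of 32 pixels
import random
import torch
import torch.nn.functional as F

imgs = torch.randn(16, 3, 416, 416)                  # dummy batch at base size 416
new_size = random.randrange(320, 641, 32)            # 320, 352, ..., 640
imgs = F.interpolate(imgs, size=(new_size, new_size), mode='bilinear', align_corners=False)
print(imgs.shape)                                    # e.g. torch.Size([16, 3, 608, 608])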

Reproduce Our Environment

To access an up-to-date working environment (with all dependencies, including CUDA/CUDNN, Python and PyTorch, preinstalled), consider one of the following:

  • GCP Deep Learning VM: https://cloud.google.com/deep-learning-vm/
  • The associated Docker image (see the Docker requirements above)

Contact

Issues should be raised directly in the repository. For additional questions or comments please email Glenn Jocher at glenn.jocher@ultralytics.com or visit us at https://contact.ultralytics.com.