Introduction

This directory contains software developed by Ultralytics LLC. For more information on Ultralytics projects please visit: http://www.ultralytics.com  

Description

The https://github.com/ultralytics/yolov3 repo contains inference and training code for YOLOv3 in PyTorch. Training is done on the COCO dataset by default: https://cocodataset.org/#home. Credit to Joseph Redmon for YOLO (https://pjreddie.com/darknet/yolo/) and to Erik Lindernoren for the PyTorch implementation on which this work is based (https://github.com/eriklindernoren/PyTorch-YOLOv3).

Requirements

Python 3.6 or later with the following packages, installed via pip3 install -U -r requirements.txt:

  • numpy
  • torch
  • opencv-python
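
A quick way to confirm the environment meets these requirements is to print the installed versions. This is a minimal sketch, not part of the repo; it only imports the packages listed above:

```python
# check_env.py -- minimal sketch to confirm the required packages are installed
import sys

import numpy as np
import torch
import cv2  # provided by the opencv-python package

print('Python  %s' % sys.version.split()[0])  # expect 3.6 or later
print('numpy   %s' % np.__version__)
print('torch   %s' % torch.__version__)
print('OpenCV  %s' % cv2.__version__)
print('CUDA    %s' % ('available' if torch.cuda.is_available() else 'not available'))
```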

Training

Run train.py to begin training after downloading COCO data with data/get_coco_dataset.sh. Each epoch trains on 120,000 images from the train and validate COCO sets and tests on 5000 images from the COCO validate set. An Nvidia GTX 1080 Ti processes ~10 epochs/day with full augmentation, or ~15 epochs/day without input image augmentation. Loss plots for the bounding boxes, objectness and class confidence should appear similar to results shown here (coming soon).

Augmentation

datasets.py applies random augmentation to the input images according to the specifications below. Augmentation is applied only during training, not during inference. Bounding boxes are automatically tracked and updated along with the images. Examples are pictured below, and a code sketch of these transforms follows the example image.

  • Translation: +/- 20% X and Y
  • Rotation: +/- 5 degrees
  • Skew: +/- 3 degrees
  • Scale: +/- 20%
  • Reflection: 50% probability left-right
  • Saturation: +/- 50%
  • Intensity: +/- 50%

[augmentation example images]
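
For reference, the sketch below illustrates the kind of random transforms listed above (translation, rotation, scale, left-right reflection, and saturation/intensity jitter) and how bounding boxes can be updated alongside the image. It is a simplified, hypothetical example using OpenCV and NumPy, not the exact code in datasets.py; the ±3 degree skew is omitted for brevity.

```python
import random

import cv2
import numpy as np


def augment(img, boxes):
    """Randomly augment a BGR image and its boxes (N x 4 array of x1, y1, x2, y2).

    A simplified, hypothetical sketch of the transforms listed above,
    not the exact implementation in datasets.py. Skew is omitted for brevity.
    """
    h, w = img.shape[:2]

    # random affine: rotation +/- 5 deg, scale +/- 20%, translation +/- 20% in x and y
    angle = random.uniform(-5, 5)
    scale = random.uniform(0.8, 1.2)
    M = cv2.getRotationMatrix2D((w / 2, h / 2), angle, scale)
    M[0, 2] += random.uniform(-0.2, 0.2) * w  # x translation
    M[1, 2] += random.uniform(-0.2, 0.2) * h  # y translation
    img = cv2.warpAffine(img, M, (w, h))

    # transform the 4 corners of each box, then take the new axis-aligned extents
    if len(boxes):
        corners = np.ones((len(boxes) * 4, 3))
        corners[:, :2] = boxes[:, [0, 1, 2, 1, 2, 3, 0, 3]].reshape(-1, 2)
        corners = (corners @ M.T).reshape(len(boxes), 8)
        x, y = corners[:, ::2], corners[:, 1::2]
        boxes = np.stack([x.min(1), y.min(1), x.max(1), y.max(1)], axis=1)
        boxes[:, [0, 2]] = boxes[:, [0, 2]].clip(0, w)
        boxes[:, [1, 3]] = boxes[:, [1, 3]].clip(0, h)

    # left-right reflection with 50% probability
    if random.random() < 0.5:
        img = img[:, ::-1].copy()
        if len(boxes):
            boxes[:, [0, 2]] = w - boxes[:, [2, 0]]

    # saturation and intensity (value) jitter, +/- 50%
    hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV).astype(np.float32)
    hsv[..., 1] *= random.uniform(0.5, 1.5)
    hsv[..., 2] *= random.uniform(0.5, 1.5)
    img = cv2.cvtColor(hsv.clip(0, 255).astype(np.uint8), cv2.COLOR_HSV2BGR)

    return img, boxes
```

A dataset class would apply something like this to each image (and its labels) as it is loaded during training.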

Inference

Checkpoints are saved in the checkpoints/ directory. Run detect.py to apply trained weights to an image, such as zidane.jpg from the data/samples folder, shown here.

[zidane.jpg detection example image]
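
In general terms, applying trained weights to an image means loading the image, converting it to a normalized tensor, and running a forward pass. The sketch below shows that pattern only; the paths, checkpoint format, input resolution and the final non-max suppression step are assumptions and simplifications, and detect.py is the actual implementation to use:

```python
import cv2
import numpy as np
import torch

checkpoint_path = 'checkpoints/latest.pt'      # hypothetical checkpoint name
image_path = 'data/samples/zidane.jpg'

# load the image and resize to a square network input (a simplification; no letterboxing)
img = cv2.imread(image_path)                   # BGR, HWC, uint8
x = cv2.resize(img, (416, 416))[:, :, ::-1]    # resize and convert BGR -> RGB
x = torch.from_numpy(np.ascontiguousarray(x)).permute(2, 0, 1).float() / 255.0
x = x.unsqueeze(0)                             # add batch dimension: 1 x 3 x 416 x 416

# load a trained model; assuming here the checkpoint stores a full nn.Module,
# which may not match how the repo actually serializes checkpoints
model = torch.load(checkpoint_path, map_location='cpu')
model.eval()

with torch.no_grad():
    pred = model(x)                            # raw predictions: boxes, objectness, class scores

print(pred.shape if torch.is_tensor(pred) else type(pred))
```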

Testing

Run test.py to test the latest checkpoint on the 5000 validation images. Joseph Redmon's official YOLOv3 weights produce a mAP of 0.581 with this PyTorch implementation, compared to 0.579 in Darknet (https://arxiv.org/abs/1804.02767).
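
mAP is the mean over classes of average precision, the area under each class's precision-recall curve. The snippet below is a minimal, self-contained sketch of that per-class calculation for illustration only; it is not the exact computation in test.py:

```python
import numpy as np


def average_precision(recall, precision):
    """Area under an interpolated precision-recall curve (one class's AP).

    A minimal sketch for illustration, not the exact computation in test.py.
    """
    # append sentinel values and make precision monotonically decreasing
    r = np.concatenate(([0.0], recall, [1.0]))
    p = np.concatenate(([0.0], precision, [0.0]))
    p = np.maximum.accumulate(p[::-1])[::-1]

    # integrate the curve where recall changes
    idx = np.where(r[1:] != r[:-1])[0]
    return float(np.sum((r[idx + 1] - r[idx]) * p[idx + 1]))


# toy example: recall/precision points sorted by descending detection confidence
print(average_precision(np.array([0.1, 0.4, 0.8]), np.array([1.0, 0.8, 0.6])))  # 0.58
```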

Contact

For questions or comments please contact Glenn Jocher at glenn.jocher@ultralytics.com or visit us at http://www.ultralytics.com/contact