<img src="https://storage.googleapis.com/ultralytics/UltralyticsLogoName1000×676.png" width="200">
# Introduction
This directory contains software developed by Ultralytics LLC, and **is freely available for redistribution under the GPL-3.0 license**. For more information on Ultralytics projects, please visit:

http://www.ultralytics.com
# Description
The https://github.com/ultralytics/yolov3 repo contains inference and training code for YOLOv3 in PyTorch. Training is done on the COCO dataset by default: https://cocodataset.org/#home. **Credit to Joseph Redmon for YOLO** (https://pjreddie.com/darknet/yolo/) and to **Erik Lindernoren for the PyTorch implementation** this work is based on (https://github.com/eriklindernoren/PyTorch-YOLOv3).
# Requirements
Python 3.6 or later with the following packages, installed via `pip3 install -U -r requirements.txt`:
- `numpy`
- `torch`
- `opencv-python`
# Training
**Start Training:** Run `train.py` to begin training after downloading COCO data with `data/get_coco_dataset.sh` and specifying the COCO path on line 37 (local) or line 39 (cloud) of `train.py`.

**Resume Training:** Run `train.py -resume 1` to resume training from the most recently saved checkpoint `latest.pt`.
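
The resume mechanism is the standard PyTorch pattern of reloading the model and optimizer state saved at the end of each epoch. The snippet below is a minimal sketch of that pattern only; the checkpoint keys, cfg path and constructor arguments are assumptions, not the exact contents of `train.py`.

```python
# Minimal sketch of the checkpoint save/resume pattern; keys, paths and the
# Darknet constructor arguments are illustrative assumptions, not train.py verbatim.
import torch
from models import Darknet  # model definition in this repo's models.py (args assumed)

model = Darknet('cfg/yolov3.cfg')
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3, momentum=0.9)

start_epoch, resume = 0, True
if resume:  # equivalent of `train.py -resume 1`
    checkpoint = torch.load('checkpoints/latest.pt')
    model.load_state_dict(checkpoint['model'])
    optimizer.load_state_dict(checkpoint['optimizer'])
    start_epoch = checkpoint['epoch'] + 1

for epoch in range(start_epoch, 160):
    # ... one full pass over the 120,000 training images ...
    torch.save({'epoch': epoch,
                'model': model.state_dict(),
                'optimizer': optimizer.state_dict()},
               'checkpoints/latest.pt')
```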
Each epoch trains on 120,000 images from the train and validation COCO sets, and tests on 5,000 images from the COCO validation set. An Nvidia GTX 1080 Ti will process ~10 epochs/day with full augmentation, or ~15 epochs/day without input image augmentation. Loss plots for the bounding boxes, objectness and class confidence should appear similar to the results shown below (training is in progress to 160 epochs; results will be updated).

![Alt](https://github.com/ultralytics/yolov3/blob/master/data/coco_training_loss.png "coco training loss")
## Image Augmentation
`datasets.py` applies random OpenCV-powered (https://opencv.org/) augmentation to the input images in accordance with the following specifications. Augmentation is applied **only** during training, not during inference. Bounding boxes are automatically tracked and updated with the images. 416 x 416 examples pictured below.

Augmentation | Description
--- | ---
Translation | +/- 20% (vertical and horizontal)
Rotation | +/- 5 degrees
Shear | +/- 3 degrees (vertical and horizontal)
Scale | +/- 20%
Reflection | 50% probability (horizontal-only)
H**S**V Saturation | +/- 50%
HS**V** Intensity | +/- 50%

![Alt](https://github.com/ultralytics/yolov3/blob/master/data/coco_augmentation_examples.jpg "coco image augmentation")
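
The following is a minimal sketch of how such OpenCV affine augmentation with bounding-box tracking can be implemented. The parameter ranges mirror the table above, but the function and its internals are a hypothetical outline, not the repo's `datasets.py` verbatim.

```python
# Hypothetical sketch of OpenCV-based affine augmentation with box tracking;
# ranges follow the table above, but this is not the repo's datasets.py verbatim.
import cv2
import numpy as np

def random_affine(img, boxes, degrees=5, translate=0.2, scale=0.2, shear=3):
    """img: HxWx3 uint8 image, boxes: Nx4 array of pixel xyxy corners."""
    h, w = img.shape[:2]

    # rotation + scale about the image centre
    a = np.random.uniform(-degrees, degrees)
    s = np.random.uniform(1 - scale, 1 + scale)
    M = cv2.getRotationMatrix2D(center=(w / 2, h / 2), angle=a, scale=s)

    # translation and shear added on top of the 2x3 affine matrix
    M[0, 2] += np.random.uniform(-translate, translate) * w
    M[1, 2] += np.random.uniform(-translate, translate) * h
    M[0, 1] += np.tan(np.radians(np.random.uniform(-shear, shear)))
    M[1, 0] += np.tan(np.radians(np.random.uniform(-shear, shear)))

    img = cv2.warpAffine(img, M, dsize=(w, h), borderValue=(128, 128, 128))

    # apply the same transform to the box corners, then re-take min/max
    if len(boxes):
        corners = np.ones((len(boxes) * 4, 3))
        corners[:, :2] = boxes[:, [0, 1, 2, 3, 0, 3, 2, 1]].reshape(-1, 2)  # x1y1, x2y2, x1y2, x2y1
        corners = (corners @ M.T).reshape(len(boxes), 8)
        x, y = corners[:, [0, 2, 4, 6]], corners[:, [1, 3, 5, 7]]
        boxes = np.stack([x.min(1), y.min(1), x.max(1), y.max(1)], axis=1)
        boxes = boxes.clip([0, 0, 0, 0], [w, h, w, h])  # keep boxes inside the image

    return img, boxes
```

Reflection and HSV jitter would be applied as separate steps: a random horizontal flip of the image together with its box x-coordinates, and a random scaling of the S and V channels after converting the image to HSV with `cv2.cvtColor`.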
# Inference
Checkpoints are saved in the `checkpoints` directory. Run `detect.py` to apply trained weights to an image, such as `zidane.jpg` from the `data/samples` folder, shown below.
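
In outline, that inference call loads a checkpoint, resizes the image to the 416 x 416 network size, runs a forward pass and applies non-maximum suppression. The sketch below only illustrates that flow; the checkpoint layout and helper names are assumptions rather than this repo's exact API.

```python
# Rough sketch of the single-image inference flow; the checkpoint layout and
# the commented-out NMS helper are assumptions, not this repo's exact API.
import cv2
import torch
from models import Darknet  # model definition in this repo's models.py

model = Darknet('cfg/yolov3.cfg')
model.load_state_dict(torch.load('checkpoints/latest.pt')['model'])  # key assumed
model.eval()

img = cv2.imread('data/samples/zidane.jpg')                  # BGR, HxWx3
x = cv2.resize(img, (416, 416))[:, :, ::-1]                   # to RGB at network size
x = torch.from_numpy(x.copy()).permute(2, 0, 1).float().unsqueeze(0) / 255.0

with torch.no_grad():
    pred = model(x)                                            # raw YOLO predictions
# detections = non_max_suppression(pred, 0.5, 0.45)            # repo utility (signature assumed)
```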
![Alt](https://github.com/ultralytics/yolov3/blob/master/data/zidane_result.jpg "inference example")
# Testing
Run `test.py` to test the latest checkpoint on the 5,000 validation images. Joseph Redmon's official YOLOv3 weights produce a mAP of 0.581 using this PyTorch implementation, compared to 0.579 in Darknet (https://arxiv.org/abs/1804.02767).
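
For reference, mAP is the mean over per-class average precisions, each integrated from the precision-recall curve of the confidence-ranked detections. The NumPy sketch below shows one common way to compute a single class's AP; the function name and the all-point interpolation are assumptions, not necessarily what `test.py` does.

```python
# Sketch of a per-class average precision (AP) computation; the all-point
# interpolation here is an assumption, not necessarily test.py's method.
import numpy as np

def average_precision(tp, conf, n_gt):
    """tp: 1/0 true-positive flag per detection, conf: confidences, n_gt: ground-truth count."""
    order = np.argsort(-np.asarray(conf))           # rank detections by confidence
    tp = np.asarray(tp, dtype=float)[order]
    tp_cum = np.cumsum(tp)
    fp_cum = np.cumsum(1.0 - tp)
    recall = tp_cum / (n_gt + 1e-16)
    precision = tp_cum / (tp_cum + fp_cum)

    # integrate precision over recall
    mrec = np.concatenate(([0.0], recall, [1.0]))
    mpre = np.concatenate(([0.0], precision, [0.0]))
    mpre = np.maximum.accumulate(mpre[::-1])[::-1]  # make precision monotonically decreasing
    idx = np.where(mrec[1:] != mrec[:-1])[0]
    return float(np.sum((mrec[idx + 1] - mrec[idx]) * mpre[idx + 1]))

# e.g. three detections for one class, two correct, three ground-truth boxes
print(average_precision(tp=[1, 0, 1], conf=[0.9, 0.8, 0.7], n_gt=3))
```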
# Contact
For questions or comments, please contact Glenn Jocher at glenn.jocher@ultralytics.com or visit us at http://www.ultralytics.com/contact