<img src="https://storage.googleapis.com/ultralytics/UltralyticsLogoName1000×676.png" width="200">
# Introduction
This directory contains software developed by Ultralytics LLC, and **is freely available for redistribution under the GPL-3.0 license**. For more information on Ultralytics projects please visit:
http://www.ultralytics.com  
# Description
The https://github.com/ultralytics/yolov3 repo contains inference and training code for YOLOv3 in PyTorch. Training is done on the COCO dataset by default: https://cocodataset.org/#home. **Credit to Joseph Redmon for YOLO** (https://pjreddie.com/darknet/yolo/) and to **Erik Lindernoren for the PyTorch implementation** that this work is based on (https://github.com/eriklindernoren/PyTorch-YOLOv3).
# Requirements
Python 3.6 or later, with the following packages installed via `pip3 install -U -r requirements.txt` (as shown below):
- `numpy`
- `torch`
- `opencv-python`
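
All of the required packages can be installed or updated in one step with the command referenced above:

```bash
# install/upgrade the required packages
pip3 install -U -r requirements.txt
```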
# Training
Run `train.py` to begin training after downloading COCO data with `data/get_coco_dataset.sh`. Each epoch trains on 120,000 images from the COCO train and validation sets, and tests on 5000 images from the COCO validation set. An Nvidia GTX 1080 Ti will process ~10 epochs/day with full augmentation, or ~15 epochs/day without input image augmentation. Loss plots for the bounding boxes, objectness and class confidence should appear similar to the results shown here (coming soon).
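
For example, a default training run from the repository root might look like the following (a minimal sketch assuming the default `train.py` arguments are acceptable; adjust paths and options as needed):

```bash
# download the COCO dataset, then start training with default settings
bash data/get_coco_dataset.sh
python3 train.py
```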
![Alt](https://github.com/ultralytics/yolov3/blob/master/data/coco_training_loss.png "coco training loss")
## Image Augmentation
`datasets.py` applies random augmentation to the input images according to the following specifications. Augmentation is applied **only** during training, not during inference. Bounding boxes are automatically tracked and updated along with the images. Examples at 416 x 416 resolution are pictured below.
Augmentation | Description
--- | ---
Translation | +/- 20% (vertical and horizontal)
Rotation | +/- 5 degrees
Shear | +/- 3 degrees (vertical and horizontal)
Scale | +/- 20%
Reflection | 50% probability (horizontal-only)
H**S**V Saturation | +/- 50%
HS**V** Intensity | +/- 50%
![Alt](https://github.com/ultralytics/yolov3/blob/master/data/coco_augmentation_examples.jpg "coco image augmentation")
# Inference
Checkpoints will be saved in the `checkpoints` directory. Run `detect.py` to apply trained weights to an image, such as `zidane.jpg` from the `data/samples` folder; an example result is shown below.
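
As a minimal example, assuming `detect.py` is run with its default arguments and reads images from `data/samples`:

```bash
# run the latest trained weights on the sample images (default arguments assumed)
python3 detect.py
```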
![Alt](https://github.com/ultralytics/yolov3/blob/master/data/zidane_result.jpg "inference example")
# Testing
Run `test.py` to test the latest checkpoint on the 5000 validation images. Joseph Redmon's official YOLOv3 weights produce a mAP of 0.581 with this PyTorch implementation, compared to 0.579 in Darknet (https://arxiv.org/abs/1804.02767).
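
For example, a minimal invocation, assuming the default `test.py` arguments point at the latest checkpoint and the COCO validation set:

```bash
# evaluate mAP on the 5000-image COCO validation set (default arguments assumed)
python3 test.py
```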
# Contact
For questions or comments, please contact Glenn Jocher at glenn.jocher@ultralytics.com or visit us at http://www.ultralytics.com/contact