## Introduction

This directory contains PyTorch YOLOv3 software developed by Ultralytics LLC, and **is freely available for redistribution under the GPL-3.0 license**. For more information please visit https://www.ultralytics.com.

The https://github.com/ultralytics/yolov3 repo contains inference and training code for YOLOv3 in PyTorch. The code works on Linux, macOS and Windows. Training is done on the COCO dataset by default: https://cocodataset.org/#home. **Credit to Joseph Redmon for YOLO:** https://pjreddie.com/darknet/yolo/.

## Requirements

Python 3.7 or later with all `requirements.txt` dependencies installed, including `torch >= 1.5`. To install run:
```bash
$ pip install -U -r requirements.txt
```

## Tutorials

* <a href="https://colab.research.google.com/github/ultralytics/yolov3/blob/master/tutorial.ipynb"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"></a>
* [Train Custom Data](https://github.com/ultralytics/yolov3/wiki/Train-Custom-Data) < highly recommended!!
* [Train Single Class](https://github.com/ultralytics/yolov3/wiki/Example:-Train-Single-Class)
* [GCP Quickstart](https://github.com/ultralytics/yolov3/wiki/GCP-Quickstart)
* [Docker Quickstart Guide](https://github.com/ultralytics/yolov3/wiki/Docker-Quickstart)
* [A TensorRT Implementation of YOLOv3 and YOLOv4](https://github.com/wang-xinyu/tensorrtx/tree/master/yolov3-spp)

## Training

**Start Training:** `python3 train.py` to begin training after downloading COCO data with `data/get_coco2017.sh`. Each epoch trains on 117,263 images from the COCO train and validation sets, and tests on 5,000 images from the COCO validation set.
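
As a concrete sketch of those two steps (run from the repository root; `data/get_coco2017.sh` and `train.py` are the scripts referenced above):

```bash
# download the COCO dataset used for training
$ bash data/get_coco2017.sh

# start training with the repository defaults
$ python3 train.py
```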

<img src="https://user-images.githubusercontent.com/26833433/78175826-599d4800-7410-11ea-87d4-f629071838f6.png" width="900">

### Image Augmentation

`datasets.py` applies OpenCV-powered (https://opencv.org/) augmentation to the input image. We use a **mosaic dataloader** to increase image variability during training.

<img src="https://user-images.githubusercontent.com/26833433/80769557-6e015d00-8b02-11ea-9c4b-69310eb2b962.jpg" width="900">

### Speed

https://cloud.google.com/deep-learning-vm/
**Machine type:** preemptible [n1-standard-8](https://cloud.google.com/compute/docs/machine-types) (8 vCPUs, 30 GB memory)

GPU | n | batch size | img/s | epoch<br>time | epoch<br>cost
--- | --- | --- | --- | --- | ---
T4 | 1<br>2 | 32 x 2<br>64 x 1 | 41<br>61 | 48 min<br>32 min | $0.09<br>$0.11
V100 | 1<br>2 | 32 x 2<br>64 x 1 | 122<br>**178** | 16 min<br>**11 min** | **$0.21**<br>$0.28
2080Ti | 1<br>2 | 32 x 2<br>64 x 1 | 81<br>140 | 24 min<br>14 min | -<br>-

## Inference

```bash
python3 detect.py --source ...
```

<img src="https://user-images.githubusercontent.com/26833433/64067833-51d5b500-cc2f-11e9-8208-6fe197809131.jpg" width="500">
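
The `--source` argument accepts different input types; the values below are illustrative examples, not an exhaustive list from this README:

```bash
$ python3 detect.py --source 0          # webcam
$ python3 detect.py --source file.jpg   # single image
$ python3 detect.py --source file.mp4   # video
$ python3 detect.py --source dir/       # directory of images
```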

## Pretrained Checkpoints

Download from: [https://drive.google.com/open?id=1LezFG5g3BCW6iYaV89B2i64cqEUZD7e0](https://drive.google.com/open?id=1LezFG5g3BCW6iYaV89B2i64cqEUZD7e0)
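
As an illustrative example, a downloaded checkpoint can be passed to the inference script; the `weights/` location follows the conversion example below, while the `--weights` flag itself is an assumption not shown elsewhere in this README:

```bash
# run inference with a downloaded pretrained checkpoint
# (file placement and the --weights flag are illustrative assumptions)
$ python3 detect.py --weights weights/yolov3-spp.pt --source ...
```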

## Darknet Conversion

```bash
$ python3 -c "from models import *; convert('cfg/yolov3-spp.cfg', 'weights/yolov3-spp.pt')"
Success: converted 'weights/yolov3-spp.pt' to 'weights/yolov3-spp.weights'
```
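
The same `convert()` helper is commonly used in the opposite direction as well; a hedged sketch, assuming the darknet-format file sits alongside the PyTorch checkpoint in `weights/`:

```bash
# convert darknet *.weights to a PyTorch *.pt checkpoint (assumed mirror of the command above)
$ python3 -c "from models import *; convert('cfg/yolov3-spp.cfg', 'weights/yolov3-spp.weights')"
```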

## mAP

<i></i> |Size |COCO mAP<br>@0.5...0.95 |COCO mAP<br>@0.5
--- | --- | --- | ---

Speed: 17.5/2.3/19.9 ms inference/NMS/total per 640x640 image at batch-size 16
<!-- Speed: 11.4/2.2/13.6 ms inference/NMS/total per 608x608 image at batch-size 1 -->

## Reproduce Our Results

Run the commands below. Training takes about one week on a 2080Ti per model.
```bash
$ python train.py --data coco2014.data --weights '' --batch-size 32 --cfg yolov3-spp.cfg
```
<img src="https://user-images.githubusercontent.com/26833433/80831822-57a9de80-8ba0-11ea-9684-c47afb0432dc.png" width="900">

## Reproduce Our Environment

To access an up-to-date working environment (with all dependencies including CUDA/CUDNN, Python and PyTorch preinstalled), consider a:

- **GCP** Deep Learning VM with $300 free credit offer. See our [GCP Quickstart Guide](https://github.com/ultralytics/yolov3/wiki/GCP-Quickstart)
- **Google Colab Notebook** with 12 hours of free GPU time. <a href="https://colab.research.google.com/github/ultralytics/yolov3/blob/master/tutorial.ipynb"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"></a>
- **Docker Image** from https://hub.docker.com/r/ultralytics/yolov3. See the [Docker Quickstart Guide](https://github.com/ultralytics/yolov3/wiki/Docker-Quickstart) or the example commands below.
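
A minimal sketch of the Docker route, assuming the `latest` image tag and an NVIDIA runtime (Docker 19.03+ with the NVIDIA Container Toolkit); exact flags are illustrative, see the Docker Quickstart Guide for the supported workflow:

```bash
# pull the published image (tag assumed)
$ docker pull ultralytics/yolov3:latest

# start an interactive container with GPU access
$ docker run --gpus all -it --ipc=host ultralytics/yolov3:latest
```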

## Citation

[![DOI](https://zenodo.org/badge/146165888.svg)](https://zenodo.org/badge/latestdoi/146165888)

## About Us

Ultralytics is a U.S.-based particle physics and AI startup with over 6 years of expertise supporting government, academic and business clients. We offer a wide range of vision AI services, spanning from simple expert advice up to delivery of fully customized, end-to-end production solutions, including:

- **Cloud-based AI** surveillance systems operating on **hundreds of HD video streams in realtime.**
- **Edge AI** integrated into custom iOS and Android apps for realtime **30 FPS video inference.**
- **Custom data training**, hyperparameter evolution, and model exportation to any destination.

For business inquiries and professional support requests please visit us at https://www.ultralytics.com.

## Contact

**Issues should be raised directly in the repository.** For business inquiries or professional support requests please visit https://www.ultralytics.com or email Glenn Jocher at glenn.jocher@ultralytics.com.