diff --git a/README.md b/README.md
index 4dc3fa8e..04348198 100755
--- a/README.md
+++ b/README.md
@@ -19,11 +19,12 @@ Python 3.6 or later with the following `pip3 install -U -r requirements.txt` pac
 
 # Training
 
-Run `train.py` to begin training after downloading COCO data with `data/get_coco_dataset.sh` and specifying COCO path on line 37 (local) or line 39 (cloud).
+***Start Training:*** Run `train.py` to begin training after downloading COCO data with `data/get_coco_dataset.sh` and specifying COCO path on line 37 (local) or line 39 (cloud).
 
-Run `train.py -resume 1` to resume training from the most recently saved checkpoint `checkpoints/latest.pt`.
+***Resume Training:*** Run `train.py -resume 1` to resume training from the most recently saved checkpoint `checkpoints/latest.pt`.
 
 Each epoch trains on 120,000 images from the train and validate COCO sets, and tests on 5000 images from the COCO validate set. An Nvidia GTX 1080 Ti will process ~10 epochs/day with full augmentation, or ~15 epochs/day without input image augmentation. Loss plots for the bounding boxes, objectness and class confidence should appear similar to results shown here (results in progress to 160 epochs, will update).
+
 ![Alt](https://github.com/ultralytics/yolov3/blob/master/data/coco_training_loss.png "coco training loss")
 
 ## Image Augmentation
diff --git a/train.py b/train.py
index c70b5af7..cdfd3428 100644
--- a/train.py
+++ b/train.py
@@ -93,6 +93,7 @@ def main(opt):
     for epoch in range(opt.epochs):
         epoch += start_epoch
 
+        # Random input
         # img_size = random.choice(range(10, 20)) * 32
         # dataloader = ListDataset(train_path, batch_size=opt.batch_size, img_size=img_size)
         # print('Running image size %g' % img_size)
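
Note on the `# Random input` comment added in the train.py hunk: the commented-out lines below it sketch per-epoch multi-scale training, where the network input size is re-drawn each epoch as a random multiple of 32 (320 to 608 px). The snippet below is a minimal, hedged sketch of that idea only; `random_img_size` is a hypothetical helper, and the `ListDataset(train_path, ...)` call in the comment is copied from the diff context rather than verified against the repo's API.

```python
import random


def random_img_size(low_mult=10, high_mult=20, stride=32):
    """Pick a random network input size that is a multiple of `stride`.

    With the defaults this mirrors the commented-out line in the diff:
    random.choice(range(10, 20)) * 32  ->  a size between 320 and 608 pixels.
    """
    return random.choice(range(low_mult, high_mult)) * stride


if __name__ == '__main__':
    for epoch in range(3):
        img_size = random_img_size()
        # In train.py this size would feed the dataloader, e.g. (per the diff context):
        # dataloader = ListDataset(train_path, batch_size=opt.batch_size, img_size=img_size)
        print('Running image size %g' % img_size)
```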