diff --git a/README.md b/README.md index 0fe1230..a88a537 100644 --- a/README.md +++ b/README.md @@ -1,76 +1,230 @@ -[![Ultralytics CI](https://github.com/ultralytics/ultralytics/actions/workflows/ci.yaml/badge.svg)](https://github.com/ultralytics/ultralytics/actions/workflows/ci.yaml) +
+

+ + +

-## Install +[English](README.md) | [简体中文](README.zh-CN.md) +
-```bash -pip install ultralytics -``` +
+ Ultralytics CI + YOLOv8 Citation + Docker Pulls +
+ Run on Gradient + Open In Colab + Open In Kaggle +
+
-Development +[Ultralytics YOLOv8](https://github.com/ultralytics/ultralytics) is the latest version of the YOLO object detection and image segmentation model developed by [Ultralytics](https://ultralytics.com). YOLOv8 is a cutting-edge, state-of-the-art (SOTA) model that builds upon the success of previous YOLO versions and introduces new features and improvements to further boost performance and flexibility. -``` -git clone https://github.com/ultralytics/ultralytics -cd ultralytics -pip install -e . -``` +The YOLOv8 models are designed to be fast, accurate, and easy to use, making them an excellent choice for a wide range of object detection, image segmentation and image classification tasks. + +Whether you are a seasoned machine learning practitioner or new to the field, we hope that the resources on this page will help you get the most out of YOLOv8. -## Usage +
+ + + + + + + + + + + + + + + + + + + + +
+
-### 1. CLI +##
Documentation
-To simply use the latest Ultralytics YOLO models +See below for quickstart installation and usage examples, and see the [YOLOv8 Docs](https://docs.ultralytics.com) for full documentation on training, validation, prediction and deployment. + +
+Install + +Pip install the ultralytics package including all [requirements.txt](https://github.com/ultralytics/ultralytics/blob/main/requirements.txt) in a +[**Python>=3.7.0**](https://www.python.org/) environment, including +[**PyTorch>=1.7**](https://pytorch.org/get-started/locally/). ```bash -yolo task=detect mode=train model=yolov8n.yaml args=... - classify predict yolov8n-cls.yaml args=... - segment val yolov8n-seg.yaml args=... - export yolov8n.pt format=onnx +pip install ultralytics ``` -### 2. Python SDK +
-To use pythonic interface of Ultralytics YOLO model +
+Usage + +YOLOv8 may be used in a Python environment: ```python from ultralytics import YOLO -model = YOLO("yolov8n.yaml") # create a new model from scratch -model = YOLO( - "yolov8n.pt" -) # load a pretrained model (recommended for best training results) -results = model.train(data="coco128.yaml", epochs=100, imgsz=640) -results = model.val() -results = model.predict(source="bus.jpg") -success = model.export(format="onnx") +model = YOLO("yolov8n.pt") # load a pretrained YOLOv8n model + +model.train(data="coco128.yaml") # train the model +model.val() # evaluate model performance on the validation set +model.predict(source="https://ultralytics.com/images/bus.jpg") # predict on an image +model.export(format="onnx") # export the model to ONNX format +``` + +Or with CLI `yolo` commands: + +```bash +yolo task=detect mode=train model=yolov8n.pt args... + classify predict yolov8n-cls.yaml args... + segment val yolov8n-seg.yaml args... + export yolov8n.pt format=onnx args... ``` -## Models - -| Model | size
(pixels) | mAPval
50-95 | Speed
CPU
(ms) | Speed
T4 GPU
(ms) | params
(M) | FLOPs
(B) | -| ------------------------------------------------------------------------------------------------ | --------------------- | -------------------- | ------------------------- | ---------------------------- | ------------------ | ----------------- | -| [YOLOv5n](https://github.com/ultralytics/yolov5/releases/download/v6.2/yolov5n.pt) | 640 | 28.0 | - | - | **1.9** | **4.5** | -| [YOLOv6n](url) | 640 | 35.9 | - | - | 4.3 | 11.1 | -| **[YOLOv8n](url)** | 640 | **37.3** | - | - | 3.2 | 8.9 | -| | | | | | | | -| [YOLOv5s](https://github.com/ultralytics/yolov5/releases/download/v6.2/yolov5s.pt) | 640 | 37.4 | - | - | 7.2 | 16.5 | -| [YOLOv6s](url) | 640 | 43.5 | - | - | 17.2 | 44.2 | -| **[YOLOv8s](url)** | 640 | **44.9** | - | - | 11.2 | 28.8 | -| | | | | | | | -| [YOLOv5m](https://github.com/ultralytics/yolov5/releases/download/v6.2/yolov5m.pt) | 640 | 45.4 | - | - | 21.2 | 49.0 | -| [YOLOv6m](url) | 640 | 49.5 | - | - | 34.3 | 82.2 | -| **[YOLOv8m](url)** | 640 | **50.2** | - | - | 25.9 | 79.3 | -| | | | | | | | -| [YOLOv5l](https://github.com/ultralytics/yolov5/releases/download/v6.2/yolov5l.pt) | 640 | 49.0 | - | - | 46.5 | 109.1 | -| [YOLOv6l](url) | 640 | 52.5 | - | - | 58.5 | 144.0 | -| [YOLOv7](url) | 640 | 51.2 | - | - | 36.9 | 104.7 | -| **[YOLOv8l](url)** | 640 | **52.9** | - | - | 43.7 | 165.7 | -| | | | | | | | -| [YOLOv5x](https://github.com/ultralytics/yolov5/releases/download/v6.2/yolov5x.pt) | 640 | 50.7 | - | - | 86.7 | 205.7 | -| [YOLOv7-X](url) | 640 | 52.9 | - | - | 71.3 | 189.9 | -| **[YOLOv8x](url)** | 640 | **53.9** | - | - | 68.2 | 258.5 | -| | | | | | | | -| [YOLOv5x6](https://github.com/ultralytics/yolov5/releases/download/v6.2/yolov5x6.pt) | 1280 | 55.0 | - | - | 140.7 | 839.2 | -| [YOLOv7-E6E](url) | 1280 | 56.8 | - | - | 151.7 | 843.2 | -| **[YOLOv8x6](https://github.com/ultralytics/yolov5/releases/download/v6.2/yolov5x6.pt)**
+TTA | 1280 | -
- | -
- | -
- | 97.4 | 1047.2
- | - -If you're looking to modify YOLO for R&D or to build on top of it, refer to [Using Trainer](<>) Guide on our docs. +[Models](https://github.com/ultralytics/ultralytics/tree/main/ultralytics/yolo/v8/models) download automatically from the latest +Ultralytics [release](https://github.com/ultralytics/ultralytics/releases). + +
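The `yolo` CLI shown above takes space-separated `key=value` overrides. A minimal sketch of that argument style, using a hypothetical `parse_overrides` helper for illustration (the real CLI is built on Hydra and does much more):

```python
def parse_overrides(argv):
    """Parse 'key=value' tokens such as ['task=detect', 'mode=train']."""
    cfg = {}
    for tok in argv:
        if "=" not in tok:
            raise ValueError(f"expected key=value, got {tok!r}")
        key, value = tok.split("=", 1)  # split only on the first '='
        cfg[key] = value
    return cfg


print(parse_overrides(["task=detect", "mode=train", "model=yolov8n.pt"]))
# → {'task': 'detect', 'mode': 'train', 'model': 'yolov8n.pt'}
```

Later `key=value` tokens simply overwrite earlier ones, which is why overrides can be appended freely at the end of a command.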
+ +##
Checkpoints
+ +All YOLOv8 pretrained models are available here. Detection and Segmentation models are pretrained on the COCO dataset, while Classification models are pretrained on the ImageNet dataset. + +[Models](https://github.com/ultralytics/ultralytics/tree/main/ultralytics/yolo/v8/models) download automatically from the latest +Ultralytics [release](https://github.com/ultralytics/ultralytics/releases) on first use. + +
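The download-on-first-use behavior can be pictured as a simple check-then-fetch helper. This is an illustrative stdlib sketch assuming the v8.0.0 release URL pattern used in the tables below; it is not the actual Ultralytics downloader:

```python
from pathlib import Path
from urllib.request import urlretrieve

# assumed release URL pattern, for illustration only
RELEASE = "https://github.com/ultralytics/ultralytics/releases/download/v8.0.0"


def attempt_download(name: str) -> Path:
    """Return a local path to `name`, fetching it from the release only if missing."""
    path = Path(name)
    if not path.exists():  # cached weights are reused on later calls
        urlretrieve(f"{RELEASE}/{name}", path)
    return path
```

Called with e.g. `yolov8n.pt`, a helper like this would touch the network once and then keep reusing the cached file; existing files are returned untouched.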
Detection + +| Model | size
(pixels) | mAPval
50-95 | Speed
CPU
(ms) | Speed
T4 GPU
(ms) | params
(M) | FLOPs
(B) | +| ----------------------------------------------------------------------------------------- | --------------------- | -------------------- | ------------------------- | ---------------------------- | ------------------ | ----------------- | +| [YOLOv8n](https://github.com/ultralytics/ultralytics/releases/download/v8.0.0/yolov8n.pt) | 640 | 37.3 | - | - | 3.2 | 8.7 | +| [YOLOv8s](https://github.com/ultralytics/ultralytics/releases/download/v8.0.0/yolov8s.pt) | 640 | 44.9 | - | - | 11.2 | 28.6 | +| [YOLOv8m](https://github.com/ultralytics/ultralytics/releases/download/v8.0.0/yolov8m.pt) | 640 | 50.2 | - | - | 25.9 | 78.9 | +| [YOLOv8l](https://github.com/ultralytics/ultralytics/releases/download/v8.0.0/yolov8l.pt) | 640 | 52.9 | - | - | 43.7 | 165.2 | +| [YOLOv8x](https://github.com/ultralytics/ultralytics/releases/download/v8.0.0/yolov8x.pt) | 640 | 53.9 | - | - | 68.2 | 257.8 | + +- **mAPval** values are for single-model single-scale on [COCO val2017](http://cocodataset.org) dataset. +
Reproduce by `yolo mode=val task=detect data=coco.yaml device=0` +- **Speed** averaged over COCO val images using an [Amazon EC2 P4d](https://aws.amazon.com/ec2/instance-types/p4/) instance. +
Reproduce by `yolo mode=val task=detect data=coco128.yaml batch=1 device=0/cpu` + +
+ +
Segmentation + +| Model | size
(pixels) | mAPbox
50-95 | mAPmask
50-95 | Speed
CPU
(ms) | Speed
T4 GPU
(ms) | params
(M) | FLOPs
(B) | +| --------------------------------------------------------------------------------------------- | --------------------- | -------------------- | --------------------- | ------------------------- | ---------------------------- | ------------------ | ----------------- | +| [YOLOv8n](https://github.com/ultralytics/ultralytics/releases/download/v8.0.0/yolov8n-seg.pt) | 640 | 36.7 | 30.5 | - | - | 3.4 | 12.6 | +| [YOLOv8s](https://github.com/ultralytics/ultralytics/releases/download/v8.0.0/yolov8s-seg.pt) | 640 | 44.6 | 36.8 | - | - | 11.8 | 42.6 | +| [YOLOv8m](https://github.com/ultralytics/ultralytics/releases/download/v8.0.0/yolov8m-seg.pt) | 640 | 49.9 | 40.8 | - | - | 27.3 | 110.2 | +| [YOLOv8l](https://github.com/ultralytics/ultralytics/releases/download/v8.0.0/yolov8l-seg.pt) | 640 | 52.3 | 42.6 | - | - | 46.0 | 220.5 | +| [YOLOv8x](https://github.com/ultralytics/ultralytics/releases/download/v8.0.0/yolov8x-seg.pt) | 640 | 53.4 | 43.4 | - | - | 71.8 | 344.1 | + +- **mAPval** values are for single-model single-scale on [COCO val2017](http://cocodataset.org) dataset. +
Reproduce by `yolo mode=val task=segment data=coco.yaml device=0` +- **Speed** averaged over COCO val images using an [Amazon EC2 P4d](https://aws.amazon.com/ec2/instance-types/p4/) instance. +
Reproduce by `yolo mode=val task=segment data=coco128.yaml batch=1 device=0/cpu` + +
+ +
Classification + +| Model | size
(pixels) | acc
top1 | acc
top5 | Speed
CPU
(ms) | Speed
T4 GPU
(ms) | params
(M) | FLOPs
(B) at 640 | +| --------------------------------------------------------------------------------------------- | --------------------- | ---------------- | ---------------- | ------------------------- | ---------------------------- | ------------------ | ------------------------ | +| [YOLOv8n](https://github.com/ultralytics/ultralytics/releases/download/v8.0.0/yolov8n-cls.pt) | 224 | 66.6 | 87.0 | - | - | 2.7 | 4.3 | +| [YOLOv8s](https://github.com/ultralytics/ultralytics/releases/download/v8.0.0/yolov8s-cls.pt) | 224 | 72.3 | 91.1 | - | - | 6.4 | 13.5 | +| [YOLOv8m](https://github.com/ultralytics/ultralytics/releases/download/v8.0.0/yolov8m-cls.pt) | 224 | 76.4 | 93.2 | - | - | 17.0 | 42.7 | +| [YOLOv8l](https://github.com/ultralytics/ultralytics/releases/download/v8.0.0/yolov8l-cls.pt) | 224 | 78.0 | 94.1 | - | - | 37.5 | 99.7 | +| [YOLOv8x](https://github.com/ultralytics/ultralytics/releases/download/v8.0.0/yolov8x-cls.pt) | 224 | 78.4 | 94.3 | - | - | 57.4 | 154.8 | + +- **acc** values are model accuracies on the [ImageNet](https://www.image-net.org/) dataset validation set. +
Reproduce by `yolo mode=val task=classify data=imagenet device=0` +- **Speed** averaged over ImageNet val images using an [Amazon EC2 P4d](https://aws.amazon.com/ec2/instance-types/p4/) instance. +
Reproduce by `yolo mode=val task=classify data=imagenet batch=1 device=0/cpu` + +
+ +##
Integrations
+ +
+ + +
+
+ +
+ + + + + + + + + + + +
+ +| Roboflow | ClearML ⭐ NEW | Comet ⭐ NEW | Neural Magic ⭐ NEW | +| :--------------------------------------------------------------------------------------------------------------------------: | :---------------------------------------------------------------------------------------------------------------------------------: | :--------------------------------------------------------------------------------------------------------------------------------------------------------: | :----------------------------------------------------------------------------------------------------: | +| Label and export your custom datasets directly to YOLOv8 for training with [Roboflow](https://roboflow.com/?ref=ultralytics) | Automatically track, visualize and even remotely train YOLOv8 using [ClearML](https://cutt.ly/yolov5-readme-clearml) (open-source!) | Free forever, [Comet](https://bit.ly/yolov5-readme-comet2) lets you save YOLOv8 models, resume training, and interactively visualise and debug predictions | Run YOLOv8 inference up to 6x faster with [Neural Magic DeepSparse](https://bit.ly/yolov5-neuralmagic) | + +##
Ultralytics HUB
+ +[Ultralytics HUB](https://bit.ly/ultralytics_hub) is our ⭐ **NEW** no-code solution to visualize datasets, train YOLOv8 🚀 models, and deploy to the real world in a seamless experience. Get started for **Free** now! Also run YOLOv8 models on your iOS or Android device by downloading the [Ultralytics App](https://ultralytics.com/app_install)! + + + + +##
Contribute
+ +We love your input! YOLOv5 and YOLOv8 would not be possible without help from our community. Please see our [Contributing Guide](CONTRIBUTING.md) to get started, and fill out our [Survey](https://ultralytics.com/survey?utm_source=github&utm_medium=social&utm_campaign=Survey) to send us feedback on your experience. Thank you 🙏 to all our contributors! + + + + + +##
License
+ +YOLOv8 is available under two different licenses: + +- **GPL-3.0 License**: See [LICENSE](https://github.com/ultralytics/ultralytics/blob/main/LICENSE) file for details. +- **Enterprise License**: Provides greater flexibility for commercial product development without the open-source requirements of GPL-3.0. Typical use cases are embedding Ultralytics software and AI models in commercial products and applications. Request an Enterprise License at [Ultralytics Licensing](https://ultralytics.com/license). + +##
Contact
+ +For YOLOv8 bugs and feature requests please visit [GitHub Issues](https://github.com/ultralytics/ultralytics/issues). For professional support please [Contact Us](https://ultralytics.com/contact). + +
+
+ + + + + + + + + + + + + + + + + + + + +
diff --git a/docs/config.md b/docs/config.md index e8ed27e..57078eb 100644 --- a/docs/config.md +++ b/docs/config.md @@ -101,7 +101,7 @@ given task. | Key | Value | Description | |----------------|----------------------|-------------------------------------------------| | source | `ultralytics/assets` | Input source. Accepts image, folder, video, url | -| view_img | `False` | View the prediction images | +| show | `False` | View the prediction images | | save_txt | `False` | Save the results in a txt file | | save_conf | `False` | Save the confidence scores | | save_crop | `False` | | @@ -136,7 +136,7 @@ validation dataset and to detect and prevent overfitting. | dnn | `False` | Use OpenCV DNN for ONNX inference | | plots | `False` | | -### Export settings +### Export Export settings for YOLO models refer to the various configurations and options used to save or export the model for use in other environments or platforms. These settings can affect the model's performance, size, diff --git a/docs/hub.md b/docs/hub.md new file mode 100644 index 0000000..d29814b --- /dev/null +++ b/docs/hub.md @@ -0,0 +1,80 @@ +# Ultralytics HUB + +
+ + +
+ + CI CPU +
+ + + +[Ultralytics HUB](https://hub.ultralytics.com) is a new no-code online tool developed +by [Ultralytics](https://ultralytics.com), the creators of the popular [YOLOv5](https://github.com/ultralytics/yolov5) +object detection and image segmentation models. With Ultralytics HUB, users can easily train and deploy YOLOv5 models +without any coding or technical expertise. + +Ultralytics HUB is designed to be user-friendly and intuitive, with a drag-and-drop interface that allows users to +easily upload their data and select their model configurations. It also offers a range of pre-trained models and +templates to choose from, making it easy for users to get started with training their own models. Once a model is +trained, it can be easily deployed and used for real-time object detection and image segmentation tasks. Overall, +Ultralytics HUB is an essential tool for anyone looking to use YOLOv5 for their object detection and image segmentation +projects. + +**[Get started now](https://hub.ultralytics.com)** and experience the power and simplicity of Ultralytics HUB for yourself. Sign up for a free account and +start building, training, and deploying YOLOv5 and YOLOv8 models today. + + +## 1. Upload a Dataset + +Ultralytics HUB datasets are just like YOLOv5 🚀 datasets, they use the same structure and the same label formats to keep everything simple. + +When you upload a dataset to Ultralytics HUB, make sure to **place your dataset YAML inside the dataset root directory** as in the example shown below, and then zip for upload to https://hub.ultralytics.com/. Your **dataset YAML, directory and zip** should all share the same name. 
For example, if your dataset is called 'coco6' as in our example [ultralytics/hub/coco6.zip](https://github.com/ultralytics/hub/blob/master/coco6.zip), then you should have a coco6.yaml inside your coco6/ directory, which should zip to create coco6.zip for upload: + +```bash +zip -r coco6.zip coco6 +``` + +The example [coco6.zip](https://github.com/ultralytics/hub/blob/master/coco6.zip) dataset in this repository can be downloaded and unzipped to see exactly how to structure your custom dataset. + +
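The naming convention above (dataset YAML, directory and zip sharing one name) can be sanity-checked before upload. A hedged helper sketch — `check_hub_zip` is not part of Ultralytics HUB, just an illustration of the rule:

```python
import zipfile
from pathlib import Path


def check_hub_zip(zip_path):
    """Verify that <name>.zip contains <name>/<name>.yaml at its root."""
    name = Path(zip_path).stem  # e.g. 'coco6' for coco6.zip
    with zipfile.ZipFile(zip_path) as zf:
        names = zf.namelist()
    assert f"{name}/{name}.yaml" in names, (
        f"expected {name}/{name}.yaml inside {zip_path}")
    return True
```

Running this against your zip before uploading catches the most common mistake: zipping the dataset contents directly instead of the named root directory.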

+ +The dataset YAML is the same standard YOLOv5 YAML format. See the [YOLOv5 Train Custom Data tutorial](https://github.com/ultralytics/yolov5/wiki/Train-Custom-Data) for full details. +```yaml +# Train/val/test sets as 1) dir: path/to/imgs, 2) file: path/to/imgs.txt, or 3) list: [path/to/imgs1, path/to/imgs2, ..] +path: # dataset root dir (leave empty for HUB) +train: images/train # train images (relative to 'path') 8 images +val: images/val # val images (relative to 'path') 8 images +test: # test images (optional) + +# Classes +names: + 0: person + 1: bicycle + 2: car + 3: motorcycle + ... +``` + +After zipping your dataset, sign in to [Ultralytics HUB](https://bit.ly/ultralytics_hub) and click the Datasets tab. Click 'Upload Dataset' to upload, scan and visualize your new dataset before training new YOLOv5 models on it! + +HUB Dataset Upload + + +## 2. Train a Model + +Connect to the Ultralytics HUB notebook and use your model API key to begin training! Open In Colab + + +## 3. Deploy to Real World + +Export your model to 13 different formats, including TensorFlow, ONNX, OpenVINO, CoreML, Paddle and many others. Run models directly on your mobile device by downloading the [Ultralytics App](https://ultralytics.com/app_install)! + + +Ultralytics mobile app + + +## ❓ Issues + +If you are a new [Ultralytics HUB](https://bit.ly/ultralytics_hub) user and have questions or comments, you are in the right place! Please raise a [New Issue](https://github.com/ultralytics/hub/issues/new/choose) and let us know what we can do to make your life better 😃! diff --git a/docs/sdk.md b/docs/sdk.md index f061192..fb93290 100644 --- a/docs/sdk.md +++ b/docs/sdk.md @@ -35,7 +35,7 @@ This is the simplest way of simply using yolo models in a python environment. It model = YOLO("model.pt") model.predict(source="0") # accepts all formats - img/folder/vid.*(mp4/format). 0 for webcam - model.predict(source="folder", view_img=True) # Display preds. 
Accepts all yolo predict arguments + model.predict(source="folder", show=True) # Display preds. Accepts all yolo predict arguments ``` diff --git a/mkdocs.yml b/mkdocs.yml index 859c83e..f71d4bd 100644 --- a/mkdocs.yml +++ b/mkdocs.yml @@ -1,5 +1,5 @@ -site_name: Ultralytics YOLO Docs -repo_url: https://github.com/ultralytics/yolov5 +site_name: Ultralytics Docs +repo_url: https://github.com/ultralytics/ultralytics repo_name: Ultralytics theme: @@ -82,6 +82,7 @@ nav: - Python Interface: sdk.md - Configuration: config.md - Customization Guide: engine.md + - Ultralytics HUB: hub.md - iOS and Android App: app.md - Reference: - Python Model interface: reference/model.md diff --git a/ultralytics/yolo/v8/classify/train.py b/ultralytics/yolo/v8/classify/train.py index 9b2eedc..3cd9438 100644 --- a/ultralytics/yolo/v8/classify/train.py +++ b/ultralytics/yolo/v8/classify/train.py @@ -7,6 +7,7 @@ from ultralytics.yolo import v8 from ultralytics.yolo.data import build_classification_dataloader from ultralytics.yolo.engine.trainer import BaseTrainer from ultralytics.yolo.utils import DEFAULT_CONFIG +from ultralytics.yolo.utils.torch_utils import strip_optimizer class ClassificationTrainer(BaseTrainer): @@ -117,7 +118,16 @@ class ClassificationTrainer(BaseTrainer): pass def final_eval(self): - pass + for f in self.last, self.best: + if f.exists(): + strip_optimizer(f) # strip optimizers + # TODO: validate best.pt after training completes + # if f is self.best: + # self.console.info(f'\nValidating {f}...') + # self.validator.args.save_json = True + # self.metrics = self.validator(model=f) + # self.metrics.pop('fitness', None) + # self.run_callbacks('on_fit_epoch_end') @hydra.main(version_base=None, config_path=str(DEFAULT_CONFIG.parent), config_name=DEFAULT_CONFIG.name)
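The `strip_optimizer` call added to `final_eval` above shrinks finished checkpoints by dropping training-only state. A rough stdlib-only sketch of the idea — the real `ultralytics.yolo.utils.torch_utils.strip_optimizer` operates on PyTorch checkpoints and does more, and the key names here are illustrative assumptions:

```python
import pickle
from pathlib import Path


def strip_optimizer_sketch(path):
    """Drop training-only keys from a pickled checkpoint dict, in place."""
    ckpt = pickle.loads(Path(path).read_bytes())
    for key in ("optimizer", "updates", "best_fitness"):  # assumed keys
        ckpt.pop(key, None)  # keep only what inference needs
    Path(path).write_bytes(pickle.dumps(ckpt))
```

Since the optimizer state is roughly the same size as the model weights, stripping it can nearly halve the size of `last.pt` and `best.pt` once training is complete.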