Release 8.0.5 PR (#279)

Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
Co-authored-by: Glenn Jocher <glenn.jocher@ultralytics.com>
Co-authored-by: Izam Mohammed <106471909+izam-mohammed@users.noreply.github.com>
Co-authored-by: Yue WANG 王跃 <92371174+yuewangg@users.noreply.github.com>
Co-authored-by: Thibaut Lucas <thibautlucas13@gmail.com>

@ -18,7 +18,11 @@
</div>
<br>
[Ultralytics YOLOv8](https://github.com/ultralytics/ultralytics), developed by [Ultralytics](https://ultralytics.com), is a cutting-edge, state-of-the-art (SOTA) model that builds upon the success of previous YOLO versions and introduces new features and improvements to further boost performance and flexibility. YOLOv8 is designed to be fast, accurate, and easy to use, making it an excellent choice for a wide range of object detection, image segmentation and image classification tasks.
To request an Enterprise License please complete the form at [Ultralytics Licensing](https://ultralytics.com/license).
@ -50,12 +54,14 @@ To request an Enterprise License please complete the form at [Ultralytics Licens
## <div align="center">Documentation</div>
See below for a quickstart installation and usage example, and see the [YOLOv8 Docs](https://docs.ultralytics.com) for full documentation on training, validation, prediction and deployment.
<details open>
<summary>Install</summary>
Pip install the ultralytics package including all [requirements.txt](https://github.com/ultralytics/ultralytics/blob/main/requirements.txt) in a [**Python>=3.7.0**](https://www.python.org/) environment, including [**PyTorch>=1.7**](https://pytorch.org/get-started/locally/).
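For reference, installation of the stable release is a single command:

```bash
pip install ultralytics
```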
@ -74,7 +80,9 @@ YOLOv8 may be used directly in the Command Line Interface (CLI) with a `yolo` co
yolo task=detect mode=predict model=yolov8n.pt source="https://ultralytics.com/images/bus.jpg"
```
`yolo` can be used for a variety of tasks and modes and accepts additional arguments, i.e. `imgsz=640`. See a full list of available `yolo` [arguments](https://docs.ultralytics.com/config/) in the YOLOv8 [Docs](https://docs.ultralytics.com).
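For instance, an `imgsz` override on the predict command above looks like this (a minimal sketch using the same CLI syntax):

```bash
yolo task=detect mode=predict model=yolov8n.pt source="https://ultralytics.com/images/bus.jpg" imgsz=640
```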
```bash
yolo task=detect mode=train model=yolov8n.pt args...
@ -83,7 +91,8 @@ yolo task=detect mode=train model=yolov8n.pt args...
export yolov8n.pt format=onnx args...
```
YOLOv8 may also be used directly in a Python environment, and accepts the same [arguments](https://docs.ultralytics.com/config/) as in the CLI example above:
```python
from ultralytics import YOLO
@ -96,7 +105,7 @@ model = YOLO("yolov8n.pt") # load a pretrained model (recommended for training)
results = model.train(data="coco128.yaml", epochs=3)  # train the model
results = model.val()  # evaluate model performance on the validation set
results = model("https://ultralytics.com/images/bus.jpg")  # predict on an image
success = model.export(format="onnx")  # export the model to ONNX format
```

[Models](https://github.com/ultralytics/ultralytics/tree/main/ultralytics/models) download automatically from the latest
@ -104,7 +113,9 @@ Ultralytics [release](https://github.com/ultralytics/assets/releases).
### Known Issues / TODOs
We are still working on several parts of YOLOv8! We aim to have these completed soon to bring the YOLOv8 feature set up to par with YOLOv5, including export and inference to all the same formats. We are also writing a YOLOv8 paper which we will submit to [arxiv.org](https://arxiv.org) once complete.
- [ ] TensorFlow exports
- [ ] DDP resume
@ -112,15 +123,18 @@ We are still working on several parts of YOLOv8! We aim to have these completed
</details>
## <div align="center">Models</div>
All YOLOv8 pretrained models are available here. Detection and Segmentation models are pretrained on the COCO dataset, while Classification models are pretrained on the ImageNet dataset.
[Models](https://github.com/ultralytics/ultralytics/tree/main/ultralytics/models) download automatically from the latest Ultralytics [release](https://github.com/ultralytics/assets/releases) on first use.
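As a minimal sketch of the auto-download behavior, constructing a model from a released weights name fetches the file on first use:

```python
from ultralytics import YOLO

# First use downloads yolov8n.pt from the latest Ultralytics assets release
model = YOLO("yolov8n.pt")
```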
<details open><summary>Detection</summary>
See [Detection Docs](https://docs.ultralytics.com/tasks/detection/) for usage examples with these models.
| Model | size<br><sup>(pixels) | mAP<sup>val<br>50-95 | Speed<br><sup>CPU ONNX<br>(ms) | Speed<br><sup>A100 TensorRT<br>(ms) | params<br><sup>(M) | FLOPs<br><sup>(B) |
| ------------------------------------------------------------------------------------ | --------------------- | -------------------- | ------------------------------ | ----------------------------------- | ------------------ | ----------------- |
| [YOLOv8n](https://github.com/ultralytics/assets/releases/download/v0.0.0/yolov8n.pt) | 640 | 37.3 | 80.4 | 0.99 | 3.2 | 8.7 |
@ -131,16 +145,19 @@ Ultralytics [release](https://github.com/ultralytics/assets/releases) on first u
- **mAP<sup>val</sup>** values are for single-model single-scale on [COCO val2017](http://cocodataset.org) dataset.
  <br>Reproduce by `yolo mode=val task=detect data=coco.yaml device=0`
- **Speed** averaged over COCO val images using an [Amazon EC2 P4d](https://aws.amazon.com/ec2/instance-types/p4/) instance.
  <br>Reproduce by `yolo mode=val task=detect data=coco128.yaml batch=1 device=0/cpu`

</details>

<details><summary>Segmentation</summary>
See [Segmentation Docs](https://docs.ultralytics.com/tasks/segmentation/) for usage examples with these models.
| Model | size<br><sup>(pixels) | mAP<sup>box<br>50-95 | mAP<sup>mask<br>50-95 | Speed<br><sup>CPU ONNX<br>(ms) | Speed<br><sup>A100 TensorRT<br>(ms) | params<br><sup>(M) | FLOPs<br><sup>(B) |
| ---------------------------------------------------------------------------------------- | --------------------- | -------------------- | --------------------- | ------------------------------ | ----------------------------------- | ------------------ | ----------------- |
| [YOLOv8n](https://github.com/ultralytics/assets/releases/download/v0.0.0/yolov8n-seg.pt) | 640 | 36.7 | 30.5 | 96.1 | 1.21 | 3.4 | 12.6 |
| [YOLOv8s](https://github.com/ultralytics/assets/releases/download/v0.0.0/yolov8s-seg.pt) | 640 | 44.6 | 36.8 | 155.7 | 1.47 | 11.8 | 42.6 |
| [YOLOv8m](https://github.com/ultralytics/assets/releases/download/v0.0.0/yolov8m-seg.pt) | 640 | 49.9 | 40.8 | 317.0 | 2.18 | 27.3 | 110.2 |
| [YOLOv8l](https://github.com/ultralytics/assets/releases/download/v0.0.0/yolov8l-seg.pt) | 640 | 52.3 | 42.6 | 572.4 | 2.79 | 46.0 | 220.5 |
@ -148,13 +165,16 @@ Ultralytics [release](https://github.com/ultralytics/assets/releases) on first u
- **mAP<sup>val</sup>** values are for single-model single-scale on [COCO val2017](http://cocodataset.org) dataset.
  <br>Reproduce by `yolo mode=val task=segment data=coco.yaml device=0`
- **Speed** averaged over COCO val images using an [Amazon EC2 P4d](https://aws.amazon.com/ec2/instance-types/p4/) instance.
  <br>Reproduce by `yolo mode=val task=segment data=coco128-seg.yaml batch=1 device=0/cpu`

</details>

<details><summary>Classification</summary>
See [Classification Docs](https://docs.ultralytics.com/tasks/classification/) for usage examples with these models.
| Model | size<br><sup>(pixels) | acc<br><sup>top1 | acc<br><sup>top5 | Speed<br><sup>CPU ONNX<br>(ms) | Speed<br><sup>A100 TensorRT<br>(ms) | params<br><sup>(M) | FLOPs<br><sup>(B) at 640 |
| ---------------------------------------------------------------------------------------- | --------------------- | ---------------- | ---------------- | ------------------------------ | ----------------------------------- | ------------------ | ------------------------ |
| [YOLOv8n](https://github.com/ultralytics/assets/releases/download/v0.0.0/yolov8n-cls.pt) | 224 | 66.6 | 87.0 | 12.9 | 0.31 | 2.7 | 4.3 |
@ -165,7 +185,8 @@ Ultralytics [release](https://github.com/ultralytics/assets/releases) on first u
- **acc** values are model accuracies on the [ImageNet](https://www.image-net.org/) dataset validation set.
  <br>Reproduce by `yolo mode=val task=classify data=path/to/ImageNet device=0`
- **Speed** averaged over ImageNet val images using an [Amazon EC2 P4d](https://aws.amazon.com/ec2/instance-types/p4/) instance.
  <br>Reproduce by `yolo mode=val task=classify data=path/to/ImageNet batch=1 device=0/cpu`

</details>
@ -194,18 +215,23 @@ Ultralytics [release](https://github.com/ultralytics/assets/releases) on first u
| Roboflow | ClearML ⭐ NEW | Comet ⭐ NEW | Neural Magic ⭐ NEW |
| :---: | :---: | :---: | :---: |
| Label and export your custom datasets directly to YOLOv8 for training with [Roboflow](https://roboflow.com/?ref=ultralytics) | Automatically track, visualize and even remotely train YOLOv8 using [ClearML](https://cutt.ly/yolov5-readme-clearml) (open-source!) | Free forever, [Comet](https://bit.ly/yolov5-readme-comet2) lets you save YOLOv8 models, resume training, and interactively visualize and debug predictions | Run YOLOv8 inference up to 6x faster with [Neural Magic DeepSparse](https://bit.ly/yolov5-neuralmagic) |
## <div align="center">Ultralytics HUB</div>

[Ultralytics HUB](https://bit.ly/ultralytics_hub) is our ⭐ **NEW** no-code solution to visualize datasets, train YOLOv8 🚀 models, and deploy to the real world in a seamless experience. Get started for **Free** now! Also run YOLOv8 models on your iOS or Android device by downloading the [Ultralytics App](https://ultralytics.com/app_install)!
<a align="center" href="https://bit.ly/ultralytics_hub" target="_blank">
<img width="100%" src="https://github.com/ultralytics/assets/raw/main/im/ultralytics-hub.png"></a>

## <div align="center">Contribute</div>

We love your input! YOLOv5 and YOLOv8 would not be possible without help from our community. Please see our [Contributing Guide](CONTRIBUTING.md) to get started, and fill out our [Survey](https://ultralytics.com/survey?utm_source=github&utm_medium=social&utm_campaign=Survey) to send us feedback on your experience. Thank you 🙏 to all our contributors!
<!-- SVG image from https://opencollective.com/ultralytics/contributors.svg?width=990 -->
@ -216,11 +242,14 @@ We love your input! YOLOv5 and YOLOv8 would not be possible without help from ou
YOLOv8 is available under two different licenses:

- **GPL-3.0 License**: See [LICENSE](https://github.com/ultralytics/ultralytics/blob/main/LICENSE) file for details.
- **Enterprise License**: Provides greater flexibility for commercial product development without the open-source requirements of GPL-3.0. Typical use cases are embedding Ultralytics software and AI models in commercial products and applications. Request an Enterprise License at [Ultralytics Licensing](https://ultralytics.com/license).
## <div align="center">Contact</div>

For YOLOv8 bugs and feature requests please visit [GitHub Issues](https://github.com/ultralytics/ultralytics/issues). For professional support please [Contact Us](https://ultralytics.com/contact).

<br>
<div align="center">

@ -92,7 +92,7 @@ model = YOLO("yolov8n.pt") # 加载预训练模型(推荐用于训练)
results = model.train(data="coco128.yaml", epochs=3)  # train the model
results = model.val()  # evaluate model performance on the validation set
results = model("https://ultralytics.com/images/bus.jpg")  # predict on an image
success = model.export(format="onnx")  # export the model to ONNX format
```
[Models](https://github.com/ultralytics/ultralytics/tree/main/ultralytics/models) download automatically from the Ultralytics [release page](https://github.com/ultralytics/ultralytics/releases).
@ -134,16 +134,16 @@ success = YOLO("yolov8n.pt").export(format="onnx") # 将模型导出为 ONNX
| Model | size<br><sup>(pixels) | mAP<sup>box<br>50-95 | mAP<sup>mask<br>50-95 | Speed<br><sup>CPU ONNX<br>(ms) | Speed<br><sup>A100 TensorRT<br>(ms) | params<br><sup>(M) | FLOPs<br><sup>(B) |
| ---------------------------------------------------------------------------------------- | --------------------- | -------------------- | --------------------- | ------------------------------ | ----------------------------------- | ------------------ | ----------------- |
| [YOLOv8n](https://github.com/ultralytics/assets/releases/download/v0.0.0/yolov8n-seg.pt) | 640 | 36.7 | 30.5 | 96.1 | 1.21 | 3.4 | 12.6 |
| [YOLOv8s](https://github.com/ultralytics/assets/releases/download/v0.0.0/yolov8s-seg.pt) | 640 | 44.6 | 36.8 | 155.7 | 1.47 | 11.8 | 42.6 |
| [YOLOv8m](https://github.com/ultralytics/assets/releases/download/v0.0.0/yolov8m-seg.pt) | 640 | 49.9 | 40.8 | 317.0 | 2.18 | 27.3 | 110.2 |
| [YOLOv8l](https://github.com/ultralytics/assets/releases/download/v0.0.0/yolov8l-seg.pt) | 640 | 52.3 | 42.6 | 572.4 | 2.79 | 46.0 | 220.5 |
| [YOLOv8x](https://github.com/ultralytics/assets/releases/download/v0.0.0/yolov8x-seg.pt) | 640 | 53.4 | 43.4 | 712.1 | 4.02 | 71.8 | 344.1 |
- **mAP<sup>val</sup>** values are for single-model single-scale on the [COCO val2017](http://cocodataset.org) dataset.
  <br>Reproduce by `yolo mode=val task=segment data=coco.yaml device=0`
- **Speed** averaged over COCO val images using an [Amazon EC2 P4d](https://aws.amazon.com/ec2/instance-types/p4/) instance.
  <br>Reproduce by `yolo mode=val task=segment data=coco128-seg.yaml batch=1 device=0/cpu`

</details>
@ -158,9 +158,9 @@ success = YOLO("yolov8n.pt").export(format="onnx") # 将模型导出为 ONNX
| [YOLOv8x](https://github.com/ultralytics/assets/releases/download/v0.0.0/yolov8x-cls.pt) | 224 | 78.4 | 94.3 | 232.0 | 1.01 | 57.4 | 154.8 |
- **acc** values are model accuracies on the [ImageNet](https://www.image-net.org/) dataset validation set, single-model single-scale.
  <br>Reproduce by `yolo mode=val task=classify data=path/to/ImageNet device=0`
- **Speed** averaged over ImageNet val images using an [Amazon EC2 P4d](https://aws.amazon.com/ec2/instance-types/p4/) instance.
  <br>Reproduce by `yolo mode=val task=classify data=path/to/ImageNet batch=1 device=0/cpu`

</details>

@ -0,0 +1 @@
docs.ultralytics.com

@ -7,8 +7,9 @@
</div>

Welcome to the Ultralytics HUB app for demonstrating YOLOv5 and YOLOv8 models! In this app, available on the [Apple App Store](https://apps.apple.com/xk/app/ultralytics/id1583935240) and the [Google Play Store](https://play.google.com/store/apps/details?id=com.ultralytics.ultralytics_app), you will be able to see the power and capabilities of YOLOv5, a state-of-the-art object detection model developed by Ultralytics.
**To install simply scan the QR code above**. The App currently features YOLOv5 models, with YOLOv8 models coming soon.

@ -1,7 +1,8 @@
If you want to train, validate or run inference on models and don't need to make any modifications to the code, using the YOLO command line interface is the easiest way to get started.
!!! tip "Syntax"

```bash
yolo task=detect mode=train model=yolov8n.yaml epochs=1 ...
     ...         ...        ...
@ -9,60 +10,76 @@ If you want to train, validate or run inference on models and don't need to make
     classify    val        yolov8n-cls.pt
```
The experiment arguments can be overridden directly by passing `arg=val`, as covered in the next section. You can run any supported task by setting `task` and `mode` in the CLI.
=== "Training"

|                  | `task`     | snippet                                                |
|------------------|------------|--------------------------------------------------------|
| Detection        | `detect`   | <pre><code>yolo task=detect mode=train </code></pre>   |
| Instance Segment | `segment`  | <pre><code>yolo task=segment mode=train </code></pre>  |
| Classification   | `classify` | <pre><code>yolo task=classify mode=train </code></pre> |
=== "Prediction"

|                  | `task`     | snippet                                                  |
|------------------|------------|----------------------------------------------------------|
| Detection        | `detect`   | <pre><code>yolo task=detect mode=predict </code></pre>   |
| Instance Segment | `segment`  | <pre><code>yolo task=segment mode=predict </code></pre>  |
| Classification   | `classify` | <pre><code>yolo task=classify mode=predict </code></pre> |
=== "Validation"

|                  | `task`     | snippet                                              |
|------------------|------------|------------------------------------------------------|
| Detection        | `detect`   | <pre><code>yolo task=detect mode=val </code></pre>   |
| Instance Segment | `segment`  | <pre><code>yolo task=segment mode=val </code></pre>  |
| Classification   | `classify` | <pre><code>yolo task=classify mode=val </code></pre> |
!!! note ""

<b>Note:</b> The arguments don't require a `'--'` prefix. These are reserved for special commands covered later.

---
## Overriding default config arguments

All global default arguments can be overridden by simply passing them as arguments in the CLI.

!!! tip ""
=== "Syntax"

```bash
yolo task= ... mode= ... {++ arg=val ++}
```
=== "Example"

Perform detection training for `10 epochs` with a `learning_rate` of `0.01`:

```bash
yolo task=detect mode=train {++ epochs=10 lr0=0.01 ++}
```
---

## Overriding default config file

You can override the config file entirely by passing a new file. You can create a copy of the default config file in your current working directory as follows:
```bash
yolo task=init
```

You can then use the `cfg=name.yaml` command to pass the new config file:

```bash
yolo cfg=default.yaml
```
??? example

=== "Command"

```bash
yolo task=init
yolo cfg=default.yaml
```

@ -1,16 +1,22 @@
Both the Ultralytics YOLO command-line and Python interfaces are simply a high-level abstraction on the base engine executors. Let's take a look at the Trainer engine.
## BaseTrainer

BaseTrainer contains the generic boilerplate training routine. It can be customized for any task by overriding the required functions or operations, as long as the correct formats are followed. For example, you can support your own custom model and dataloader by just overriding these functions:

* `get_model(cfg, weights)` - The function that builds the model to be trained
* `get_dataloader()` - The function that builds the dataloader

More details and source code can be found in [`BaseTrainer` Reference](reference/base_trainer.md)
## DetectionTrainer

Here's how you can use the YOLOv8 `DetectionTrainer` and customize it.

```python
from ultralytics.yolo.v8.detect import DetectionTrainer

trainer = DetectionTrainer(overrides={...})
trainer.train()
@ -18,25 +24,32 @@ trained_model = trainer.best # get best model
```

### Customizing the DetectionTrainer

Let's customize the trainer **to train a custom detection model** that is not supported directly. You can do this by simply overloading the existing `get_model` functionality:
```python
from ultralytics.yolo.v8.detect import DetectionTrainer


class CustomTrainer(DetectionTrainer):
    def get_model(self, cfg, weights):
        ...


trainer = CustomTrainer(overrides={...})
trainer.train()
```
You now realize that you need to customize the trainer further to:

* Customize the `loss function`.
* Add a `callback` that uploads the model to your Google Drive after every 10 `epochs`

Here's how you can do it:
```python
from ultralytics.yolo.v8.detect import DetectionTrainer


class CustomTrainer(DetectionTrainer):
    def get_model(self, cfg, weights):
@ -49,11 +62,13 @@ class CustomTrainer(DetectionTrainer):
        ...
        return loss, loss_items  # see Reference -> Trainer for details on the expected format


# callback to upload model weights
def log_model(trainer):
    last_weight_path = trainer.last
    ...


trainer = CustomTrainer(overrides={...})
trainer.add_callback("on_train_epoch_end", log_model)  # Adds to existing callback
trainer.train()
@ -62,5 +77,7 @@ trainer.train()
To know more about callback triggering events and entry points, check out our Callbacks guide # TODO

## Other engine components

There are other components that can be customized similarly, like `Validators` and `Predictors`. See the Reference section for more information on these.
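As a rough sketch, customizing them follows the same pattern as the `DetectionTrainer` examples above; the import path mirrors the detection trainer and the overridden bodies are placeholders:

```python
from ultralytics.yolo.v8.detect import DetectionPredictor, DetectionValidator


class CustomValidator(DetectionValidator):
    ...  # override metric or dataloader logic as needed


class CustomPredictor(DetectionPredictor):
    ...  # override pre/post-processing as needed
```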

@ -22,25 +22,33 @@ trained, it can be easily deployed and used for real-time object detection and i
Ultralytics HUB is an essential tool for anyone looking to use YOLOv5 for their object detection and image segmentation projects.

**[Get started now](https://hub.ultralytics.com)** and experience the power and simplicity of Ultralytics HUB for yourself. Sign up for a free account and start building, training, and deploying YOLOv5 and YOLOv8 models today.
## 1. Upload a Dataset

Ultralytics HUB datasets are just like YOLOv5 🚀 datasets: they use the same structure and the same label formats to keep everything simple.
When you upload a dataset to Ultralytics HUB, make sure to **place your dataset YAML inside the dataset root directory** as in the example shown below, and then zip for upload to https://hub.ultralytics.com/. Your **dataset YAML, directory and zip** should all share the same name. For example, if your dataset is called 'coco6' as in our example [ultralytics/hub/coco6.zip](https://github.com/ultralytics/hub/blob/master/coco6.zip), then you should have a coco6.yaml inside your coco6/ directory, which should zip to create coco6.zip for upload:
```bash
zip -r coco6.zip coco6
```
The example [coco6.zip](https://github.com/ultralytics/hub/blob/master/coco6.zip) dataset in this repository can be downloaded and unzipped to see exactly how to structure your custom dataset.

<p align="center"><img width="80%" src="https://user-images.githubusercontent.com/26833433/201424843-20fa081b-ad4b-4d6c-a095-e810775908d8.png" title="COCO6" /></p>
The dataset YAML is the same standard YOLOv5 YAML format. See the [YOLOv5 Train Custom Data tutorial](https://github.com/ultralytics/yolov5/wiki/Train-Custom-Data) for full details.

```yaml
# Train/val/test sets as 1) dir: path/to/imgs, 2) file: path/to/imgs.txt, or 3) list: [path/to/imgs1, path/to/imgs2, ..]
path:  # dataset root dir (leave empty for HUB)
@ -57,24 +65,26 @@ names:
  ...
```
After zipping your dataset, sign in to [Ultralytics HUB](https://bit.ly/ultralytics_hub) and click the Datasets tab. Click 'Upload Dataset' to upload, scan and visualize your new dataset before training new YOLOv5 models on it!

<img width="100%" alt="HUB Dataset Upload" src="https://user-images.githubusercontent.com/26833433/198611715-540c9856-49d7-4069-a2fd-7c9eb70e772e.png">

## 2. Train a Model

Connect to the Ultralytics HUB notebook and use your model API key to begin training! <a href="https://colab.research.google.com/github/ultralytics/hub/blob/master/hub.ipynb" target="_blank"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"></a>
## 3. Deploy to Real World

Export your model to 13 different formats, including TensorFlow, ONNX, OpenVINO, CoreML, Paddle and many others. Run models directly on your mobile device by downloading the [Ultralytics App](https://ultralytics.com/app_install)!
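For example, exporting to one of these formats from the CLI (a sketch using the same `yolo` syntax as the main README) might look like:

```bash
yolo mode=export model=yolov8n.pt format=onnx
```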
<a align="center" href="https://ultralytics.com/app_install" target="_blank">
<img width="100%" alt="Ultralytics mobile app" src="https://github.com/ultralytics/assets/raw/main/im/ultralytics-app.png"></a>

## ❓ Issues

If you are a new [Ultralytics HUB](https://bit.ly/ultralytics_hub) user and have questions or comments, you are in the right place! Please raise a [New Issue](https://github.com/ultralytics/hub/issues/new/choose) and let us know what we can do to make your life better 😃!

@ -15,17 +15,20 @@
# Welcome to Ultralytics YOLOv8

Welcome to the Ultralytics YOLOv8 documentation landing page! [Ultralytics YOLOv8](https://github.com/ultralytics/ultralytics) is the latest version of the YOLO (You Only Look Once) object detection and image segmentation model developed by [Ultralytics](https://ultralytics.com). This page serves as the starting point for exploring the various resources available to help you get started with YOLOv8 and understand its features and capabilities.
The YOLOv8 model is designed to be fast, accurate, and easy to use, making it an excellent choice for a wide range of object detection and image segmentation tasks. It can be trained on large datasets and is capable of running on a variety of hardware platforms, from CPUs to GPUs.
Whether you are a seasoned machine learning practitioner or new to the field, we hope that the resources on this page will help you get the most out of YOLOv8. For any bugs and feature requests please visit [GitHub Issues](https://github.com/ultralytics/ultralytics/issues). For professional support please [Contact Us](https://ultralytics.com/contact).
## A Brief History of YOLO
@ -40,8 +43,8 @@ backbone network, adding a feature pyramid, and making use of focal loss.
In 2020, YOLOv4 was released which introduced a number of innovations such as the use of Mosaic data augmentation, a new anchor-free detection head, and a new loss function.
In 2021, Ultralytics released [YOLOv5](https://github.com/ultralytics/yolov5), which further improved the model's performance and added new features such as support for panoptic segmentation and object tracking.
YOLO has been widely used in a variety of applications, including autonomous vehicles, security and surveillance, and medical imaging. It has also been used to win several competitions, such as the COCO Object Detection Challenge and the
@ -55,9 +58,10 @@ For more information about the history and development of YOLO, you can refer to
## Ultralytics YOLOv8

[Ultralytics YOLOv8](https://github.com/ultralytics/ultralytics) is the latest version of the YOLO object detection and image segmentation model developed by Ultralytics. YOLOv8 is a cutting-edge, state-of-the-art (SOTA) model that builds upon the success of previous YOLO versions and introduces new features and improvements to further boost performance and flexibility.
One key feature of YOLOv8 is its extensibility. It is designed as a framework that supports all previous versions of YOLO, making it easy to switch between different versions and compare their performance. This makes YOLOv8 an ideal

@ -0,0 +1,140 @@
This is the simplest way of using YOLOv8 models in a Python environment. The `YOLO` class can be imported from the `ultralytics` module.
!!! example "Train"
=== "From pretrained (recommended)"
```python
from ultralytics import YOLO
model = YOLO("yolov8n.pt") # pass any model type
model.train(epochs=5)
```
=== "From scratch"
```python
from ultralytics import YOLO
model = YOLO("yolov8n.yaml")
model.train(data="coco128.yaml", epochs=5)
```
=== "Resume"
```python
# TODO: Resume feature is under development and should be released soon.
```
!!! example "Val"
=== "Val after training"
```python
from ultralytics import YOLO
model = YOLO("yolov8n.yaml")
model.train(data="coco128.yaml", epochs=5)
model.val()  # It'll automatically evaluate the data you trained on.
```
=== "Val independently"
```python
from ultralytics import YOLO
model = YOLO("model.pt")
# It'll use the data YAML file in model.pt if you don't set data.
model.val()
# or you can set the data you want to validate on
model.val(data="coco128.yaml")
```
!!! example "Predict"
=== "From source"
```python
from ultralytics import YOLO
model = YOLO("model.pt")
model.predict(source="0")  # accepts all formats - image/folder/video files (e.g. mp4); 0 for webcam
model.predict(source="folder", show=True) # Display preds. Accepts all yolo predict arguments
```
=== "From image/ndarray/tensor"
```python
# TODO, still working on it.
```
=== "Return outputs"
```python
from ultralytics import YOLO
model = YOLO("model.pt")
outputs = model.predict(source="0", return_outputs=True) # treat predict as a Python generator
for output in outputs:
# each output here is a dict.
# for detection
print(output["det"]) # np.ndarray, (N, 6), xyxy, score, cls
# for segmentation
print(output["det"]) # np.ndarray, (N, 6), xyxy, score, cls
print(output["segment"]) # List[np.ndarray] * N, bounding coordinates of masks
# for classify
print(output["prob"]) # np.ndarray, (num_class, ), cls prob
```
!!! note "Export and Deployment"
=== "Export, Fuse & info"
```python
from ultralytics import YOLO
model = YOLO("model.pt")
model.fuse()
model.info(verbose=True) # Print model information
model.export(format="onnx")  # e.g. ONNX; TODO: document all export formats
```
=== "Deployment"
More functionality coming soon
To know more about using `YOLO` models, refer to the Model class Reference
[Model reference](reference/model.md){ .md-button .md-button--primary}
---
### Using Trainers
The `YOLO` model class is a high-level wrapper around the Trainer classes. Each YOLO task has its own trainer that inherits from `BaseTrainer`.
!!! tip "Detection Trainer Example"
```python
from ultralytics.yolo.v8.detect import DetectionTrainer, DetectionValidator, DetectionPredictor
# trainer
trainer = DetectionTrainer(overrides={})
trainer.train()
trained_model = trainer.best
# Validator
val = DetectionValidator(args=...)
val(model=trained_model)
# predictor
pred = DetectionPredictor(overrides={})
pred(source=SOURCE, model=trained_model)
# resume from last weight
overrides["resume"] = trainer.last
trainer = DetectionTrainer(overrides=overrides)
```
You can easily customize Trainers to support custom tasks or explore R&D ideas.
Learn more about Customizing `Trainers`, `Validators` and `Predictors` to suit your project needs in the Customization
Section.
[Customization tutorials](engine.md){ .md-button .md-button--primary}

@ -1,24 +1,31 @@
## Install

Install YOLOv8 via the `ultralytics` pip package for the latest stable release or by cloning the [https://github.com/ultralytics/ultralytics](https://github.com/ultralytics/ultralytics) repository for the most up-to-date version.
!!! example "Pip install method (recommended)"

```bash
pip install ultralytics
```
!!! example "Git clone method (for development)"

```bash
git clone https://github.com/ultralytics/ultralytics
cd ultralytics
pip install -e '.[dev]'
```
See the contributing section to learn more about contributing to the project.
## Use with CLI

The YOLO command line interface (CLI) lets you simply train, validate or infer models on various tasks and versions. CLI requires no customization or code. You can simply run all tasks from the terminal with the `yolo` command.
!!! example
=== "Syntax"

```bash
yolo task=detect mode=train model=yolov8n.yaml args...
@ -35,22 +42,32 @@ CLI requires no customization or code. You can simply run all tasks from the ter
```bash
yolo task=detect mode=train model=yolov8n.pt data=coco128.yaml device=\'0,1,2,3\'
```

[CLI Guide](cli.md){ .md-button .md-button--primary}
## Use with Python

Python usage allows users to easily use YOLOv8 inside their Python projects. It provides functions for loading and running the model, as well as for processing the model's output. The interface is designed to be easy to use, so that users can quickly implement object detection in their projects.

Overall, the Python interface is a useful tool for anyone looking to incorporate object detection, segmentation or classification into their Python projects using YOLOv8.
!!! example

```python
from ultralytics import YOLO

# Load a model
model = YOLO("yolov8n.yaml")  # build a new model from scratch
model = YOLO("yolov8n.pt")  # load a pretrained model (recommended for training)

# Use the model
results = model.train(data="coco128.yaml", epochs=3)  # train the model
results = model.val()  # evaluate model performance on the validation set
results = model("https://ultralytics.com/images/bus.jpg")  # predict on an image
success = model.export(format="onnx")  # export the model to ONNX format
```

[Python Guide](python.md){.md-button .md-button--primary}

@ -1,5 +1,8 @@
All task Predictors are inherited from the `BasePredictor` class that contains the model prediction routine boilerplate. You can override any function of these Predictors to suit your needs.
---

### BasePredictor API Reference

:::ultralytics.yolo.engine.predictor.BasePredictor

@ -1,5 +1,8 @@
All task Trainers are inherited from the `BaseTrainer` class that contains the model training and optimization routine boilerplate. You can override any function of these Trainers to suit your needs.
--- ---
### BaseTrainer API Reference ### BaseTrainer API Reference
:::ultralytics.yolo.engine.trainer.BaseTrainer :::ultralytics.yolo.engine.trainer.BaseTrainer
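As a minimal sketch of such an override, assuming the dev API at this release: the `overrides` dict and `trainer.best` attribute follow the trainer examples elsewhere in this changeset, and the `get_model(weights=..., cfg=...)` signature follows its use in the `YOLO` class.
```python
# Minimal sketch; import path and argument names are assumptions drawn from
# the examples in this changeset, not a definitive implementation.
from ultralytics.yolo.v8 import DetectionTrainer


class CustomTrainer(DetectionTrainer):

    def get_model(self, weights=None, cfg=None):
        # build or re-weight the model here before training starts
        return super().get_model(weights=weights, cfg=cfg)


trainer = CustomTrainer(overrides={"data": "coco128.yaml", "epochs": 1})
trainer.train()
print(trainer.best)  # path to the best checkpoint
```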
@ -1,5 +1,8 @@
All task Validators inherit from the `BaseValidator` class, which contains the model validation routine boilerplate.
You can override any function of these Validators to suit your needs.
---
### BaseValidator API Reference
:::ultralytics.yolo.engine.validator.BaseValidator
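A minimal sketch of a validator override, assuming the dev API at this release; the constructor and call signatures mirror `DetectionValidator`'s use in this changeset, and `args=...` is the same placeholder used in the engine examples.
```python
# Minimal sketch, not a definitive implementation.
from ultralytics.yolo.v8 import DetectionValidator


class CustomValidator(DetectionValidator):

    def init_metrics(self, model):
        super().init_metrics(model)
        # attach extra metrics or toggle plotting here, e.g. self.metrics.plot


val = CustomValidator(args=...)  # placeholder args, as in the engine example
val(model="yolov8n.pt")
```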
@ -1,2 +1,3 @@
### Exporter API Reference
:::ultralytics.yolo.engine.exporter.Exporter
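In normal use the `Exporter` is driven through the documented `YOLO.export()` entry point rather than constructed directly; a minimal sketch:
```python
from ultralytics import YOLO

model = YOLO("yolov8n.pt")
model.export(format="onnx")  # the Exporter performs the format-specific conversion
```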
@ -1,4 +1,5 @@
# nn Module
Ultralytics nn module contains 3 main components:
1. **AutoBackend**: A module that can run inference on all popular model formats
@ -6,10 +7,13 @@ Ultralytics nn module contains 3 main components:
3. **modules**: Optimized and reusable neural network blocks built on PyTorch.
## AutoBackend
:::ultralytics.nn.autobackend.AutoBackend
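A minimal usage sketch, assuming the dev API at this release: the `weights` and `device` constructor arguments are assumptions based on AutoBackend's role described above, not documented guarantees.
```python
import torch

from ultralytics.nn.autobackend import AutoBackend

backend = AutoBackend("yolov8n.pt", device=torch.device("cpu"))
im = torch.zeros(1, 3, 640, 640)  # dummy BCHW input batch
preds = backend(im)  # same call style regardless of the underlying weight format
```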
## BaseModel
:::ultralytics.nn.tasks.BaseModel
## Modules
TODO
@ -1,159 +1,205 @@
This module contains optimized deep learning related operations used in the Ultralytics YOLO framework.
## Non-max suppression
:::ultralytics.ops.non_max_suppression
    handler: python
    options:
        show_source: false
        show_root_toc_entry: false
---
## Scale boxes
:::ultralytics.ops.scale_boxes
    handler: python
    options:
        show_source: false
        show_root_toc_entry: false
---
## Scale image
:::ultralytics.ops.scale_image
    handler: python
    options:
        show_source: false
        show_root_toc_entry: false
---
## Clip boxes
:::ultralytics.ops.clip_boxes
    handler: python
    options:
        show_source: false
        show_root_toc_entry: false
---
# Box Format Conversion
## xyxy2xywh
:::ultralytics.ops.xyxy2xywh
    handler: python
    options:
        show_source: false
        show_root_toc_entry: false
---
## xywh2xyxy
:::ultralytics.ops.xywh2xyxy
    handler: python
    options:
        show_source: false
        show_root_toc_entry: false
---
## xywhn2xyxy
:::ultralytics.ops.xywhn2xyxy
    handler: python
    options:
        show_source: false
        show_root_toc_entry: false
---
## xyxy2xywhn
:::ultralytics.ops.xyxy2xywhn
    handler: python
    options:
        show_source: false
        show_root_toc_entry: false
---
## xyn2xy
:::ultralytics.ops.xyn2xy
    handler: python
    options:
        show_source: false
        show_root_toc_entry: false
---
## xywh2ltwh
:::ultralytics.ops.xywh2ltwh
    handler: python
    options:
        show_source: false
        show_root_toc_entry: false
---
## xyxy2ltwh
:::ultralytics.ops.xyxy2ltwh
    handler: python
    options:
        show_source: false
        show_root_toc_entry: false
---
## ltwh2xywh
:::ultralytics.ops.ltwh2xywh
    handler: python
    options:
        show_source: false
        show_root_toc_entry: false
---
## ltwh2xyxy
:::ultralytics.ops.ltwh2xyxy
    handler: python
    options:
        show_source: false
        show_root_toc_entry: false
---
## segment2box
:::ultralytics.ops.segment2box
    handler: python
    options:
        show_source: false
        show_root_toc_entry: false
---
# Mask Operations
## resample_segments
:::ultralytics.ops.resample_segments
    handler: python
    options:
        show_source: false
        show_root_toc_entry: false
---
## crop_mask
:::ultralytics.ops.crop_mask
    handler: python
    options:
        show_source: false
        show_root_toc_entry: false
---
## process_mask_upsample
:::ultralytics.ops.process_mask_upsample
    handler: python
    options:
        show_source: false
        show_root_toc_entry: false
---
## process_mask
:::ultralytics.ops.process_mask
    handler: python
    options:
        show_source: false
        show_root_toc_entry: false
---
## process_mask_native
:::ultralytics.ops.process_mask_native
    handler: python
    options:
        show_source: false
        show_root_toc_entry: false
---
## scale_segments
:::ultralytics.ops.scale_segments
    handler: python
    options:
        show_source: false
        show_root_toc_entry: false
---
## masks2segments
:::ultralytics.ops.masks2segments
    handler: python
    options:
        show_source: false
        show_root_toc_entry: false
---
## clip_segments
:::ultralytics.ops.clip_segments
    handler: python
    options:
        show_source: false
        show_root_toc_entry: false
---
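A minimal sketch of the box-format helpers above, using the `ultralytics.ops` path shown in the references; it assumes each conversion op accepts an `(n, 4)` tensor, and `scale_boxes` follows its `(img1_shape, boxes, img0_shape)` use in the validators in this changeset.
```python
import torch

from ultralytics import ops

xywh = torch.tensor([[320.0, 240.0, 100.0, 80.0]])  # cx, cy, w, h
xyxy = ops.xywh2xyxy(xywh)  # -> [[270., 200., 370., 280.]]
assert torch.allclose(ops.xyxy2xywh(xyxy), xywh)  # round-trips exactly

# map boxes from the 640x640 letterboxed image back to a 480x640 original
orig_boxes = ops.scale_boxes((640, 640), xyxy.clone(), (480, 640))
```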
@ -1,91 +0,0 @@
## Using YOLO models
This is the simplest way of using YOLO models in a Python environment. They can be imported from the `ultralytics` module.
!!! example "Usage"
=== "Training"
```python
from ultralytics import YOLO
model = YOLO("yolov8n.yaml")
model(img_tensor)  # or model.forward(); runs inference
model.train(data="coco128.yaml", epochs=5)
```
=== "Training pretrained"
```python
from ultralytics import YOLO
model = YOLO("yolov8n.pt") # pass any model type
model(...) # inference
model.train(epochs=5)
```
=== "Resume Training"
```python
from ultralytics import YOLO
model = YOLO()
model.resume(task="detect") # resume last detection training
model.resume(model="last.pt") # resume from a given model/run
```
=== "Visualize/save Predictions"
```python
from ultralytics import YOLO
model = YOLO("model.pt")
model.predict(source="0") # accepts all formats - img/folder/vid.*(mp4/format). 0 for webcam
model.predict(source="folder", show=True) # Display preds. Accepts all yolo predict arguments
```
!!! note "Export and Deployment"
=== "Export, Fuse & info"
```python
from ultralytics import YOLO
model = YOLO("model.pt")
model.fuse()
model.info(verbose=True) # Print model information
model.export(format="onnx")  # export the model, e.g. to ONNX format
```
=== "Deployment"
More functionality coming soon
To learn more about using `YOLO` models, refer to the Model class reference:
[Model reference](reference/model.md){ .md-button .md-button--primary}
---
### Using Trainers
The `YOLO` model class is a high-level wrapper around the Trainer classes. Each YOLO task has its own trainer that inherits from `BaseTrainer`.
!!! tip "Detection Trainer Example"
```python
from ultralytics.yolo.v8 import DetectionTrainer, DetectionValidator, DetectionPredictor
# trainer
overrides = {}
trainer = DetectionTrainer(overrides=overrides)
trainer.train()
trained_model = trainer.best
# Validator
val = DetectionValidator(args=...)
val(model=trained_model)
# predictor
pred = DetectionPredictor(overrides={})
pred(source=SOURCE, model=trained_model)
# resume from last weight
overrides["resume"] = trainer.last
trainer = DetectionTrainer(overrides=overrides)
```
You can easily customize Trainers to support custom tasks or explore R&D ideas.
Learn more about customizing `Trainers`, `Validators` and `Predictors` to suit your project needs in the Customization Section.
[Customization tutorials](engine.md){ .md-button .md-button--primary}
@ -0,0 +1,133 @@
Image classification is the simplest of the three tasks and involves classifying an entire image into one of a set of
predefined classes.
<img width="1024" src="https://user-images.githubusercontent.com/26833433/212094133-6bb8c21c-3d47-41df-a512-81c5931054ae.png">
The output of an image classifier is a single class label and a confidence score. Image
classification is useful when you need to know only what class an image belongs to and don't need to know where objects
of that class are located or what their exact shape is.
!!! tip "Tip"
YOLOv8 _classification_ models use the `-cls` suffix, i.e. `yolov8n-cls.pt`, and are pretrained on ImageNet.
[Models](https://github.com/ultralytics/ultralytics/tree/main/ultralytics/models/v8/cls){.md-button .md-button--primary}
## Train
Train YOLOv8n-cls on the MNIST160 dataset for 100 epochs at image size 64. For a full list of available arguments
see the [Configuration](../config.md) page.
!!! example ""
=== "Python"
```python
from ultralytics import YOLO
# Load a model
model = YOLO("yolov8n-cls.yaml") # build a new model from scratch
model = YOLO("yolov8n-cls.pt") # load a pretrained model (recommended for training)
# Train the model
results = model.train(data="mnist160", epochs=100, imgsz=64)
```
=== "CLI"
```bash
yolo task=classify mode=train data=mnist160 model=yolov8n-cls.pt epochs=100 imgsz=64
```
## Val
Validate trained YOLOv8n-cls model accuracy on the MNIST160 dataset. No arguments need to be passed as the `model`
retains its training `data` and arguments as model attributes.
!!! example ""
=== "Python"
```python
from ultralytics import YOLO
# Load a model
model = YOLO("yolov8n-cls.pt") # load an official model
model = YOLO("path/to/best.pt") # load a custom model
# Validate the model
results = model.val() # no arguments needed, dataset and settings remembered
```
=== "CLI"
```bash
yolo task=classify mode=val model=yolov8n-cls.pt # val official model
yolo task=classify mode=val model=path/to/best.pt # val custom model
```
## Predict
Use a trained YOLOv8n-cls model to run predictions on images.
!!! example ""
=== "Python"
```python
from ultralytics import YOLO
# Load a model
model = YOLO("yolov8n-cls.pt") # load an official model
model = YOLO("path/to/best.pt") # load a custom model
# Predict with the model
results = model("https://ultralytics.com/images/bus.jpg") # predict on an image
```
=== "CLI"
```bash
yolo task=classify mode=predict model=yolov8n-cls.pt source="https://ultralytics.com/images/bus.jpg" # predict with official model
yolo task=classify mode=predict model=path/to/best.pt source="https://ultralytics.com/images/bus.jpg" # predict with custom model
```
## Export
Export a YOLOv8n-cls model to a different format like ONNX, CoreML, etc.
!!! example ""
=== "Python"
```python
from ultralytics import YOLO
# Load a model
model = YOLO("yolov8n-cls.pt") # load an official model
model = YOLO("path/to/best.pt") # load a custom trained
# Export the model
model.export(format="onnx")
```
=== "CLI"
```bash
yolo mode=export model=yolov8n-cls.pt format=onnx # export official model
yolo mode=export model=path/to/best.pt format=onnx # export custom trained model
```
Available YOLOv8-cls export formats include:
| Format | `format=` | Model |
|----------------------------------------------------------------------------|---------------|-------------------------------|
| [PyTorch](https://pytorch.org/) | - | `yolov8n-cls.pt` |
| [TorchScript](https://pytorch.org/docs/stable/jit.html) | `torchscript` | `yolov8n-cls.torchscript` |
| [ONNX](https://onnx.ai/) | `onnx` | `yolov8n-cls.onnx` |
| [OpenVINO](https://docs.openvino.ai/latest/index.html) | `openvino` | `yolov8n-cls_openvino_model/` |
| [TensorRT](https://developer.nvidia.com/tensorrt) | `engine` | `yolov8n-cls.engine` |
| [CoreML](https://github.com/apple/coremltools) | `coreml` | `yolov8n-cls.mlmodel` |
| [TensorFlow SavedModel](https://www.tensorflow.org/guide/saved_model) | `saved_model` | `yolov8n-cls_saved_model/` |
| [TensorFlow GraphDef](https://www.tensorflow.org/api_docs/python/tf/Graph) | `pb` | `yolov8n-cls.pb` |
| [TensorFlow Lite](https://www.tensorflow.org/lite) | `tflite` | `yolov8n-cls.tflite` |
| [TensorFlow Edge TPU](https://coral.ai/docs/edgetpu/models-intro/) | `edgetpu` | `yolov8n-cls_edgetpu.tflite` |
| [TensorFlow.js](https://www.tensorflow.org/js) | `tfjs` | `yolov8n-cls_web_model/` |
| [PaddlePaddle](https://github.com/PaddlePaddle) | `paddle` | `yolov8n-cls_paddle_model/` |
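As one possible downstream check, the exported ONNX classifier can be loaded with `onnxruntime`; a minimal sketch, where the 1x3x224x224 input shape is an assumption about the exported model rather than a documented YOLOv8 guarantee.
```python
import numpy as np
import onnxruntime as ort

session = ort.InferenceSession("yolov8n-cls.onnx")
name = session.get_inputs()[0].name
x = np.random.rand(1, 3, 224, 224).astype(np.float32)  # dummy BCHW input
outputs = session.run(None, {name: x})
print(outputs[0].shape)  # per-class scores
```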
@ -0,0 +1,132 @@
Object detection is a task that involves identifying the location and class of objects in an image or video stream.
<img width="1024" src="https://user-images.githubusercontent.com/26833433/212094133-6bb8c21c-3d47-41df-a512-81c5931054ae.png">
The output of an object detector is a set of bounding boxes that enclose the objects in the image, along with class
labels and confidence scores for each box. Object detection is a good choice when you need to identify objects of
interest in a scene, but don't need to know exactly where the object is or its exact shape.
!!! tip "Tip"
YOLOv8 _detection_ models have no suffix and are the default YOLOv8 models, i.e. `yolov8n.pt`, and are pretrained on COCO.
[Models](https://github.com/ultralytics/ultralytics/tree/main/ultralytics/models/v8){ .md-button .md-button--primary}
## Train
Train YOLOv8n on the COCO128 dataset for 100 epochs at image size 640. For a full list of available arguments see
the [Configuration](../config.md) page.
!!! example ""
=== "Python"
```python
from ultralytics import YOLO
# Load a model
model = YOLO("yolov8n.yaml") # build a new model from scratch
model = YOLO("yolov8n.pt") # load a pretrained model (recommended for training)
# Train the model
results = model.train(data="coco128.yaml", epochs=100, imgsz=640)
```
=== "CLI"
```bash
yolo task=detect mode=train data=coco128.yaml model=yolov8n.pt epochs=100 imgsz=640
```
## Val
Validate trained YOLOv8n model accuracy on the COCO128 dataset. No arguments need to be passed as the `model` retains
its training `data` and arguments as model attributes.
!!! example ""
=== "Python"
```python
from ultralytics import YOLO
# Load a model
model = YOLO("yolov8n.pt") # load an official model
model = YOLO("path/to/best.pt") # load a custom model
# Validate the model
results = model.val() # no arguments needed, dataset and settings remembered
```
=== "CLI"
```bash
yolo task=detect mode=val model=yolov8n.pt # val official model
yolo task=detect mode=val model=path/to/best.pt # val custom model
```
## Predict
Use a trained YOLOv8n model to run predictions on images.
!!! example ""
=== "Python"
```python
from ultralytics import YOLO
# Load a model
model = YOLO("yolov8n.pt") # load an official model
model = YOLO("path/to/best.pt") # load a custom model
# Predict with the model
results = model("https://ultralytics.com/images/bus.jpg") # predict on an image
```
=== "CLI"
```bash
yolo task=detect mode=predict model=yolov8n.pt source="https://ultralytics.com/images/bus.jpg" # predict with official model
yolo task=detect mode=predict model=path/to/best.pt source="https://ultralytics.com/images/bus.jpg" # predict with custom model
```
## Export
Export a YOLOv8n model to a different format like ONNX, CoreML, etc.
!!! example ""
=== "Python"
```python
from ultralytics import YOLO
# Load a model
model = YOLO("yolov8n.pt") # load an official model
model = YOLO("path/to/best.pt") # load a custom trained
# Export the model
model.export(format="onnx")
```
=== "CLI"
```bash
yolo mode=export model=yolov8n.pt format=onnx # export official model
yolo mode=export model=path/to/best.pt format=onnx # export custom trained model
```
Available YOLOv8 export formats include:
| Format | `format=` | Model |
|----------------------------------------------------------------------------|--------------------|---------------------------|
| [PyTorch](https://pytorch.org/) | - | `yolov8n.pt` |
| [TorchScript](https://pytorch.org/docs/stable/jit.html) | `torchscript` | `yolov8n.torchscript` |
| [ONNX](https://onnx.ai/) | `onnx` | `yolov8n.onnx` |
| [OpenVINO](https://docs.openvino.ai/latest/index.html) | `openvino` | `yolov8n_openvino_model/` |
| [TensorRT](https://developer.nvidia.com/tensorrt) | `engine` | `yolov8n.engine` |
| [CoreML](https://github.com/apple/coremltools) | `coreml` | `yolov8n.mlmodel` |
| [TensorFlow SavedModel](https://www.tensorflow.org/guide/saved_model) | `saved_model` | `yolov8n_saved_model/` |
| [TensorFlow GraphDef](https://www.tensorflow.org/api_docs/python/tf/Graph) | `pb` | `yolov8n.pb` |
| [TensorFlow Lite](https://www.tensorflow.org/lite) | `tflite` | `yolov8n.tflite` |
| [TensorFlow Edge TPU](https://coral.ai/docs/edgetpu/models-intro/) | `edgetpu` | `yolov8n_edgetpu.tflite` |
| [TensorFlow.js](https://www.tensorflow.org/js) | `tfjs` | `yolov8n_web_model/` |
| [PaddlePaddle](https://github.com/PaddlePaddle) | `paddle` | `yolov8n_paddle_model/` |
@ -0,0 +1,135 @@
Instance segmentation goes a step further than object detection and involves identifying individual objects in an image
and segmenting them from the rest of the image.
<img width="1024" src="https://user-images.githubusercontent.com/26833433/212094133-6bb8c21c-3d47-41df-a512-81c5931054ae.png">
The output of an instance segmentation model is a set of masks or
contours that outline each object in the image, along with class labels and confidence scores for each object. Instance
segmentation is useful when you need to know not only where objects are in an image, but also what their exact shape is.
!!! tip "Tip"
YOLOv8 _segmentation_ models use the `-seg` suffix, i.e. `yolov8n-seg.pt`, and are pretrained on COCO.
[Models](https://github.com/ultralytics/ultralytics/tree/main/ultralytics/models/v8/seg){.md-button .md-button--primary}
## Train
Train YOLOv8n-seg on the COCO128-seg dataset for 100 epochs at image size 640. For a full list of available
arguments see the [Configuration](../config.md) page.
!!! example ""
=== "Python"
```python
from ultralytics import YOLO
# Load a model
model = YOLO("yolov8n-seg.yaml") # build a new model from scratch
model = YOLO("yolov8n-seg.pt") # load a pretrained model (recommended for training)
# Train the model
results = model.train(data="coco128-seg.yaml", epochs=100, imgsz=640)
```
=== "CLI"
```bash
yolo task=segment mode=train data=coco128-seg.yaml model=yolov8n-seg.pt epochs=100 imgsz=640
```
## Val
Validate trained YOLOv8n-seg model accuracy on the COCO128-seg dataset. No arguments need to be passed as the `model`
retains its training `data` and arguments as model attributes.
!!! example ""
=== "Python"
```python
from ultralytics import YOLO
# Load a model
model = YOLO("yolov8n-seg.pt") # load an official model
model = YOLO("path/to/best.pt") # load a custom model
# Validate the model
results = model.val() # no arguments needed, dataset and settings remembered
```
=== "CLI"
```bash
yolo task=segment mode=val model=yolov8n-seg.pt # val official model
yolo task=segment mode=val model=path/to/best.pt # val custom model
```
## Predict
Use a trained YOLOv8n-seg model to run predictions on images.
!!! example ""
=== "Python"
```python
from ultralytics import YOLO
# Load a model
model = YOLO("yolov8n-seg.pt") # load an official model
model = YOLO("path/to/best.pt") # load a custom model
# Predict with the model
results = model("https://ultralytics.com/images/bus.jpg") # predict on an image
```
=== "CLI"
```bash
yolo task=segment mode=predict model=yolov8n-seg.pt source="https://ultralytics.com/images/bus.jpg" # predict with official model
yolo task=segment mode=predict model=path/to/best.pt source="https://ultralytics.com/images/bus.jpg" # predict with custom model
```
## Export
Export a YOLOv8n-seg model to a different format like ONNX, CoreML, etc.
!!! example ""
=== "Python"
```python
from ultralytics import YOLO
# Load a model
model = YOLO("yolov8n-seg.pt") # load an official model
model = YOLO("path/to/best.pt") # load a custom trained
# Export the model
model.export(format="onnx")
```
=== "CLI"
```bash
yolo mode=export model=yolov8n-seg.pt format=onnx # export official model
yolo mode=export model=path/to/best.pt format=onnx # export custom trained model
```
Available YOLOv8-seg export formats include:
| Format | `format=` | Model |
|----------------------------------------------------------------------------|---------------|-------------------------------|
| [PyTorch](https://pytorch.org/) | - | `yolov8n-seg.pt` |
| [TorchScript](https://pytorch.org/docs/stable/jit.html) | `torchscript` | `yolov8n-seg.torchscript` |
| [ONNX](https://onnx.ai/) | `onnx` | `yolov8n-seg.onnx` |
| [OpenVINO](https://docs.openvino.ai/latest/index.html) | `openvino` | `yolov8n-seg_openvino_model/` |
| [TensorRT](https://developer.nvidia.com/tensorrt) | `engine` | `yolov8n-seg.engine` |
| [CoreML](https://github.com/apple/coremltools) | `coreml` | `yolov8n-seg.mlmodel` |
| [TensorFlow SavedModel](https://www.tensorflow.org/guide/saved_model) | `saved_model` | `yolov8n-seg_saved_model/` |
| [TensorFlow GraphDef](https://www.tensorflow.org/api_docs/python/tf/Graph) | `pb` | `yolov8n-seg.pb` |
| [TensorFlow Lite](https://www.tensorflow.org/lite) | `tflite` | `yolov8n-seg.tflite` |
| [TensorFlow Edge TPU](https://coral.ai/docs/edgetpu/models-intro/) | `edgetpu` | `yolov8n-seg_edgetpu.tflite` |
| [TensorFlow.js](https://www.tensorflow.org/js) | `tfjs` | `yolov8n-seg_web_model/` |
| [PaddlePaddle](https://github.com/PaddlePaddle) | `paddle` | `yolov8n-seg_paddle_model/` |
@ -13,14 +13,14 @@ theme:
palette:
  # Palette toggle for light mode
  - scheme: default
    # primary: grey
    toggle:
      icon: material/brightness-7
      name: Switch to dark mode

  # Palette toggle for dark mode
  - scheme: slate
    # primary: black
    toggle:
      icon: material/brightness-4
      name: Switch to light mode
@ -35,6 +35,7 @@ theme:
    - navigation.top
    - navigation.expand
    - navigation.footer
    - content.tabs.link  # all code tabs change simultaneously

extra_css:
  - stylesheets/style.css
@ -75,8 +76,13 @@ plugins:
# Primary navigation
nav:
  - Quickstart: quickstart.md
  - Tasks:
      - Detection: tasks/detection.md
      - Segmentation: tasks/segmentation.md
      - Classification: tasks/classification.md
  - Usage:
      - CLI: cli.md
      - Python: python.md
  - Configuration: config.md
  - Customization Guide: engine.md
  - Ultralytics HUB: hub.md
@ -27,7 +27,7 @@ def test_detect():
# predictor
pred = detect.DetectionPredictor(overrides={"imgsz": [640, 640]})
i = 0
for _ in pred(source=SOURCE, model="yolov8n.pt", return_outputs=True):
    i += 1
assert i == 2, "predictor test failed"
@ -60,7 +60,7 @@ def test_segment():
# predictor
pred = segment.SegmentationPredictor(overrides={"imgsz": [640, 640]})
i = 0
for _ in pred(source=SOURCE, model="yolov8n-seg.pt", return_outputs=True):
    i += 1
assert i == 2, "predictor test failed"
@ -94,6 +94,6 @@ def test_classify():
# predictor
pred = classify.ClassificationPredictor(overrides={"imgsz": [640, 640]})
i = 0
for _ in pred(source=SOURCE, model=trained_model, return_outputs=True):
    i += 1
assert i == 2, "predictor test failed"
@ -32,7 +32,7 @@ def test_model_fuse():
def test_predict_dir():
    model = YOLO(MODEL)
    model.predict(source=ROOT / "assets")


def test_val():
@ -98,3 +98,11 @@ def test_export_paddle():
def test_all_model_yamls():
    for m in list((ROOT / 'models').rglob('*.yaml')):
        YOLO(m.name)


def test_workflow():
    model = YOLO(MODEL)
    model.train(data="coco128.yaml", epochs=1, imgsz=32)
    model.val()
    model.predict(SOURCE)
    model.export(format="onnx", opset=12)  # export a model to ONNX format
@ -177,6 +177,7 @@ class Exporter:
for p in model.parameters():
    p.requires_grad = False
model.eval()
model.float()
model = model.fuse()
for k, m in model.named_modules():
    if isinstance(m, (Detect, Segment)):
@ -111,7 +111,7 @@ class YOLO:
self.model.fuse()

@smart_inference_mode()
def predict(self, source, return_outputs=False, **kwargs):
    """
    Visualize prediction.
@ -191,6 +191,9 @@ class YOLO:
self.trainer.model = self.trainer.get_model(weights=self.model if self.ckpt else None, cfg=self.model.yaml)
self.model = self.trainer.model
self.trainer.train()
# update model and configs after training
self.model, _ = attempt_load_one_weight(str(self.trainer.best))
self.overrides = self.model.args

def to(self, device):
    """
@ -105,7 +105,7 @@ class BasePredictor:
def postprocess(self, preds, img, orig_img):
    return preds

def setup(self, source=None, model=None, return_outputs=False):
    # source
    source = str(source if source is not None else self.args.source)
    is_file = Path(source).suffix[1:] in (IMG_FORMATS + VID_FORMATS)
@ -161,7 +161,7 @@ class BasePredictor:
return model

@smart_inference_mode()
def __call__(self, source=None, model=None, return_outputs=False):
    self.run_callbacks("on_predict_start")
    model = self.model if self.done_setup else self.setup(source, model, return_outputs)
    model.eval()
@ -24,7 +24,7 @@ class DetectionValidator(BaseValidator):
self.data_dict = yaml_load(check_file(self.args.data), append_filename=True) if self.args.data else None
self.is_coco = False
self.class_map = None
self.metrics = DetMetrics(save_dir=self.save_dir)
self.iouv = torch.linspace(0.5, 0.95, 10)  # iou vector for mAP@0.5:0.95
self.niou = self.iouv.numel()
@ -34,8 +34,7 @@ class DetectionValidator(BaseValidator):
for k in ["batch_idx", "cls", "bboxes"]: for k in ["batch_idx", "cls", "bboxes"]:
batch[k] = batch[k].to(self.device) batch[k] = batch[k].to(self.device)
nb, _, height, width = batch["img"].shape nb = len(batch["img"])
batch["bboxes"] *= torch.tensor((width, height, width, height), device=self.device) # to pixels
self.lb = [torch.cat([batch["cls"], batch["bboxes"]], dim=-1)[batch["batch_idx"] == i] self.lb = [torch.cat([batch["cls"], batch["bboxes"]], dim=-1)[batch["batch_idx"] == i]
for i in range(nb)] if self.args.save_hybrid else [] # for autolabelling for i in range(nb)] if self.args.save_hybrid else [] # for autolabelling
@ -50,6 +49,7 @@ class DetectionValidator(BaseValidator):
self.nc = head.nc
self.names = model.names
self.metrics.names = self.names
self.metrics.plot = self.args.plots
self.confusion_matrix = ConfusionMatrix(nc=self.nc)
self.seen = 0
self.jdict = []
@ -95,7 +95,9 @@ class DetectionValidator(BaseValidator):
# Evaluate
if nl:
    height, width = batch["img"].shape[2:]
    tbox = ops.xywh2xyxy(bbox) * torch.tensor(
        (width, height, width, height), device=self.device)  # target boxes
    ops.scale_boxes(batch["img"][si].shape[1:], tbox, shape,
                    ratio_pad=batch["ratio_pad"][si])  # native-space labels
    labelsn = torch.cat((cls, tbox), 1)  # native-space labels
@ -22,7 +22,7 @@ class SegmentationValidator(DetectionValidator):
def __init__(self, dataloader=None, save_dir=None, pbar=None, logger=None, args=None):
    super().__init__(dataloader, save_dir, pbar, logger, args)
    self.args.task = "segment"
    self.metrics = SegmentMetrics(save_dir=self.save_dir)

def preprocess(self, batch):
    batch = super().preprocess(batch)
@ -31,13 +31,15 @@ class SegmentationValidator(DetectionValidator):
def init_metrics(self, model):
    head = model.model[-1] if self.training else model.model.model[-1]
    val = self.data.get('val', '')  # validation path
    self.is_coco = isinstance(val, str) and val.endswith(f'coco{os.sep}val2017.txt')  # is COCO dataset
    self.class_map = ops.coco80_to_coco91_class() if self.is_coco else list(range(1000))
    self.args.save_json |= self.is_coco and not self.training  # run on final val if training COCO
    self.nc = head.nc
    self.nm = head.nm if hasattr(head, "nm") else 32
    self.names = model.names
    self.metrics.names = self.names
    self.metrics.plot = self.args.plots
    self.confusion_matrix = ConfusionMatrix(nc=self.nc)
    self.plot_masks = []
    self.seen = 0
@ -97,7 +99,9 @@ class SegmentationValidator(DetectionValidator):
# Evaluate
if nl:
    height, width = batch["img"].shape[2:]
    tbox = ops.xywh2xyxy(bbox) * torch.tensor(
        (width, height, width, height), device=self.device)  # target boxes
    ops.scale_boxes(batch["img"][si].shape[1:], tbox, shape,
                    ratio_pad=batch["ratio_pad"][si])  # native-space labels
    labelsn = torch.cat((cls, tbox), 1)  # native-space labels