Release 8.0.5 PR (#279)

Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
Co-authored-by: Glenn Jocher <glenn.jocher@ultralytics.com>
Co-authored-by: Izam Mohammed <106471909+izam-mohammed@users.noreply.github.com>
Co-authored-by: Yue WANG 王跃 <92371174+yuewangg@users.noreply.github.com>
Co-authored-by: Thibaut Lucas <thibautlucas13@gmail.com>

@ -18,7 +18,11 @@
</div>
<br>
[Ultralytics YOLOv8](https://github.com/ultralytics/ultralytics), developed by [Ultralytics](https://ultralytics.com),
is a cutting-edge, state-of-the-art (SOTA) model that builds upon the success of previous YOLO versions and introduces
new features and improvements to further boost performance and flexibility. YOLOv8 is designed to be fast, accurate, and
easy to use, making it an excellent choice for a wide range of object detection, image segmentation and image
classification tasks.
To request an Enterprise License please complete the form at [Ultralytics Licensing](https://ultralytics.com/license).
@ -50,12 +54,14 @@ To request an Enterprise License please complete the form at [Ultralytics Licens
## <div align="center">Documentation</div>
See below for a quickstart installation and usage example, and see the [YOLOv8 Docs](https://docs.ultralytics.com) for full
documentation on training, validation, prediction and deployment.
<details open>
<summary>Install</summary>
Pip install the ultralytics package including
all [requirements.txt](https://github.com/ultralytics/ultralytics/blob/main/requirements.txt) in a
[**Python>=3.7.0**](https://www.python.org/) environment, including
[**PyTorch>=1.7**](https://pytorch.org/get-started/locally/).
@ -74,7 +80,9 @@ YOLOv8 may be used directly in the Command Line Interface (CLI) with a `yolo` co
yolo task=detect mode=predict model=yolov8n.pt source="https://ultralytics.com/images/bus.jpg"
```
`yolo` can be used for a variety of tasks and modes and accepts additional arguments, i.e. `imgsz=640`. See a full list
of available `yolo` [arguments](https://docs.ultralytics.com/config/) in the
YOLOv8 [Docs](https://docs.ultralytics.com).
```bash
yolo task=detect mode=train model=yolov8n.pt args...
@ -83,7 +91,8 @@ yolo task=detect mode=train model=yolov8n.pt args...
export yolov8n.pt format=onnx args...
```
YOLOv8 may also be used directly in a Python environment, and accepts the
same [arguments](https://docs.ultralytics.com/config/) as in the CLI example above:
```python
from ultralytics import YOLO
@ -96,7 +105,7 @@ model = YOLO("yolov8n.pt") # load a pretrained model (recommended for training)
results = model.train(data="coco128.yaml", epochs=3) # train the model
results = model.val() # evaluate model performance on the validation set
results = model("https://ultralytics.com/images/bus.jpg") # predict on an image
success = model.export(format="onnx") # export the model to ONNX format
```
[Models](https://github.com/ultralytics/ultralytics/tree/main/ultralytics/models) download automatically from the latest
@ -104,7 +113,9 @@ Ultralytics [release](https://github.com/ultralytics/assets/releases).
### Known Issues / TODOs
We are still working on several parts of YOLOv8! We aim to have these completed soon to bring the YOLOv8 feature set up
to par with YOLOv5, including export and inference to all the same formats. We are also writing a YOLOv8 paper which we
will submit to [arxiv.org](https://arxiv.org) once complete.
- [ ] TensorFlow exports
- [ ] DDP resume
@ -112,15 +123,18 @@ We are still working on several parts of YOLOv8! We aim to have these completed
</details>
## <div align="center">Checkpoints</div>
## <div align="center">Models</div>
All YOLOv8 pretrained models are available here. Detection and Segmentation models are pretrained on the COCO dataset,
while Classification models are pretrained on the ImageNet dataset.
[Models](https://github.com/ultralytics/ultralytics/tree/main/ultralytics/models) download automatically from the latest
Ultralytics [release](https://github.com/ultralytics/assets/releases) on first use.
<details open><summary>Detection</summary>
See [Detection Docs](https://docs.ultralytics.com/tasks/detection/) for usage examples with these models.
| Model | size<br><sup>(pixels) | mAP<sup>val<br>50-95 | Speed<br><sup>CPU ONNX<br>(ms) | Speed<br><sup>A100 TensorRT<br>(ms) | params<br><sup>(M) | FLOPs<br><sup>(B) |
| ------------------------------------------------------------------------------------ | --------------------- | -------------------- | ------------------------------ | ----------------------------------- | ------------------ | ----------------- |
| [YOLOv8n](https://github.com/ultralytics/assets/releases/download/v0.0.0/yolov8n.pt) | 640 | 37.3 | 80.4 | 0.99 | 3.2 | 8.7 |
@ -131,16 +145,19 @@ Ultralytics [release](https://github.com/ultralytics/assets/releases) on first u
- **mAP<sup>val</sup>** values are for single-model single-scale on [COCO val2017](http://cocodataset.org) dataset.
<br>Reproduce by `yolo mode=val task=detect data=coco.yaml device=0`
- **Speed** averaged over COCO val images using an [Amazon EC2 P4d](https://aws.amazon.com/ec2/instance-types/p4/)
instance.
<br>Reproduce by `yolo mode=val task=detect data=coco128.yaml batch=1 device=0/cpu`
</details>
<details><summary>Segmentation</summary>
See [Segmentation Docs](https://docs.ultralytics.com/tasks/segmentation/) for usage examples with these models.
| Model | size<br><sup>(pixels) | mAP<sup>box<br>50-95 | mAP<sup>mask<br>50-95 | Speed<br><sup>CPU ONNX<br>(ms) | Speed<br><sup>A100 TensorRT<br>(ms) | params<br><sup>(M) | FLOPs<br><sup>(B) |
| ---------------------------------------------------------------------------------------- | --------------------- | -------------------- | --------------------- | ------------------------------ | ----------------------------------- | ------------------ | ----------------- |
| [YOLOv8n](https://github.com/ultralytics/assets/releases/download/v0.0.0/yolov8n-seg.pt) | 640 | 36.7 | 30.5 | 96.1 | 1.21 | 3.4 | 12.6 |
| [YOLOv8s](https://github.com/ultralytics/assets/releases/download/v0.0.0/yolov8s-seg.pt) | 640 | 44.6 | 36.8 | 155.7 | 1.47 | 11.8 | 42.6 |
| [YOLOv8m](https://github.com/ultralytics/assets/releases/download/v0.0.0/yolov8m-seg.pt) | 640 | 49.9 | 40.8 | 317.0 | 2.18 | 27.3 | 110.2 |
| [YOLOv8l](https://github.com/ultralytics/assets/releases/download/v0.0.0/yolov8l-seg.pt) | 640 | 52.3 | 42.6 | 572.4 | 2.79 | 46.0 | 220.5 |
@ -148,13 +165,16 @@ Ultralytics [release](https://github.com/ultralytics/assets/releases) on first u
- **mAP<sup>val</sup>** values are for single-model single-scale on [COCO val2017](http://cocodataset.org) dataset.
<br>Reproduce by `yolo mode=val task=segment data=coco.yaml device=0`
- **Speed** averaged over COCO val images using an [Amazon EC2 P4d](https://aws.amazon.com/ec2/instance-types/p4/)
instance.
<br>Reproduce by `yolo mode=val task=segment data=coco128-seg.yaml batch=1 device=0/cpu`
</details>
<details><summary>Classification</summary>
See [Classification Docs](https://docs.ultralytics.com/tasks/classification/) for usage examples with these models.
| Model | size<br><sup>(pixels) | acc<br><sup>top1 | acc<br><sup>top5 | Speed<br><sup>CPU ONNX<br>(ms) | Speed<br><sup>A100 TensorRT<br>(ms) | params<br><sup>(M) | FLOPs<br><sup>(B) at 640 |
| ---------------------------------------------------------------------------------------- | --------------------- | ---------------- | ---------------- | ------------------------------ | ----------------------------------- | ------------------ | ------------------------ |
| [YOLOv8n](https://github.com/ultralytics/assets/releases/download/v0.0.0/yolov8n-cls.pt) | 224 | 66.6 | 87.0 | 12.9 | 0.31 | 2.7 | 4.3 |
@ -165,7 +185,8 @@ Ultralytics [release](https://github.com/ultralytics/assets/releases) on first u
- **acc** values are model accuracies on the [ImageNet](https://www.image-net.org/) dataset validation set.
<br>Reproduce by `yolo mode=val task=classify data=path/to/ImageNet device=0`
- **Speed** averaged over ImageNet val images using an [Amazon EC2 P4d](https://aws.amazon.com/ec2/instance-types/p4/)
instance.
<br>Reproduce by `yolo mode=val task=classify data=path/to/ImageNet batch=1 device=0/cpu`
</details>
@ -194,18 +215,23 @@ Ultralytics [release](https://github.com/ultralytics/assets/releases) on first u
| Roboflow | ClearML ⭐ NEW | Comet ⭐ NEW | Neural Magic ⭐ NEW |
| :--------------------------------------------------------------------------------------------------------------------------: | :---------------------------------------------------------------------------------------------------------------------------------: | :--------------------------------------------------------------------------------------------------------------------------------------------------------: | :----------------------------------------------------------------------------------------------------: |
| Label and export your custom datasets directly to YOLOv8 for training with [Roboflow](https://roboflow.com/?ref=ultralytics) | Automatically track, visualize and even remotely train YOLOv8 using [ClearML](https://cutt.ly/yolov5-readme-clearml) (open-source!) | Free forever, [Comet](https://bit.ly/yolov5-readme-comet2) lets you save YOLOv8 models, resume training, and interactively visualize and debug predictions | Run YOLOv8 inference up to 6x faster with [Neural Magic DeepSparse](https://bit.ly/yolov5-neuralmagic) |
## <div align="center">Ultralytics HUB</div>
[Ultralytics HUB](https://bit.ly/ultralytics_hub) is our ⭐ **NEW** no-code solution to visualize datasets, train YOLOv8
🚀 models, and deploy to the real world in a seamless experience. Get started for **Free** now! Also run YOLOv8 models on
your iOS or Android device by downloading the [Ultralytics App](https://ultralytics.com/app_install)!
<a align="center" href="https://bit.ly/ultralytics_hub" target="_blank">
<img width="100%" src="https://github.com/ultralytics/assets/raw/main/im/ultralytics-hub.png"></a>
## <div align="center">Contribute</div>
We love your input! YOLOv5 and YOLOv8 would not be possible without help from our community. Please see
our [Contributing Guide](CONTRIBUTING.md) to get started, and fill out
our [Survey](https://ultralytics.com/survey?utm_source=github&utm_medium=social&utm_campaign=Survey) to send us feedback
on your experience. Thank you 🙏 to all our contributors!
<!-- SVG image from https://opencollective.com/ultralytics/contributors.svg?width=990 -->
@ -216,11 +242,14 @@ We love your input! YOLOv5 and YOLOv8 would not be possible without help from ou
YOLOv8 is available under two different licenses:
- **GPL-3.0 License**: See [LICENSE](https://github.com/ultralytics/ultralytics/blob/main/LICENSE) file for details.
- **Enterprise License**: Provides greater flexibility for commercial product development without the open-source
requirements of GPL-3.0. Typical use cases are embedding Ultralytics software and AI models in commercial products and
applications. Request an Enterprise License at [Ultralytics Licensing](https://ultralytics.com/license).
## <div align="center">Contact</div>
For YOLOv8 bugs and feature requests please visit [GitHub Issues](https://github.com/ultralytics/ultralytics/issues).
For professional support please [Contact Us](https://ultralytics.com/contact).
<br>
<div align="center">

@ -92,7 +92,7 @@ model = YOLO("yolov8n.pt") # load a pretrained model (recommended for training)
results = model.train(data="coco128.yaml", epochs=3) # train the model
results = model.val() # evaluate model performance on the validation set
results = model("https://ultralytics.com/images/bus.jpg") # predict on an image
success = model.export(format="onnx") # export the model to ONNX format
```
[Models](https://github.com/ultralytics/ultralytics/tree/main/ultralytics/models) download automatically from the Ultralytics [release page](https://github.com/ultralytics/ultralytics/releases).
@ -134,16 +134,16 @@ success = YOLO("yolov8n.pt").export(format="onnx") # export the model to ONNX
| Model | size<br><sup>(pixels) | mAP<sup>box<br>50-95 | mAP<sup>mask<br>50-95 | Speed<br><sup>CPU ONNX<br>(ms) | Speed<br><sup>A100 TensorRT<br>(ms) | params<br><sup>(M) | FLOPs<br><sup>(B) |
| ---------------------------------------------------------------------------------------- | --------------- | -------------------- | --------------------- | ----------------------------- | ---------------------------------- | --------------- | ----------------- |
| [YOLOv8n](https://github.com/ultralytics/assets/releases/download/v0.0.0/yolov8n-seg.pt) | 640 | 36.7 | 30.5 | 96.1 | 1.21 | 3.4 | 12.6 |
| [YOLOv8s](https://github.com/ultralytics/assets/releases/download/v0.0.0/yolov8s-seg.pt) | 640 | 44.6 | 36.8 | 155.7 | 1.47 | 11.8 | 42.6 |
| [YOLOv8m](https://github.com/ultralytics/assets/releases/download/v0.0.0/yolov8m-seg.pt) | 640 | 49.9 | 40.8 | 317.0 | 2.18 | 27.3 | 110.2 |
| [YOLOv8l](https://github.com/ultralytics/assets/releases/download/v0.0.0/yolov8l-seg.pt) | 640 | 52.3 | 42.6 | 572.4 | 2.79 | 46.0 | 220.5 |
| [YOLOv8x](https://github.com/ultralytics/assets/releases/download/v0.0.0/yolov8x-seg.pt) | 640 | 53.4 | 43.4 | 712.1 | 4.02 | 71.8 | 344.1 |
- **mAP<sup>val</sup>** values are for single-model single-scale on the [COCO val2017](http://cocodataset.org) dataset.
<br>Reproduce by `yolo mode=val task=segment data=coco.yaml device=0`
- **Speed** averaged over COCO val images using an [Amazon EC2 P4d](https://aws.amazon.com/ec2/instance-types/p4/) instance.
<br>Reproduce by `yolo mode=val task=segment data=coco128-seg.yaml batch=1 device=0/cpu`
</details>
@ -158,9 +158,9 @@ success = YOLO("yolov8n.pt").export(format="onnx") # export the model to ONNX
| [YOLOv8x](https://github.com/ultralytics/assets/releases/download/v0.0.0/yolov8x-cls.pt) | 224 | 78.4 | 94.3 | 232.0 | 1.01 | 57.4 | 154.8 |
- **acc** values are model accuracies on the [ImageNet](https://www.image-net.org/) dataset validation set.
<br>Reproduce by `yolo mode=val task=classify data=path/to/ImageNet device=0`
- **Speed** averaged over ImageNet val images using an [Amazon EC2 P4d](https://aws.amazon.com/ec2/instance-types/p4/) instance.
<br>Reproduce by `yolo mode=val task=classify data=path/to/ImageNet batch=1 device=0/cpu`
</details>

@ -0,0 +1 @@
docs.ultralytics.com

@ -7,8 +7,9 @@
</div>
Welcome to the Ultralytics HUB app for demonstrating YOLOv5 and YOLOv8 models! In this app, available on the [Apple App
Store](https://apps.apple.com/xk/app/ultralytics/id1583935240) and the
[Google Play Store](https://play.google.com/store/apps/details?id=com.ultralytics.ultralytics_app), you will be able
to see the power and capabilities of YOLOv5, a state-of-the-art object detection model developed by Ultralytics.
**To install simply scan the QR code above**. The App currently features YOLOv5 models, with YOLOv8 models coming soon.

@ -1,7 +1,8 @@
## CLI Basics
If you want to train, validate or run inference on models and don't need to make any modifications to the code, using
the YOLO command line interface is the easiest way to get started.
!!! tip "Syntax"
```bash
yolo task=detect mode=train model=yolov8n.yaml epochs=1 ...
... ... ...
@ -9,60 +10,76 @@ If you want to train, validate or run inference on models and don't need to make
classify val yolov8n-cls.pt
```
The experiment arguments can be overridden directly by passing `arg=val`, as covered in the next section. You can run
any supported task by setting `task` and `mode` in the CLI.
=== "Training"
| | `task` | snippet |
|------------------|------------|------------------------------------------------------------|
| Detection | `detect` | <pre><code>yolo task=detect mode=train </code></pre> |
| Instance Segment | `segment` | <pre><code>yolo task=segment mode=train </code></pre> |
| Classification | `classify` | <pre><code>yolo task=classify mode=train </code></pre> |
=== "Prediction"
| | `task` | snippet |
|------------------|------------|--------------------------------------------------------------|
| Detection | `detect` | <pre><code>yolo task=detect mode=predict </code></pre> |
| Instance Segment | `segment` | <pre><code>yolo task=segment mode=predict </code></pre> |
| Classification | `classify` | <pre><code>yolo task=classify mode=predict </code></pre> |
=== "Validation"
| | `task` | snippet |
|------------------|------------|-----------------------------------------------------------|
| Detection | `detect` | <pre><code>yolo task=detect mode=val </code></pre> |
| Instance Segment | `segment` | <pre><code>yolo task=segment mode=val </code></pre> |
| Classification | `classify` | <pre><code>yolo task=classify mode=val </code></pre> |
!!! note ""
<b>Note:</b> The arguments don't require a `'--'` prefix. These are reserved for special commands covered later.
---
## Overriding default config arguments
All global default arguments can be overridden by simply passing them as arguments in the CLI.
!!! tip ""
=== "Syntax"
```bash
yolo task= ... mode= ... {++ arg=val ++}
```
=== "Example"
Perform detection training for `10 epochs` with `learning_rate` of `0.01`
```bash
yolo task=detect mode=train {++ epochs=10 lr0=0.01 ++}
```
---
## Overriding default config file
You can override the config file entirely by passing a new file. You can create a copy of the default config file in
your current working dir as follows:
```bash
yolo task=init
```
You can then use the `cfg=name.yaml` argument to pass the new config file:
```bash
yolo cfg=default.yaml
```
??? example
=== "Command"
```bash
yolo task=init
yolo cfg=default.yaml
```

@ -1,42 +1,55 @@
Both the Ultralytics YOLO command-line and Python interfaces are simply a high-level abstraction on the base engine
executors. Let's take a look at the Trainer engine.
## BaseTrainer
BaseTrainer contains the generic boilerplate training routine. It can be customized for any task by overriding the
required functions or operations, as long as the correct formats are followed. For example, you can support your own
custom model and dataloader by just overriding these functions:
* `get_model(cfg, weights)` - The function that builds the model to be trained
* `get_dataloader()` - The function that builds the dataloader
More details and source code can be found in [`BaseTrainer` Reference](reference/base_trainer.md)
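As an illustrative sketch of that pattern (the class name and the `get_dataloader` signature below are assumptions for illustration, not the confirmed API; see the Reference for the exact hooks):

```python
from ultralytics.yolo.v8.detect import DetectionTrainer


class MyCustomTrainer(DetectionTrainer):
    def get_model(self, cfg, weights):
        # build and return your custom model here
        ...

    def get_dataloader(self, dataset_path, batch_size=16):  # assumed signature
        # build and return your custom dataloader here
        ...
```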
## DetectionTrainer
Here's how you can use the YOLOv8 `DetectionTrainer` and customize it.
```python
from ultralytics.yolo.v8.detect import DetectionTrainer
trainer = DetectionTrainer(overrides={...})
trainer.train()
trained_model = trainer.best # get best model
```
### Customizing the DetectionTrainer
Let's customize the trainer **to train a custom detection model** that is not supported directly. You can do this by
simply overloading the existing `get_model` functionality:
```python
from ultralytics.yolo.v8.detect import DetectionTrainer
class CustomTrainer(DetectionTrainer):
def get_model(self, cfg, weights):
...
trainer = CustomTrainer(overrides={...})
trainer.train()
```
You now realize that you need to customize the trainer further to:
* Customize the `loss function`.
* Add a `callback` that uploads the model to your Google Drive after every 10 `epochs`
Here's how you can do it:
```python
from ultralytics.yolo.v8.detect import DetectionTrainer
class CustomTrainer(DetectionTrainer):
def get_model(self, cfg, weights):
@ -47,20 +60,24 @@ class CustomTrainer(DetectionTrainer):
imgs = batch["imgs"]
bboxes = batch["bboxes"]
...
return loss, loss_items # see Reference-> Trainer for details on the expected format
# callback to upload model weights
def log_model(trainer):
last_weight_path = trainer.last
...
trainer = CustomTrainer(overrides={...})
trainer.add_callback("on_train_epoch_end", log_model) # Adds to existing callback
trainer.add_callback("on_train_epoch_end", log_model) # Adds to existing callback
trainer.train()
```
To know more about callback triggering events and entry points, check out our Callbacks guide # TODO
## Other engine components
There are other components that can be customized similarly, like `Validators` and `Predictors`.
See the Reference section for more information on these.

@ -22,25 +22,33 @@ trained, it can be easily deployed and used for real-time object detection and i
Ultralytics HUB is an essential tool for anyone looking to use YOLOv5 for their object detection and image segmentation
projects.
**[Get started now](https://hub.ultralytics.com)** and experience the power and simplicity of Ultralytics HUB for
yourself. Sign up for a free account and
start building, training, and deploying YOLOv5 and YOLOv8 models today.
## 1. Upload a Dataset
Ultralytics HUB datasets are just like YOLOv5 🚀 datasets; they use the same structure and the same label formats to keep
everything simple.
When you upload a dataset to Ultralytics HUB, make sure to **place your dataset YAML inside the dataset root directory**
as in the example shown below, and then zip for upload to https://hub.ultralytics.com/. Your **dataset YAML, directory
and zip** should all share the same name. For example, if your dataset is called 'coco6' as in our
example [ultralytics/hub/coco6.zip](https://github.com/ultralytics/hub/blob/master/coco6.zip), then you should have a
coco6.yaml inside your coco6/ directory, which should zip to create coco6.zip for upload:
```bash
zip -r coco6.zip coco6
```
The example [coco6.zip](https://github.com/ultralytics/hub/blob/master/coco6.zip) dataset in this repository can be
downloaded and unzipped to see exactly how to structure your custom dataset.
<p align="center"><img width="80%" src="https://user-images.githubusercontent.com/26833433/201424843-20fa081b-ad4b-4d6c-a095-e810775908d8.png" title="COCO6" /></p>
The dataset YAML is the same standard YOLOv5 YAML format. See
the [YOLOv5 Train Custom Data tutorial](https://github.com/ultralytics/yolov5/wiki/Train-Custom-Data) for full details.
```yaml
# Train/val/test sets as 1) dir: path/to/imgs, 2) file: path/to/imgs.txt, or 3) list: [path/to/imgs1, path/to/imgs2, ..]
path: # dataset root dir (leave empty for HUB)
@ -57,24 +65,26 @@ names:
...
```
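As a quick sanity check before zipping (a sketch assuming PyYAML is installed and the 'coco6' naming convention from the example above):

```python
import yaml

# Hypothetical path following the naming rule above: coco6/coco6.yaml inside the coco6/ root
with open("coco6/coco6.yaml") as f:
    data = yaml.safe_load(f)

assert "names" in data, "dataset YAML must define class names"
print(len(data["names"]), "classes:", data["names"])
```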
After zipping your dataset, sign in to [Ultralytics HUB](https://bit.ly/ultralytics_hub) and click the Datasets tab.
Click 'Upload Dataset' to upload, scan and visualize your new dataset before training new YOLOv5 models on it!
<img width="100%" alt="HUB Dataset Upload" src="https://user-images.githubusercontent.com/26833433/198611715-540c9856-49d7-4069-a2fd-7c9eb70e772e.png">
## 2. Train a Model
Connect to the Ultralytics HUB notebook and use your model API key to begin
training! <a href="https://colab.research.google.com/github/ultralytics/hub/blob/master/hub.ipynb" target="_blank"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"></a>
## 3. Deploy to Real World
Export your model to 13 different formats, including TensorFlow, ONNX, OpenVINO, CoreML, Paddle and many others. Run
models directly on your mobile device by downloading the [Ultralytics App](https://ultralytics.com/app_install)!
<a align="center" href="https://ultralytics.com/app_install" target="_blank">
<img width="100%" alt="Ultralytics mobile app" src="https://github.com/ultralytics/assets/raw/main/im/ultralytics-app.png"></a>
## ❓ Issues
If you are a new [Ultralytics HUB](https://bit.ly/ultralytics_hub) user and have questions or comments, you are in the
right place! Please raise a [New Issue](https://github.com/ultralytics/hub/issues/new/choose) and let us know what we
can do to make your life better 😃!

@ -15,17 +15,20 @@
# Welcome to Ultralytics YOLOv8
Welcome to the Ultralytics YOLOv8 documentation landing
page! [Ultralytics YOLOv8](https://github.com/ultralytics/ultralytics) is the latest version of the YOLO (You Only Look
Once) object detection and image segmentation model developed by [Ultralytics](https://ultralytics.com). This page
serves as the starting point for exploring the various resources available to help you get started with YOLOv8 and
understand its features and capabilities.
The YOLOv8 model is designed to be fast, accurate, and easy to use, making it an excellent choice for a wide range of
object detection and image segmentation tasks. It can be trained on large datasets and is capable of running on a
variety of hardware platforms, from CPUs to GPUs.
Whether you are a seasoned machine learning practitioner or new to the field, we hope that the resources on this page
will help you get the most out of YOLOv8. For any bugs and feature requests please
visit [GitHub Issues](https://github.com/ultralytics/ultralytics/issues). For professional support
please [Contact Us](https://ultralytics.com/contact).
## A Brief History of YOLO
@ -40,8 +43,8 @@ backbone network, adding a feature pyramid, and making use of focal loss.
In 2020, YOLOv4 was released which introduced a number of innovations such as the use of Mosaic data augmentation, a new
anchor-free detection head, and a new loss function.
In 2021, Ultralytics released [YOLOv5](https://github.com/ultralytics/yolov5), which further improved the model's
performance and added new features such as support for panoptic segmentation and object tracking.
YOLO has been widely used in a variety of applications, including autonomous vehicles, security and surveillance, and
medical imaging. It has also been used to win several competitions, such as the COCO Object Detection Challenge and the
@ -55,9 +58,10 @@ For more information about the history and development of YOLO, you can refer to
## Ultralytics YOLOv8
[Ultralytics YOLOv8](https://github.com/ultralytics/ultralytics) is the latest version of the YOLO object detection and
image segmentation model developed by Ultralytics. YOLOv8 is a cutting-edge, state-of-the-art (SOTA) model that builds
upon the success of previous YOLO versions and introduces new features and improvements to further boost performance and
flexibility.
One key feature of YOLOv8 is its extensibility. It is designed as a framework that supports all previous versions of
YOLO, making it easy to switch between different versions and compare their performance. This makes YOLOv8 an ideal

@ -0,0 +1,140 @@
This is the simplest way of using YOLOv8 models in a Python environment. It can be imported from
the `ultralytics` module.
!!! example "Train"
=== "From pretrained(recommanded)"
```python
from ultralytics import YOLO
model = YOLO("yolov8n.pt") # pass any model type
model.train(epochs=5)
```
=== "From scratch"
```python
from ultralytics import YOLO
model = YOLO("yolov8n.yaml")
model.train(data="coco128.yaml", epochs=5)
```
=== "Resume"
```python
# TODO: Resume feature is under development and should be released soon.
```
!!! example "Val"
=== "Val after training"
```python
from ultralytics import YOLO
model = YOLO("yolov8n.yaml")
model.train(data="coco128.yaml", epochs=5)
model.val() # It'll automatically evaluate on the data you trained with.
```
=== "Val independently"
```python
from ultralytics import YOLO
model = YOLO("model.pt")
# It'll use the data yaml file in model.pt if you don't set data.
model.val()
# or you can set the data you want to val
model.val(data="coco128.yaml")
```
!!! example "Predict"
=== "From source"
```python
from ultralytics import YOLO
model = YOLO("model.pt")
model.predict(source="0") # accepts all formats - img/folder/vid.*(mp4/format). 0 for webcam
model.predict(source="folder", show=True) # Display preds. Accepts all yolo predict arguments
```
=== "From image/ndarray/tensor"
```python
# TODO, still working on it.
```
=== "Return outputs"
```python
from ultralytics import YOLO
model = YOLO("model.pt")
outputs = model.predict(source="0", return_outputs=True) # treat predict as a Python generator
for output in outputs:
# each output here is a dict.
# for detection
print(output["det"]) # np.ndarray, (N, 6), xyxy, score, cls
# for segmentation
print(output["det"]) # np.ndarray, (N, 6), xyxy, score, cls
print(output["segment"]) # List[np.ndarray] * N, bounding coordinates of masks
# for classify
print(output["prob"]) # np.ndarray, (num_class, ), cls prob
```
!!! note "Export and Deployment"
=== "Export, Fuse & info"
```python
from ultralytics import YOLO
model = YOLO("model.pt")
model.fuse()
model.info(verbose=True) # Print model information
model.export(format="onnx") # TODO: export to other formats
```
=== "Deployment"
More functionality coming soon
To know more about using `YOLO` models, refer to the Model class Reference:
[Model reference](reference/model.md){ .md-button .md-button--primary}
---
### Using Trainers
The `YOLO` model class is a high-level wrapper on the Trainer classes. Each YOLO task has its own trainer that inherits
from `BaseTrainer`.
!!! tip "Detection Trainer Example"
```python
from ultralytics.yolo.v8.detect import DetectionTrainer, DetectionValidator, DetectionPredictor
# trainer
trainer = DetectionTrainer(overrides={})
trainer.train()
trained_model = trainer.best
# Validator
val = DetectionValidator(args=...)
val(model=trained_model)
# predictor
pred = DetectionPredictor(overrides={})
pred(source=SOURCE, model=trained_model)
# resume from last weight
overrides = {"resume": trainer.last}
trainer = DetectionTrainer(overrides=overrides)
```
You can easily customize Trainers to support custom tasks or explore R&D ideas.
Learn more about Customizing `Trainers`, `Validators` and `Predictors` to suit your project needs in the Customization
Section.
[Customization tutorials](engine.md){ .md-button .md-button--primary}

@ -1,24 +1,31 @@
## Install
Install YOLOv8 via the `ultralytics` pip package for the latest stable release or by cloning
the [https://github.com/ultralytics/ultralytics](https://github.com/ultralytics/ultralytics) repository for the most
up-to-date version.
!!! note "pip install (recommended)"
```
!!! example "Pip install method (recommended)"
```bash
pip install ultralytics
```
!!! note "git clone"
```
!!! example "Git clone method (for development)"
```bash
git clone https://github.com/ultralytics/ultralytics
cd ultralytics
pip install -e '.[dev]'
```
See the contributing section to learn more about contributing to the project.
## CLI
The YOLO command line interface (CLI) lets you simply train, validate or infer models on various tasks and versions.
CLI requires no customization or code. You can simply run all tasks from the terminal with the `yolo` command.
!!! example
=== "Syntax"
```bash
yolo task=detect mode=train model=yolov8n.yaml args...
@ -35,22 +42,32 @@ CLI requires no customization or code. You can simply run all tasks from the ter
```bash
yolo task=detect mode=train model=yolov8n.pt data=coco128.yaml device=\'0,1,2,3\'
```
[CLI Guide](cli.md){ .md-button .md-button--primary}
## Use with Python
Python usage allows users to easily use YOLOv8 inside their Python projects. It provides functions for loading and
running the model, as well as for processing the model's output. The interface is designed to be easy to use, so that
users can quickly implement object detection in their projects.
Overall, the Python interface is a useful tool for anyone looking to incorporate object detection, segmentation or
classification into their Python projects using YOLOv8.
!!! note
```python
from ultralytics import YOLO
# Load a model
model = YOLO("yolov8n.yaml") # build a new model from scratch
model = YOLO("yolov8n.pt") # load a pretrained model (recommended for training)
# Use the model
results = model.train(data="coco128.yaml", epochs=3) # train the model
results = model.val() # evaluate model performance on the validation set
results = model("https://ultralytics.com/images/bus.jpg") # predict on an image
success = model.export(format="onnx") # export the model to ONNX format
```
[Python Guide](python.md){.md-button .md-button--primary}

@ -1,5 +1,8 @@
All task Predictors inherit from the `BasePredictor` class, which contains the prediction routine boilerplate. You can
override any function of these Predictors to suit your needs.
---
### BasePredictor API Reference
:::ultralytics.yolo.engine.predictor.BasePredictor

@ -1,5 +1,8 @@
All task Trainers inherit from the `BaseTrainer` class, which contains the model training and optimization routine
boilerplate. You can override any function of these Trainers to suit your needs.
---
### BaseTrainer API Reference
:::ultralytics.yolo.engine.trainer.BaseTrainer

@ -1,5 +1,8 @@
All task Validators inherit from the `BaseValidator` class, which contains the model validation routine boilerplate.
You can override any function of these Validators to suit your needs.
---
### BaseValidator API Reference
:::ultralytics.yolo.engine.validator.BaseValidator

@ -1,2 +1,3 @@
### Exporter API Reference
:::ultralytics.yolo.engine.exporter.Exporter

@ -1,4 +1,5 @@
# nn Module
The Ultralytics nn module contains 3 main components:
1. **AutoBackend**: A module that can run inference on all popular model formats
@ -6,10 +7,13 @@ Ultralytics nn module contains 3 main components:
3. **modules**: Optimized and reusable neural network blocks built on PyTorch.
## AutoBackend
:::ultralytics.nn.autobackend.AutoBackend
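A hedged usage sketch (the single-argument constructor is an assumption; the point is that AutoBackend hides the backend behind one forward interface):

```python
import torch

from ultralytics.nn.autobackend import AutoBackend

model = AutoBackend("yolov8n.pt")  # assumed: a weights path in any supported format
im = torch.zeros(1, 3, 640, 640)  # dummy BCHW input
preds = model(im)  # same call regardless of the underlying backend
```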
## BaseModel
:::ultralytics.nn.tasks.BaseModel
## Modules
TODO

@ -1,159 +1,205 @@
This module contains optimized deep learning related operations used in the Ultralytics YOLO framework.
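For orientation, here is a quick sketch of the box-format helpers documented below in action (the import path is an assumption; the reference entries give the canonical locations):

```python
import torch

from ultralytics.yolo.utils.ops import xywh2xyxy, xyxy2xywh  # assumed import path

# One box in xyxy (corner) format: x1, y1, x2, y2
box_xyxy = torch.tensor([[10.0, 20.0, 50.0, 80.0]])

box_xywh = xyxy2xywh(box_xyxy)  # center-x, center-y, width, height
print(box_xywh)  # tensor([[30., 50., 40., 60.]])

print(xywh2xyxy(box_xywh))  # round-trips back to tensor([[10., 20., 50., 80.]])
```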
## Non-max suppression
:::ultralytics.ops.non_max_suppression
    handler: python
    options:
        show_source: false
        show_root_toc_entry: false
---
## Scale boxes
:::ultralytics.ops.scale_boxes
    handler: python
    options:
        show_source: false
        show_root_toc_entry: false
---
## Scale image
:::ultralytics.ops.scale_image
    handler: python
    options:
        show_source: false
        show_root_toc_entry: false
---
## Clip boxes
:::ultralytics.ops.clip_boxes
    handler: python
    options:
        show_source: false
        show_root_toc_entry: false
---
# Box Format Conversion
## xyxy2xywh
:::ultralytics.ops.xyxy2xywh
    handler: python
    options:
        show_source: false
        show_root_toc_entry: false
---
## xywh2xyxy
:::ultralytics.ops.xywh2xyxy
    handler: python
    options:
        show_source: false
        show_root_toc_entry: false
---
## xywhn2xyxy
:::ultralytics.ops.xywhn2xyxy
    handler: python
    options:
        show_source: false
        show_root_toc_entry: false
---
## xyxy2xywhn
:::ultralytics.ops.xyxy2xywhn
    handler: python
    options:
        show_source: false
        show_root_toc_entry: false
---
## xyn2xy
:::ultralytics.ops.xyn2xy
    handler: python
    options:
        show_source: false
        show_root_toc_entry: false
---
## xywh2ltwh
:::ultralytics.ops.xywh2ltwh
    handler: python
    options:
        show_source: false
        show_root_toc_entry: false
---
## xyxy2ltwh
:::ultralytics.ops.xyxy2ltwh
    handler: python
    options:
        show_source: false
        show_root_toc_entry: false
---
## ltwh2xywh
:::ultralytics.ops.ltwh2xywh
    handler: python
    options:
        show_source: false
        show_root_toc_entry: false
---
## ltwh2xyxy
:::ultralytics.ops.ltwh2xyxy
    handler: python
    options:
        show_source: false
        show_root_toc_entry: false
---
## segment2box
:::ultralytics.ops.segment2box
    handler: python
    options:
        show_source: false
        show_root_toc_entry: false
---
# Mask Operations
## resample_segments
:::ultralytics.ops.resample_segments
    handler: python
    options:
        show_source: false
        show_root_toc_entry: false
---
## crop_mask
:::ultralytics.ops.crop_mask
    handler: python
    options:
        show_source: false
        show_root_toc_entry: false
---
## process_mask_upsample
:::ultralytics.ops.process_mask_upsample
    handler: python
    options:
        show_source: false
        show_root_toc_entry: false
---
## process_mask
:::ultralytics.ops.process_mask
    handler: python
    options:
        show_source: false
        show_root_toc_entry: false
---
## process_mask_native
:::ultralytics.ops.process_mask_native
    handler: python
    options:
        show_source: false
        show_root_toc_entry: false
---
## scale_segments
:::ultralytics.ops.scale_segments
    handler: python
    options:
        show_source: false
        show_root_toc_entry: false
---
## masks2segments
:::ultralytics.ops.masks2segments
    handler: python
    options:
        show_source: false
        show_root_toc_entry: false
---
## clip_segments
:::ultralytics.ops.clip_segments
    handler: python
    options:
        show_source: false
        show_root_toc_entry: false
---

@ -1,91 +0,0 @@
## Using YOLO models
This is the simplest way of using YOLO models in a Python environment. It can be imported from the `ultralytics` module.
!!! example "Usage"
=== "Training"
```python
from ultralytics import YOLO
model = YOLO("yolov8n.yaml")
model(img_tensor) # or model.forward() to run inference
model.train(data="coco128.yaml", epochs=5)
```
=== "Training pretrained"
```python
from ultralytics import YOLO
model = YOLO("yolov8n.pt") # pass any model type
model(...) # inference
model.train(epochs=5)
```
=== "Resume Training"
```python
from ultralytics import YOLO
model = YOLO()
model.resume(task="detect") # resume last detection training
model.resume(model="last.pt") # resume from a given model/run
```
=== "Visualize/save Predictions"
```python
from ultralytics import YOLO
model = YOLO("model.pt")
model.predict(source="0") # accepts all formats - img/folder/vid.*(mp4/format). 0 for webcam
model.predict(source="folder", show=True) # Display preds. Accepts all yolo predict arguments
```
!!! note "Export and Deployment"
=== "Export, Fuse & info"
```python
from ultralytics import YOLO
model = YOLO("model.pt")
model.fuse()
model.info(verbose=True) # Print model information
model.export(format="onnx") # TODO: export to other formats
```
=== "Deployment"
More functionality coming soon
To know more about using `YOLO` models, refer to the Model class Reference:
[Model reference](reference/model.md){ .md-button .md-button--primary}
---
### Using Trainers
The `YOLO` model class is a high-level wrapper on the Trainer classes. Each YOLO task has its own trainer that inherits from `BaseTrainer`.
!!! tip "Detection Trainer Example"
```python
from ultralytics.yolo.v8.detect import DetectionTrainer, DetectionValidator, DetectionPredictor
# trainer
trainer = DetectionTrainer(overrides={})
trainer.train()
trained_model = trainer.best
# Validator
val = DetectionValidator(args=...)
val(model=trained_model)
# predictor
pred = DetectionPredictor(overrides={})
pred(source=SOURCE, model=trained_model)
# resume from last weight
overrides = {"resume": trainer.last}
trainer = DetectionTrainer(overrides=overrides)
```
You can easily customize Trainers to support custom tasks or explore R&D ideas.
Learn more about Customizing `Trainers`, `Validators` and `Predictors` to suit your project needs in the Customization Section.
[Customization tutorials](engine.md){ .md-button .md-button--primary}

@ -0,0 +1,133 @@
Image classification is the simplest of the three tasks and involves classifying an entire image into one of a set of
predefined classes.
<img width="1024" src="https://user-images.githubusercontent.com/26833433/212094133-6bb8c21c-3d47-41df-a512-81c5931054ae.png">
The output of an image classifier is a single class label and a confidence score. Image
classification is useful when you need to know only what class an image belongs to and don't need to know where objects
of that class are located or what their exact shape is.
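For instance (a sketch reusing the generator-style `predict` API shown in the Python usage section above, where classification outputs expose a `prob` vector):

```python
import numpy as np

from ultralytics import YOLO

model = YOLO("yolov8n-cls.pt")
outputs = model.predict(source="https://ultralytics.com/images/bus.jpg", return_outputs=True)
for output in outputs:
    probs = output["prob"]  # np.ndarray of shape (num_class,), per the Python usage docs
    print("top-1 class index:", int(np.argmax(probs)))
```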
!!! tip "Tip"
YOLOv8 _classification_ models use the `-cls` suffix, i.e. `yolov8n-cls.pt`, and are pretrained on ImageNet.
[Models](https://github.com/ultralytics/ultralytics/tree/main/ultralytics/models/v8/cls){.md-button .md-button--primary}
## Train
Train YOLOv8n-cls on the MNIST160 dataset for 100 epochs at image size 64. For a full list of available arguments
see the [Configuration](../config.md) page.
!!! example ""
=== "Python"
```python
from ultralytics import YOLO
# Load a model
model = YOLO("yolov8n-cls.yaml") # build a new model from scratch
model = YOLO("yolov8n-cls.pt") # load a pretrained model (recommended for training)
# Train the model
results = model.train(data="mnist160", epochs=100, imgsz=64)
```
=== "CLI"
```bash
yolo task=classify mode=train data=mnist160 model=yolov8n-cls.pt epochs=100 imgsz=64
```
## Val
Validate trained YOLOv8n-cls model accuracy on the MNIST160 dataset. No arguments need to be passed as the `model`
retains its training `data` and arguments as model attributes.
!!! example ""
=== "Python"
```python
from ultralytics import YOLO
# Load a model
model = YOLO("yolov8n-cls.pt") # load an official model
model = YOLO("path/to/best.pt") # load a custom model
# Validate the model
results = model.val() # no arguments needed, dataset and settings remembered
```
=== "CLI"
```bash
yolo task=classify mode=val model=yolov8n-cls.pt # val official model
yolo task=classify mode=val model=path/to/best.pt # val custom model
```
## Predict
Use a trained YOLOv8n-cls model to run predictions on images.
!!! example ""
=== "Python"
```python
from ultralytics import YOLO
# Load a model
model = YOLO("yolov8n-cls.pt") # load an official model
model = YOLO("path/to/best.pt") # load a custom model
# Predict with the model
results = model("https://ultralytics.com/images/bus.jpg") # predict on an image
```
=== "CLI"
```bash
yolo task=classify mode=predict model=yolov8n-cls.pt source="https://ultralytics.com/images/bus.jpg" # predict with official model
yolo task=classify mode=predict model=path/to/best.pt source="https://ultralytics.com/images/bus.jpg" # predict with custom model
```
## Export
Export a YOLOv8n-cls model to a different format like ONNX, CoreML, etc.
!!! example ""
=== "Python"
```python
from ultralytics import YOLO
# Load a model
model = YOLO("yolov8n-cls.pt") # load an official model
model = YOLO("path/to/best.pt") # load a custom trained
# Export the model
model.export(format="onnx")
```
=== "CLI"
```bash
yolo mode=export model=yolov8n-cls.pt format=onnx # export official model
yolo mode=export model=path/to/best.pt format=onnx # export custom trained model
```
Available YOLOv8-cls export formats include:
| Format | `format=` | Model |
|----------------------------------------------------------------------------|---------------|-------------------------------|
| [PyTorch](https://pytorch.org/) | - | `yolov8n-cls.pt` |
| [TorchScript](https://pytorch.org/docs/stable/jit.html) | `torchscript` | `yolov8n-cls.torchscript` |
| [ONNX](https://onnx.ai/) | `onnx` | `yolov8n-cls.onnx` |
| [OpenVINO](https://docs.openvino.ai/latest/index.html) | `openvino` | `yolov8n-cls_openvino_model/` |
| [TensorRT](https://developer.nvidia.com/tensorrt) | `engine` | `yolov8n-cls.engine` |
| [CoreML](https://github.com/apple/coremltools) | `coreml` | `yolov8n-cls.mlmodel` |
| [TensorFlow SavedModel](https://www.tensorflow.org/guide/saved_model) | `saved_model` | `yolov8n-cls_saved_model/` |
| [TensorFlow GraphDef](https://www.tensorflow.org/api_docs/python/tf/Graph) | `pb` | `yolov8n-cls.pb` |
| [TensorFlow Lite](https://www.tensorflow.org/lite) | `tflite` | `yolov8n-cls.tflite` |
| [TensorFlow Edge TPU](https://coral.ai/docs/edgetpu/models-intro/) | `edgetpu` | `yolov8n-cls_edgetpu.tflite` |
| [TensorFlow.js](https://www.tensorflow.org/js) | `tfjs` | `yolov8n-cls_web_model/` |
| [PaddlePaddle](https://github.com/PaddlePaddle) | `paddle` | `yolov8n-cls_paddle_model/` |
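As an example of consuming an export, the ONNX classifier above can be run with `onnxruntime`. A minimal sketch, assuming the model was exported at image size 64 to match the training example (the input name is read from the session rather than assumed):
```python
import numpy as np
import onnxruntime as ort

session = ort.InferenceSession("yolov8n-cls.onnx")  # file name from the table above
input_name = session.get_inputs()[0].name  # read from the graph, not assumed

x = np.zeros((1, 3, 64, 64), dtype=np.float32)  # dummy NCHW input at the assumed export size
outputs = session.run(None, {input_name: x})
print(int(np.argmax(outputs[0])))  # predicted class index
```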

@ -0,0 +1,132 @@
Object detection is a task that involves identifying the location and class of objects in an image or video stream.
<img width="1024" src="https://user-images.githubusercontent.com/26833433/212094133-6bb8c21c-3d47-41df-a512-81c5931054ae.png">
The output of an object detector is a set of bounding boxes that enclose the objects in the image, along with class labels
and confidence scores for each box. Object detection is a good choice when you need to identify objects of interest in a
scene but don't need to know their exact shape, since a bounding box is enough to locate each object.
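For illustration only (this is not the exact YOLOv8 return type), a common post-NMS layout is one row per detection, `(x1, y1, x2, y2, confidence, class)`:
```python
import torch

# Illustrative post-NMS detections, one row per object: (x1, y1, x2, y2, confidence, class)
dets = torch.tensor([[50.0, 30.0, 200.0, 180.0, 0.91, 0.0],
                     [220.0, 40.0, 340.0, 210.0, 0.58, 5.0]])

for x1, y1, x2, y2, conf, cls in dets.tolist():
    print(f"class {int(cls)}: box=({x1:.0f}, {y1:.0f}, {x2:.0f}, {y2:.0f}) conf={conf:.2f}")
```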
!!! tip "Tip"
YOLOv8 _detection_ models have no suffix and are the default YOLOv8 models, i.e. `yolov8n.pt`, and are pretrained on COCO.
[Models](https://github.com/ultralytics/ultralytics/tree/main/ultralytics/models/v8){ .md-button .md-button--primary}
## Train
Train YOLOv8n on the COCO128 dataset for 100 epochs at image size 640. For a full list of available arguments see
the [Configuration](../config.md) page.
!!! example ""
=== "Python"
```python
from ultralytics import YOLO
# Load a model
model = YOLO("yolov8n.yaml") # build a new model from scratch
model = YOLO("yolov8n.pt") # load a pretrained model (recommended for training)
# Train the model
results = model.train(data="coco128.yaml", epochs=100, imgsz=640)
```
=== "CLI"
```bash
yolo task=detect mode=train data=coco128.yaml model=yolov8n.pt epochs=100 imgsz=640
```
## Val
Validate trained YOLOv8n model accuracy on the COCO128 dataset. No arguments need to be passed as the `model` retains its
training `data` and arguments as model attributes.
!!! example ""
=== "Python"
```python
from ultralytics import YOLO
# Load a model
model = YOLO("yolov8n.pt") # load an official model
model = YOLO("path/to/best.pt") # load a custom model
# Validate the model
results = model.val() # no arguments needed, dataset and settings remembered
```
=== "CLI"
```bash
yolo task=detect mode=val model=yolov8n.pt # val official model
yolo task=detect mode=val model=path/to/best.pt # val custom model
```
## Predict
Use a trained YOLOv8n model to run predictions on images.
!!! example ""
=== "Python"
```python
from ultralytics import YOLO
# Load a model
model = YOLO("yolov8n.pt") # load an official model
model = YOLO("path/to/best.pt") # load a custom model
# Predict with the model
results = model("https://ultralytics.com/images/bus.jpg") # predict on an image
```
=== "CLI"
```bash
yolo task=detect mode=predict model=yolov8n.pt source="https://ultralytics.com/images/bus.jpg" # predict with official model
yolo task=detect mode=predict model=path/to/best.pt source="https://ultralytics.com/images/bus.jpg" # predict with custom model
```
## Export
Export a YOLOv8n model to a different format like ONNX, CoreML, etc.
!!! example ""
=== "Python"
```python
from ultralytics import YOLO
# Load a model
model = YOLO("yolov8n.pt") # load an official model
model = YOLO("path/to/best.pt") # load a custom trained model
# Export the model
model.export(format="onnx")
```
=== "CLI"
```bash
yolo mode=export model=yolov8n.pt format=onnx # export official model
yolo mode=export model=path/to/best.pt format=onnx # export custom trained model
```
Available YOLOv8 export formats include:
| Format | `format=` | Model |
|----------------------------------------------------------------------------|--------------------|---------------------------|
| [PyTorch](https://pytorch.org/) | - | `yolov8n.pt` |
| [TorchScript](https://pytorch.org/docs/stable/jit.html) | `torchscript` | `yolov8n.torchscript` |
| [ONNX](https://onnx.ai/) | `onnx` | `yolov8n.onnx` |
| [OpenVINO](https://docs.openvino.ai/latest/index.html) | `openvino` | `yolov8n_openvino_model/` |
| [TensorRT](https://developer.nvidia.com/tensorrt) | `engine` | `yolov8n.engine` |
| [CoreML](https://github.com/apple/coremltools) | `coreml` | `yolov8n.mlmodel` |
| [TensorFlow SavedModel](https://www.tensorflow.org/guide/saved_model) | `saved_model` | `yolov8n_saved_model/` |
| [TensorFlow GraphDef](https://www.tensorflow.org/api_docs/python/tf/Graph) | `pb` | `yolov8n.pb` |
| [TensorFlow Lite](https://www.tensorflow.org/lite) | `tflite` | `yolov8n.tflite` |
| [TensorFlow Edge TPU](https://coral.ai/docs/edgetpu/models-intro/) | `edgetpu` | `yolov8n_edgetpu.tflite` |
| [TensorFlow.js](https://www.tensorflow.org/js) | `tfjs` | `yolov8n_web_model/` |
| [PaddlePaddle](https://github.com/PaddlePaddle) | `paddle` | `yolov8n_paddle_model/` |
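For example, a TorchScript export is self-contained and loads without the ultralytics package. A minimal sketch, assuming the default 640 export image size:
```python
import torch

model = torch.jit.load("yolov8n.torchscript")  # produced by format=torchscript above
model.eval()

x = torch.zeros(1, 3, 640, 640)  # dummy NCHW input at the assumed 640 export size
with torch.no_grad():
    y = model(x)  # raw predictions; NMS and box post-processing are still required
```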

@ -0,0 +1,135 @@
Instance segmentation goes a step further than object detection and involves identifying individual objects in an image
and segmenting them from the rest of the image.
<img width="1024" src="https://user-images.githubusercontent.com/26833433/212094133-6bb8c21c-3d47-41df-a512-81c5931054ae.png">
The output of an instance segmentation model is a set of masks or
contours that outline each object in the image, along with class labels and confidence scores for each object. Instance
segmentation is useful when you need to know not only where objects are in an image, but also what their exact shape is.
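For illustration (random data stands in for real model output), instance masks are typically stacked binary arrays, one per detected object:
```python
import numpy as np

# Illustrative instance masks: 2 instances on a 640x640 image, 1 = pixel belongs to the instance
masks = np.random.rand(2, 640, 640) > 0.5

areas = masks.sum(axis=(1, 2))  # pixel area per instance
combined = masks.any(axis=0)    # union of all instances, e.g. for a foreground overlay
print(areas, combined.shape)
```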
!!! tip "Tip"
YOLOv8 _segmentation_ models use the `-seg` suffix, i.e. `yolov8n-seg.pt`, and are pretrained on COCO.
[Models](https://github.com/ultralytics/ultralytics/tree/main/ultralytics/models/v8/seg){.md-button .md-button--primary}
## Train
Train YOLOv8n-seg on the COCO128-seg dataset for 100 epochs at image size 640. For a full list of available
arguments see the [Configuration](../config.md) page.
!!! example ""
=== "Python"
```python
from ultralytics import YOLO
# Load a model
model = YOLO("yolov8n-seg.yaml") # build a new model from scratch
model = YOLO("yolov8n-seg.pt") # load a pretrained model (recommended for training)
# Train the model
results = model.train(data="coco128-seg.yaml", epochs=100, imgsz=640)
```
=== "CLI"
```bash
yolo task=segment mode=train data=coco128-seg.yaml model=yolov8n-seg.pt epochs=100 imgsz=640
```
## Val
Validate trained YOLOv8n-seg model accuracy on the COCO128-seg dataset. No arguments need to be passed as the `model`
retains its training `data` and arguments as model attributes.
!!! example ""
=== "Python"
```python
from ultralytics import YOLO
# Load a model
model = YOLO("yolov8n-seg.pt") # load an official model
model = YOLO("path/to/best.pt") # load a custom model
# Validate the model
results = model.val() # no arguments needed, dataset and settings remembered
```
=== "CLI"
```bash
yolo task=segment mode=val model=yolov8n-seg.pt # val official model
yolo task=segment mode=val model=path/to/best.pt # val custom model
```
## Predict
Use a trained YOLOv8n-seg model to run predictions on images.
!!! example ""
=== "Python"
```python
from ultralytics import YOLO
# Load a model
model = YOLO("yolov8n-seg.pt") # load an official model
model = YOLO("path/to/best.pt") # load a custom model
# Predict with the model
results = model("https://ultralytics.com/images/bus.jpg") # predict on an image
```
=== "CLI"
```bash
yolo task=segment mode=predict model=yolov8n-seg.pt source="https://ultralytics.com/images/bus.jpg" # predict with official model
yolo task=segment mode=predict model=path/to/best.pt source="https://ultralytics.com/images/bus.jpg" # predict with custom model
```
## Export
Export a YOLOv8n-seg model to a different format like ONNX, CoreML, etc.
!!! example ""
=== "Python"
```python
from ultralytics import YOLO
# Load a model
model = YOLO("yolov8n-seg.pt") # load an official model
model = YOLO("path/to/best.pt") # load a custom trained model
# Export the model
model.export(format="onnx")
```
=== "CLI"
```bash
yolo mode=export model=yolov8n-seg.pt format=onnx # export official model
yolo mode=export model=path/to/best.pt format=onnx # export custom trained model
```
Available YOLOv8-seg export formats include:
| Format | `format=` | Model |
|----------------------------------------------------------------------------|---------------|-------------------------------|
| [PyTorch](https://pytorch.org/) | - | `yolov8n-seg.pt` |
| [TorchScript](https://pytorch.org/docs/stable/jit.html) | `torchscript` | `yolov8n-seg.torchscript` |
| [ONNX](https://onnx.ai/) | `onnx` | `yolov8n-seg.onnx` |
| [OpenVINO](https://docs.openvino.ai/latest/index.html) | `openvino` | `yolov8n-seg_openvino_model/` |
| [TensorRT](https://developer.nvidia.com/tensorrt) | `engine` | `yolov8n-seg.engine` |
| [CoreML](https://github.com/apple/coremltools) | `coreml` | `yolov8n-seg.mlmodel` |
| [TensorFlow SavedModel](https://www.tensorflow.org/guide/saved_model) | `saved_model` | `yolov8n-seg_saved_model/` |
| [TensorFlow GraphDef](https://www.tensorflow.org/api_docs/python/tf/Graph) | `pb` | `yolov8n-seg.pb` |
| [TensorFlow Lite](https://www.tensorflow.org/lite) | `tflite` | `yolov8n-seg.tflite` |
| [TensorFlow Edge TPU](https://coral.ai/docs/edgetpu/models-intro/) | `edgetpu` | `yolov8n-seg_edgetpu.tflite` |
| [TensorFlow.js](https://www.tensorflow.org/js) | `tfjs` | `yolov8n-seg_web_model/` |
| [PaddlePaddle](https://github.com/PaddlePaddle) | `paddle` | `yolov8n-seg_paddle_model/` |
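Whichever task you export, the resulting ONNX graph can be sanity-checked with the `onnx` package before deployment. A minimal sketch:
```python
import onnx

model = onnx.load("yolov8n-seg.onnx")
onnx.checker.check_model(model)  # raises if the graph is structurally invalid
print([o.name for o in model.graph.output])  # seg exports typically expose boxes + mask prototypes
```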

@ -13,14 +13,14 @@ theme:
palette:
# Palette toggle for light mode
- scheme: default
primary: grey
# primary: grey
toggle:
icon: material/brightness-7
name: Switch to dark mode
# Palette toggle for dark mode
- scheme: slate
primary: black
# primary: black
toggle:
icon: material/brightness-4
name: Switch to light mode
@ -35,6 +35,7 @@ theme:
- navigation.top
- navigation.expand
- navigation.footer
- content.tabs.link # all code tabs change simultaneously
extra_css:
- stylesheets/style.css
@ -75,10 +76,15 @@ plugins:
# Primary navigation
nav:
- Quickstart: quickstart.md
- CLI: cli.md
- Python Interface: sdk.md
- Configuration: config.md
- Customization Guide: engine.md
- Tasks:
- Detection: tasks/detection.md
- Segmentation: tasks/segmentation.md
- Classification: tasks/classification.md
- Usage:
- CLI: cli.md
- Python: python.md
- Configuration: config.md
- Customization Guide: engine.md
- Ultralytics HUB: hub.md
- iOS and Android App: app.md
- Reference:

@ -27,7 +27,7 @@ def test_detect():
# predictor
pred = detect.DetectionPredictor(overrides={"imgsz": [640, 640]})
i = 0
for _ in pred(source=SOURCE, model="yolov8n.pt"):
for _ in pred(source=SOURCE, model="yolov8n.pt", return_outputs=True):
i += 1
assert i == 2, "predictor test failed"
@ -60,7 +60,7 @@ def test_segment():
# predictor
pred = segment.SegmentationPredictor(overrides={"imgsz": [640, 640]})
i = 0
for _ in pred(source=SOURCE, model="yolov8n-seg.pt"):
for _ in pred(source=SOURCE, model="yolov8n-seg.pt", return_outputs=True):
i += 1
assert i == 2, "predictor test failed"
@ -94,6 +94,6 @@ def test_classify():
# predictor
pred = classify.ClassificationPredictor(overrides={"imgsz": [640, 640]})
i = 0
for _ in pred(source=SOURCE, model=trained_model):
for _ in pred(source=SOURCE, model=trained_model, return_outputs=True):
i += 1
assert i == 2, "predictor test failed"

@ -32,7 +32,7 @@ def test_model_fuse():
def test_predict_dir():
model = YOLO(MODEL)
model.predict(source=ROOT / "assets", return_outputs=False)
model.predict(source=ROOT / "assets")
def test_val():
@ -98,3 +98,11 @@ def test_export_paddle():
def test_all_model_yamls():
for m in list((ROOT / 'models').rglob('*.yaml')):
YOLO(m.name)
def test_workflow():
model = YOLO(MODEL)
model.train(data="coco128.yaml", epochs=1, imgsz=32)
model.val()
model.predict(SOURCE)
model.export(format="onnx", opset=12) # export a model to ONNX format

@ -177,6 +177,7 @@ class Exporter:
for p in model.parameters():
p.requires_grad = False
model.eval()
model.float()  # cast to FP32 before fusing/export
model = model.fuse()
for k, m in model.named_modules():
if isinstance(m, (Detect, Segment)):

@ -111,7 +111,7 @@ class YOLO:
self.model.fuse()
@smart_inference_mode()
def predict(self, source, return_outputs=True, **kwargs):
def predict(self, source, return_outputs=False, **kwargs):
"""
Visualize prediction.
@ -191,6 +191,9 @@ class YOLO:
self.trainer.model = self.trainer.get_model(weights=self.model if self.ckpt else None, cfg=self.model.yaml)
self.model = self.trainer.model
self.trainer.train()
# update model and configs after training
self.model, _ = attempt_load_one_weight(str(self.trainer.best))
self.overrides = self.model.args
def to(self, device):
"""

@ -105,7 +105,7 @@ class BasePredictor:
def postprocess(self, preds, img, orig_img):
return preds
def setup(self, source=None, model=None, return_outputs=True):
def setup(self, source=None, model=None, return_outputs=False):
# source
source = str(source if source is not None else self.args.source)
is_file = Path(source).suffix[1:] in (IMG_FORMATS + VID_FORMATS)
@ -161,7 +161,7 @@ class BasePredictor:
return model
@smart_inference_mode()
def __call__(self, source=None, model=None, return_outputs=True):
def __call__(self, source=None, model=None, return_outputs=False):
self.run_callbacks("on_predict_start")
model = self.model if self.done_setup else self.setup(source, model, return_outputs)
model.eval()

@ -24,7 +24,7 @@ class DetectionValidator(BaseValidator):
self.data_dict = yaml_load(check_file(self.args.data), append_filename=True) if self.args.data else None
self.is_coco = False
self.class_map = None
self.metrics = DetMetrics(save_dir=self.save_dir, plot=self.args.plots)
self.metrics = DetMetrics(save_dir=self.save_dir)
self.iouv = torch.linspace(0.5, 0.95, 10) # iou vector for mAP@0.5:0.95
self.niou = self.iouv.numel()
@ -34,8 +34,7 @@ class DetectionValidator(BaseValidator):
for k in ["batch_idx", "cls", "bboxes"]:
batch[k] = batch[k].to(self.device)
nb, _, height, width = batch["img"].shape
batch["bboxes"] *= torch.tensor((width, height, width, height), device=self.device) # to pixels
nb = len(batch["img"])
self.lb = [torch.cat([batch["cls"], batch["bboxes"]], dim=-1)[batch["batch_idx"] == i]
for i in range(nb)] if self.args.save_hybrid else [] # for autolabelling
@ -50,6 +49,7 @@ class DetectionValidator(BaseValidator):
self.nc = head.nc
self.names = model.names
self.metrics.names = self.names
self.metrics.plot = self.args.plots
self.confusion_matrix = ConfusionMatrix(nc=self.nc)
self.seen = 0
self.jdict = []
@ -95,7 +95,9 @@ class DetectionValidator(BaseValidator):
# Evaluate
if nl:
tbox = ops.xywh2xyxy(bbox) # target boxes
height, width = batch["img"].shape[2:]
tbox = ops.xywh2xyxy(bbox) * torch.tensor(
(width, height, width, height), device=self.device) # target boxes
ops.scale_boxes(batch["img"][si].shape[1:], tbox, shape,
ratio_pad=batch["ratio_pad"][si]) # native-space labels
labelsn = torch.cat((cls, tbox), 1) # native-space labels

@ -22,7 +22,7 @@ class SegmentationValidator(DetectionValidator):
def __init__(self, dataloader=None, save_dir=None, pbar=None, logger=None, args=None):
super().__init__(dataloader, save_dir, pbar, logger, args)
self.args.task = "segment"
self.metrics = SegmentMetrics(save_dir=self.save_dir, plot=self.args.plots)
self.metrics = SegmentMetrics(save_dir=self.save_dir)
def preprocess(self, batch):
batch = super().preprocess(batch)
@ -31,13 +31,15 @@ class SegmentationValidator(DetectionValidator):
def init_metrics(self, model):
head = model.model[-1] if self.training else model.model.model[-1]
self.is_coco = self.data.get('val', '').endswith(f'coco{os.sep}val2017.txt') # is COCO dataset
val = self.data.get('val', '') # validation path
self.is_coco = isinstance(val, str) and val.endswith(f'coco{os.sep}val2017.txt') # is COCO dataset
self.class_map = ops.coco80_to_coco91_class() if self.is_coco else list(range(1000))
self.args.save_json |= self.is_coco and not self.training # run on final val if training COCO
self.nc = head.nc
self.nm = head.nm if hasattr(head, "nm") else 32
self.names = model.names
self.metrics.names = self.names
self.metrics.plot = self.args.plots
self.confusion_matrix = ConfusionMatrix(nc=self.nc)
self.plot_masks = []
self.seen = 0
@ -97,7 +99,9 @@ class SegmentationValidator(DetectionValidator):
# Evaluate
if nl:
tbox = ops.xywh2xyxy(bbox) # target boxes
height, width = batch["img"].shape[2:]
tbox = ops.xywh2xyxy(bbox) * torch.tensor(
(width, height, width, height), device=self.device) # target boxes
ops.scale_boxes(batch["img"][si].shape[1:], tbox, shape,
ratio_pad=batch["ratio_pad"][si]) # native-space labels
labelsn = torch.cat((cls, tbox), 1) # native-space labels
