`ultralytics 8.0.97` confusion matrix, windows, docs updates (#2511)

Co-authored-by: Yonghye Kwon <developer.0hye@gmail.com>
Co-authored-by: Dowon <ks2515@naver.com>
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
Co-authored-by: Laughing <61612323+Laughing-q@users.noreply.github.com>
Glenn Jocher 2 years ago committed by GitHub
parent 6ee3a9a74b
commit d1107ca4cb

@ -23,6 +23,8 @@ jobs:
steps:
- name: Checkout code
uses: actions/checkout@v3
with:
fetch-depth: "0" # pulls all commits (needed for correct last-updated dates in Docs)
- name: Set up Python environment
uses: actions/setup-python@v4
with:

@ -1,3 +1,7 @@
---
description: Learn how to install the Ultralytics package in developer mode and build/serve locally using MkDocs. Deploy your project to your host easily.
---
# Ultralytics Docs
Ultralytics Docs are deployed to [https://docs.ultralytics.com](https://docs.ultralytics.com).

@ -1,3 +1,7 @@
---
description: Learn how Ultralytics prioritizes security. Get insights into Snyk and GitHub CodeQL scans, and how to report security issues in YOLOv8.
---
# Security Policy
At [Ultralytics](https://ultralytics.com), the security of our users' data and systems is of utmost importance. To

@ -1,5 +1,6 @@
---
comments: true
description: Learn how torchvision organizes classification image datasets. Use this code to create and train models. CLI and Python code shown.
---
# Image Classification Datasets Overview
@ -77,6 +78,7 @@ cifar-10-/
In this example, the `train` directory contains subdirectories for each class in the dataset, and each class subdirectory contains all the images for that class. The `test` directory has a similar structure. The `root` directory also contains other files that are part of the CIFAR10 dataset.
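As a minimal sketch of how this layout is consumed (assuming the `ultralytics` package is installed; the dataset path is illustrative):
```python
from ultralytics import YOLO

# Load a pretrained YOLOv8 classification model
model = YOLO("yolov8n-cls.pt")

# Train on a torchvision-style folder dataset: `data` points at the root
# directory containing the train/ and test/ subfolders shown above
model.train(data="path/to/cifar-10-", epochs=10, imgsz=32)
```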
## Usage
!!! example ""
=== "Python"
@ -98,4 +100,5 @@ In this example, the `train` directory contains subdirectories for each class in
```
## Supported Datasets
TODO

@ -1,5 +1,6 @@
---
comments: true
description: Learn about the COCO dataset, designed to encourage research on object detection, segmentation, and captioning with standardized evaluation metrics.
---
# COCO Dataset

@ -1,5 +1,6 @@
---
comments: true
description: Learn about supported dataset formats for training YOLO detection models, including Ultralytics YOLO and COCO, in this Object Detection Datasets Overview.
---
# Object Detection Datasets Overview
@ -20,6 +21,7 @@ The dataset format used for training YOLO detection models is as follows:
- Object width and height: The width and height of the object, normalized to be between 0 and 1.
The format for a single row in the detection dataset file is as follows:
```
<object-class> <x> <y> <width> <height>
```
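As a worked example (values invented for illustration): a 100x100 px object of class 0 centered at pixel (320, 240) in a 640x480 image becomes
```
0 0.5 0.5 0.15625 0.208333
```
since x = 320/640, y = 240/480, width = 100/640 and height = 100/480.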
@ -55,6 +57,7 @@ The `names` field is a list of the names of the object classes. The order of the
NOTE: Either `nc` or `names` must be defined. Defining both is not mandatory.
Alternatively, you can directly define class names like this:
```yaml
names:
0: person
@ -72,6 +75,7 @@ names: ['person', 'car']
```
## Usage
!!! example ""
=== "Python"
@ -93,6 +97,7 @@ names: ['person', 'car']
```
## Supported Datasets
TODO
## Port or Convert label formats

@ -1,5 +1,6 @@
---
comments: true
description: Ultralytics provides support for various datasets to facilitate multiple computer vision tasks. Check out our list of main datasets and their summaries.
---
# Datasets Overview
@ -10,44 +11,44 @@ Ultralytics provides support for various datasets to facilitate computer vision
Bounding box object detection is a computer vision technique that involves detecting and localizing objects in an image by drawing a bounding box around each object.
* [Argoverse](detect/argoverse.md): A dataset containing 3D tracking and motion forecasting data from urban environments with rich annotations.
* [COCO](detect/coco.md): A large-scale dataset designed for object detection, segmentation, and captioning with over 200K labeled images.
* [COCO8](detect/coco8.md): Contains the first 4 images each from COCO train and COCO val (8 images total), suitable for quick tests.
* [Global Wheat 2020](detect/globalwheat2020.md): A dataset of wheat head images collected from around the world for object detection and localization tasks.
* [Objects365](detect/objects365.md): A high-quality, large-scale dataset for object detection with 365 object categories and over 600K annotated images.
* [SKU-110K](detect/sku-110k.md): A dataset featuring dense object detection in retail environments with over 11K images and 1.7 million bounding boxes.
* [VisDrone](detect/visdrone.md): A dataset containing object detection and multi-object tracking data from drone-captured imagery with over 10K images and video sequences.
* [VOC](detect/voc.md): The Pascal Visual Object Classes (VOC) dataset for object detection and segmentation with 20 object classes and over 11K images.
* [xView](detect/xview.md): A dataset for object detection in overhead imagery with 60 object categories and over 1 million annotated objects.
## [Instance Segmentation Datasets](segment/index.md)
Instance segmentation is a computer vision technique that involves identifying and localizing objects in an image at the pixel level.
* [COCO](segment/coco.md): A large-scale dataset designed for object detection, segmentation, and captioning tasks with over 200K labeled images.
* [COCO8-seg](segment/coco8-seg.md): A smaller dataset for instance segmentation tasks, containing a subset of 8 COCO images with segmentation annotations.
## [Pose Estimation](pose/index.md)
Pose estimation is a technique used to determine the pose of the object relative to the camera or the world coordinate system.
* [COCO](pose/coco.md): A large-scale dataset with human pose annotations designed for pose estimation tasks.
* [COCO8-pose](pose/coco8-pose.md): A smaller dataset for pose estimation tasks, containing a subset of 8 COCO images with human pose annotations.
## [Classification](classify/index.md)
Image classification is a computer vision task that involves categorizing an image into one or more predefined classes or categories based on its visual content.
* [Caltech 101](classify/caltech101.md): A dataset containing images of 101 object categories for image classification tasks.
* [Caltech 256](classify/caltech256.md): An extended version of Caltech 101 with 256 object categories and more challenging images.
* [CIFAR-10](classify/cifar10.md): A dataset of 60K 32x32 color images in 10 classes, with 6K images per class.
* [CIFAR-100](classify/cifar100.md): An extended version of CIFAR-10 with 100 object categories and 600 images per class.
* [Fashion-MNIST](classify/fashion-mnist.md): A dataset consisting of 70,000 grayscale images of 10 fashion categories for image classification tasks.
* [ImageNet](classify/imagenet.md): A large-scale dataset for object detection and image classification with over 14 million images and 20,000 categories.
* [ImageNet-10](classify/imagenet10.md): A smaller subset of ImageNet with 10 categories for faster experimentation and testing.
* [Imagenette](classify/imagenette.md): A smaller subset of ImageNet that contains 10 easily distinguishable classes for quicker training and testing.
* [Imagewoof](classify/imagewoof.md): A more challenging subset of ImageNet containing 10 dog breed categories for image classification tasks.
* [MNIST](classify/mnist.md): A dataset of 70,000 grayscale images of handwritten digits for image classification tasks.
## [Multi-Object Tracking](track/index.md)

@ -1,5 +1,6 @@
---
comments: true
description: Learn how to format your dataset for training YOLO models with Ultralytics YOLO format using our concise tutorial and example YAML files.
---
# Pose Estimation Datasets Overview
@ -25,16 +26,16 @@ Here is an example of the label format for pose estimation task:
Format with Dim = 2
```
<class-index> <x> <y> <width> <height> <px1> <py1> <px2> <py2> ... <pxn> <pyn>
```
Format with Dim = 3
```
<class-index> <x> <y> <width> <height> <px1> <py1> <p1-visibility> <px2> <py2> <p2-visibility> ... <pxn> <pyn> <pn-visibility>
```
In this format, `<class-index>` is the index of the class for the object, `<x> <y> <width> <height>` are the coordinates of the bounding box, and `<px1> <py1> <px2> <py2> ... <pxn> <pyn>` are the pixel coordinates of the keypoints. The coordinates are separated by spaces.
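As a worked illustration with Dim = 3 (values invented for the example; following the common convention where visibility is 0 = not labeled, 1 = labeled but not visible, 2 = visible), a single object with two keypoints might be written as:
```
0 0.5 0.5 0.2 0.4 0.52 0.42 2 0.48 0.58 1
```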
**Dataset file format**
@ -62,6 +63,7 @@ The `names` field is a list of the names of the object classes. The order of the
NOTE: Either `nc` or `names` must be defined. Defining both is not mandatory.
Alternatively, you can directly define class names like this:
```yaml
names:
0: person
@ -86,6 +88,7 @@ flip_idx: [0, 2, 1, 4, 3, 6, 5, 8, 7, 10, 9, 12, 11, 14, 13, 16, 15]
```
## Usage
!!! example ""
=== "Python"
@ -107,6 +110,7 @@ flip_idx: [0, 2, 1, 4, 3, 6, 5, 8, 7, 10, 9, 12, 11, 14, 13, 16, 15]
```
## Supported Datasets
TODO
## Port or Convert label formats

@ -1,5 +1,6 @@
---
comments: true
description: Learn about the Ultralytics YOLO dataset format for segmentation models. Use YAML to train Detection Models. Convert COCO to YOLO format using Python.
---
# Instance Segmentation Datasets Overview
@ -32,6 +33,7 @@ Here is an example of the YOLO dataset format for a single image with two object
0 0.6812 0.48541 0.67 0.4875 0.67656 0.487 0.675 0.489 0.66
1 0.5046 0.0 0.5015 0.004 0.4984 0.00416 0.4937 0.010 0.492 0.0104
```
Note: The rows do not all need to be the same length; each object may have a different number of polygon points.
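A minimal parsing sketch (assuming rows are whitespace-separated, as above) showing how one row splits into a class index and a variable-length list of (x, y) polygon points:
```python
# Parse one YOLO segmentation label row; coordinates are normalized to [0, 1]
row = "0 0.6812 0.48541 0.67 0.4875 0.67656 0.487 0.675 0.489"
values = row.split()
class_index = int(values[0])
coords = [float(v) for v in values[1:]]
polygon = list(zip(coords[0::2], coords[1::2]))  # [(x1, y1), (x2, y2), ...]
print(class_index, polygon)
```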
**Dataset file format**
@ -56,6 +58,7 @@ The `names` field is a list of the names of the object classes. The order of the
NOTE: Either `nc` or `names` must be defined. Defining both is not mandatory.
Alternatively, you can directly define class names like this:
```yaml
names:
0: person
@ -73,6 +76,7 @@ names: ['person', 'car']
```
## Usage
!!! example ""
=== "Python"

@ -1,5 +1,6 @@
---
comments: true
description: Discover the datasets compatible with Multi-Object Detector. Train your trackers and make your detections more efficient with Ultralytics' YOLO.
---
# Multi-object Tracking Datasets Overview
@ -26,4 +27,3 @@ Support for training trackers alone is coming soon
```bash
yolo track model=yolov8n.pt source="https://youtu.be/Zgi9g1ksQHc" conf=0.3 iou=0.5 show
```
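A Python-side equivalent is sketched below (hedged: this assumes `model.track` mirrors the CLI flags above in this `ultralytics` version):
```python
from ultralytics import YOLO

# Load a detection model and run the tracker on a video source
model = YOLO("yolov8n.pt")
results = model.track(source="https://youtu.be/Zgi9g1ksQHc", conf=0.3, iou=0.5, show=True)
```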

@ -1,3 +1,7 @@
---
description: Individual Contributor License Agreement. Settle Intellectual Property issues for Contributions made to anything open source released by Ultralytics.
---
# Ultralytics Individual Contributor License Agreement
Thank you for your interest in contributing to open source software projects (“Projects”) made available by Ultralytics

@ -1,5 +1,6 @@
---
comments: true
description: 'Get quick answers to common Ultralytics YOLO questions: Hardware requirements, fine-tuning, conversion, real-time detection, and accuracy tips.'
---
# Ultralytics YOLO Frequently Asked Questions (FAQ)

@ -1,5 +1,6 @@
---
comments: true
description: Read the Ultralytics Contributor Covenant Code of Conduct. Learn ways to create a welcoming community & consequences for inappropriate conduct.
---
# Ultralytics Contributor Covenant Code of Conduct

@ -1,5 +1,6 @@
---
comments: true
description: Learn how to contribute to Ultralytics Open-Source YOLO Repositories with contributions guidelines, pull requests requirements, and GitHub CI tests.
---
# Contributing to Ultralytics Open-Source YOLO Repositories

@ -1,5 +1,6 @@
---
comments: true
description: Get comprehensive resources for Ultralytics YOLO repositories. Find guides, FAQs, MRE creation, CLA & more. Join the supportive community now!
---
Welcome to the Ultralytics Help page! We are committed to providing you with comprehensive resources to make your experience with Ultralytics YOLO repositories as smooth and enjoyable as possible. On this page, you'll find essential links to guides and documents that will help you navigate through common tasks and address any questions you might have while using our repositories.

@ -1,5 +1,6 @@
---
comments: true
description: Learn how to create a Minimum Reproducible Example (MRE) for Ultralytics YOLO bug reports to help maintainers and contributors understand your issue better.
---
# Creating a Minimum Reproducible Example for Bug Reports in Ultralytics YOLO Repositories

@ -1,5 +1,6 @@
---
comments: true
description: Run YOLO models on your Android device for real-time object detection with Ultralytics Android App. Utilizes TensorFlow Lite and hardware delegates.
---
# Ultralytics Android App: Real-time Object Detection with YOLO Models

@ -1,5 +1,6 @@
---
comments: true
description: Experience the power of YOLOv5 and YOLOv8 models with Ultralytics HUB app. Download from Google Play and App Store now.
---
# Ultralytics HUB App

@ -1,5 +1,6 @@
---
comments: true
description: Get started with the Ultralytics iOS app and run YOLO models in real-time for object detection on your iPhone or iPad with the Apple Neural Engine.
---
# Ultralytics iOS App: Real-time Object Detection with YOLO Models
@ -33,7 +34,6 @@ By combining quantized YOLO models with the Apple Neural Engine, the Ultralytics
| 2021 | [iPhone 13](https://en.wikipedia.org/wiki/IPhone_13) | [A15 Bionic](https://en.wikipedia.org/wiki/Apple_A15) | 5 nm | 15.8 |
| 2022 | [iPhone 14](https://en.wikipedia.org/wiki/IPhone_14) | [A16 Bionic](https://en.wikipedia.org/wiki/Apple_A16) | 4 nm | 17.0 |
Please note that this list only includes iPhone models from 2017 onwards, and the ANE TOPs values are approximate.
## Getting Started with the Ultralytics iOS App

@ -1,5 +1,6 @@
---
comments: true
description: Upload custom datasets to Ultralytics HUB for YOLOv5 and YOLOv8 models. Follow YAML structure, zip and upload. Scan & train new models.
---
# HUB Datasets

@ -1,5 +1,6 @@
---
comments: true
description: 'Ultralytics HUB: Train & deploy YOLO models from one spot! Use drag-and-drop interface with templates & pre-training models. Check quickstart, datasets, and more.'
---
# Ultralytics HUB
@ -20,7 +21,6 @@ comments: true
launch [Ultralytics HUB](https://bit.ly/ultralytics_hub), a new web tool for training and deploying all your YOLOv5 and YOLOv8 🚀
models from one spot!
## Introduction
HUB is designed to be user-friendly and intuitive, with a drag-and-drop interface that allows users to

@ -6,7 +6,6 @@ comments: true
This page is currently under construction! 👷 Please check back later for updates. 😃🔜
# YOLO Inference API
The YOLO Inference API allows you to access the YOLOv8 object detection capabilities via a RESTful API. This enables you to run object detection on images without the need to install and set up the YOLOv8 environment locally.
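To make the request flow concrete, a minimal sketch with `requests` (the endpoint URL, header name, and payload fields here are assumptions for illustration, not confirmed API details):
```python
import requests

# Hypothetical endpoint and auth header, shown only to illustrate the flow
url = "https://api.ultralytics.com/v1/predict/MODEL_ID"  # assumed URL format
headers = {"x-api-key": "API_KEY"}  # assumed header name
data = {"size": 640, "confidence": 0.25}  # assumed payload fields

with open("path/to/image.jpg", "rb") as f:
    response = requests.post(url, headers=headers, data=data, files={"image": f})
print(response.json())
```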
@ -45,7 +44,6 @@ print(response.json())
In this example, replace `API_KEY` with your actual API key, `MODEL_ID` with the desired model ID, and `path/to/image.jpg` with the path to the image you want to analyze.
## Example Usage with CLI
You can use the YOLO Inference API with the command-line interface (CLI) by utilizing the `curl` command. Replace `API_KEY` with your actual API key, `MODEL_ID` with the desired model ID, and `image.jpg` with the path to the image you want to analyze:
@ -334,7 +332,6 @@ YOLO segmentation models, such as `yolov8n-seg.pt`, can return JSON responses fr
}
```
### Pose Model Format
YOLO pose models, such as `yolov8n-pose.pt`, can return JSON responses from local inference, CLI API inference, and Python API inference. All of these methods produce the same JSON response format.

@ -1,5 +1,6 @@
---
comments: true
description: Train and Deploy your Model to 13 different formats, including TensorFlow, ONNX, OpenVINO, CoreML, Paddle or directly on Mobile.
---
# HUB Models
@ -11,7 +12,6 @@ Connect to the Ultralytics HUB notebook and use your model API key to begin trai
<a href="https://colab.research.google.com/github/ultralytics/hub/blob/master/hub.ipynb" target="_blank">
<img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"></a>
## Deploy to Real World
Export your model to 13 different formats, including TensorFlow, ONNX, OpenVINO, CoreML, Paddle and many others. Run

@ -1,5 +1,6 @@
---
comments: true
description: Explore Ultralytics YOLOv8, a cutting-edge real-time object detection and image segmentation model for various applications and hardware platforms.
---
<div align="center">

@ -1,5 +1,6 @@
---
comments: true
description: Learn about the supported models and architectures, such as YOLOv3, YOLOv5, and YOLOv8, and how to contribute your own model to Ultralytics.
---
# Models

@ -1,5 +1,6 @@
---
comments: true
description: Learn about the Vision Transformer (ViT) and segment anything with SAM models. Train and use pre-trained models with Python API.
---
# Vision Transformers
@ -9,11 +10,11 @@ Vit models currently support Python environment:
```python
from ultralytics.vit import SAM
# from ultralytics.vit import MODEL_TYPE
model = SAM("sam_b.pt")
model.info() # display model information
model.predict(...) # predict
```
# Segment Anything

@ -1,5 +1,6 @@
---
comments: true
description: Detect objects faster and more accurately using Ultralytics YOLOv5u. Find pre-trained models for each task, including Inference, Validation and Training.
---
# YOLOv5u

@ -1,5 +1,6 @@
---
comments: true
description: Learn about YOLOv8's pre-trained weights supporting detection, instance segmentation, pose, and classification tasks. Get performance details.
---
# YOLOv8

@ -1,5 +1,6 @@
---
comments: true
description: Benchmark mode compares speed and accuracy of various YOLOv8 export formats like ONNX or OpenVINO. Optimize formats for speed or accuracy.
---
<img width="1024" src="https://github.com/ultralytics/assets/raw/main/yolov8/banner-integrations.png">

@ -1,5 +1,6 @@
---
comments: true
description: 'Export mode: Create a deployment-ready YOLOv8 model by converting it to various formats. Export to ONNX or OpenVINO for up to 3x CPU speedup.'
---
<img width="1024" src="https://github.com/ultralytics/assets/raw/main/yolov8/banner-integrations.png">

@ -1,5 +1,6 @@
---
comments: true
description: Use Ultralytics YOLOv8 Modes (Train, Val, Predict, Export, Track, Benchmark) to train, validate, predict, track, export or benchmark.
---
# Ultralytics YOLOv8 Modes

@ -1,5 +1,6 @@
---
comments: true
description: Get started with YOLOv8 Predict mode and input sources. Accepts various input sources such as images, videos, and directories.
---
<img width="1024" src="https://github.com/ultralytics/assets/raw/main/yolov8/banner-integrations.png">
@ -58,10 +59,11 @@ whether each source can be used in streaming mode with `stream=True` ✅ and an
| YouTube ✅ | `'https://youtu.be/Zgi9g1ksQHc'` | `str` | |
| stream ✅ | `'rtsp://example.com/media.mp4'` | `str` | RTSP, RTMP, HTTP |
## Arguments
`model.predict` accepts multiple arguments that control the prediction operation. These arguments can be passed directly to `model.predict`:
!!! example
```
model.predict(source, save=True, imgsz=320, conf=0.5)
```
@ -220,6 +222,7 @@ masks, classification logits, etc.) found in the results object
res_plotted = res[0].plot()
cv2.imshow("result", res_plotted)
```
| Argument | Description |
|-------------------------------|----------------------------------------------------------------------------------------|
| `conf (bool)` | Whether to plot the detection confidence score. |
@ -234,7 +237,6 @@ masks, classification logits, etc.) found in the results object
| `masks (bool)` | Whether to plot the masks. |
| `probs (bool)` | Whether to plot classification probability. |
## Streaming Source `for`-loop
Here's a Python script using OpenCV (cv2) and YOLOv8 to run inference on video frames. This script assumes you have already installed the necessary packages (opencv-python and ultralytics).
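A minimal sketch of such a loop (assuming a local video path):
```python
import cv2
from ultralytics import YOLO

# Load the model and open a video source
model = YOLO("yolov8n.pt")
cap = cv2.VideoCapture("path/to/video.mp4")

while cap.isOpened():
    success, frame = cap.read()
    if not success:
        break
    # Run inference on the frame and draw the annotated result
    results = model(frame)
    annotated = results[0].plot()
    cv2.imshow("YOLOv8 Inference", annotated)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

cap.release()
cv2.destroyAllWindows()
```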

@ -1,5 +1,6 @@
---
comments: true
description: Validate and improve YOLOv8n model accuracy on COCO128 and other datasets using hyperparameter & configuration tuning, in Val mode.
---
<img width="1024" src="https://github.com/ultralytics/assets/raw/main/yolov8/banner-integrations.png">

@ -5,14 +5,14 @@ https://github.com/squidfunk/mkdocs-material/blob/master/src/partials/source-fil
<br>
<div class="md-source-file">
<small>
<!-- mkdocs-git-revision-date-localized-plugin -->
{% if page.meta.git_revision_date_localized %}
📅 {{ lang.t("source.file.date.updated") }}:
{{ page.meta.git_revision_date_localized }}
{% if page.meta.git_creation_date_localized %}
<br/>
🎂 {{ lang.t("source.file.date.created") }}:
{{ page.meta.git_creation_date_localized }}
{% endif %}
@ -22,5 +22,5 @@ https://github.com/squidfunk/mkdocs-material/blob/master/src/partials/source-fil
📅 {{ lang.t("source.file.date.updated") }}:
{{ page.meta.revision_date }}
{% endif %}
</small>
</div>

@ -1,5 +1,6 @@
---
comments: true
description: Install and use YOLOv8 via CLI or Python. Run single-line commands or integrate with Python projects for object detection, segmentation, and classification.
---
## Install
@ -32,13 +33,11 @@ See the `ultralytics` [requirements.txt](https://github.com/ultralytics/ultralyt
<img width="800" alt="PyTorch Installation Instructions" src="https://user-images.githubusercontent.com/26833433/228650108-ab0ec98a-b328-4f40-a40d-95355e8a84e3.png">
</a>
## Use with CLI
The YOLO command line interface (CLI) allows for simple single-line commands without the need for a Python environment.
CLI requires no customization or Python code. You can simply run all tasks from the terminal with the `yolo` command. Check out the [CLI Guide](usage/cli.md) to learn more about using YOLOv8 from the command line.
!!! example
=== "Syntax"
@ -93,7 +92,6 @@ CLI requires no customization or Python code. You can simply run all tasks from
yolo cfg
```
!!! warning "Warning"
Arguments must be passed as `arg=val` pairs, split by an equals `=` sign and delimited by spaces ` ` between pairs. Do not use `--` argument prefixes or commas `,` between arguments.

@ -1,3 +1,7 @@
---
description: Learn how to use Ultralytics hub authentication in your projects with examples and guidelines from the Auth page on Ultralytics Docs.
---
# Auth
---
:::ultralytics.hub.auth.Auth

@ -1,3 +1,7 @@
---
description: Accelerate your AI development with the Ultralytics HUB Training Session. High-performance training of object detection models.
---
# HUBTrainingSession
---
:::ultralytics.hub.session.HUBTrainingSession

@ -1,3 +1,7 @@
---
description: Explore Ultralytics events, including 'request_with_credentials' and 'smart_request', to improve your project's performance and efficiency.
---
# Events
---
:::ultralytics.hub.utils.Events

@ -1,3 +1,7 @@
---
description: Ensure class names match filenames for easy imports. Use AutoBackend to automatically rename and refactor model files.
---
# AutoBackend
---
:::ultralytics.nn.autobackend.AutoBackend

@ -1,3 +1,7 @@
---
description: Detect 80+ object categories with bounding box coordinates and class probabilities using AutoShape in Ultralytics YOLO. Explore Detections now.
---
# AutoShape
---
:::ultralytics.nn.autoshape.AutoShape

@ -1,3 +1,7 @@
---
description: Explore Ultralytics neural network modules for convolution, attention, detection, pose, and classification in PyTorch.
---
# Conv
---
:::ultralytics.nn.modules.Conv

@ -1,3 +1,7 @@
---
description: Learn how to work with Ultralytics YOLO Detection, Segmentation & Classification Models, load weights and parse models in PyTorch.
---
# BaseModel
---
:::ultralytics.nn.tasks.BaseModel

@ -1,3 +1,7 @@
---
description: Learn how to register custom event-tracking and track predictions with Ultralytics YOLO via on_predict_start and register_tracker methods.
---
# on_predict_start
---
:::ultralytics.tracker.track.on_predict_start

@ -1,3 +1,7 @@
---
description: 'TrackState: A comprehensive guide to Ultralytics tracker''s BaseTrack for monitoring model performance. Improve your tracking capabilities now!'
---
# TrackState
---
:::ultralytics.tracker.trackers.basetrack.TrackState

@ -1,3 +1,7 @@
---
description: '"Optimize tracking with Ultralytics BOTrack. Easily sort and track bots with BOTSORT. Streamline data collection for improved performance."'
---
# BOTrack
---
:::ultralytics.tracker.trackers.bot_sort.BOTrack

@ -1,3 +1,7 @@
---
description: Learn how to track ByteAI model sizes and tips for model optimization with STrack, a byte tracking tool from Ultralytics.
---
# STrack
---
:::ultralytics.tracker.trackers.byte_tracker.STrack

@ -1,3 +1,7 @@
---
description: '"Track Google Marketing Campaigns in GMC with Ultralytics Tracker. Learn to set up and use GMC for detailed analytics. Get started now."'
---
# GMC
---
:::ultralytics.tracker.utils.gmc.GMC

@ -1,3 +1,7 @@
---
description: Improve object tracking with KalmanFilterXYAH in Ultralytics YOLO - an efficient and accurate algorithm for state estimation.
---
# KalmanFilterXYAH
---
:::ultralytics.tracker.utils.kalman_filter.KalmanFilterXYAH

@ -1,3 +1,7 @@
---
description: Learn how to match and fuse object detections for accurate target tracking using Ultralytics' YOLO merge_matches, iou_distance, and embedding_distance.
---
# merge_matches
---
:::ultralytics.tracker.utils.matching.merge_matches

@ -1,3 +1,7 @@
---
description: Learn how to use auto_annotate in Ultralytics YOLO to generate annotations automatically for your dataset. Simplify object detection workflows.
---
# auto_annotate
---
:::ultralytics.yolo.data.annotator.auto_annotate

@ -1,3 +1,7 @@
---
description: Use Ultralytics YOLO Data Augmentation transforms with Base, MixUp, and Albumentations for object detection and classification.
---
# BaseTransform
---
:::ultralytics.yolo.data.augment.BaseTransform

@ -1,3 +1,7 @@
---
description: Learn about BaseDataset in Ultralytics YOLO, a flexible dataset class for object detection. Maximize your YOLO performance with custom datasets.
---
# BaseDataset
---
:::ultralytics.yolo.data.base.BaseDataset

@ -1,3 +1,7 @@
---
description: Maximize YOLO performance with Ultralytics' InfiniteDataLoader, seed_worker, build_dataloader, and load_inference_source functions.
---
# InfiniteDataLoader
---
:::ultralytics.yolo.data.build.InfiniteDataLoader

@ -1,3 +1,7 @@
---
description: Convert COCO-91 to COCO-80 class, RLE to polygon, and merge multi-segment images with Ultralytics YOLO data converter. Improve your object detection.
---
# coco91_to_coco80_class
---
:::ultralytics.yolo.data.converter.coco91_to_coco80_class

@ -1,3 +1,7 @@
---
description: 'Ultralytics YOLO Docs: Learn about stream loaders for image and tensor data, as well as autocasting techniques. Check out SourceTypes and more.'
---
# SourceTypes
---
:::ultralytics.yolo.data.dataloaders.stream_loaders.SourceTypes

@ -1,3 +1,7 @@
---
description: Enhance image data with Albumentations CenterCrop, normalize, augment_hsv, replicate, random_perspective, cutout, & box_candidates.
---
# Albumentations
---
:::ultralytics.yolo.data.dataloaders.v5augmentations.Albumentations

@ -1,3 +1,7 @@
---
description: Efficiently load images and labels to models using Ultralytics YOLO's InfiniteDataLoader, LoadScreenshots, and LoadStreams.
---
# InfiniteDataLoader
---
:::ultralytics.yolo.data.dataloaders.v5loader.InfiniteDataLoader

@ -1,3 +1,7 @@
---
description: Create custom YOLOv5 datasets with Ultralytics YOLODataset and SemanticDataset. Streamline your object detection and segmentation projects.
---
# YOLODataset
---
:::ultralytics.yolo.data.dataset.YOLODataset

@ -1,3 +1,7 @@
---
description: Create a custom dataset of mixed and oriented rectangular objects with Ultralytics YOLO's MixAndRectDataset.
---
# MixAndRectDataset
---
:::ultralytics.yolo.data.dataset_wrappers.MixAndRectDataset

@ -1,3 +1,7 @@
---
description: Efficiently handle data in YOLO with Ultralytics. Utilize HUBDatasetStats and customize dataset with these data utility functions.
---
# HUBDatasetStats
---
:::ultralytics.yolo.data.utils.HUBDatasetStats

@ -1,3 +1,7 @@
---
description: Learn how to export your YOLO model in various formats using Ultralytics' exporter package - iOS, GDC, and more.
---
# Exporter
---
:::ultralytics.yolo.engine.exporter.Exporter

@ -1,3 +1,7 @@
---
description: Discover the YOLO model of Ultralytics engine to simplify your object detection tasks with state-of-the-art models.
---
# YOLO
---
:::ultralytics.yolo.engine.model.YOLO

@ -1,3 +1,7 @@
---
description: '"The BasePredictor class in Ultralytics YOLO Engine predicts object detection in images and videos. Learn to implement YOLO with ease."'
---
# BasePredictor
---
:::ultralytics.yolo.engine.predictor.BasePredictor

@ -1,3 +1,7 @@
---
description: Learn about BaseTensor & Boxes in Ultralytics YOLO Engine. Check out Ultralytics Docs for quality tutorials and resources on object detection.
---
# BaseTensor
---
:::ultralytics.yolo.engine.results.BaseTensor

@ -1,3 +1,7 @@
---
description: Train faster with mixed precision. Learn how to use BaseTrainer with Advanced Mixed Precision to optimize YOLOv3 and YOLOv4 models.
---
# BaseTrainer
---
:::ultralytics.yolo.engine.trainer.BaseTrainer

@ -1,3 +1,7 @@
---
description: Ensure YOLOv5 models meet constraints and standards with the BaseValidator class. Learn how to use it here.
---
# BaseValidator
---
:::ultralytics.yolo.engine.validator.BaseValidator

@ -1,3 +1,7 @@
---
description: Dynamically adjusts input size to optimize GPU memory usage during training. Learn how to use check_train_batch_size with Ultralytics YOLO.
---
# check_train_batch_size
---
:::ultralytics.yolo.utils.autobatch.check_train_batch_size

@ -1,3 +1,7 @@
---
description: Improve your YOLO's performance and measure its speed. Benchmark utility for YOLOv5.
---
# benchmark
---
:::ultralytics.yolo.utils.benchmarks.benchmark

@ -1,3 +1,7 @@
---
description: Learn about YOLO's callback functions from on_train_start to add_integration_callbacks. See how these callbacks modify and save models.
---
# on_pretrain_routine_start
---
:::ultralytics.yolo.utils.callbacks.base.on_pretrain_routine_start

@ -1,3 +1,7 @@
---
description: Improve your YOLOv5 model training with callbacks from ClearML. Learn about log debug samples, pre-training routines, validation and more.
---
# _log_debug_samples
---
:::ultralytics.yolo.utils.callbacks.clearml._log_debug_samples

@ -1,3 +1,7 @@
---
description: Learn about YOLO callbacks using the Comet.ml platform, enhancing object detection training and testing with custom logging and visualizations.
---
# _get_comet_mode
---
:::ultralytics.yolo.utils.callbacks.comet._get_comet_mode

@ -1,3 +1,7 @@
---
description: Improve YOLOv5 model training with Ultralytics' on-train callbacks. Boost performance on-pretrain-routine-end, model-save, train/predict start.
---
# on_pretrain_routine_end
---
:::ultralytics.yolo.utils.callbacks.hub.on_pretrain_routine_end

@ -1,3 +1,7 @@
---
description: Track model performance and metrics with MLflow in YOLOv5. Use callbacks like on_pretrain_routine_end or on_train_end to log information.
---
# on_pretrain_routine_end
---
:::ultralytics.yolo.utils.callbacks.mlflow.on_pretrain_routine_end

@ -1,3 +1,7 @@
---
description: Improve YOLOv5 training with Neptune, a powerful logging tool. Track metrics like images, plots, and epochs for better model performance.
---
# _log_scalars
---
:::ultralytics.yolo.utils.callbacks.neptune._log_scalars

@ -1,3 +1,7 @@
---
description: '"Improve YOLO model performance with on_fit_epoch_end callback. Learn to integrate with Ray Tune for hyperparameter tuning. Ultralytics YOLO docs."'
---
# on_fit_epoch_end
---
:::ultralytics.yolo.utils.callbacks.raytune.on_fit_epoch_end

@ -1,3 +1,7 @@
---
description: Learn how to monitor the training process with Tensorboard using Ultralytics YOLO's "_log_scalars" and "on_batch_end" methods.
---
# _log_scalars
---
:::ultralytics.yolo.utils.callbacks.tensorboard._log_scalars

@ -1,3 +1,7 @@
---
description: Learn how to use Ultralytics YOLO's built-in callbacks `on_pretrain_routine_start` and `on_train_epoch_end` for improved training performance.
---
# on_pretrain_routine_start
---
:::ultralytics.yolo.utils.callbacks.wb.on_pretrain_routine_start

@ -1,3 +1,7 @@
---
description: 'Check functions for YOLO utils: image size, version, font, requirements, filename suffix, YAML file, YOLO, and Git version.'
---
# is_ascii
---
:::ultralytics.yolo.utils.checks.is_ascii

@ -1,3 +1,7 @@
---
description: Learn how to find free network port and generate DDP (Distributed Data Parallel) command in Ultralytics YOLO with easy examples.
---
# find_free_network_port
---
:::ultralytics.yolo.utils.dist.find_free_network_port

@ -1,3 +1,7 @@
---
description: Download and unzip YOLO pretrained models. Ultralytics YOLO docs utils.downloads.unzip_file, checks disk space, downloads and attempts assets.
---
# is_url
---
:::ultralytics.yolo.utils.downloads.is_url

@ -1,3 +1,7 @@
---
description: Learn about HUBModelError in Ultralytics YOLO Docs. Resolve the error and get the most out of your YOLO model.
---
# HUBModelError
---
:::ultralytics.yolo.utils.errors.HUBModelError

@ -1,3 +1,7 @@
---
description: 'Learn about Ultralytics YOLO files and directory utilities: WorkingDirectory, file_age, file_size, and make_dirs.'
---
# WorkingDirectory
---
:::ultralytics.yolo.utils.files.WorkingDirectory

@ -1,3 +1,7 @@
---
description: Learn about Bounding Boxes (Bboxes) and _ntuple in Ultralytics YOLO for object detection. Improve accuracy and speed with these powerful tools.
---
# Bboxes
---
:::ultralytics.yolo.utils.instance.Bboxes

@ -1,3 +1,7 @@
---
description: Learn about Varifocal Loss and Keypoint Loss in Ultralytics YOLO for advanced bounding box and pose estimation. Visit our docs for more.
---
# VarifocalLoss
---
:::ultralytics.yolo.utils.loss.VarifocalLoss

@ -1,3 +1,7 @@
---
description: Explore Ultralytics YOLO's FocalLoss, DetMetrics, PoseMetrics, ClassifyMetrics, and more with Ultralytics Metrics documentation.
---
# FocalLoss
---
:::ultralytics.yolo.utils.metrics.FocalLoss

@ -1,3 +1,7 @@
---
description: Learn about various utility functions in Ultralytics YOLO, including x, y, width, height conversions, non-max suppression, and more.
---
# Profile
---
:::ultralytics.yolo.utils.ops.Profile

@ -1,3 +1,7 @@
---
description: 'Discover the power of YOLO''s plotting functions: Colors, Labels and Images. Code examples to output targets and visualize features. Check it now.'
---
# Colors
---
:::ultralytics.yolo.utils.plotting.Colors

@ -1,3 +1,7 @@
---
description: Improve your YOLO models with Ultralytics' TaskAlignedAssigner, select_highest_overlaps, and dist2bbox utilities. Streamline your workflow today.
---
# TaskAlignedAssigner
---
:::ultralytics.yolo.utils.tal.TaskAlignedAssigner

@ -1,3 +1,7 @@
---
description: Optimize your PyTorch models with Ultralytics YOLO's torch_utils functions such as ModelEMA, select_device, and is_parallel.
---
# ModelEMA
---
:::ultralytics.yolo.utils.torch_utils.ModelEMA

@ -1,3 +1,7 @@
---
description: Learn how to use ClassificationPredictor in Ultralytics YOLOv8 for object classification tasks in a simple and efficient way.
---
# ClassificationPredictor
---
:::ultralytics.yolo.v8.classify.predict.ClassificationPredictor

@ -1,3 +1,7 @@
---
description: Train a custom image classification model using Ultralytics YOLOv8 with ClassificationTrainer. Boost accuracy and efficiency today.
---
# ClassificationTrainer
---
:::ultralytics.yolo.v8.classify.train.ClassificationTrainer

@ -1,3 +1,7 @@
---
description: Ensure model classification accuracy with Ultralytics YOLO's ClassificationValidator. Validate and improve your model with ease.
---
# ClassificationValidator
---
:::ultralytics.yolo.v8.classify.val.ClassificationValidator

@ -1,3 +1,7 @@
---
description: Detect and predict objects in images and videos using the Ultralytics YOLO v8 model with DetectionPredictor.
---
# DetectionPredictor
---
:::ultralytics.yolo.v8.detect.predict.DetectionPredictor

@ -1,3 +1,7 @@
---
description: Train and optimize custom object detection models with Ultralytics DetectionTrainer and train functions. Get started with YOLO v8 today.
---
# DetectionTrainer
---
:::ultralytics.yolo.v8.detect.train.DetectionTrainer

@ -1,3 +1,7 @@
---
description: Validate YOLOv5 detections using this PyTorch module. Ensure model accuracy with NMS IOU threshold tuning and label mapping.
---
# DetectionValidator
---
:::ultralytics.yolo.v8.detect.val.DetectionValidator

@ -1,3 +1,7 @@
---
description: Predict human pose coordinates and confidence scores using YOLOv5. Use on real-time video streams or static images.
---
# PosePredictor
---
:::ultralytics.yolo.v8.pose.predict.PosePredictor

@ -1,3 +1,7 @@
---
description: Boost posture detection using PoseTrainer and train models using train() API. Learn PoseLoss for ultra-fast and accurate pose detection with Ultralytics YOLO.
---
# PoseTrainer
---
:::ultralytics.yolo.v8.pose.train.PoseTrainer

@ -1,3 +1,7 @@
---
description: Ensure proper human poses in images with YOLOv8 Pose Validation, part of the Ultralytics YOLO v8 suite.
---
# PoseValidator
---
:::ultralytics.yolo.v8.pose.val.PoseValidator

@ -1,3 +1,7 @@
---
description: '"Use SegmentationPredictor in YOLOv8 for efficient object detection and segmentation. Explore Ultralytics YOLO Docs for more information."'
---
# SegmentationPredictor
---
:::ultralytics.yolo.v8.segment.predict.SegmentationPredictor

@ -1,3 +1,7 @@
---
description: Learn about SegmentationTrainer and Train in Ultralytics YOLO v8 for efficient object detection models. Improve your training with Ultralytics Docs.
---
# SegmentationTrainer
---
:::ultralytics.yolo.v8.segment.train.SegmentationTrainer

