`ultralytics 8.0.82` docs updates and fixes (#2098)

Co-authored-by: Ayush Chaurasia <ayush.chaurarsia@gmail.com>
Co-authored-by: Aurelio Losquiño Muñoz <38859113+aurelm95@users.noreply.github.com>
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
Co-authored-by: Paula Derrenger <107626595+pderrenger@users.noreply.github.com>
Co-authored-by: Laughing <61612323+Laughing-q@users.noreply.github.com>
Branch: single_channel
Glenn Jocher committed via GitHub
parent a38f227672
commit 55a03ad85f

@@ -207,7 +207,7 @@ See [Pose Docs](https://docs.ultralytics.com/tasks/pose) for usage examples with
## <div align="center">Ultralytics HUB</div>
Experience seamless AI with [Ultralytics HUB](https://bit.ly/ultralytics_hub) ⭐, the all-in-one solution for data visualization, YOLOv5 and YOLOv8 🚀 model training and deployment, without any coding. Transform images into actionable insights and bring your AI visions to life with ease using our cutting-edge platform and user-friendly [Ultralytics App](https://ultralytics.com/app_install). Start your journey for **Free** now!
<a href="https://bit.ly/ultralytics_hub" target="_blank">
<img width="100%" src="https://github.com/ultralytics/assets/raw/main/im/ultralytics-hub.png"></a>

@@ -23,9 +23,10 @@ WORKDIR /usr/src/ultralytics
RUN git clone https://github.com/ultralytics/ultralytics /usr/src/ultralytics
ADD https://github.com/ultralytics/assets/releases/download/v0.0.0/yolov8n.pt /usr/src/ultralytics/
# Install pip packages manually for TensorRT compatibility https://github.com/NVIDIA/TensorRT/issues/2567
RUN python3 -m pip install --upgrade pip wheel
RUN pip install --no-cache tqdm matplotlib pyyaml psutil thop pandas onnx "numpy==1.23"
RUN pip install --no-cache -e . --no-deps
# Set environment variables
ENV OMP_NUM_THREADS=1
@@ -37,4 +38,4 @@ ENV OMP_NUM_THREADS=1
# t=ultralytics/ultralytics:latest-jetson && sudo docker build --platform linux/arm64 -f docker/Dockerfile-jetson -t $t . && sudo docker push $t
# Pull and Run
# t=ultralytics/ultralytics:jetson && sudo docker pull $t && sudo docker run -it --runtime=nvidia $t

@@ -31,15 +31,6 @@ Explore the YOLOv8 Docs, a comprehensive resource designed to help you understan
- [YOLOv3](https://pjreddie.com/media/files/papers/YOLOv3.pdf), launched in 2018, further enhanced the model's performance using a more efficient backbone network, multiple anchors and spatial pyramid pooling.
- [YOLOv4](https://arxiv.org/abs/2004.10934) was released in 2020, introducing innovations like Mosaic data augmentation, a new anchor-free detection head, and a new loss function.
- [YOLOv5](https://github.com/ultralytics/yolov5) further improved the model's performance and added new features such as hyperparameter optimization, integrated experiment tracking and automatic export to popular export formats.
- [YOLOv6](https://github.com/meituan/YOLOv6) was open-sourced by [Meituan](https://about.meituan.com/en) in 2022 and is in use in many of the company's autonomous delivery robots.
- [YOLOv7](https://github.com/WongKinYiu/yolov7) added additional tasks such as pose estimation on the COCO keypoints dataset.
- [YOLOv8](https://github.com/ultralytics/ultralytics) is the latest version of YOLO by Ultralytics. As a cutting-edge, state-of-the-art (SOTA) model, YOLOv8 builds on the success of previous versions, introducing new features and improvements for enhanced performance, flexibility, and efficiency. YOLOv8 supports a full range of vision AI tasks, including [detection](tasks/detect.md), [segmentation](tasks/segment.md), [pose estimation](tasks/pose.md), [tracking](modes/track.md), and [classification](tasks/classify.md). This versatility allows users to leverage YOLOv8's capabilities across diverse applications and domains.
Since its launch YOLO has been employed in various applications, including autonomous vehicles, security and surveillance, and medical imaging, and has won several competitions like the COCO Object Detection Challenge and the DOTA Object Detection Challenge.
## Ultralytics YOLOv8
[Ultralytics YOLOv8](https://github.com/ultralytics/ultralytics) is the latest version of the YOLO object detection and image segmentation model. As a cutting-edge, state-of-the-art (SOTA) model, YOLOv8 builds on the success of previous versions, introducing new features and improvements for enhanced performance, flexibility, and efficiency.
YOLOv8 is designed with a strong focus on speed, size, and accuracy, making it a compelling choice for various vision AI tasks. It outperforms previous versions by incorporating innovations like a new backbone network, a new anchor-free split head, and new loss functions. These improvements enable YOLOv8 to deliver superior results, while maintaining a compact size and exceptional speed.
Additionally, YOLOv8 supports a full range of vision AI tasks, including [detection](tasks/detect.md), [segmentation](tasks/segment.md), [pose estimation](tasks/pose.md), [tracking](modes/track.md), and [classification](tasks/classify.md). This versatility allows users to leverage YOLOv8's capabilities across diverse applications and domains.

@@ -69,6 +69,15 @@ see the [Configuration](../usage/cfg.md) page.
yolo classify train data=mnist160 model=yolov8n-cls.yaml pretrained=yolov8n-cls.pt epochs=100 imgsz=64
```
### Dataset format
The YOLO classification dataset format is the same as the torchvision format: each class of images has its own folder, and you simply pass the path of the dataset root folder, e.g. `yolo classify train data="path/to/dataset"`. The expected layout is:
```
dataset/
├── class1/
├── class2/
├── class3/
├── ...
```
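For reference, a minimal Python sketch of the equivalent training call, assuming `path/to/dataset` points at a folder laid out as above:
```python
from ultralytics import YOLO

# Load a pretrained classification model and train it on a torchvision-style folder dataset
model = YOLO("yolov8n-cls.pt")
model.train(data="path/to/dataset", epochs=100, imgsz=64)
```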
## Val
Validate trained YOLOv8n-cls model accuracy on the MNIST160 dataset. No arguments need to be passed as the `model` retains

@@ -67,6 +67,9 @@ the [Configuration](../usage/cfg.md) page.
# Build a new model from YAML, transfer pretrained weights to it and start training
yolo detect train data=coco128.yaml model=yolov8n.yaml pretrained=yolov8n.pt epochs=100 imgsz=640
```
### Dataset format
The YOLO detection dataset format is described in detail in the [Dataset Guide](../yolov5/train_custom_data.md).
To convert an existing dataset from other formats (such as COCO or VOC) to YOLO format, please use the [JSON2YOLO tool](https://github.com/ultralytics/JSON2YOLO) by Ultralytics.
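In brief, each image has a corresponding `.txt` label file with one row per object in `class x_center y_center width height` format, with all coordinates normalized to the image dimensions. An illustrative label file with two objects might look like:
```
0 0.481719 0.634028 0.690625 0.713278
27 0.364844 0.795833 0.140625 0.195833
```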
## Val

@@ -68,6 +68,13 @@ arguments see the [Configuration](../usage/cfg.md) page.
yolo segment train data=coco128-seg.yaml model=yolov8n-seg.yaml pretrained=yolov8n-seg.pt epochs=100 imgsz=640
```
### Dataset format
The YOLO segmentation dataset label format extends the detection format with segment points, one row per object:
`cls x1 y1 x2 y2 ... xn yn`
where `cls` is the class index and each `x y` pair is a normalized point on the object's segment polygon.
To convert an existing dataset from other formats (such as COCO or VOC) to YOLO format, please use the [JSON2YOLO tool](https://github.com/ultralytics/JSON2YOLO) by Ultralytics.
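For illustration, a single annotation row for class `0` with a five-point polygon (normalized coordinates; the values are made up for the example) could look like:
```
0 0.681 0.485 0.670 0.487 0.676 0.487 0.679 0.496 0.668 0.496
```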
## Val
Validate trained YOLOv8n-seg model accuracy on the COCO128-seg dataset. No arguments need to be passed as the `model`

@@ -0,0 +1,107 @@
# Hyperparameter Tuning with Ray Tune and YOLOv8
Hyperparameter tuning (or hyperparameter optimization) is the process of determining the right combination of hyperparameters that maximizes model performance. It works by running multiple training trials with different hyperparameter values, evaluating the performance of each trial, and selecting the best hyperparameter values based on the evaluation results.
## Ultralytics YOLOv8 and Ray Tune Integration
[Ultralytics](https://ultralytics.com) YOLOv8 integrates hyperparameter tuning with Ray Tune, allowing you to easily optimize your YOLOv8 model's hyperparameters. By using Ray Tune, you can leverage advanced search algorithms, parallelism, and early stopping to speed up the tuning process and achieve better model performance.
### Ray Tune
<div align="center">
<a href="https://docs.ray.io/en/latest/tune/index.html" target="_blank">
<img width="480" src="https://docs.ray.io/en/latest/_images/tune_overview.png"></a>
</div>
[Ray Tune](https://docs.ray.io/en/latest/tune/index.html) is a powerful and flexible hyperparameter tuning library for machine learning models. It provides an efficient way to optimize hyperparameters by supporting various search algorithms, parallelism, and early stopping strategies. Ray Tune's flexible architecture enables seamless integration with popular machine learning frameworks, including Ultralytics YOLOv8.
### Weights & Biases
YOLOv8 also supports optional integration with [Weights & Biases](https://wandb.ai/site) (wandb) for tracking the tuning progress.
## Installation
To install the required packages, run:
!!! tip "Installation"
```bash
pip install -U ultralytics "ray[tune]" # install and/or update
pip install wandb # optional
```
## Usage
!!! example "Usage"
```python
from ultralytics import YOLO
model = YOLO("yolov8n.pt")
results = model.tune(data="coco128.yaml")
```
## `tune()` Method Parameters
The `tune()` method in YOLOv8 provides an easy-to-use interface for hyperparameter tuning with Ray Tune. It accepts several arguments that allow you to customize the tuning process. Below is a detailed explanation of each parameter:
| Parameter | Type | Description | Default Value |
|-----------------|----------------|-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|---------------|
| `data` | str | The dataset configuration file (in YAML format) to run the tuner on. This file should specify the training and validation data paths, as well as other dataset-specific settings. | |
| `space` | dict, optional | A dictionary defining the hyperparameter search space for Ray Tune. Each key corresponds to a hyperparameter name, and the value specifies the range of values to explore during tuning. If not provided, YOLOv8 uses a default search space with various hyperparameters. | |
| `grace_period` | int, optional | The grace period in epochs for the [ASHA scheduler](https://docs.ray.io/en/latest/tune/api_docs/schedulers.html#asha-tune-schedulers-asha) in Ray Tune. The scheduler will not terminate any trial before this number of epochs, allowing the model to have some minimum training before making a decision on early stopping. | 10 |
| `gpu_per_trial` | int, optional | The number of GPUs to allocate per trial during tuning. This helps manage GPU usage, particularly in multi-GPU environments. If not provided, the tuner will use all available GPUs. | None |
| `max_samples` | int, optional | The maximum number of trials to run during tuning. This parameter helps control the total number of hyperparameter combinations tested, ensuring the tuning process does not run indefinitely. | 10 |
| `train_args` | dict, optional | A dictionary of additional arguments to pass to the `train()` method during tuning. These arguments can include settings like the number of training epochs, batch size, and other training-specific configurations. | {} |
By customizing these parameters, you can fine-tune the hyperparameter optimization process to suit your specific needs and available computational resources.
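For instance, several of the arguments above can be combined in a single call; the sketch below uses illustrative values only, not tuned recommendations:
```python
from ultralytics import YOLO

model = YOLO("yolov8n.pt")

# Illustrative values: at most 20 trials, one GPU per trial, ASHA may stop a trial
# only after its 5-epoch grace period, and extra training settings go through train_args
results = model.tune(
    data="coco128.yaml",
    gpu_per_trial=1,
    max_samples=20,
    grace_period=5,
    train_args={"epochs": 30, "batch": 16},
)
```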
## Default Search Space Description
The following table lists the default search space parameters for hyperparameter tuning in YOLOv8 with Ray Tune. Each parameter has a specific value range defined by `tune.uniform()`.
| Parameter | Value Range | Description |
|-----------------|----------------------------|------------------------------------------|
| lr0 | `tune.uniform(1e-5, 1e-1)` | Initial learning rate |
| lrf | `tune.uniform(0.01, 1.0)` | Final learning rate factor |
| momentum | `tune.uniform(0.6, 0.98)` | Momentum |
| weight_decay | `tune.uniform(0.0, 0.001)` | Weight decay |
| warmup_epochs | `tune.uniform(0.0, 5.0)` | Warmup epochs |
| warmup_momentum | `tune.uniform(0.0, 0.95)` | Warmup momentum |
| box | `tune.uniform(0.02, 0.2)` | Box loss weight |
| cls | `tune.uniform(0.2, 4.0)` | Class loss weight |
| fl_gamma | `tune.uniform(0.0, 2.0)` | Focal loss gamma |
| hsv_h | `tune.uniform(0.0, 0.1)` | Hue augmentation range |
| hsv_s | `tune.uniform(0.0, 0.9)` | Saturation augmentation range |
| hsv_v | `tune.uniform(0.0, 0.9)` | Value (brightness) augmentation range |
| degrees | `tune.uniform(0.0, 45.0)` | Rotation augmentation range (degrees) |
| translate | `tune.uniform(0.0, 0.9)` | Translation augmentation range |
| scale | `tune.uniform(0.0, 0.9)` | Scaling augmentation range |
| shear | `tune.uniform(0.0, 10.0)` | Shear augmentation range (degrees) |
| perspective | `tune.uniform(0.0, 0.001)` | Perspective augmentation range |
| flipud | `tune.uniform(0.0, 1.0)` | Vertical flip augmentation probability |
| fliplr | `tune.uniform(0.0, 1.0)` | Horizontal flip augmentation probability |
| mosaic | `tune.uniform(0.0, 1.0)` | Mosaic augmentation probability |
| mixup | `tune.uniform(0.0, 1.0)` | Mixup augmentation probability |
| copy_paste | `tune.uniform(0.0, 1.0)` | Copy-paste augmentation probability |
## Custom Search Space Example
In this example, we demonstrate how to use a custom search space for hyperparameter tuning with Ray Tune and YOLOv8. By providing a custom search space, you can focus the tuning process on specific hyperparameters of interest.
!!! example "Usage"
```python
from ultralytics import YOLO
from ray import tune
model = YOLO("yolov8n.pt")
result = model.tune(
    data="coco128.yaml",
    space={"lr0": tune.uniform(1e-5, 1e-1)},
    train_args={"epochs": 50}
)
```
In the code snippet above, we create a YOLO model with the "yolov8n.pt" pretrained weights. Then, we call the `tune()` method, specifying the dataset configuration with "coco128.yaml". We provide a custom search space for the initial learning rate `lr0` using a dictionary with the key "lr0" and the value `tune.uniform(1e-5, 1e-1)`. Finally, we pass additional training arguments, such as the number of epochs, using the `train_args` parameter.

@@ -41,6 +41,7 @@ theme:
- toc.integrate
- navigation.top
- navigation.tabs
- navigation.tabs.sticky
- navigation.footer
- navigation.tracking
- navigation.instant

@@ -46,7 +46,7 @@ setup(
'Intended Audience :: Developers',
'Intended Audience :: Education',
'Intended Audience :: Science/Research',
'License :: OSI Approved :: GNU Affero General Public License v3 (AGPLv3)',
'Programming Language :: Python :: 3',
'Programming Language :: Python :: 3.7',
'Programming Language :: Python :: 3.8',

@@ -1,6 +1,6 @@
# Ultralytics YOLO 🚀, AGPL-3.0 license
__version__ = '8.0.82'
from ultralytics.hub import start
from ultralytics.yolo.engine.model import YOLO

@@ -6,7 +6,7 @@ from time import sleep
import requests
from ultralytics.hub.utils import HUB_API_ROOT, PREFIX, smart_request
from ultralytics.yolo.utils import LOGGER, __version__, checks, emojis, is_colab, threaded
from ultralytics.yolo.utils.errors import HUBModelError
@@ -136,11 +136,6 @@ class HUBTrainingSession:
except Exception:
raise
def check_disk_space(self):
"""Check if there is enough disk space for the dataset."""
if not check_dataset_disk_space(url=self.model['data']):
raise MemoryError('Not enough disk space')
def upload_model(self, epoch, weights, is_best=False, map=0.0, final=False):
"""
Upload a model checkpoint to Ultralytics HUB.

@@ -2,7 +2,6 @@
import os
import platform
import shutil
import sys
import threading
import time
@@ -21,28 +20,6 @@ HELP_MSG = 'If this issue persists please visit https://github.com/ultralytics/h
HUB_API_ROOT = os.environ.get('ULTRALYTICS_HUB_API', 'https://api.ultralytics.com')
def check_dataset_disk_space(url='https://ultralytics.com/assets/coco128.zip', sf=2.0):
"""
Check if there is sufficient disk space to download and store a dataset.
Args:
url (str, optional): The URL to the dataset file. Defaults to 'https://ultralytics.com/assets/coco128.zip'.
sf (float, optional): Safety factor, the multiplier for the required free space. Defaults to 2.0.
Returns:
(bool): True if there is sufficient disk space, False otherwise.
"""
gib = 1 << 30 # bytes per GiB
data = int(requests.head(url).headers['Content-Length']) / gib # dataset size (GB)
total, used, free = (x / gib for x in shutil.disk_usage('/')) # bytes
LOGGER.info(f'{PREFIX}{data:.3f} GB dataset, {free:.1f}/{total:.1f} GB free disk space')
if data * sf < free:
return True # sufficient space
LOGGER.warning(f'{PREFIX}WARNING: Insufficient free disk space {free:.1f} GB < {data * sf:.3f} GB required, '
f'training cancelled ❌. Please free {data * sf - free:.1f} GB additional disk space and try again.')
return False # insufficient space
def request_with_credentials(url: str) -> any:
"""
Make an AJAX request with cookies attached in a Google Colab environment.

@@ -350,7 +350,6 @@ class YOLO:
if any(kwargs):
LOGGER.warning('WARNING ⚠️ using HUB training arguments, ignoring local training arguments.')
kwargs = self.session.train_args
self.session.check_disk_space()
check_pip_update_available()
overrides = self.overrides.copy()
overrides.update(kwargs)

@@ -290,9 +290,9 @@ class Results(SimpleClass):
line += (conf, ) * save_conf + (() if id is None else (id, ))
texts.append(('%g ' * len(line)).rstrip() % line)
if texts:
    with open(txt_file, 'a') as f:
        f.writelines(text + '\n' for text in texts)
def save_crop(self, save_dir, file_name=Path('im.jpg')):
"""

@@ -1,6 +1,7 @@
# Ultralytics YOLO 🚀, AGPL-3.0 license
import contextlib
import shutil
import subprocess
from itertools import repeat
from multiprocessing.pool import ThreadPool
@@ -57,6 +58,38 @@ def unzip_file(file, path=None, exclude=('.DS_Store', '__MACOSX')):
return unzip_dir  # return unzip dir
def check_disk_space(url='https://ultralytics.com/assets/coco128.zip', sf=1.5, hard=True):
    """
    Check if there is sufficient disk space to download and store a file.
    Args:
        url (str, optional): The URL to the file. Defaults to 'https://ultralytics.com/assets/coco128.zip'.
        sf (float, optional): Safety factor, the multiplier for the required free space. Defaults to 1.5.
        hard (bool, optional): Whether to throw an error or not on insufficient disk space. Defaults to True.
    Returns:
        (bool): True if there is sufficient disk space, False otherwise.
    """
    with contextlib.suppress(Exception):
        gib = 1 << 30  # bytes per GiB
        data = int(requests.head(url).headers['Content-Length']) / gib  # file size (GiB)
        total, used, free = (x / gib for x in shutil.disk_usage('/'))  # disk usage (GiB)
        if data * sf < free:
            return True  # sufficient space
        # Insufficient space
        text = (f'WARNING ⚠️ Insufficient free disk space {free:.1f} GB < {data * sf:.3f} GB required. '
                f'Please free {data * sf - free:.1f} GB additional disk space and try again.')
        if hard:
            raise MemoryError(text)
        else:
            LOGGER.warning(text)
        return False
    # Pass if error
    return True
def safe_download(url,
file=None,
dir=None,
@@ -91,6 +124,7 @@ def safe_download(url,
desc = f'Downloading {clean_url(url)} to {f}'
LOGGER.info(f'{desc}...')
f.parent.mkdir(parents=True, exist_ok=True)  # make directory if missing
check_disk_space(url)
for i in range(retry + 1):
try:
if curl or i > 0:  # curl download with retry, continue
