ultralytics 8.0.53
DDP AMP and Edge TPU fixes (#1362)
Co-authored-by: Richard Aljaste <richardaljasteabramson@gmail.com>
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
Co-authored-by: Vuong Kha Sieu <75152429+hotfur@users.noreply.github.com>
docs/app.md (35 lines changed)
@@ -3,7 +3,6 @@
|
||||
<a href="https://bit.ly/ultralytics_hub" target="_blank">
|
||||
<img width="100%" src="https://github.com/ultralytics/assets/raw/main/im/ultralytics-hub.png"></a>
|
||||
<br>
|
||||
<br>
|
||||
<div align="center">
|
||||
<a href="https://github.com/ultralytics" style="text-decoration:none;">
|
||||
<img src="https://github.com/ultralytics/assets/raw/main/social/logo-social-github.png" width="2%" alt="" /></a>
|
||||
@@ -27,26 +26,26 @@
|
||||
<img src="https://github.com/ultralytics/assets/raw/main/social/logo-social-instagram.png" width="2%" alt="" /></a>
|
||||
<br>
|
||||
<br>
|
||||
<a href="https://github.com/ultralytics/hub/actions/workflows/ci.yaml">
|
||||
<img src="https://github.com/ultralytics/hub/actions/workflows/ci.yaml/badge.svg" alt="CI CPU"></a>
|
||||
<a href="https://colab.research.google.com/github/ultralytics/hub/blob/master/hub.ipynb">
|
||||
<img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"></a>
|
||||
<a href="https://play.google.com/store/apps/details?id=com.ultralytics.ultralytics_app" style="text-decoration:none;">
|
||||
<img src="https://raw.githubusercontent.com/ultralytics/assets/master/app/google-play.svg" width="15%" alt="" /></a>
|
||||
<a href="https://apps.apple.com/xk/app/ultralytics/id1583935240" style="text-decoration:none;">
|
||||
<img src="https://raw.githubusercontent.com/ultralytics/assets/master/app/app-store.svg" width="15%" alt="" /></a>
|
||||
</div>
|
||||
<br>
|
||||
|
||||
Welcome to the Ultralytics HUB app for demonstrating YOLOv5 and YOLOv8 models! In this app, available on the [Apple App
|
||||
Store](https://apps.apple.com/xk/app/ultralytics/id1583935240) and the
|
||||
[Google Play Store](https://play.google.com/store/apps/details?id=com.ultralytics.ultralytics_app), you will be able
|
||||
to see the power and capabilities of YOLOv5, a state-of-the-art object detection model developed by Ultralytics.
|
||||
Welcome to the Ultralytics HUB app, which is designed to demonstrate the power and capabilities of the YOLOv5 and YOLOv8
|
||||
models. This app is available for download on
|
||||
the [Apple App Store](https://apps.apple.com/xk/app/ultralytics/id1583935240) and
|
||||
the [Google Play Store](https://play.google.com/store/apps/details?id=com.ultralytics.ultralytics_app).
|
||||
|
||||
**To install simply scan the QR code above**. The App currently features YOLOv5 models, with YOLOv8 models coming soon.
|
||||
**To install the app, simply scan the QR code provided above**. At the moment, the app features YOLOv5 models, with
|
||||
YOLOv8 models set to be available soon.
|
||||
|
||||
With YOLOv5, you can detect and classify objects in images and videos with high accuracy and speed. The model has been
|
||||
trained on a large dataset and is able to detect a wide range of objects, including cars, pedestrians, and traffic
|
||||
signs.
|
||||
With the YOLOv5 model, you can easily detect and classify objects in images and videos with high accuracy and speed. The
|
||||
model has been trained on a vast dataset and can recognize a wide range of objects, including pedestrians, traffic
|
||||
signs, and cars.
|
||||
|
||||
In this app, you will be able to try out YOLOv5 on your own images and videos, and see the model in action. You can also
|
||||
learn more about how YOLOv5 works and how it can be used in real-world applications.
|
||||
Using this app, you can try out YOLOv5 on your images and videos, and observe how the model works in real-time.
|
||||
Additionally, you can learn more about YOLOv5's functionality and how it can be integrated into real-world applications.
|
||||
|
||||
We hope you enjoy using YOLOv5 and seeing its capabilities firsthand. Thank you for choosing Ultralytics for your object
|
||||
detection needs!
|
||||
We are confident that you will enjoy using YOLOv5 and be amazed at its capabilities. Thank you for choosing Ultralytics
|
||||
for your AI solutions.
|
docs/cfg.md (236 lines changed)
@@ -1,236 +0,0 @@
|
||||
YOLO settings and hyperparameters play a critical role in the model's performance, speed, and accuracy. These settings
|
||||
and hyperparameters can affect the model's behavior at various stages of the model development process, including
|
||||
training, validation, and prediction.
|
||||
|
||||
YOLOv8 'yolo' CLI commands use the following syntax:
|
||||
|
||||
!!! example ""
|
||||
|
||||
=== "CLI"
|
||||
|
||||
```bash
|
||||
yolo TASK MODE ARGS
|
||||
```
|
||||
|
||||
Where:
|
||||
|
||||
- `TASK` (optional) is one of `[detect, segment, classify]`. If it is not passed explicitly YOLOv8 will try to guess
|
||||
the `TASK` from the model type.
|
||||
- `MODE` (required) is one of `[train, val, predict, export]`
|
||||
- `ARGS` (optional) are any number of custom `arg=value` pairs like `imgsz=320` that override defaults.
|
||||
For a full list of available `ARGS` see the [Configuration](cfg.md) page and `defaults.yaml`
|
||||
GitHub [source](https://github.com/ultralytics/ultralytics/blob/main/ultralytics/yolo/cfg/default.yaml).
|
||||
|
||||
#### Tasks
|
||||
|
||||
YOLO models can be used for a variety of tasks, including detection, segmentation, and classification. These tasks
|
||||
differ in the type of output they produce and the specific problem they are designed to solve.
|
||||
|
||||
- **Detect**: Detection tasks involve identifying and localizing objects or regions of interest in an image or video.
|
||||
YOLO models can be used for object detection tasks by predicting the bounding boxes and class labels of objects in an
|
||||
image.
|
||||
- **Segment**: Segmentation tasks involve dividing an image or video into regions or pixels that correspond to
|
||||
different objects or classes. YOLO models can be used for image segmentation tasks by predicting a mask or label for
|
||||
each pixel in an image.
|
||||
- **Classify**: Classification tasks involve assigning a class label to an input, such as an image or text. YOLO
|
||||
models can be used for image classification tasks by predicting the class label of an input image.
|
||||
|
||||
#### Modes
|
||||
|
||||
YOLO models can be used in different modes depending on the specific problem you are trying to solve. These modes
|
||||
include train, val, and predict.
|
||||
|
||||
- **Train**: The train mode is used to train the model on a dataset. This mode is typically used during the development
|
||||
and
|
||||
testing phase of a model.
|
||||
- **Val**: The val mode is used to evaluate the model's performance on a validation dataset. This mode is typically used
|
||||
to
|
||||
tune the model's hyperparameters and detect overfitting.
|
||||
- **Predict**: The predict mode is used to make predictions with the model on new data. This mode is typically used in
|
||||
production or when deploying the model to users.
|
||||
|
||||
| Key | Value | Description |
|
||||
|--------|----------|-----------------------------------------------------------------------------------------------|
|
||||
| task | 'detect' | inference task, i.e. detect, segment, or classify |
|
||||
| mode | 'train' | YOLO mode, i.e. train, val, predict, or export |
|
||||
| resume | False | resume training from last checkpoint or custom checkpoint if passed as resume=path/to/best.pt |
|
||||
| model | null | path to model file, i.e. yolov8n.pt, yolov8n.yaml |
|
||||
| data | null | path to data file, i.e. coco128.yaml |
|
||||
|
||||
### Training
|
||||
|
||||
Training settings for YOLO models refer to the various hyperparameters and configurations used to train the model on a
|
||||
dataset. These settings can affect the model's performance, speed, and accuracy. Some common YOLO training settings
|
||||
include the batch size, learning rate, momentum, and weight decay. Other factors that may affect the training process
|
||||
include the choice of optimizer, the choice of loss function, and the size and composition of the training dataset. It
|
||||
is important to carefully tune and experiment with these settings to achieve the best possible performance for a given
|
||||
task.
|
||||
|
||||
| Key | Value | Description |
|
||||
|-----------------|--------|--------------------------------------------------------------------------------|
|
||||
| model | null | path to model file, i.e. yolov8n.pt, yolov8n.yaml |
|
||||
| data | null | path to data file, i.e. coco128.yaml |
|
||||
| epochs | 100 | number of epochs to train for |
|
||||
| patience | 50 | epochs to wait for no observable improvement for early stopping of training |
|
||||
| batch | 16 | number of images per batch (-1 for AutoBatch) |
|
||||
| imgsz | 640 | size of input images as integer or w,h |
|
||||
| save | True | save train checkpoints and predict results |
|
||||
| save_period | -1 | Save checkpoint every x epochs (disabled if < 1) |
|
||||
| cache | False | True/ram, disk or False. Use cache for data loading |
|
||||
| device | null | device to run on, i.e. cuda device=0 or device=0,1,2,3 or device=cpu |
|
||||
| workers | 8 | number of worker threads for data loading (per RANK if DDP) |
|
||||
| project | null | project name |
|
||||
| name | null | experiment name |
|
||||
| exist_ok | False | whether to overwrite existing experiment |
|
||||
| pretrained | False | whether to use a pretrained model |
|
||||
| optimizer | 'SGD' | optimizer to use, choices=['SGD', 'Adam', 'AdamW', 'RMSProp'] |
|
||||
| verbose | False | whether to print verbose output |
|
||||
| seed | 0 | random seed for reproducibility |
|
||||
| deterministic | True | whether to enable deterministic mode |
|
||||
| single_cls | False | train multi-class data as single-class |
|
||||
| image_weights | False | use weighted image selection for training |
|
||||
| rect | False | support rectangular training |
|
||||
| cos_lr | False | use cosine learning rate scheduler |
|
||||
| close_mosaic | 10 | disable mosaic augmentation for final 10 epochs |
|
||||
| resume | False | resume training from last checkpoint |
|
||||
| lr0 | 0.01 | initial learning rate (i.e. SGD=1E-2, Adam=1E-3) |
|
||||
| lrf | 0.01 | final learning rate (lr0 * lrf) |
|
||||
| momentum | 0.937 | SGD momentum/Adam beta1 |
|
||||
| weight_decay | 0.0005 | optimizer weight decay 5e-4 |
|
||||
| warmup_epochs | 3.0 | warmup epochs (fractions ok) |
|
||||
| warmup_momentum | 0.8 | warmup initial momentum |
|
||||
| warmup_bias_lr | 0.1 | warmup initial bias lr |
|
||||
| box | 7.5 | box loss gain |
|
||||
| cls | 0.5 | cls loss gain (scale with pixels) |
|
||||
| dfl | 1.5 | dfl loss gain |
|
||||
| fl_gamma | 0.0 | focal loss gamma (efficientDet default gamma=1.5) |
|
||||
| label_smoothing | 0.0 | label smoothing (fraction) |
|
||||
| nbs | 64 | nominal batch size |
|
||||
| overlap_mask | True | masks should overlap during training (segment train only) |
|
||||
| mask_ratio | 4 | mask downsample ratio (segment train only) |
|
||||
| dropout | 0.0 | use dropout regularization (classify train only) |
|
||||
| val | True | validate/test during training |
|
||||
|
||||
### Prediction
|
||||
|
||||
Prediction settings for YOLO models refer to the various hyperparameters and configurations used to make predictions
|
||||
with the model on new data. These settings can affect the model's performance, speed, and accuracy. Some common YOLO
|
||||
prediction settings include the confidence threshold, non-maximum suppression (NMS) threshold, and the number of classes
|
||||
to consider. Other factors that may affect the prediction process include the size and format of the input data, the
|
||||
presence of additional features such as masks or multiple labels per box, and the specific task the model is being used
|
||||
for. It is important to carefully tune and experiment with these settings to achieve the best possible performance for a
|
||||
given task.
|
||||
|
||||
| Key | Value | Description |
|
||||
|----------------|----------------------|----------------------------------------------------------|
|
||||
| source | 'ultralytics/assets' | source directory for images or videos |
|
||||
| conf | 0.25 | object confidence threshold for detection |
|
||||
| iou | 0.7 | intersection over union (IoU) threshold for NMS |
|
||||
| half | False | use half precision (FP16) |
|
||||
| device | null | device to run on, i.e. cuda device=0/1/2/3 or device=cpu |
|
||||
| show | False | show results if possible |
|
||||
| save | False | save images with results |
|
||||
| save_txt | False | save results as .txt file |
|
||||
| save_conf | False | save results with confidence scores |
|
||||
| save_crop | False | save cropped images with results |
|
||||
| hide_labels | False | hide labels |
|
||||
| hide_conf | False | hide confidence scores |
|
||||
| max_det | 300 | maximum number of detections per image |
|
||||
| vid_stride | False | video frame-rate stride |
|
||||
| line_thickness | 3 | bounding box thickness (pixels) |
|
||||
| visualize | False | visualize model features |
|
||||
| augment | False | apply image augmentation to prediction sources |
|
||||
| agnostic_nms | False | class-agnostic NMS |
|
||||
| retina_masks | False | use high-resolution segmentation masks |
|
||||
| classes | null | filter results by class, i.e. class=0, or class=[0,2,3] |
|
||||
| box | True | Show boxes in segmentation predictions |
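For illustration, here is a minimal Python sketch that passes a few of these prediction settings as keyword arguments to the `predict()` call; the values shown are arbitrary examples rather than recommendations:

```python
from ultralytics import YOLO

# Load a pretrained detection model
model = YOLO("yolov8n.pt")

# Predict with a few of the settings from the table above
results = model.predict(
    source="ultralytics/assets",  # images or videos to run on
    conf=0.25,                    # object confidence threshold
    iou=0.7,                      # IoU threshold for NMS
    max_det=300,                  # maximum detections per image
    save=True,                    # save annotated results
)
```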
|
||||
|
||||
### Validation
|
||||
|
||||
Validation settings for YOLO models refer to the various hyperparameters and configurations used to
|
||||
evaluate the model's performance on a validation dataset. These settings can affect the model's performance, speed, and
|
||||
accuracy. Some common YOLO validation settings include the batch size, the frequency with which validation is performed
|
||||
during training, and the metrics used to evaluate the model's performance. Other factors that may affect the validation
|
||||
process include the size and composition of the validation dataset and the specific task the model is being used for. It
|
||||
is important to carefully tune and experiment with these settings to ensure that the model is performing well on the
|
||||
validation dataset and to detect and prevent overfitting.
|
||||
|
||||
| Key | Value | Description |
|
||||
|-------------|-------|--------------------------------------------------------------------|
|
||||
| save_json | False | save results to JSON file |
|
||||
| save_hybrid | False | save hybrid version of labels (labels + additional predictions) |
|
||||
| conf | 0.001 | object confidence threshold for detection |
|
||||
| iou | 0.6 | intersection over union (IoU) threshold for NMS |
|
||||
| max_det | 300 | maximum number of detections per image |
|
||||
| half | True | use half precision (FP16) |
|
||||
| device | null | device to run on, i.e. cuda device=0/1/2/3 or device=cpu |
|
||||
| dnn | False | use OpenCV DNN for ONNX inference |
|
||||
| plots | False | show plots during training |
|
||||
| rect | False | support rectangular evaluation |
|
||||
| split | val | dataset split to use for validation, i.e. 'val', 'test' or 'train' |
|
||||
|
||||
### Export
|
||||
|
||||
Export settings for YOLO models refer to the various configurations and options used to save or
|
||||
export the model for use in other environments or platforms. These settings can affect the model's performance, size,
|
||||
and compatibility with different systems. Some common YOLO export settings include the format of the exported model
|
||||
file (e.g. ONNX, TensorFlow SavedModel), the device on which the model will be run (e.g. CPU, GPU), and the presence of
|
||||
additional features such as masks or multiple labels per box. Other factors that may affect the export process include
|
||||
the specific task the model is being used for and the requirements or constraints of the target environment or platform.
|
||||
It is important to carefully consider and configure these settings to ensure that the exported model is optimized for
|
||||
the intended use case and can be used effectively in the target environment.
|
||||
|
||||
### Augmentation
|
||||
|
||||
Augmentation settings for YOLO models refer to the various transformations and modifications
|
||||
applied to the training data to increase the diversity and size of the dataset. These settings can affect the model's
|
||||
performance, speed, and accuracy. Some common YOLO augmentation settings include the type and intensity of the
|
||||
transformations applied (e.g. random flips, rotations, cropping, color changes), the probability with which each
|
||||
transformation is applied, and the presence of additional features such as masks or multiple labels per box. Other
|
||||
factors that may affect the augmentation process include the size and composition of the original dataset and the
|
||||
specific task the model is being used for. It is important to carefully tune and experiment with these settings to
|
||||
ensure that the augmented dataset is diverse and representative enough to train a high-performing model.
|
||||
|
||||
| Key | Value | Description |
|
||||
|-------------|-------|-------------------------------------------------|
|
||||
| hsv_h | 0.015 | image HSV-Hue augmentation (fraction) |
|
||||
| hsv_s | 0.7 | image HSV-Saturation augmentation (fraction) |
|
||||
| hsv_v | 0.4 | image HSV-Value augmentation (fraction) |
|
||||
| degrees | 0.0 | image rotation (+/- deg) |
|
||||
| translate | 0.1 | image translation (+/- fraction) |
|
||||
| scale | 0.5 | image scale (+/- gain) |
|
||||
| shear | 0.0 | image shear (+/- deg) |
|
||||
| perspective | 0.0 | image perspective (+/- fraction), range 0-0.001 |
|
||||
| flipud | 0.0 | image flip up-down (probability) |
|
||||
| fliplr | 0.5 | image flip left-right (probability) |
|
||||
| mosaic | 1.0 | image mosaic (probability) |
|
||||
| mixup | 0.0 | image mixup (probability) |
|
||||
| copy_paste | 0.0 | segment copy-paste (probability) |
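As a minimal sketch, assuming these augmentation values are accepted as training overrides in the same way as the training settings above, they can be adjusted per run (illustrative values):

```python
from ultralytics import YOLO

model = YOLO("yolov8n.pt")

# Train with adjusted augmentation hyperparameters (illustrative values)
model.train(
    data="coco128.yaml",
    epochs=100,
    hsv_h=0.015,  # HSV-Hue augmentation (fraction)
    fliplr=0.5,   # left-right flip probability
    mosaic=1.0,   # mosaic probability
    mixup=0.1,    # mixup probability
)
```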
|
||||
|
||||
### Logging, checkpoints, plotting and file management
|
||||
|
||||
Logging, checkpoints, plotting, and file management are important considerations when training a YOLO model.
|
||||
|
||||
- Logging: It is often helpful to log various metrics and statistics during training to track the model's progress and
|
||||
diagnose any issues that may arise. This can be done using a logging library such as TensorBoard or by writing log
|
||||
messages to a file.
|
||||
- Checkpoints: It is a good practice to save checkpoints of the model at regular intervals during training. This allows
|
||||
you to resume training from a previous point if the training process is interrupted or if you want to experiment with
|
||||
different training configurations.
|
||||
- Plotting: Visualizing the model's performance and training progress can be helpful for understanding how the model is
|
||||
behaving and identifying potential issues. This can be done using a plotting library such as matplotlib or by
|
||||
generating plots using a logging library such as TensorBoard.
|
||||
- File management: Managing the various files generated during the training process, such as model checkpoints, log
|
||||
files, and plots, can be challenging. It is important to have a clear and organized file structure to keep track of
|
||||
these files and make it easy to access and analyze them as needed.
|
||||
|
||||
Effective logging, checkpointing, plotting, and file management can help you keep track of the model's progress and make
|
||||
it easier to debug and optimize the training process.
|
||||
|
||||
| Key | Value | Description |
|
||||
|----------|--------|------------------------------------------------------------------------------------------------|
|
||||
| project | 'runs' | project name |
|
||||
| name | 'exp' | experiment name. `exp` gets automatically incremented if not specified, i.e, `exp`, `exp2` ... |
|
||||
| exist_ok | False | whether to overwrite existing experiment |
|
||||
| plots | False | save plots during train/val |
|
||||
| save | False | save train checkpoints and predict results |
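For example, a short sketch of how `project`, `name` and `exist_ok` control where checkpoints, logs and plots are written (directory layout assumed from the defaults above):

```python
from ultralytics import YOLO

model = YOLO("yolov8n.pt")

# Results are saved under <project>/<name>; the run name auto-increments
# (exp, exp2, ...) unless exist_ok=True overwrites the existing experiment
model.train(data="coco128.yaml", epochs=3, project="runs", name="exp", exist_ok=False)
```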
|
@@ -3,7 +3,6 @@
|
||||
<a href="https://bit.ly/ultralytics_hub" target="_blank">
|
||||
<img width="100%" src="https://github.com/ultralytics/assets/raw/main/im/ultralytics-hub.png"></a>
|
||||
<br>
|
||||
<br>
|
||||
<div align="center">
|
||||
<a href="https://github.com/ultralytics" style="text-decoration:none;">
|
||||
<img src="https://github.com/ultralytics/assets/raw/main/social/logo-social-github.png" width="2%" alt="" /></a>
|
||||
@@ -32,7 +31,6 @@
|
||||
<a href="https://colab.research.google.com/github/ultralytics/hub/blob/master/hub.ipynb">
|
||||
<img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"></a>
|
||||
</div>
|
||||
<br>
|
||||
|
||||
|
||||
[Ultralytics HUB](https://hub.ultralytics.com) is a new no-code online tool developed
|
||||
|
docs/modes/benchmark.md (new file, 65 lines)
@@ -0,0 +1,65 @@
|
||||
<img width="1024" src="https://github.com/ultralytics/assets/raw/main/yolov8/banner-integrations.png">
|
||||
|
||||
**Benchmark mode** is used to profile the speed and accuracy of various export formats for YOLOv8. The benchmarks
|
||||
provide information on the size of the exported format, its `mAP50-95` metrics (for object detection and segmentation)
|
||||
or `accuracy_top5` metrics (for classification), and the inference time in milliseconds per image across various export
|
||||
formats like ONNX, OpenVINO, TensorRT and others. This information can help users choose the optimal export format for
|
||||
their specific use case based on their requirements for speed and accuracy.
|
||||
|
||||
!!! tip "Tip"
|
||||
|
||||
* Export to ONNX or OpenVINO for up to 3x CPU speedup.
|
||||
* Export to TensorRT for up to 5x GPU speedup.
|
||||
|
||||
## Usage Examples
|
||||
|
||||
Run YOLOv8n benchmarks on all supported export formats including ONNX, TensorRT etc. See Arguments section below for a
|
||||
full list of export arguments.
|
||||
|
||||
!!! example ""
|
||||
|
||||
=== "Python"
|
||||
|
||||
```python
|
||||
from ultralytics.yolo.utils.benchmarks import benchmark
|
||||
|
||||
# Benchmark
|
||||
benchmark(model='yolov8n.pt', imgsz=640, half=False, device=0)
|
||||
```
|
||||
=== "CLI"
|
||||
|
||||
```bash
|
||||
yolo benchmark model=yolov8n.pt imgsz=640 half=False device=0
|
||||
```
|
||||
|
||||
## Arguments
|
||||
|
||||
Arguments such as `model`, `imgsz`, `half`, `device`, and `hard_fail` provide users with the flexibility to fine-tune
|
||||
the benchmarks to their specific needs and compare the performance of different export formats with ease.
|
||||
|
||||
| Key | Value | Description |
|
||||
|-------------|---------|----------------------------------------------------------------------|
|
||||
| `model` | `None` | path to model file, i.e. yolov8n.pt, yolov8n.yaml |
|
||||
| `imgsz` | `640` | image size as scalar or (h, w) list, i.e. (640, 480) |
|
||||
| `half` | `False` | FP16 quantization |
|
||||
| `device` | `None` | device to run on, i.e. cuda device=0 or device=0,1,2,3 or device=cpu |
|
||||
| `hard_fail` | `False` | do not continue on error (bool), or val floor threshold (float) |
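As a hedged sketch of the `hard_fail` behaviour described above (a boolean, or a float metric floor), the benchmark call from the usage example can be extended with it:

```python
from ultralytics.yolo.utils.benchmarks import benchmark

# hard_fail may be False, True, or a float metric floor per the table above
benchmark(model='yolov8n.pt', imgsz=640, half=False, device=0, hard_fail=0.26)
```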
|
||||
|
||||
## Export Formats
|
||||
|
||||
Benchmarks will attempt to run automatically on all possible export formats below.
|
||||
|
||||
| Format | `format` Argument | Model | Metadata |
|
||||
|--------------------------------------------------------------------|-------------------|---------------------------|----------|
|
||||
| [PyTorch](https://pytorch.org/) | - | `yolov8n.pt` | ✅ |
|
||||
| [TorchScript](https://pytorch.org/docs/stable/jit.html) | `torchscript` | `yolov8n.torchscript` | ✅ |
|
||||
| [ONNX](https://onnx.ai/) | `onnx` | `yolov8n.onnx` | ✅ |
|
||||
| [OpenVINO](https://docs.openvino.ai/latest/index.html) | `openvino` | `yolov8n_openvino_model/` | ✅ |
|
||||
| [TensorRT](https://developer.nvidia.com/tensorrt) | `engine` | `yolov8n.engine` | ✅ |
|
||||
| [CoreML](https://github.com/apple/coremltools) | `coreml` | `yolov8n.mlmodel` | ✅ |
|
||||
| [TF SavedModel](https://www.tensorflow.org/guide/saved_model) | `saved_model` | `yolov8n_saved_model/` | ✅ |
|
||||
| [TF GraphDef](https://www.tensorflow.org/api_docs/python/tf/Graph) | `pb` | `yolov8n.pb` | ❌ |
|
||||
| [TF Lite](https://www.tensorflow.org/lite) | `tflite` | `yolov8n.tflite` | ✅ |
|
||||
| [TF Edge TPU](https://coral.ai/docs/edgetpu/models-intro/) | `edgetpu` | `yolov8n_edgetpu.tflite` | ✅ |
|
||||
| [TF.js](https://www.tensorflow.org/js) | `tfjs` | `yolov8n_web_model/` | ✅ |
|
||||
| [PaddlePaddle](https://github.com/PaddlePaddle) | `paddle` | `yolov8n_paddle_model/` | ✅ |
|
docs/modes/export.md (new file, 81 lines)
@@ -0,0 +1,81 @@
|
||||
<img width="1024" src="https://github.com/ultralytics/assets/raw/main/yolov8/banner-integrations.png">
|
||||
|
||||
**Export mode** is used for exporting a YOLOv8 model to a format that can be used for deployment. In this mode, the
|
||||
model is converted to a format that can be used by other software applications or hardware devices. This mode is useful
|
||||
when deploying the model to production environments.
|
||||
|
||||
!!! tip "Tip"
|
||||
|
||||
* Export to ONNX or OpenVINO for up to 3x CPU speedup.
|
||||
* Export to TensorRT for up to 5x GPU speedup.
|
||||
|
||||
## Usage Examples
|
||||
|
||||
Export a YOLOv8n model to a different format like ONNX or TensorRT. See Arguments section below for a full list of
|
||||
export arguments.
|
||||
|
||||
!!! example ""
|
||||
|
||||
=== "Python"
|
||||
|
||||
```python
|
||||
from ultralytics import YOLO
|
||||
|
||||
# Load a model
|
||||
model = YOLO("yolov8n.pt") # load an official model
|
||||
model = YOLO("path/to/best.pt") # load a custom trained
|
||||
|
||||
# Export the model
|
||||
model.export(format="onnx")
|
||||
```
|
||||
=== "CLI"
|
||||
|
||||
```bash
|
||||
yolo export model=yolov8n.pt format=onnx # export official model
|
||||
yolo export model=path/to/best.pt format=onnx # export custom trained model
|
||||
```
|
||||
|
||||
## Arguments
|
||||
|
||||
Export settings for YOLO models refer to the various configurations and options used to save or
|
||||
export the model for use in other environments or platforms. These settings can affect the model's performance, size,
|
||||
and compatibility with different systems. Some common YOLO export settings include the format of the exported model
|
||||
file (e.g. ONNX, TensorFlow SavedModel), the device on which the model will be run (e.g. CPU, GPU), and the presence of
|
||||
additional features such as masks or multiple labels per box. Other factors that may affect the export process include
|
||||
the specific task the model is being used for and the requirements or constraints of the target environment or platform.
|
||||
It is important to carefully consider and configure these settings to ensure that the exported model is optimized for
|
||||
the intended use case and can be used effectively in the target environment.
|
||||
|
||||
| Key | Value | Description |
|
||||
|-------------|-----------------|------------------------------------------------------|
|
||||
| `format` | `'torchscript'` | format to export to |
|
||||
| `imgsz` | `640` | image size as scalar or (h, w) list, i.e. (640, 480) |
|
||||
| `keras` | `False` | use Keras for TF SavedModel export |
|
||||
| `optimize` | `False` | TorchScript: optimize for mobile |
|
||||
| `half` | `False` | FP16 quantization |
|
||||
| `int8` | `False` | INT8 quantization |
|
||||
| `dynamic` | `False` | ONNX/TF/TensorRT: dynamic axes |
|
||||
| `simplify` | `False` | ONNX: simplify model |
|
||||
| `opset` | `None` | ONNX: opset version (optional, defaults to latest) |
|
||||
| `workspace` | `4` | TensorRT: workspace size (GB) |
|
||||
| `nms` | `False` | CoreML: add NMS |
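As a short sketch of passing several of these arguments together (values are illustrative; the ONNX-specific options only apply when exporting to ONNX):

```python
from ultralytics import YOLO

model = YOLO("yolov8n.pt")

# Export to ONNX, overriding a few of the arguments from the table above
model.export(format="onnx", imgsz=640, dynamic=True, simplify=True, opset=12)
```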
|
||||
|
||||
## Export Formats
|
||||
|
||||
Available YOLOv8 export formats are in the table below. You can export to any format using the `format` argument,
|
||||
i.e. `format='onnx'` or `format='engine'`.
|
||||
|
||||
| Format | `format` Argument | Model | Metadata |
|
||||
|--------------------------------------------------------------------|-------------------|---------------------------|----------|
|
||||
| [PyTorch](https://pytorch.org/) | - | `yolov8n.pt` | ✅ |
|
||||
| [TorchScript](https://pytorch.org/docs/stable/jit.html) | `torchscript` | `yolov8n.torchscript` | ✅ |
|
||||
| [ONNX](https://onnx.ai/) | `onnx` | `yolov8n.onnx` | ✅ |
|
||||
| [OpenVINO](https://docs.openvino.ai/latest/index.html) | `openvino` | `yolov8n_openvino_model/` | ✅ |
|
||||
| [TensorRT](https://developer.nvidia.com/tensorrt) | `engine` | `yolov8n.engine` | ✅ |
|
||||
| [CoreML](https://github.com/apple/coremltools) | `coreml` | `yolov8n.mlmodel` | ✅ |
|
||||
| [TF SavedModel](https://www.tensorflow.org/guide/saved_model) | `saved_model` | `yolov8n_saved_model/` | ✅ |
|
||||
| [TF GraphDef](https://www.tensorflow.org/api_docs/python/tf/Graph) | `pb` | `yolov8n.pb` | ❌ |
|
||||
| [TF Lite](https://www.tensorflow.org/lite) | `tflite` | `yolov8n.tflite` | ✅ |
|
||||
| [TF Edge TPU](https://coral.ai/docs/edgetpu/models-intro/) | `edgetpu` | `yolov8n_edgetpu.tflite` | ✅ |
|
||||
| [TF.js](https://www.tensorflow.org/js) | `tfjs` | `yolov8n_web_model/` | ✅ |
|
||||
| [PaddlePaddle](https://github.com/PaddlePaddle) | `paddle` | `yolov8n_paddle_model/` | ✅ |
|
docs/modes/index.md (new file, 62 lines)
@@ -0,0 +1,62 @@
|
||||
# YOLOv8 Modes
|
||||
|
||||
<img width="1024" src="https://github.com/ultralytics/assets/raw/main/yolov8/banner-integrations.png">
|
||||
|
||||
Ultralytics YOLOv8 supports several **modes** that can be used to perform different tasks. These modes are:
|
||||
|
||||
**Train**: For training a YOLOv8 model on a custom dataset.
|
||||
**Val**: For validating a YOLOv8 model after it has been trained.
|
||||
**Predict**: For making predictions using a trained YOLOv8 model on new images or videos.
|
||||
**Export**: For exporting a YOLOv8 model to a format that can be used for deployment.
|
||||
**Track**: For tracking objects in real-time using a YOLOv8 model.
|
||||
**Benchmark**: For benchmarking YOLOv8 exports (ONNX, TensorRT, etc.) speed and accuracy.
|
||||
|
||||
## [Train](train.md)
|
||||
|
||||
Train mode is used for training a YOLOv8 model on a custom dataset. In this mode, the model is trained using the
|
||||
specified dataset and hyperparameters. The training process involves optimizing the model's parameters so that it can
|
||||
accurately predict the classes and locations of objects in an image.
|
||||
|
||||
[Train Examples](train.md){ .md-button .md-button--primary}
|
||||
|
||||
## [Val](val.md)
|
||||
|
||||
Val mode is used for validating a YOLOv8 model after it has been trained. In this mode, the model is evaluated on a
|
||||
validation set to measure its accuracy and generalization performance. This mode can be used to tune the hyperparameters
|
||||
of the model to improve its performance.
|
||||
|
||||
[Val Examples](val.md){ .md-button .md-button--primary}
|
||||
|
||||
## [Predict](predict.md)
|
||||
|
||||
Predict mode is used for making predictions using a trained YOLOv8 model on new images or videos. In this mode, the
|
||||
model is loaded from a checkpoint file, and the user can provide images or videos to perform inference. The model
|
||||
predicts the classes and locations of objects in the input images or videos.
|
||||
|
||||
[Predict Examples](predict.md){ .md-button .md-button--primary}
|
||||
|
||||
## [Export](export.md)
|
||||
|
||||
Export mode is used for exporting a YOLOv8 model to a format that can be used for deployment. In this mode, the model is
|
||||
converted to a format that can be used by other software applications or hardware devices. This mode is useful when
|
||||
deploying the model to production environments.
|
||||
|
||||
[Export Examples](export.md){ .md-button .md-button--primary}
|
||||
|
||||
## [Track](track.md)
|
||||
|
||||
Track mode is used for tracking objects in real-time using a YOLOv8 model. In this mode, the model is loaded from a
|
||||
checkpoint file, and the user can provide a live video stream to perform real-time object tracking. This mode is useful
|
||||
for applications such as surveillance systems or self-driving cars.
|
||||
|
||||
[Track Examples](track.md){ .md-button .md-button--primary}
|
||||
|
||||
## [Benchmark](benchmark.md)
|
||||
|
||||
Benchmark mode is used to profile the speed and accuracy of various export formats for YOLOv8. The benchmarks provide
|
||||
information on the size of the exported format, its `mAP50-95` metrics (for object detection and segmentation)
|
||||
or `accuracy_top5` metrics (for classification), and the inference time in milliseconds per image across various export
|
||||
formats like ONNX, OpenVINO, TensorRT and others. This information can help users choose the optimal export format for
|
||||
their specific use case based on their requirements for speed and accuracy.
|
||||
|
||||
[Benchmark Examples](benchmark.md){ .md-button .md-button--primary}
|
@@ -1,10 +1,12 @@
|
||||
<img width="1024" src="https://github.com/ultralytics/assets/raw/main/yolov8/banner-integrations.png">
|
||||
|
||||
Inference or prediction of a task returns a list of `Results` objects. Alternatively, in the streaming mode, it returns
|
||||
a generator of `Results` objects which is memory efficient. Streaming mode can be enabled by passing `stream=True` in
|
||||
predictor's call method.
|
||||
|
||||
!!! example "Predict"
|
||||
|
||||
=== "Getting a List"
|
||||
=== "Return a List"
|
||||
|
||||
```python
|
||||
inputs = [img, img] # list of np arrays
|
||||
@@ -16,7 +18,7 @@ predictor's call method.
|
||||
probs = result.probs # Class probabilities for classification outputs
|
||||
```
|
||||
|
||||
=== "Getting a Generator"
|
||||
=== "Return a Generator"
|
||||
|
||||
```python
|
||||
inputs = [img, img] # list of numpy arrays
|
||||
@@ -51,6 +53,46 @@ source can be used as a stream and the model argument required for that source.
|
||||
| YouTube | ✓ | `'https://youtu.be/Zgi9g1ksQHc'` | `str` | |
|
||||
| stream | ✓ | `'rtsp://example.com/media.mp4'` | `str` | RTSP, RTMP, HTTP |
|
||||
|
||||
## Image Formats
|
||||
|
||||
For images, YOLOv8 supports a variety of image formats defined
|
||||
in [yolo/data/utils.py](https://github.com/ultralytics/ultralytics/blob/main/ultralytics/yolo/data/utils.py). The
|
||||
following suffixes are valid for images:
|
||||
|
||||
| Image Suffixes | Example Predict Command | Reference |
|
||||
|----------------|----------------------------------|--------------------------------------------------------------------------------------|
|
||||
| bmp | `yolo predict source=image.bmp` | [Microsoft](https://docs.microsoft.com/en-us/windows/win32/gdi/bitmap-file-format) |
|
||||
| dng | `yolo predict source=image.dng` | [Adobe](https://helpx.adobe.com/photoshop/using/digital-negative.html) |
|
||||
| jpeg | `yolo predict source=image.jpeg` | [Joint Photographic Experts Group](https://jpeg.org/jpeg/) |
|
||||
| jpg | `yolo predict source=image.jpg` | [Joint Photographic Experts Group](https://jpeg.org/jpeg/) |
|
||||
| mpo | `yolo predict source=image.mpo` | [CIPA](https://www.cipa.jp/std/documents/e/DC-007-Translation-2018-E.pdf) |
|
||||
| png | `yolo predict source=image.png` | [Portable Network Graphics](https://www.w3.org/TR/PNG/) |
|
||||
| tif | `yolo predict source=image.tif` | [Adobe](https://www.adobe.com/content/dam/acom/en/products/photoshop/pdfs/tiff6.pdf) |
|
||||
| tiff | `yolo predict source=image.tiff` | [Adobe](https://www.adobe.com/content/dam/acom/en/products/photoshop/pdfs/tiff6.pdf) |
|
||||
| webp | `yolo predict source=image.webp` | [Google Developers](https://developers.google.com/speed/webp) |
|
||||
| pfm | `yolo predict source=image.pfm` | [HDR Labs](http://hdrlabs.com/tools/pfrenchy/) |
|
||||
|
||||
## Video Formats
|
||||
|
||||
For videos, YOLOv8 also supports a variety of video formats defined
|
||||
in [yolo/data/utils.py](https://github.com/ultralytics/ultralytics/blob/main/ultralytics/yolo/data/utils.py). The
|
||||
following suffixes are valid for videos:
|
||||
|
||||
| Video Suffixes | Example Predict Command | Reference |
|
||||
|----------------|----------------------------------|----------------------------------------------------------------------------------------------------------------|
|
||||
| asf | `yolo predict source=video.asf` | [Microsoft](https://docs.microsoft.com/en-us/windows/win32/wmformat/asf-file-structure) |
|
||||
| avi | `yolo predict source=video.avi` | [Microsoft](https://docs.microsoft.com/en-us/windows/win32/directshow/avi-riff-file-reference) |
|
||||
| gif | `yolo predict source=video.gif` | [CompuServe](https://www.w3.org/Graphics/GIF/spec-gif89a.txt) |
|
||||
| m4v | `yolo predict source=video.m4v` | [Apple](https://developer.apple.com/library/archive/documentation/QuickTime/QTFF/QTFFChap2/qtff2.html) |
|
||||
| mkv | `yolo predict source=video.mkv` | [Matroska](https://matroska.org/technical/specs/index.html) |
|
||||
| mov | `yolo predict source=video.mov` | [Apple](https://developer.apple.com/library/archive/documentation/QuickTime/QTFF/QTFFPreface/qtffPreface.html) |
|
||||
| mp4 | `yolo predict source=video.mp4` | [ISO 68939](https://www.iso.org/standard/68939.html) |
|
||||
| mpeg | `yolo predict source=video.mpeg` | [ISO 56021](https://www.iso.org/standard/56021.html) |
|
||||
| mpg | `yolo predict source=video.mpg` | [ISO 56021](https://www.iso.org/standard/56021.html) |
|
||||
| ts | `yolo predict source=video.ts` | [MPEG Transport Stream](https://en.wikipedia.org/wiki/MPEG_transport_stream) |
|
||||
| wmv | `yolo predict source=video.wmv` | [Microsoft](https://docs.microsoft.com/en-us/windows/win32/wmformat/wmv-file-structure) |
|
||||
| webm | `yolo predict source=video.webm` | [Google Developers](https://developers.google.com/media/vp9/getting-started/webm-file-format) |
|
||||
|
||||
## Working with Results
|
||||
|
||||
Results object consists of these component objects:
|
||||
@@ -116,7 +158,7 @@ results = model(inputs)
|
||||
results[0].probs # cls prob, (num_class, )
|
||||
```
|
||||
|
||||
Class reference documentation for `Results` module and its components can be found [here](reference/results.md)
|
||||
Class reference documentation for `Results` module and its components can be found [here](../reference/results.md)
|
||||
|
||||
## Plotting results
|
||||
|
@@ -1,3 +1,5 @@
|
||||
<img width="1024" src="https://github.com/ultralytics/assets/raw/main/yolov8/banner-integrations.png">
|
||||
|
||||
Object tracking is a task that involves identifying the location and class of objects, then assigning a unique ID to
|
||||
that detection in video streams.
|
||||
|
||||
@@ -87,9 +89,8 @@ any configurations (except the `tracker_type`) you need to.
|
||||
|
||||
```bash
|
||||
yolo track model=yolov8n.pt source="https://youtu.be/Zgi9g1ksQHc" tracker='custom_tracker.yaml'
|
||||
|
||||
```
|
||||
|
||||
Please refer to [ultralytics/tracker/cfg](https://github.com/ultralytics/ultralytics/tree/main/ultralytics/tracker/cfg)
|
||||
page.
|
||||
page
|
||||
|
docs/modes/train.md (new file, 88 lines)
@@ -0,0 +1,88 @@
|
||||
<img width="1024" src="https://github.com/ultralytics/assets/raw/main/yolov8/banner-integrations.png">
|
||||
|
||||
**Train mode** is used for training a YOLOv8 model on a custom dataset. In this mode, the model is trained using the
|
||||
specified dataset and hyperparameters. The training process involves optimizing the model's parameters so that it can
|
||||
accurately predict the classes and locations of objects in an image.
|
||||
|
||||
!!! tip "Tip"
|
||||
|
||||
* YOLOv8 datasets like COCO, VOC, ImageNet and many others automatically download on first use, i.e. `yolo train data=coco.yaml`
|
||||
|
||||
## Usage Examples
|
||||
|
||||
Train YOLOv8n on the COCO128 dataset for 100 epochs at image size 640. See Arguments section below for a full list of
|
||||
training arguments.
|
||||
|
||||
!!! example ""
|
||||
|
||||
=== "Python"
|
||||
|
||||
```python
|
||||
from ultralytics import YOLO
|
||||
|
||||
# Load a model
|
||||
model = YOLO("yolov8n.yaml") # build a new model from scratch
|
||||
model = YOLO("yolov8n.pt") # load a pretrained model (recommended for training)
|
||||
|
||||
# Train the model
|
||||
model.train(data="coco128.yaml", epochs=100, imgsz=640)
|
||||
```
|
||||
=== "CLI"
|
||||
|
||||
```bash
|
||||
yolo detect train data=coco128.yaml model=yolov8n.pt epochs=100 imgsz=640
|
||||
```
|
||||
|
||||
## Arguments
|
||||
|
||||
Training settings for YOLO models refer to the various hyperparameters and configurations used to train the model on a
|
||||
dataset. These settings can affect the model's performance, speed, and accuracy. Some common YOLO training settings
|
||||
include the batch size, learning rate, momentum, and weight decay. Other factors that may affect the training process
|
||||
include the choice of optimizer, the choice of loss function, and the size and composition of the training dataset. It
|
||||
is important to carefully tune and experiment with these settings to achieve the best possible performance for a given
|
||||
task.
|
||||
|
||||
| Key | Value | Description |
|
||||
|-------------------|----------|-----------------------------------------------------------------------------|
|
||||
| `model` | `None` | path to model file, i.e. yolov8n.pt, yolov8n.yaml |
|
||||
| `data` | `None` | path to data file, i.e. coco128.yaml |
|
||||
| `epochs` | `100` | number of epochs to train for |
|
||||
| `patience` | `50` | epochs to wait for no observable improvement for early stopping of training |
|
||||
| `batch` | `16` | number of images per batch (-1 for AutoBatch) |
|
||||
| `imgsz` | `640` | size of input images as integer or w,h |
|
||||
| `save` | `True` | save train checkpoints and predict results |
|
||||
| `save_period` | `-1` | Save checkpoint every x epochs (disabled if < 1) |
|
||||
| `cache` | `False` | True/ram, disk or False. Use cache for data loading |
|
||||
| `device` | `None` | device to run on, i.e. cuda device=0 or device=0,1,2,3 or device=cpu |
|
||||
| `workers` | `8` | number of worker threads for data loading (per RANK if DDP) |
|
||||
| `project` | `None` | project name |
|
||||
| `name` | `None` | experiment name |
|
||||
| `exist_ok` | `False` | whether to overwrite existing experiment |
|
||||
| `pretrained` | `False` | whether to use a pretrained model |
|
||||
| `optimizer` | `'SGD'` | optimizer to use, choices=['SGD', 'Adam', 'AdamW', 'RMSProp'] |
|
||||
| `verbose` | `False` | whether to print verbose output |
|
||||
| `seed` | `0` | random seed for reproducibility |
|
||||
| `deterministic` | `True` | whether to enable deterministic mode |
|
||||
| `single_cls` | `False` | train multi-class data as single-class |
|
||||
| `image_weights` | `False` | use weighted image selection for training |
|
||||
| `rect` | `False` | support rectangular training |
|
||||
| `cos_lr` | `False` | use cosine learning rate scheduler |
|
||||
| `close_mosaic` | `10` | disable mosaic augmentation for final 10 epochs |
|
||||
| `resume` | `False` | resume training from last checkpoint |
|
||||
| `lr0` | `0.01` | initial learning rate (i.e. SGD=1E-2, Adam=1E-3) |
|
||||
| `lrf` | `0.01` | final learning rate (lr0 * lrf) |
|
||||
| `momentum` | `0.937` | SGD momentum/Adam beta1 |
|
||||
| `weight_decay` | `0.0005` | optimizer weight decay 5e-4 |
|
||||
| `warmup_epochs` | `3.0` | warmup epochs (fractions ok) |
|
||||
| `warmup_momentum` | `0.8` | warmup initial momentum |
|
||||
| `warmup_bias_lr` | `0.1` | warmup initial bias lr |
|
||||
| `box` | `7.5` | box loss gain |
|
||||
| `cls` | `0.5` | cls loss gain (scale with pixels) |
|
||||
| `dfl` | `1.5` | dfl loss gain |
|
||||
| `fl_gamma` | `0.0` | focal loss gamma (efficientDet default gamma=1.5) |
|
||||
| `label_smoothing` | `0.0` | label smoothing (fraction) |
|
||||
| `nbs` | `64` | nominal batch size |
|
||||
| `overlap_mask` | `True` | masks should overlap during training (segment train only) |
|
||||
| `mask_ratio` | `4` | mask downsample ratio (segment train only) |
|
||||
| `dropout` | `0.0` | use dropout regularization (classify train only) |
|
||||
| `val` | `True` | validate/test during training |
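As an extension of the usage example above, a sketch with a few non-default training arguments from this table (values are illustrative, not tuned recommendations):

```python
from ultralytics import YOLO

model = YOLO("yolov8n.pt")

# Train with several non-default arguments from the table above
model.train(
    data="coco128.yaml",
    epochs=100,
    imgsz=640,
    batch=32,           # images per batch
    optimizer="AdamW",  # one of SGD, Adam, AdamW, RMSProp
    lr0=0.001,          # initial learning rate
    cos_lr=True,        # cosine LR scheduler
)
```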
|
docs/modes/val.md (new file, 86 lines)
@@ -0,0 +1,86 @@
|
||||
<img width="1024" src="https://github.com/ultralytics/assets/raw/main/yolov8/banner-integrations.png">
|
||||
|
||||
**Val mode** is used for validating a YOLOv8 model after it has been trained. In this mode, the model is evaluated on a
|
||||
validation set to measure its accuracy and generalization performance. This mode can be used to tune the hyperparameters
|
||||
of the model to improve its performance.
|
||||
|
||||
!!! tip "Tip"
|
||||
|
||||
* YOLOv8 models automatically remember their training settings, so you can validate a model at the same image size and on the original dataset easily with just `yolo val model=yolov8n.pt` or `model('yolov8n.pt').val()`
|
||||
|
||||
## Usage Examples
|
||||
|
||||
Validate trained YOLOv8n model accuracy on the COCO128 dataset. No arguments need to be passed as the `model` retains its
training `data` and arguments as model attributes. See the Arguments section below for a full list of validation arguments.
|
||||
|
||||
!!! example ""
|
||||
|
||||
=== "Python"
|
||||
|
||||
```python
|
||||
from ultralytics import YOLO
|
||||
|
||||
# Load a model
|
||||
model = YOLO("yolov8n.pt") # load an official model
|
||||
model = YOLO("path/to/best.pt") # load a custom model
|
||||
|
||||
# Validate the model
|
||||
metrics = model.val() # no arguments needed, dataset and settings remembered
|
||||
metrics.box.map # map50-95
|
||||
metrics.box.map50 # map50
|
||||
metrics.box.map75 # map75
|
||||
metrics.box.maps  # a list containing mAP50-95 for each category
|
||||
```
|
||||
=== "CLI"
|
||||
|
||||
```bash
|
||||
yolo detect val model=yolov8n.pt # val official model
|
||||
yolo detect val model=path/to/best.pt # val custom model
|
||||
```
|
||||
|
||||
## Arguments
|
||||
|
||||
Validation settings for YOLO models refer to the various hyperparameters and configurations used to
|
||||
evaluate the model's performance on a validation dataset. These settings can affect the model's performance, speed, and
|
||||
accuracy. Some common YOLO validation settings include the batch size, the frequency with which validation is performed
|
||||
during training, and the metrics used to evaluate the model's performance. Other factors that may affect the validation
|
||||
process include the size and composition of the validation dataset and the specific task the model is being used for. It
|
||||
is important to carefully tune and experiment with these settings to ensure that the model is performing well on the
|
||||
validation dataset and to detect and prevent overfitting.
|
||||
|
||||
| Key | Value | Description |
|
||||
|---------------|---------|--------------------------------------------------------------------|
|
||||
| `data` | `None` | path to data file, i.e. coco128.yaml |
|
||||
| `imgsz` | `640` | image size as scalar or (h, w) list, i.e. (640, 480) |
|
||||
| `batch` | `16` | number of images per batch (-1 for AutoBatch) |
|
||||
| `save_json` | `False` | save results to JSON file |
|
||||
| `save_hybrid` | `False` | save hybrid version of labels (labels + additional predictions) |
|
||||
| `conf` | `0.001` | object confidence threshold for detection |
|
||||
| `iou` | `0.6` | intersection over union (IoU) threshold for NMS |
|
||||
| `max_det` | `300` | maximum number of detections per image |
|
||||
| `half` | `True` | use half precision (FP16) |
|
||||
| `device` | `None` | device to run on, i.e. cuda device=0/1/2/3 or device=cpu |
|
||||
| `dnn` | `False` | use OpenCV DNN for ONNX inference |
|
||||
| `plots` | `False` | show plots during training |
|
||||
| `rect` | `False` | support rectangular evaluation |
|
||||
| `split` | `val` | dataset split to use for validation, i.e. 'val', 'test' or 'train' |
|
||||
|
||||
## Export Formats
|
||||
|
||||
Available YOLOv8 export formats are in the table below. You can export to any format using the `format` argument,
|
||||
i.e. `format='onnx'` or `format='engine'`.
|
||||
|
||||
| Format | `format` Argument | Model | Metadata |
|
||||
|--------------------------------------------------------------------|-------------------|---------------------------|----------|
|
||||
| [PyTorch](https://pytorch.org/) | - | `yolov8n.pt` | ✅ |
|
||||
| [TorchScript](https://pytorch.org/docs/stable/jit.html) | `torchscript` | `yolov8n.torchscript` | ✅ |
|
||||
| [ONNX](https://onnx.ai/) | `onnx` | `yolov8n.onnx` | ✅ |
|
||||
| [OpenVINO](https://docs.openvino.ai/latest/index.html) | `openvino` | `yolov8n_openvino_model/` | ✅ |
|
||||
| [TensorRT](https://developer.nvidia.com/tensorrt) | `engine` | `yolov8n.engine` | ✅ |
|
||||
| [CoreML](https://github.com/apple/coremltools) | `coreml` | `yolov8n.mlmodel` | ✅ |
|
||||
| [TF SavedModel](https://www.tensorflow.org/guide/saved_model) | `saved_model` | `yolov8n_saved_model/` | ✅ |
|
||||
| [TF GraphDef](https://www.tensorflow.org/api_docs/python/tf/Graph) | `pb` | `yolov8n.pb` | ❌ |
|
||||
| [TF Lite](https://www.tensorflow.org/lite) | `tflite` | `yolov8n.tflite` | ✅ |
|
||||
| [TF Edge TPU](https://coral.ai/docs/edgetpu/models-intro/) | `edgetpu` | `yolov8n_edgetpu.tflite` | ✅ |
|
||||
| [TF.js](https://www.tensorflow.org/js) | `tfjs` | `yolov8n_web_model/` | ✅ |
|
||||
| [PaddlePaddle](https://github.com/PaddlePaddle) | `paddle` | `yolov8n_paddle_model/` | ✅ |
|
@@ -43,7 +43,7 @@ CLI requires no customization or code. You can simply run all tasks from the terminal
|
||||
yolo detect train model=yolov8n.pt data=coco128.yaml device=\'0,1,2,3\'
|
||||
```
|
||||
|
||||
[CLI Guide](cli.md){ .md-button .md-button--primary}
|
||||
[CLI Guide](usage/cli.md){ .md-button .md-button--primary}
|
||||
|
||||
## Use with Python
|
||||
|
||||
@@ -70,4 +70,4 @@ classification into their Python projects using YOLOv8.
|
||||
success = model.export(format="onnx") # export the model to ONNX format
|
||||
```
|
||||
|
||||
[Python Guide](python.md){.md-button .md-button--primary}
|
||||
[Python Guide](usage/python.md){.md-button .md-button--primary}
|
||||
|
@@ -16,7 +16,7 @@ of that class are located or what their exact shape is.
|
||||
## Train
|
||||
|
||||
Train YOLOv8n-cls on the MNIST160 dataset for 100 epochs at image size 64. For a full list of available arguments
|
||||
see the [Configuration](../cfg.md) page.
|
||||
see the [Configuration](../usage/cfg.md) page.
|
||||
|
||||
!!! example ""
|
||||
|
||||
@@ -118,20 +118,21 @@ Export a YOLOv8n-cls model to a different format like ONNX, CoreML, etc.
|
||||
yolo export model=path/to/best.pt format=onnx # export custom trained model
|
||||
```
|
||||
|
||||
Available YOLOv8-cls export formats include:
|
||||
Available YOLOv8-cls export formats are in the table below. You can predict or validate directly on exported models,
|
||||
i.e. `yolo predict model=yolov8n-cls.onnx`.
|
||||
|
||||
| Format | `format=` | Model | Metadata |
|
||||
|--------------------------------------------------------------------|---------------|-------------------------------|----------|
|
||||
| [PyTorch](https://pytorch.org/) | - | `yolov8n-cls.pt` | ✅ |
|
||||
| [TorchScript](https://pytorch.org/docs/stable/jit.html) | `torchscript` | `yolov8n-cls.torchscript` | ✅ |
|
||||
| [ONNX](https://onnx.ai/) | `onnx` | `yolov8n-cls.onnx` | ✅ |
|
||||
| [OpenVINO](https://docs.openvino.ai/latest/index.html) | `openvino` | `yolov8n-cls_openvino_model/` | ✅ |
|
||||
| [TensorRT](https://developer.nvidia.com/tensorrt) | `engine` | `yolov8n-cls.engine` | ✅ |
|
||||
| [CoreML](https://github.com/apple/coremltools) | `coreml` | `yolov8n-cls.mlmodel` | ✅ |
|
||||
| [TF SavedModel](https://www.tensorflow.org/guide/saved_model) | `saved_model` | `yolov8n-cls_saved_model/` | ✅ |
|
||||
| [TF GraphDef](https://www.tensorflow.org/api_docs/python/tf/Graph) | `pb` | `yolov8n-cls.pb` | ❌ |
|
||||
| [TF Lite](https://www.tensorflow.org/lite) | `tflite` | `yolov8n-cls.tflite` | ✅ |
|
||||
| [TF Edge TPU](https://coral.ai/docs/edgetpu/models-intro/) | `edgetpu` | `yolov8n-cls_edgetpu.tflite` | ✅ |
|
||||
| [TF.js](https://www.tensorflow.org/js) | `tfjs` | `yolov8n-cls_web_model/` | ✅ |
|
||||
| [PaddlePaddle](https://github.com/PaddlePaddle) | `paddle` | `yolov8n-cls_paddle_model/` | ✅ |
|
||||
| Format | `format` Argument | Model | Metadata |
|
||||
|--------------------------------------------------------------------|-------------------|-------------------------------|----------|
|
||||
| [PyTorch](https://pytorch.org/) | - | `yolov8n-cls.pt` | ✅ |
|
||||
| [TorchScript](https://pytorch.org/docs/stable/jit.html) | `torchscript` | `yolov8n-cls.torchscript` | ✅ |
|
||||
| [ONNX](https://onnx.ai/) | `onnx` | `yolov8n-cls.onnx` | ✅ |
|
||||
| [OpenVINO](https://docs.openvino.ai/latest/index.html) | `openvino` | `yolov8n-cls_openvino_model/` | ✅ |
|
||||
| [TensorRT](https://developer.nvidia.com/tensorrt) | `engine` | `yolov8n-cls.engine` | ✅ |
|
||||
| [CoreML](https://github.com/apple/coremltools) | `coreml` | `yolov8n-cls.mlmodel` | ✅ |
|
||||
| [TF SavedModel](https://www.tensorflow.org/guide/saved_model) | `saved_model` | `yolov8n-cls_saved_model/` | ✅ |
|
||||
| [TF GraphDef](https://www.tensorflow.org/api_docs/python/tf/Graph) | `pb` | `yolov8n-cls.pb` | ❌ |
|
||||
| [TF Lite](https://www.tensorflow.org/lite) | `tflite` | `yolov8n-cls.tflite` | ✅ |
|
||||
| [TF Edge TPU](https://coral.ai/docs/edgetpu/models-intro/) | `edgetpu` | `yolov8n-cls_edgetpu.tflite` | ✅ |
|
||||
| [TF.js](https://www.tensorflow.org/js) | `tfjs` | `yolov8n-cls_web_model/` | ✅ |
|
||||
| [PaddlePaddle](https://github.com/PaddlePaddle) | `paddle` | `yolov8n-cls_paddle_model/` | ✅ |
|
||||
|
@@ -16,7 +16,7 @@ scene, but don't need to know exactly where the object is or its exact shape.
|
||||
## Train
|
||||
|
||||
Train YOLOv8n on the COCO128 dataset for 100 epochs at image size 640. For a full list of available arguments see
|
||||
the [Configuration](../cfg.md) page.
|
||||
the [Configuration](../usage/cfg.md) page.
|
||||
|
||||
!!! example ""
|
||||
|
||||
@@ -120,19 +120,20 @@ Export a YOLOv8n model to a different format like ONNX, CoreML, etc.
|
||||
yolo export model=path/to/best.pt format=onnx # export custom trained model
|
||||
```
|
||||
|
||||
Available YOLOv8 export formats include:
|
||||
Available YOLOv8 export formats are in the table below. You can predict or validate directly on exported models,
|
||||
i.e. `yolo predict model=yolov8n.onnx`.
|
||||
|
||||
| Format | `format=` | Model | Metadata |
|
||||
|--------------------------------------------------------------------|---------------|---------------------------|----------|
|
||||
| [PyTorch](https://pytorch.org/) | - | `yolov8n.pt` | ✅ |
|
||||
| [TorchScript](https://pytorch.org/docs/stable/jit.html) | `torchscript` | `yolov8n.torchscript` | ✅ |
|
||||
| [ONNX](https://onnx.ai/) | `onnx` | `yolov8n.onnx` | ✅ |
|
||||
| [OpenVINO](https://docs.openvino.ai/latest/index.html) | `openvino` | `yolov8n_openvino_model/` | ✅ |
|
||||
| [TensorRT](https://developer.nvidia.com/tensorrt) | `engine` | `yolov8n.engine` | ✅ |
|
||||
| [CoreML](https://github.com/apple/coremltools) | `coreml` | `yolov8n.mlmodel` | ✅ |
|
||||
| [TF SavedModel](https://www.tensorflow.org/guide/saved_model) | `saved_model` | `yolov8n_saved_model/` | ✅ |
|
||||
| [TF GraphDef](https://www.tensorflow.org/api_docs/python/tf/Graph) | `pb` | `yolov8n.pb` | ❌ |
|
||||
| [TF Lite](https://www.tensorflow.org/lite) | `tflite` | `yolov8n.tflite` | ✅ |
|
||||
| [TF Edge TPU](https://coral.ai/docs/edgetpu/models-intro/) | `edgetpu` | `yolov8n_edgetpu.tflite` | ✅ |
|
||||
| [TF.js](https://www.tensorflow.org/js) | `tfjs` | `yolov8n_web_model/` | ✅ |
|
||||
| [PaddlePaddle](https://github.com/PaddlePaddle) | `paddle` | `yolov8n_paddle_model/` | ✅ |
|
||||
| Format | `format` Argument | Model | Metadata |
|
||||
|--------------------------------------------------------------------|-------------------|---------------------------|----------|
|
||||
| [PyTorch](https://pytorch.org/) | - | `yolov8n.pt` | ✅ |
|
||||
| [TorchScript](https://pytorch.org/docs/stable/jit.html) | `torchscript` | `yolov8n.torchscript` | ✅ |
|
||||
| [ONNX](https://onnx.ai/) | `onnx` | `yolov8n.onnx` | ✅ |
|
||||
| [OpenVINO](https://docs.openvino.ai/latest/index.html) | `openvino` | `yolov8n_openvino_model/` | ✅ |
|
||||
| [TensorRT](https://developer.nvidia.com/tensorrt) | `engine` | `yolov8n.engine` | ✅ |
|
||||
| [CoreML](https://github.com/apple/coremltools) | `coreml` | `yolov8n.mlmodel` | ✅ |
|
||||
| [TF SavedModel](https://www.tensorflow.org/guide/saved_model) | `saved_model` | `yolov8n_saved_model/` | ✅ |
|
||||
| [TF GraphDef](https://www.tensorflow.org/api_docs/python/tf/Graph) | `pb` | `yolov8n.pb` | ❌ |
|
||||
| [TF Lite](https://www.tensorflow.org/lite) | `tflite` | `yolov8n.tflite` | ✅ |
|
||||
| [TF Edge TPU](https://coral.ai/docs/edgetpu/models-intro/) | `edgetpu` | `yolov8n_edgetpu.tflite` | ✅ |
|
||||
| [TF.js](https://www.tensorflow.org/js) | `tfjs` | `yolov8n_web_model/` | ✅ |
|
||||
| [PaddlePaddle](https://github.com/PaddlePaddle) | `paddle` | `yolov8n_paddle_model/` | ✅ |
|
46
docs/tasks/index.md
Normal file
@ -0,0 +1,46 @@
|
||||
# Ultralytics YOLOv8 Tasks
|
||||
|
||||
YOLOv8 is an AI framework that supports multiple computer vision **tasks**. The framework can be used to
|
||||
perform [detection](detect.md), [segmentation](segment.md), [classification](classify.md),
|
||||
and [keypoints](keypoints.md) detection. Each of these tasks has a different objective and use case.
|
||||
|
||||
<img width="1024" src="https://user-images.githubusercontent.com/26833433/212094133-6bb8c21c-3d47-41df-a512-81c5931054ae.png">
|
||||
|
||||
## [Detection](detect.md)
|
||||
|
||||
Detection is the primary task supported by YOLOv8. It involves detecting objects in an image or video frame and drawing
|
||||
bounding boxes around them. The detected objects are classified into different categories based on their features.
|
||||
YOLOv8 can detect multiple objects in a single image or video frame with high accuracy and speed.
|
||||
|
||||
[Detection Examples](detect.md){ .md-button .md-button--primary}
|
||||
|
||||
## [Segmentation](segment.md)
|
||||
|
||||
Segmentation is a task that involves segmenting an image into different regions based on the content of the image. Each
|
||||
region is assigned a label based on its content. This task is useful in applications such as scene understanding and
|
||||
medical imaging. YOLOv8 uses a variant of the U-Net architecture to perform segmentation.
|
||||
|
||||
[Segmentation Examples](segment.md){ .md-button .md-button--primary}
|
||||
|
||||
## [Classification](classify.md)
|
||||
|
||||
Classification is a task that involves classifying an image into different categories. YOLOv8 can be used to classify
|
||||
images based on their content. It uses a variant of the EfficientNet architecture to perform classification.
|
||||
|
||||
[Classification Examples](classify.md){ .md-button .md-button--primary}
|
||||
|
||||
<!--
|
||||
## [Keypoints](keypoints.md)
|
||||
|
||||
Keypoints detection is a task that involves detecting specific points in an image or video frame. These points are
|
||||
referred to as keypoints and are used to track movement or pose estimation. YOLOv8 can detect keypoints in an image or
|
||||
video frame with high accuracy and speed.
|
||||
|
||||
[Keypoints Examples](keypoints.md){ .md-button .md-button--primary}
|
||||
-->
|
||||
|
||||
## Conclusion
|
||||
|
||||
YOLOv8 supports multiple tasks, including detection, segmentation, classification, and keypoints detection. Each of
|
||||
these tasks has a different objective and use case. By understanding their differences, you can choose
|
||||
the appropriate task for your computer vision application.
|
141
docs/tasks/keypoints.md
Normal file
@ -0,0 +1,141 @@
|
||||
Keypoint estimation is a task that involves identifying the location of specific points in an image, usually referred
|
||||
to as keypoints. The keypoints can represent various parts of the object such as joints, landmarks, or other distinctive
|
||||
features. The locations of the keypoints are usually represented as a set of 2D `[x, y]` or 3D `[x, y, visible]`
|
||||
coordinates.
|
||||
|
||||
<img width="1024" src="https://user-images.githubusercontent.com/26833433/212094133-6bb8c21c-3d47-41df-a512-81c5931054ae.png">
|
||||
|
||||
The output of a keypoint detector is a set of points that represent the keypoints on the object in the image, usually
|
||||
along with the confidence scores for each point. Keypoint estimation is a good choice when you need to identify specific
|
||||
parts of an object in a scene, and their location in relation to each other.
|
||||
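
As an illustration of the `[x, y, visible]` layout described above, the short sketch below filters occluded points out of a hypothetical keypoint array. The coordinate values and part names are invented purely for the example:

```python
import numpy as np

# Hypothetical output for one detected object: rows of [x, y, visible]
keypoints = np.array([
    [120.0, 85.5, 1.0],   # e.g. left shoulder
    [180.3, 90.1, 1.0],   # e.g. right shoulder
    [150.0, 200.0, 0.0],  # occluded point (visible flag = 0)
])

visible = keypoints[keypoints[:, 2] > 0]  # keep only visible keypoints
xy = visible[:, :2]                       # their 2D [x, y] coordinates
print(xy)
```
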
|
||||
!!! tip "Tip"
|
||||
|
||||
YOLOv8 _keypoints_ models use the `-kpts` suffix, i.e. `yolov8n-kpts.pt`. These models are trained on the COCO dataset and are suitable for a variety of keypoint estimation tasks.
|
||||
|
||||
[Models](https://github.com/ultralytics/ultralytics/tree/main/ultralytics/models/v8){ .md-button .md-button--primary}
|
||||
|
||||
## Train TODO
|
||||
|
||||
Train an OpenPose model on a custom dataset of keypoints using the OpenPose framework. For more information on how to
|
||||
do this, see the OpenPose Training page.
|
||||
|
||||
!!! example ""
|
||||
|
||||
=== "Python"
|
||||
|
||||
```python
|
||||
from ultralytics import YOLO
|
||||
|
||||
# Load a model
|
||||
model = YOLO("yolov8n.yaml") # build a new model from scratch
|
||||
model = YOLO("yolov8n.pt") # load a pretrained model (recommended for training)
|
||||
|
||||
# Train the model
|
||||
model.train(data="coco128.yaml", epochs=100, imgsz=640)
|
||||
```
|
||||
=== "CLI"
|
||||
|
||||
```bash
|
||||
yolo detect train data=coco128.yaml model=yolov8n.pt epochs=100 imgsz=640
|
||||
```
|
||||
|
||||
## Val TODO
|
||||
|
||||
Validate trained YOLOv8n model accuracy on the COCO128 dataset. No arguments need to be passed as the `model` retains its
|
||||
training `data` and arguments as model attributes.
|
||||
|
||||
!!! example ""
|
||||
|
||||
=== "Python"
|
||||
|
||||
```python
|
||||
from ultralytics import YOLO
|
||||
|
||||
# Load a model
|
||||
model = YOLO("yolov8n.pt") # load an official model
|
||||
model = YOLO("path/to/best.pt") # load a custom model
|
||||
|
||||
# Validate the model
|
||||
metrics = model.val() # no arguments needed, dataset and settings remembered
|
||||
metrics.box.map # map50-95
|
||||
metrics.box.map50 # map50
|
||||
metrics.box.map75 # map75
|
||||
metrics.box.maps  # a list containing map50-95 for each category
|
||||
```
|
||||
=== "CLI"
|
||||
|
||||
```bash
|
||||
yolo detect val model=yolov8n.pt # val official model
|
||||
yolo detect val model=path/to/best.pt # val custom model
|
||||
```
|
||||
|
||||
## Predict TODO
|
||||
|
||||
Use a trained YOLOv8n model to run predictions on images.
|
||||
|
||||
!!! example ""
|
||||
|
||||
=== "Python"
|
||||
|
||||
```python
|
||||
from ultralytics import YOLO
|
||||
|
||||
# Load a model
|
||||
model = YOLO("yolov8n.pt") # load an official model
|
||||
model = YOLO("path/to/best.pt") # load a custom model
|
||||
|
||||
# Predict with the model
|
||||
results = model("https://ultralytics.com/images/bus.jpg") # predict on an image
|
||||
```
|
||||
=== "CLI"
|
||||
|
||||
```bash
|
||||
yolo detect predict model=yolov8n.pt source="https://ultralytics.com/images/bus.jpg" # predict with official model
|
||||
yolo detect predict model=path/to/best.pt source="https://ultralytics.com/images/bus.jpg" # predict with custom model
|
||||
```
|
||||
|
||||
Read full details of the `predict` mode on our [Predict](https://docs.ultralytics.com/predict/) page.
|
||||
|
||||
## Export TODO
|
||||
|
||||
Export a YOLOv8n model to a different format like ONNX, CoreML, etc.
|
||||
|
||||
!!! example ""
|
||||
|
||||
=== "Python"
|
||||
|
||||
```python
|
||||
from ultralytics import YOLO
|
||||
|
||||
# Load a model
|
||||
model = YOLO("yolov8n.pt") # load an official model
|
||||
model = YOLO("path/to/best.pt") # load a custom trained
|
||||
|
||||
# Export the model
|
||||
model.export(format="onnx")
|
||||
```
|
||||
=== "CLI"
|
||||
|
||||
```bash
|
||||
yolo export model=yolov8n.pt format=onnx # export official model
|
||||
yolo export model=path/to/best.pt format=onnx # export custom trained model
|
||||
```
|
||||
|
||||
Available YOLOv8-pose export formats are in the table below. You can predict or validate directly on exported models,
|
||||
i.e. `yolo predict model=yolov8n-pose.onnx`.
|
||||
|
||||
| Format | `format` Argument | Model | Metadata |
|
||||
|--------------------------------------------------------------------|-------------------|---------------------------|----------|
|
||||
| [PyTorch](https://pytorch.org/) | - | `yolov8n.pt` | ✅ |
|
||||
| [TorchScript](https://pytorch.org/docs/stable/jit.html) | `torchscript` | `yolov8n.torchscript` | ✅ |
|
||||
| [ONNX](https://onnx.ai/) | `onnx` | `yolov8n.onnx` | ✅ |
|
||||
| [OpenVINO](https://docs.openvino.ai/latest/index.html) | `openvino` | `yolov8n_openvino_model/` | ✅ |
|
||||
| [TensorRT](https://developer.nvidia.com/tensorrt) | `engine` | `yolov8n.engine` | ✅ |
|
||||
| [CoreML](https://github.com/apple/coremltools) | `coreml` | `yolov8n.mlmodel` | ✅ |
|
||||
| [TF SavedModel](https://www.tensorflow.org/guide/saved_model) | `saved_model` | `yolov8n_saved_model/` | ✅ |
|
||||
| [TF GraphDef](https://www.tensorflow.org/api_docs/python/tf/Graph) | `pb` | `yolov8n.pb` | ❌ |
|
||||
| [TF Lite](https://www.tensorflow.org/lite) | `tflite` | `yolov8n.tflite` | ✅ |
|
||||
| [TF Edge TPU](https://coral.ai/docs/edgetpu/models-intro/) | `edgetpu` | `yolov8n_edgetpu.tflite` | ✅ |
|
||||
| [TF.js](https://www.tensorflow.org/js) | `tfjs` | `yolov8n_web_model/` | ✅ |
|
||||
| [PaddlePaddle](https://github.com/PaddlePaddle) | `paddle` | `yolov8n_paddle_model/` | ✅ |
|
@ -16,7 +16,7 @@ segmentation is useful when you need to know not only where objects are in an im
|
||||
## Train
|
||||
|
||||
Train YOLOv8n-seg on the COCO128-seg dataset for 100 epochs at image size 640. For a full list of available
|
||||
arguments see the [Configuration](../cfg.md) page.
|
||||
arguments see the [Configuration](../usage/cfg.md) page.
|
||||
|
||||
!!! example ""
|
||||
|
||||
@ -124,21 +124,22 @@ Export a YOLOv8n-seg model to a different format like ONNX, CoreML, etc.
|
||||
yolo export model=path/to/best.pt format=onnx # export custom trained model
|
||||
```
|
||||
|
||||
Available YOLOv8-seg export formats include:
|
||||
Available YOLOv8-seg export formats are in the table below. You can predict or validate directly on exported models,
|
||||
i.e. `yolo predict model=yolov8n-seg.onnx`.
|
||||
|
||||
| Format | `format=` | Model | Metadata |
|
||||
|--------------------------------------------------------------------|---------------|-------------------------------|----------|
|
||||
| [PyTorch](https://pytorch.org/) | - | `yolov8n-seg.pt` | ✅ |
|
||||
| [TorchScript](https://pytorch.org/docs/stable/jit.html) | `torchscript` | `yolov8n-seg.torchscript` | ✅ |
|
||||
| [ONNX](https://onnx.ai/) | `onnx` | `yolov8n-seg.onnx` | ✅ |
|
||||
| [OpenVINO](https://docs.openvino.ai/latest/index.html) | `openvino` | `yolov8n-seg_openvino_model/` | ✅ |
|
||||
| [TensorRT](https://developer.nvidia.com/tensorrt) | `engine` | `yolov8n-seg.engine` | ✅ |
|
||||
| [CoreML](https://github.com/apple/coremltools) | `coreml` | `yolov8n-seg.mlmodel` | ✅ |
|
||||
| [TF SavedModel](https://www.tensorflow.org/guide/saved_model) | `saved_model` | `yolov8n-seg_saved_model/` | ✅ |
|
||||
| [TF GraphDef](https://www.tensorflow.org/api_docs/python/tf/Graph) | `pb` | `yolov8n-seg.pb` | ❌ |
|
||||
| [TF Lite](https://www.tensorflow.org/lite) | `tflite` | `yolov8n-seg.tflite` | ✅ |
|
||||
| [TF Edge TPU](https://coral.ai/docs/edgetpu/models-intro/) | `edgetpu` | `yolov8n-seg_edgetpu.tflite` | ✅ |
|
||||
| [TF.js](https://www.tensorflow.org/js) | `tfjs` | `yolov8n-seg_web_model/` | ✅ |
|
||||
| [PaddlePaddle](https://github.com/PaddlePaddle) | `paddle` | `yolov8n-seg_paddle_model/` | ✅ |
|
||||
| Format | `format` Argument | Model | Metadata |
|
||||
|--------------------------------------------------------------------|-------------------|-------------------------------|----------|
|
||||
| [PyTorch](https://pytorch.org/) | - | `yolov8n-seg.pt` | ✅ |
|
||||
| [TorchScript](https://pytorch.org/docs/stable/jit.html) | `torchscript` | `yolov8n-seg.torchscript` | ✅ |
|
||||
| [ONNX](https://onnx.ai/) | `onnx` | `yolov8n-seg.onnx` | ✅ |
|
||||
| [OpenVINO](https://docs.openvino.ai/latest/index.html) | `openvino` | `yolov8n-seg_openvino_model/` | ✅ |
|
||||
| [TensorRT](https://developer.nvidia.com/tensorrt) | `engine` | `yolov8n-seg.engine` | ✅ |
|
||||
| [CoreML](https://github.com/apple/coremltools) | `coreml` | `yolov8n-seg.mlmodel` | ✅ |
|
||||
| [TF SavedModel](https://www.tensorflow.org/guide/saved_model) | `saved_model` | `yolov8n-seg_saved_model/` | ✅ |
|
||||
| [TF GraphDef](https://www.tensorflow.org/api_docs/python/tf/Graph) | `pb` | `yolov8n-seg.pb` | ❌ |
|
||||
| [TF Lite](https://www.tensorflow.org/lite) | `tflite` | `yolov8n-seg.tflite` | ✅ |
|
||||
| [TF Edge TPU](https://coral.ai/docs/edgetpu/models-intro/) | `edgetpu` | `yolov8n-seg_edgetpu.tflite` | ✅ |
|
||||
| [TF.js](https://www.tensorflow.org/js) | `tfjs` | `yolov8n-seg_web_model/` | ✅ |
|
||||
| [PaddlePaddle](https://github.com/PaddlePaddle) | `paddle` | `yolov8n-seg_paddle_model/` | ✅ |
|
||||
|
||||
|
250
docs/usage/cfg.md
Normal file
@ -0,0 +1,250 @@
|
||||
YOLO settings and hyperparameters play a critical role in the model's performance, speed, and accuracy. These settings
|
||||
and hyperparameters can affect the model's behavior at various stages of the model development process, including
|
||||
training, validation, and prediction.
|
||||
|
||||
YOLOv8 `yolo` CLI commands use the following syntax:
|
||||
|
||||
!!! example ""
|
||||
|
||||
=== "CLI"
|
||||
|
||||
```bash
|
||||
yolo TASK MODE ARGS
|
||||
```
|
||||
|
||||
Where:
|
||||
|
||||
- `TASK` (optional) is one of `[detect, segment, classify]`. If it is not passed explicitly, YOLOv8 will try to guess
|
||||
the `TASK` from the model type.
|
||||
- `MODE` (required) is one of `[train, val, predict, export]`
|
||||
- `ARGS` (optional) are any number of custom `arg=value` pairs like `imgsz=320` that override defaults.
|
||||
For a full list of available `ARGS` see the [Configuration](cfg.md) page and `defaults.yaml`
|
||||
GitHub [source](https://github.com/ultralytics/ultralytics/blob/main/ultralytics/yolo/cfg/default.yaml).
|
||||
|
||||
#### Tasks
|
||||
|
||||
YOLO models can be used for a variety of tasks, including detection, segmentation, and classification. These tasks
|
||||
differ in the type of output they produce and the specific problem they are designed to solve.
|
||||
|
||||
- **Detect**: Detection tasks involve identifying and localizing objects or regions of interest in an image or video.
|
||||
YOLO models can be used for object detection tasks by predicting the bounding boxes and class labels of objects in an
|
||||
image.
|
||||
- **Segment**: Segmentation tasks involve dividing an image or video into regions or pixels that correspond to
|
||||
different objects or classes. YOLO models can be used for image segmentation tasks by predicting a mask or label for
|
||||
each pixel in an image.
|
||||
- **Classify**: Classification tasks involve assigning a class label to an input, such as an image or text. YOLO
|
||||
models can be used for image classification tasks by predicting the class label of an input image.
|
||||
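
In the Python API the task is normally inferred from the model file itself, so selecting a task is simply a matter of loading the matching checkpoint. A minimal sketch, assuming the standard pretrained weights are available for download:

```python
from ultralytics import YOLO

detect_model = YOLO("yolov8n.pt")        # detection
segment_model = YOLO("yolov8n-seg.pt")   # segmentation
classify_model = YOLO("yolov8n-cls.pt")  # classification
```
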
|
||||
#### Modes
|
||||
|
||||
YOLO models can be used in different modes depending on the specific problem you are trying to solve. These modes
|
||||
include train, val, and predict.
|
||||
|
||||
- **Train**: The train mode is used to train the model on a dataset. This mode is typically used during the development
|
||||
and testing phase of a model.
|
||||
- **Val**: The val mode is used to evaluate the model's performance on a validation dataset. This mode is typically used
|
||||
to tune the model's hyperparameters and detect overfitting.
|
||||
- **Predict**: The predict mode is used to make predictions with the model on new data. This mode is typically used in
|
||||
production or when deploying the model to users.
|
||||
|
||||
| Key | Value | Description |
|
||||
|----------|------------|-----------------------------------------------------------------------------------------------|
|
||||
| `task` | `'detect'` | inference task, i.e. detect, segment, or classify |
|
||||
| `mode` | `'train'` | YOLO mode, i.e. train, val, predict, or export |
|
||||
| `resume` | `False` | resume training from last checkpoint or custom checkpoint if passed as resume=path/to/best.pt |
|
||||
| `model` | `None` | path to model file, i.e. yolov8n.pt, yolov8n.yaml |
|
||||
| `data` | `None` | path to data file, i.e. coco128.yaml |
|
||||
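
In the Python API each mode maps to a method on the loaded model. A minimal sketch of the train, val and predict modes, assuming `coco128.yaml` and the pretrained `yolov8n.pt` weights are available:

```python
from ultralytics import YOLO

model = YOLO("yolov8n.pt")                  # model: path to a model file
model.train(data="coco128.yaml", epochs=3)  # mode: train
metrics = model.val()                       # mode: val, reuses the training dataset
results = model("https://ultralytics.com/images/bus.jpg")  # mode: predict
```
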
|
||||
### Training
|
||||
|
||||
Training settings for YOLO models refer to the various hyperparameters and configurations used to train the model on a
|
||||
dataset. These settings can affect the model's performance, speed, and accuracy. Some common YOLO training settings
|
||||
include the batch size, learning rate, momentum, and weight decay. Other factors that may affect the training process
|
||||
include the choice of optimizer, the choice of loss function, and the size and composition of the training dataset. It
|
||||
is important to carefully tune and experiment with these settings to achieve the best possible performance for a given
|
||||
task.
|
||||
|
||||
| Key | Value | Description |
|
||||
|-------------------|----------|-----------------------------------------------------------------------------|
|
||||
| `model` | `None` | path to model file, i.e. yolov8n.pt, yolov8n.yaml |
|
||||
| `data` | `None` | path to data file, i.e. coco128.yaml |
|
||||
| `epochs` | `100` | number of epochs to train for |
|
||||
| `patience` | `50` | epochs to wait for no observable improvement for early stopping of training |
|
||||
| `batch` | `16` | number of images per batch (-1 for AutoBatch) |
|
||||
| `imgsz` | `640` | size of input images as integer or w,h |
|
||||
| `save` | `True` | save train checkpoints and predict results |
|
||||
| `save_period` | `-1` | save checkpoint every x epochs (disabled if < 1) |
|
||||
| `cache` | `False` | True/ram, disk or False. Use cache for data loading |
|
||||
| `device` | `None` | device to run on, i.e. cuda device=0 or device=0,1,2,3 or device=cpu |
|
||||
| `workers` | `8` | number of worker threads for data loading (per RANK if DDP) |
|
||||
| `project` | `None` | project name |
|
||||
| `name` | `None` | experiment name |
|
||||
| `exist_ok` | `False` | whether to overwrite existing experiment |
|
||||
| `pretrained` | `False` | whether to use a pretrained model |
|
||||
| `optimizer` | `'SGD'` | optimizer to use, choices=['SGD', 'Adam', 'AdamW', 'RMSProp'] |
|
||||
| `verbose` | `False` | whether to print verbose output |
|
||||
| `seed` | `0` | random seed for reproducibility |
|
||||
| `deterministic` | `True` | whether to enable deterministic mode |
|
||||
| `single_cls` | `False` | train multi-class data as single-class |
|
||||
| `image_weights` | `False` | use weighted image selection for training |
|
||||
| `rect` | `False` | support rectangular training |
|
||||
| `cos_lr` | `False` | use cosine learning rate scheduler |
|
||||
| `close_mosaic` | `10` | disable mosaic augmentation for final 10 epochs |
|
||||
| `resume` | `False` | resume training from last checkpoint |
|
||||
| `lr0` | `0.01` | initial learning rate (i.e. SGD=1E-2, Adam=1E-3) |
|
||||
| `lrf` | `0.01` | final learning rate (lr0 * lrf) |
|
||||
| `momentum` | `0.937` | SGD momentum/Adam beta1 |
|
||||
| `weight_decay` | `0.0005` | optimizer weight decay 5e-4 |
|
||||
| `warmup_epochs` | `3.0` | warmup epochs (fractions ok) |
|
||||
| `warmup_momentum` | `0.8` | warmup initial momentum |
|
||||
| `warmup_bias_lr` | `0.1` | warmup initial bias lr |
|
||||
| `box` | `7.5` | box loss gain |
|
||||
| `cls` | `0.5` | cls loss gain (scale with pixels) |
|
||||
| `dfl` | `1.5` | dfl loss gain |
|
||||
| `fl_gamma` | `0.0` | focal loss gamma (EfficientDet default gamma=1.5) |
|
||||
| `label_smoothing` | `0.0` | label smoothing (fraction) |
|
||||
| `nbs` | `64` | nominal batch size |
|
||||
| `overlap_mask` | `True` | masks should overlap during training (segment train only) |
|
||||
| `mask_ratio` | `4` | mask downsample ratio (segment train only) |
|
||||
| `dropout` | `0.0` | use dropout regularization (classify train only) |
|
||||
| `val` | `True` | validate/test during training |
|
||||
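
The training keys above can be passed either as `arg=value` pairs on the CLI or as keyword arguments to `model.train()` in Python. A sketch using a handful of the settings from the table (values simply mirror the defaults):

```python
from ultralytics import YOLO

model = YOLO("yolov8n.pt")
model.train(
    data="coco128.yaml",
    epochs=100,
    imgsz=640,
    batch=16,
    optimizer="SGD",       # choices: SGD, Adam, AdamW, RMSProp
    lr0=0.01,              # initial learning rate
    weight_decay=0.0005,
    device=0,              # or device="cpu"
)
```
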
|
||||
### Prediction
|
||||
|
||||
Prediction settings for YOLO models refer to the various hyperparameters and configurations used to make predictions
|
||||
with the model on new data. These settings can affect the model's performance, speed, and accuracy. Some common YOLO
|
||||
prediction settings include the confidence threshold, non-maximum suppression (NMS) threshold, and the number of classes
|
||||
to consider. Other factors that may affect the prediction process include the size and format of the input data, the
|
||||
presence of additional features such as masks or multiple labels per box, and the specific task the model is being used
|
||||
for. It is important to carefully tune and experiment with these settings to achieve the best possible performance for a
|
||||
given task.
|
||||
|
||||
| Key | Value | Description |
|
||||
|------------------|------------------------|----------------------------------------------------------|
|
||||
| `source` | `'ultralytics/assets'` | source directory for images or videos |
|
||||
| `conf` | `0.25` | object confidence threshold for detection |
|
||||
| `iou` | `0.7` | intersection over union (IoU) threshold for NMS |
|
||||
| `half` | `False` | use half precision (FP16) |
|
||||
| `device` | `None` | device to run on, i.e. cuda device=0/1/2/3 or device=cpu |
|
||||
| `show` | `False` | show results if possible |
|
||||
| `save` | `False` | save images with results |
|
||||
| `save_txt` | `False` | save results as .txt file |
|
||||
| `save_conf` | `False` | save results with confidence scores |
|
||||
| `save_crop` | `False` | save cropped images with results |
|
||||
| `hide_labels` | `False` | hide labels |
|
||||
| `hide_conf` | `False` | hide confidence scores |
|
||||
| `max_det` | `300` | maximum number of detections per image |
|
||||
| `vid_stride` | `False` | video frame-rate stride |
|
||||
| `line_thickness` | `3` | bounding box thickness (pixels) |
|
||||
| `visualize` | `False` | visualize model features |
|
||||
| `augment` | `False` | apply image augmentation to prediction sources |
|
||||
| `agnostic_nms` | `False` | class-agnostic NMS |
|
||||
| `retina_masks` | `False` | use high-resolution segmentation masks |
|
||||
| `classes` | `None` | filter results by class, i.e. class=0, or class=[0,2,3] |
|
||||
| `box` | `True` | show boxes in segmentation predictions |
|
||||
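
These prediction settings can likewise be supplied as keyword arguments to `model.predict()`. A short sketch; the thresholds repeat the defaults from the table, while `classes` is an illustrative filter:

```python
from ultralytics import YOLO

model = YOLO("yolov8n.pt")
results = model.predict(
    source="https://ultralytics.com/images/bus.jpg",
    conf=0.25,        # confidence threshold
    iou=0.7,          # NMS IoU threshold
    max_det=300,      # maximum detections per image
    save=True,        # save annotated results
    classes=[0, 2],   # keep only these class indices
)
```
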
|
||||
### Validation
|
||||
|
||||
Validation settings for YOLO models refer to the various hyperparameters and configurations used to
|
||||
evaluate the model's performance on a validation dataset. These settings can affect the model's performance, speed, and
|
||||
accuracy. Some common YOLO validation settings include the batch size, the frequency with which validation is performed
|
||||
during training, and the metrics used to evaluate the model's performance. Other factors that may affect the validation
|
||||
process include the size and composition of the validation dataset and the specific task the model is being used for. It
|
||||
is important to carefully tune and experiment with these settings to ensure that the model is performing well on the
|
||||
validation dataset and to detect and prevent overfitting.
|
||||
|
||||
| Key | Value | Description |
|
||||
|---------------|---------|--------------------------------------------------------------------|
|
||||
| `save_json` | `False` | save results to JSON file |
|
||||
| `save_hybrid` | `False` | save hybrid version of labels (labels + additional predictions) |
|
||||
| `conf` | `0.001` | object confidence threshold for detection |
|
||||
| `iou` | `0.6` | intersection over union (IoU) threshold for NMS |
|
||||
| `max_det` | `300` | maximum number of detections per image |
|
||||
| `half` | `True` | use half precision (FP16) |
|
||||
| `device` | `None` | device to run on, i.e. cuda device=0/1/2/3 or device=cpu |
|
||||
| `dnn` | `False` | use OpenCV DNN for ONNX inference |
|
||||
| `plots` | `False` | show plots during training |
|
||||
| `rect` | `False` | support rectangular evaluation |
|
||||
| `split` | `val` | dataset split to use for validation, i.e. 'val', 'test' or 'train' |
|
||||
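
A minimal sketch of overriding a few of these validation settings through `model.val()`; the dataset path is an optional override, since by default the model remembers its training dataset:

```python
from ultralytics import YOLO

model = YOLO("yolov8n.pt")
metrics = model.val(
    data="coco128.yaml",  # optional override of the remembered dataset
    conf=0.001,
    iou=0.6,
    half=True,            # FP16 inference
    split="val",          # evaluate on the 'val' split
)
print(metrics.box.map)    # mAP50-95
```
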
|
||||
### Export
|
||||
|
||||
Export settings for YOLO models refer to the various configurations and options used to save or
|
||||
export the model for use in other environments or platforms. These settings can affect the model's performance, size,
|
||||
and compatibility with different systems. Some common YOLO export settings include the format of the exported model
|
||||
file (e.g. ONNX, TensorFlow SavedModel), the device on which the model will be run (e.g. CPU, GPU), and the presence of
|
||||
additional features such as masks or multiple labels per box. Other factors that may affect the export process include
|
||||
the specific task the model is being used for and the requirements or constraints of the target environment or platform.
|
||||
It is important to carefully consider and configure these settings to ensure that the exported model is optimized for
|
||||
the intended use case and can be used effectively in the target environment.
|
||||
|
||||
| Key | Value | Description |
|
||||
|-------------|-----------------|------------------------------------------------------|
|
||||
| `format` | `'torchscript'` | format to export to |
|
||||
| `imgsz` | `640` | image size as scalar or (h, w) list, i.e. (640, 480) |
|
||||
| `keras` | `False` | use Keras for TF SavedModel export |
|
||||
| `optimize` | `False` | TorchScript: optimize for mobile |
|
||||
| `half` | `False` | FP16 quantization |
|
||||
| `int8` | `False` | INT8 quantization |
|
||||
| `dynamic` | `False` | ONNX/TF/TensorRT: dynamic axes |
|
||||
| `simplify` | `False` | ONNX: simplify model |
|
||||
| `opset` | `None` | ONNX: opset version (optional, defaults to latest) |
|
||||
| `workspace` | `4` | TensorRT: workspace size (GB) |
|
||||
| `nms` | `False` | CoreML: add NMS |
|
||||
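
Export settings are passed the same way through `model.export()`. A sketch, with `opset=12` chosen purely for illustration:

```python
from ultralytics import YOLO

model = YOLO("yolov8n.pt")
model.export(
    format="onnx",
    imgsz=640,
    dynamic=True,    # dynamic axes
    simplify=True,   # run the ONNX simplifier
    opset=12,        # illustrative opset; defaults to latest when omitted
    half=False,
)
```
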
|
||||
### Augmentation
|
||||
|
||||
Augmentation settings for YOLO models refer to the various transformations and modifications
|
||||
applied to the training data to increase the diversity and size of the dataset. These settings can affect the model's
|
||||
performance, speed, and accuracy. Some common YOLO augmentation settings include the type and intensity of the
|
||||
transformations applied (e.g. random flips, rotations, cropping, color changes), the probability with which each
|
||||
transformation is applied, and the presence of additional features such as masks or multiple labels per box. Other
|
||||
factors that may affect the augmentation process include the size and composition of the original dataset and the
|
||||
specific task the model is being used for. It is important to carefully tune and experiment with these settings to
|
||||
ensure that the augmented dataset is diverse and representative enough to train a high-performing model.
|
||||
|
||||
| Key | Value | Description |
|
||||
|---------------|-------|-------------------------------------------------|
|
||||
| `hsv_h` | 0.015 | image HSV-Hue augmentation (fraction) |
|
||||
| `hsv_s` | 0.7 | image HSV-Saturation augmentation (fraction) |
|
||||
| `hsv_v` | 0.4 | image HSV-Value augmentation (fraction) |
|
||||
| `degrees` | 0.0 | image rotation (+/- deg) |
|
||||
| `translate` | 0.1 | image translation (+/- fraction) |
|
||||
| `scale` | 0.5 | image scale (+/- gain) |
|
||||
| `shear` | 0.0 | image shear (+/- deg) |
|
||||
| `perspective` | 0.0 | image perspective (+/- fraction), range 0-0.001 |
|
||||
| `flipud` | 0.0 | image flip up-down (probability) |
|
||||
| `fliplr` | 0.5 | image flip left-right (probability) |
|
||||
| `mosaic` | 1.0 | image mosaic (probability) |
|
||||
| `mixup` | 0.0 | image mixup (probability) |
|
||||
| `copy_paste` | 0.0 | segment copy-paste (probability) |
|
||||
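
Augmentation values are hyperparameters of the train mode, so they are passed alongside the other training arguments. A sketch with a few non-default values chosen only to show the mechanism:

```python
from ultralytics import YOLO

model = YOLO("yolov8n.pt")
model.train(
    data="coco128.yaml",
    epochs=100,
    hsv_h=0.015,   # hue jitter (fraction)
    fliplr=0.5,    # horizontal flip probability
    mosaic=1.0,    # mosaic probability
    mixup=0.1,     # mixup probability (default is 0.0)
    degrees=10.0,  # rotation range in degrees (default is 0.0)
)
```
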
|
||||
### Logging, checkpoints, plotting and file management
|
||||
|
||||
Logging, checkpoints, plotting, and file management are important considerations when training a YOLO model.
|
||||
|
||||
- Logging: It is often helpful to log various metrics and statistics during training to track the model's progress and
|
||||
diagnose any issues that may arise. This can be done using a logging library such as TensorBoard or by writing log
|
||||
messages to a file.
|
||||
- Checkpoints: It is a good practice to save checkpoints of the model at regular intervals during training. This allows
|
||||
you to resume training from a previous point if the training process is interrupted or if you want to experiment with
|
||||
different training configurations.
|
||||
- Plotting: Visualizing the model's performance and training progress can be helpful for understanding how the model is
|
||||
behaving and identifying potential issues. This can be done using a plotting library such as matplotlib or by
|
||||
generating plots using a logging library such as TensorBoard.
|
||||
- File management: Managing the various files generated during the training process, such as model checkpoints, log
|
||||
files, and plots, can be challenging. It is important to have a clear and organized file structure to keep track of
|
||||
these files and make it easy to access and analyze them as needed.
|
||||
|
||||
Effective logging, checkpointing, plotting, and file management can help you keep track of the model's progress and make
|
||||
it easier to debug and optimize the training process.
|
||||
|
||||
| Key | Value | Description |
|
||||
|------------|----------|------------------------------------------------------------------------------------------------|
|
||||
| `project` | `'runs'` | project name |
|
||||
| `name` | `'exp'` | experiment name. `exp` gets automatically incremented if not specified, i.e. `exp`, `exp2` ... |
|
||||
| `exist_ok` | `False` | whether to overwrite existing experiment |
|
||||
| `plots` | `False` | save plots during train/val |
|
||||
| `save` | `False` | save train checkpoints and predict results |
|
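
A short sketch of how the run-management keys above combine with training; `runs` and `exp` mirror the defaults, while `plots=True` asks for training plots to be saved:

```python
from ultralytics import YOLO

model = YOLO("yolov8n.pt")
model.train(
    data="coco128.yaml",
    epochs=10,
    project="runs",    # top-level results directory
    name="exp",        # run name; auto-incremented to exp2, exp3, ... if it already exists
    exist_ok=False,    # do not overwrite an existing run with the same name
    save=True,         # write checkpoints
    plots=True,        # save training/validation plots
)
```
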
@ -9,7 +9,7 @@ custom model and dataloader by just overriding these functions:
|
||||
|
||||
* `get_model(cfg, weights)` - The function that builds the model to be trained
|
||||
* `get_dataloader()` - The function that builds the dataloader
|
||||
More details and source code can be found in [`BaseTrainer` Reference](reference/base_trainer.md)
|
||||
More details and source code can be found in [`BaseTrainer` Reference](../reference/base_trainer.md)
|
||||
|
||||
## DetectionTrainer
|
||||
|
@ -127,7 +127,7 @@ The simplest way of simply using YOLOv8 directly in a Python environment.
|
||||
|
||||
To learn more about using `YOLO` models, refer to the Model class Reference:
|
||||
|
||||
[Model reference](reference/model.md){ .md-button .md-button--primary}
|
||||
[Model reference](../reference/model.md){ .md-button .md-button--primary}
|
||||
|
||||
---
|
||||
|