ultralytics 8.0.53 DDP AMP and Edge TPU fixes (#1362)

Co-authored-by: Richard Aljaste <richardaljasteabramson@gmail.com>
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
Co-authored-by: Vuong Kha Sieu <75152429+hotfur@users.noreply.github.com>
Glenn Jocher
2023-03-12 02:08:13 +01:00
committed by GitHub
parent 177a68b39f
commit f921e1ac21
46 changed files with 1045 additions and 384 deletions

docs/usage/callbacks.md Normal file (+86)

@@ -0,0 +1,86 @@
## Callbacks
The Ultralytics framework supports callbacks as entry points at strategic stages of the train, val, export, and predict
modes. Each callback accepts a `Trainer`, `Validator`, or `Predictor` object, depending on the operation type. All
properties of these objects can be found in the Reference section of the docs.
## Examples
### Returning additional information with Prediction
In this example, we want to return the original frame with each result object. Here's how we can do that:
```python
from ultralytics import YOLO


def on_predict_batch_end(predictor):
    # Retrieve the original frames for this batch (results -> List[batch_size])
    _, _, im0s, _, _ = predictor.batch
    im0s = im0s if isinstance(im0s, list) else [im0s]
    # Pair each Results object with its original frame
    predictor.results = zip(predictor.results, im0s)


model = YOLO("yolov8n.pt")
model.add_callback("on_predict_batch_end", on_predict_batch_end)

for result, frame in model.predict():  # or model.track()
    pass
```
## All callbacks
Here are all supported callbacks.
### Trainer
- `on_pretrain_routine_start`
- `on_pretrain_routine_end`
- `on_train_start`
- `on_train_epoch_start`
- `on_train_batch_start`
- `optimizer_step`
- `on_before_zero_grad`
- `on_train_batch_end`
- `on_train_epoch_end`
- `on_fit_epoch_end`
- `on_model_save`
- `on_train_end`
- `on_params_update`
- `teardown`

### Validator

- `on_val_start`
- `on_val_batch_start`
- `on_val_batch_end`
- `on_val_end`

### Predictor

- `on_predict_start`
- `on_predict_batch_start`
- `on_predict_postprocess_end`
- `on_predict_batch_end`
- `on_predict_end`

### Exporter

- `on_export_start`
- `on_export_end`
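
As a quick illustration, here's a minimal sketch of attaching one of the `Trainer` callbacks (the `trainer.epoch`
attribute is an assumption; check the Reference section for the exact properties your version exposes):

```python
from ultralytics import YOLO


def on_train_epoch_end(trainer):
    # Called after every training epoch completes
    print(f"Finished epoch {trainer.epoch}")  # assumed attribute, verify in Reference


model = YOLO("yolov8n.pt")
model.add_callback("on_train_epoch_end", on_train_epoch_end)
model.train(data="coco128.yaml", epochs=3)
```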

docs/usage/cfg.md Normal file (+250)

@@ -0,0 +1,250 @@
YOLO settings and hyperparameters play a critical role in the model's performance, speed, and accuracy. These settings
and hyperparameters can affect the model's behavior at various stages of the model development process, including
training, validation, and prediction.
YOLOv8 'yolo' CLI commands use the following syntax:
!!! example ""
=== "CLI"
```bash
yolo TASK MODE ARGS
```
Where:
- `TASK` (optional) is one of `[detect, segment, classify]`. If it is not passed explicitly, YOLOv8 will try to guess
  the `TASK` from the model type.
- `MODE` (required) is one of `[train, val, predict, export]`
- `ARGS` (optional) are any number of custom `arg=value` pairs like `imgsz=320` that override defaults.
  For a full list of available `ARGS` see the [Configuration](cfg.md) page and the `default.yaml`
  GitHub [source](https://github.com/ultralytics/ultralytics/blob/main/ultralytics/yolo/cfg/default.yaml).
#### Tasks
YOLO models can be used for a variety of tasks, including detection, segmentation, and classification. These tasks
differ in the type of output they produce and the specific problem they are designed to solve.
- **Detect**: Detection tasks involve identifying and localizing objects or regions of interest in an image or video.
YOLO models can be used for object detection tasks by predicting the bounding boxes and class labels of objects in an
image.
- **Segment**: Segmentation tasks involve dividing an image or video into regions or pixels that correspond to
different objects or classes. YOLO models can be used for image segmentation tasks by predicting a mask or label for
each pixel in an image.
- **Classify**: Classification tasks involve assigning a class label to an input, such as an image or text. YOLO
models can be used for image classification tasks by predicting the class label of an input image.
#### Modes
YOLO models can be used in different modes depending on the specific problem you are trying to solve. These modes
include train, val, and predict.
- **Train**: The train mode is used to train the model on a dataset. This mode is typically used during the development
  and testing phase of a model.
- **Val**: The val mode is used to evaluate the model's performance on a validation dataset. This mode is typically
  used to tune the model's hyperparameters and detect overfitting.
- **Predict**: The predict mode is used to make predictions with the model on new data. This mode is typically used in
  production or when deploying the model to users.
| Key | Value | Description |
|----------|------------|-----------------------------------------------------------------------------------------------|
| `task` | `'detect'` | inference task, i.e. detect, segment, or classify |
| `mode` | `'train'` | YOLO mode, i.e. train, val, predict, or export |
| `resume` | `False` | resume training from last checkpoint or custom checkpoint if passed as resume=path/to/best.pt |
| `model` | `None` | path to model file, i.e. yolov8n.pt, yolov8n.yaml |
| `data` | `None` | path to data file, i.e. coco128.yaml |
### Training
Training settings for YOLO models refer to the various hyperparameters and configurations used to train the model on a
dataset. These settings can affect the model's performance, speed, and accuracy. Some common YOLO training settings
include the batch size, learning rate, momentum, and weight decay. Other factors that may affect the training process
include the choice of optimizer, the choice of loss function, and the size and composition of the training dataset. It
is important to carefully tune and experiment with these settings to achieve the best possible performance for a given
task.
| Key | Value | Description |
|-------------------|----------|-----------------------------------------------------------------------------|
| `model` | `None` | path to model file, i.e. yolov8n.pt, yolov8n.yaml |
| `data` | `None` | path to data file, i.e. coco128.yaml |
| `epochs` | `100` | number of epochs to train for |
| `patience` | `50` | epochs to wait for no observable improvement for early stopping of training |
| `batch` | `16` | number of images per batch (-1 for AutoBatch) |
| `imgsz` | `640` | size of input images as integer or w,h |
| `save` | `True` | save train checkpoints and predict results |
| `save_period` | `-1` | Save checkpoint every x epochs (disabled if < 1) |
| `cache` | `False` | True/ram, disk or False. Use cache for data loading |
| `device` | `None` | device to run on, i.e. cuda device=0 or device=0,1,2,3 or device=cpu |
| `workers` | `8` | number of worker threads for data loading (per RANK if DDP) |
| `project` | `None` | project name |
| `name` | `None` | experiment name |
| `exist_ok` | `False` | whether to overwrite existing experiment |
| `pretrained` | `False` | whether to use a pretrained model |
| `optimizer` | `'SGD'` | optimizer to use, choices=['SGD', 'Adam', 'AdamW', 'RMSProp'] |
| `verbose` | `False` | whether to print verbose output |
| `seed` | `0` | random seed for reproducibility |
| `deterministic` | `True` | whether to enable deterministic mode |
| `single_cls` | `False` | train multi-class data as single-class |
| `image_weights` | `False` | use weighted image selection for training |
| `rect` | `False` | support rectangular training |
| `cos_lr` | `False` | use cosine learning rate scheduler |
| `close_mosaic` | `10` | disable mosaic augmentation for final 10 epochs |
| `resume` | `False` | resume training from last checkpoint |
| `lr0` | `0.01` | initial learning rate (i.e. SGD=1E-2, Adam=1E-3) |
| `lrf` | `0.01` | final learning rate (lr0 * lrf) |
| `momentum` | `0.937` | SGD momentum/Adam beta1 |
| `weight_decay` | `0.0005` | optimizer weight decay 5e-4 |
| `warmup_epochs` | `3.0` | warmup epochs (fractions ok) |
| `warmup_momentum` | `0.8` | warmup initial momentum |
| `warmup_bias_lr` | `0.1` | warmup initial bias lr |
| `box` | `7.5` | box loss gain |
| `cls` | `0.5` | cls loss gain (scale with pixels) |
| `dfl` | `1.5` | dfl loss gain |
| `fl_gamma`        | `0.0`    | focal loss gamma (EfficientDet default gamma=1.5)                            |
| `label_smoothing` | `0.0` | label smoothing (fraction) |
| `nbs` | `64` | nominal batch size |
| `overlap_mask` | `True` | masks should overlap during training (segment train only) |
| `mask_ratio` | `4` | mask downsample ratio (segment train only) |
| `dropout` | `0.0` | use dropout regularization (classify train only) |
| `val` | `True` | validate/test during training |
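
These keys map directly onto `model.train()` arguments in the Python API; as a minimal sketch (using the bundled
`coco128.yaml` dataset), a few defaults can be overridden like this:

```python
from ultralytics import YOLO

model = YOLO("yolov8n.pt")
# Override a handful of training defaults; unspecified keys keep their default.yaml values
model.train(data="coco128.yaml", epochs=100, batch=16, imgsz=640, lr0=0.01)
```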
### Prediction
Prediction settings for YOLO models refer to the various hyperparameters and configurations used to make predictions
with the model on new data. These settings can affect the model's performance, speed, and accuracy. Some common YOLO
prediction settings include the confidence threshold, non-maximum suppression (NMS) threshold, and the number of classes
to consider. Other factors that may affect the prediction process include the size and format of the input data, the
presence of additional features such as masks or multiple labels per box, and the specific task the model is being used
for. It is important to carefully tune and experiment with these settings to achieve the best possible performance for a
given task.
| Key | Value | Description |
|------------------|------------------------|----------------------------------------------------------|
| `source` | `'ultralytics/assets'` | source directory for images or videos |
| `conf` | `0.25` | object confidence threshold for detection |
| `iou` | `0.7` | intersection over union (IoU) threshold for NMS |
| `half` | `False` | use half precision (FP16) |
| `device` | `None` | device to run on, i.e. cuda device=0/1/2/3 or device=cpu |
| `show` | `False` | show results if possible |
| `save` | `False` | save images with results |
| `save_txt` | `False` | save results as .txt file |
| `save_conf` | `False` | save results with confidence scores |
| `save_crop` | `False` | save cropped images with results |
| `hide_labels` | `False` | hide labels |
| `hide_conf` | `False` | hide confidence scores |
| `max_det` | `300` | maximum number of detections per image |
| `vid_stride`     | `1`                    | video frame-rate stride                                    |
| `line_thickness` | `3` | bounding box thickness (pixels) |
| `visualize` | `False` | visualize model features |
| `augment` | `False` | apply image augmentation to prediction sources |
| `agnostic_nms` | `False` | class-agnostic NMS |
| `retina_masks` | `False` | use high-resolution segmentation masks |
| `classes` | `None` | filter results by class, i.e. class=0, or class=[0,2,3] |
| `box` | `True` | Show boxes in segmentation predictions |
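
The same keys can be passed to `model.predict()`; for example, a short sketch that raises the confidence threshold and
caps detections per image:

```python
from ultralytics import YOLO

model = YOLO("yolov8n.pt")
# Keep only predictions above 0.5 confidence, at most 10 boxes per image
results = model.predict(source="https://ultralytics.com/images/bus.jpg", conf=0.5, max_det=10)
```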
### Validation
Validation settings for YOLO models refer to the various hyperparameters and configurations used to
evaluate the model's performance on a validation dataset. These settings can affect the model's performance, speed, and
accuracy. Some common YOLO validation settings include the batch size, the frequency with which validation is performed
during training, and the metrics used to evaluate the model's performance. Other factors that may affect the validation
process include the size and composition of the validation dataset and the specific task the model is being used for. It
is important to carefully tune and experiment with these settings to ensure that the model is performing well on the
validation dataset and to detect and prevent overfitting.
| Key | Value | Description |
|---------------|---------|--------------------------------------------------------------------|
| `save_json` | `False` | save results to JSON file |
| `save_hybrid` | `False` | save hybrid version of labels (labels + additional predictions) |
| `conf` | `0.001` | object confidence threshold for detection |
| `iou` | `0.6` | intersection over union (IoU) threshold for NMS |
| `max_det` | `300` | maximum number of detections per image |
| `half` | `True` | use half precision (FP16) |
| `device` | `None` | device to run on, i.e. cuda device=0/1/2/3 or device=cpu |
| `dnn` | `False` | use OpenCV DNN for ONNX inference |
| `plots` | `False` | show plots during training |
| `rect` | `False` | support rectangular evaluation |
| `split`       | `'val'` | dataset split to use for validation, i.e. 'val', 'test' or 'train' |
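
For instance, a minimal sketch of validating with a stricter NMS IoU threshold via `model.val()`:

```python
from ultralytics import YOLO

model = YOLO("yolov8n.pt")
# Evaluate with a custom IoU threshold; the metrics object (mAP, etc.) is returned
metrics = model.val(data="coco128.yaml", iou=0.65)
```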
### Export
Export settings for YOLO models refer to the various configurations and options used to save or
export the model for use in other environments or platforms. These settings can affect the model's performance, size,
and compatibility with different systems. Some common YOLO export settings include the format of the exported model
file (e.g. ONNX, TensorFlow SavedModel), the device on which the model will be run (e.g. CPU, GPU), and the presence of
additional features such as masks or multiple labels per box. Other factors that may affect the export process include
the specific task the model is being used for and the requirements or constraints of the target environment or platform.
It is important to carefully consider and configure these settings to ensure that the exported model is optimized for
the intended use case and can be used effectively in the target environment.
| Key | Value | Description |
|-------------|-----------------|------------------------------------------------------|
| `format` | `'torchscript'` | format to export to |
| `imgsz` | `640` | image size as scalar or (h, w) list, i.e. (640, 480) |
| `keras` | `False` | use Keras for TF SavedModel export |
| `optimize` | `False` | TorchScript: optimize for mobile |
| `half` | `False` | FP16 quantization |
| `int8` | `False` | INT8 quantization |
| `dynamic` | `False` | ONNX/TF/TensorRT: dynamic axes |
| `simplify` | `False` | ONNX: simplify model |
| `opset` | `None` | ONNX: opset version (optional, defaults to latest) |
| `workspace` | `4` | TensorRT: workspace size (GB) |
| `nms` | `False` | CoreML: add NMS |
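
As a sketch, these settings are passed straight to `model.export()`:

```python
from ultralytics import YOLO

model = YOLO("yolov8n.pt")
# Export to ONNX with dynamic axes and a simplified graph
model.export(format="onnx", dynamic=True, simplify=True)
```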
### Augmentation
Augmentation settings for YOLO models refer to the various transformations and modifications
applied to the training data to increase the diversity and size of the dataset. These settings can affect the model's
performance, speed, and accuracy. Some common YOLO augmentation settings include the type and intensity of the
transformations applied (e.g. random flips, rotations, cropping, color changes), the probability with which each
transformation is applied, and the presence of additional features such as masks or multiple labels per box. Other
factors that may affect the augmentation process include the size and composition of the original dataset and the
specific task the model is being used for. It is important to carefully tune and experiment with these settings to
ensure that the augmented dataset is diverse and representative enough to train a high-performing model.
| Key | Value | Description |
|---------------|-------|-------------------------------------------------|
| `hsv_h`       | `0.015` | image HSV-Hue augmentation (fraction)            |
| `hsv_s`       | `0.7`   | image HSV-Saturation augmentation (fraction)     |
| `hsv_v`       | `0.4`   | image HSV-Value augmentation (fraction)          |
| `degrees`     | `0.0`   | image rotation (+/- deg)                         |
| `translate`   | `0.1`   | image translation (+/- fraction)                 |
| `scale`       | `0.5`   | image scale (+/- gain)                           |
| `shear`       | `0.0`   | image shear (+/- deg)                            |
| `perspective` | `0.0`   | image perspective (+/- fraction), range 0-0.001  |
| `flipud`      | `0.0`   | image flip up-down (probability)                 |
| `fliplr`      | `0.5`   | image flip left-right (probability)              |
| `mosaic`      | `1.0`   | image mosaic (probability)                       |
| `mixup`       | `0.0`   | image mixup (probability)                        |
| `copy_paste`  | `0.0`   | segment copy-paste (probability)                 |
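
Augmentation keys are overridden the same way as any other training setting; a minimal sketch:

```python
from ultralytics import YOLO

model = YOLO("yolov8n.pt")
# Disable mosaic and flip images vertically half the time during training
model.train(data="coco128.yaml", epochs=5, mosaic=0.0, flipud=0.5)
```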
### Logging, checkpoints, plotting and file management
Logging, checkpoints, plotting, and file management are important considerations when training a YOLO model.
- Logging: It is often helpful to log various metrics and statistics during training to track the model's progress and
diagnose any issues that may arise. This can be done using a logging library such as TensorBoard or by writing log
messages to a file.
- Checkpoints: It is a good practice to save checkpoints of the model at regular intervals during training. This allows
you to resume training from a previous point if the training process is interrupted or if you want to experiment with
different training configurations.
- Plotting: Visualizing the model's performance and training progress can be helpful for understanding how the model is
behaving and identifying potential issues. This can be done using a plotting library such as matplotlib or by
generating plots using a logging library such as TensorBoard.
- File management: Managing the various files generated during the training process, such as model checkpoints, log
files, and plots, can be challenging. It is important to have a clear and organized file structure to keep track of
these files and make it easy to access and analyze them as needed.
Effective logging, checkpointing, plotting, and file management can help you keep track of the model's progress and make
it easier to debug and optimize the training process.
| Key | Value | Description |
|------------|----------|------------------------------------------------------------------------------------------------|
| `project` | `'runs'` | project name |
| `name`     | `'exp'`  | experiment name. `exp` gets automatically incremented if not specified, i.e. `exp`, `exp2` ...  |
| `exist_ok` | `False` | whether to overwrite existing experiment |
| `plots` | `False` | save plots during train/val |
| `save` | `False` | save train checkpoints and predict results |
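
For example, a short sketch directing outputs to a named experiment (assuming outputs land under `<project>/<name>`):

```python
from ultralytics import YOLO

model = YOLO("yolov8n.pt")
# Save checkpoints and plots under runs/exp, auto-incrementing to runs/exp2, ... on reruns
model.train(data="coco128.yaml", epochs=3, project="runs", name="exp", plots=True)
```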

docs/usage/cli.md Normal file (+138)

@@ -0,0 +1,138 @@
The YOLO Command Line Interface (CLI) is the easiest way to get started training, validating, predicting and exporting
YOLOv8 models.
The `yolo` command is used for all actions:
!!! example ""
=== "CLI"
```bash
yolo TASK MODE ARGS
```
Where:
- `TASK` (optional) is one of `[detect, segment, classify]`. If it is not passed explicitly, YOLOv8 will try to guess
  the `TASK` from the model type.
- `MODE` (required) is one of `[train, val, predict, export]`
- `ARGS` (optional) are any number of custom `arg=value` pairs like `imgsz=320` that override defaults.
  For a full list of available `ARGS` see the [Configuration](cfg.md) page and the `default.yaml`
  GitHub [source](https://github.com/ultralytics/ultralytics/blob/main/ultralytics/yolo/cfg/default.yaml).
!!! note ""
<b>Note:</b> Arguments MUST be passed as `arg=val` with an equals sign and a space between `arg=val` pairs
- `yolo predict model=yolov8n.pt imgsz=640 conf=0.25` &nbsp; ✅
- `yolo predict model yolov8n.pt imgsz 640 conf 0.25` &nbsp; ❌
- `yolo predict --model yolov8n.pt --imgsz 640 --conf 0.25` &nbsp; ❌
## Train
Train YOLOv8n on the COCO128 dataset for 100 epochs at image size 640. For a full list of available arguments see
the [Configuration](cfg.md) page.
!!! example ""
```bash
yolo detect train data=coco128.yaml model=yolov8n.pt epochs=100 imgsz=640
yolo detect train resume model=last.pt # resume training
```
## Val
Validate trained YOLOv8n model accuracy on the COCO128 dataset. No arguments need to be passed as the `model` retains
its training `data` and arguments as model attributes.
!!! example ""
```bash
yolo detect val model=yolov8n.pt # val official model
yolo detect val model=path/to/best.pt # val custom model
```
## Predict
Use a trained YOLOv8n model to run predictions on images.
!!! example ""
```bash
yolo detect predict model=yolov8n.pt source="https://ultralytics.com/images/bus.jpg" # predict with official model
yolo detect predict model=path/to/best.pt source="https://ultralytics.com/images/bus.jpg" # predict with custom model
```
## Export
Export a YOLOv8n model to a different format like ONNX, CoreML, etc.
!!! example ""
```bash
yolo export model=yolov8n.pt format=onnx # export official model
yolo export model=path/to/best.pt format=onnx # export custom trained model
```
Available YOLOv8 export formats include:
| Format | `format=` | Model |
|----------------------------------------------------------------------------|--------------------|---------------------------|
| [PyTorch](https://pytorch.org/) | - | `yolov8n.pt` |
| [TorchScript](https://pytorch.org/docs/stable/jit.html) | `torchscript` | `yolov8n.torchscript` |
| [ONNX](https://onnx.ai/) | `onnx` | `yolov8n.onnx` |
| [OpenVINO](https://docs.openvino.ai/latest/index.html) | `openvino` | `yolov8n_openvino_model/` |
| [TensorRT](https://developer.nvidia.com/tensorrt) | `engine` | `yolov8n.engine` |
| [CoreML](https://github.com/apple/coremltools) | `coreml` | `yolov8n.mlmodel` |
| [TensorFlow SavedModel](https://www.tensorflow.org/guide/saved_model) | `saved_model` | `yolov8n_saved_model/` |
| [TensorFlow GraphDef](https://www.tensorflow.org/api_docs/python/tf/Graph) | `pb` | `yolov8n.pb` |
| [TensorFlow Lite](https://www.tensorflow.org/lite) | `tflite` | `yolov8n.tflite` |
| [TensorFlow Edge TPU](https://coral.ai/docs/edgetpu/models-intro/) | `edgetpu` | `yolov8n_edgetpu.tflite` |
| [TensorFlow.js](https://www.tensorflow.org/js) | `tfjs` | `yolov8n_web_model/` |
| [PaddlePaddle](https://github.com/PaddlePaddle) | `paddle` | `yolov8n_paddle_model/` |
---
## Overriding default arguments
Default arguments can be overridden by simply passing them as arguments in the CLI in `arg=value` pairs.
!!! tip ""
=== "Example 1"
Train a detection model for `10 epochs` with `learning_rate` of `0.01`
```bash
yolo detect train data=coco128.yaml model=yolov8n.pt epochs=10 lr0=0.01
```
=== "Example 2"
Predict a YouTube video using a pretrained segmentation model at image size 320:
```bash
yolo segment predict model=yolov8n-seg.pt source='https://youtu.be/Zgi9g1ksQHc' imgsz=320
```
=== "Example 3"
Validate a pretrained detection model at batch-size 1 and image size 640:
```bash
yolo detect val model=yolov8n.pt data=coco128.yaml batch=1 imgsz=640
```
---
## Overriding default config file
You can override the `default.yaml` config file entirely by passing a new file with the `cfg` argument,
i.e. `cfg=custom.yaml`.
To do this, first create a copy of `default.yaml` in your current working directory with the `yolo copy-cfg` command.
This will create `default_copy.yaml`, which you can then pass as `cfg=default_copy.yaml` along with any additional args,
like `imgsz=320` in this example:
!!! example ""
=== "CLI"
```bash
yolo copy-cfg
yolo cfg=default_copy.yaml imgsz=320
```

docs/usage/engine.md Normal file (+83)

@@ -0,0 +1,83 @@
Both the Ultralytics YOLO command-line and Python interfaces are simply high-level abstractions over the base engine
executors. Let's take a look at the Trainer engine.
## BaseTrainer
BaseTrainer contains the generic boilerplate training routine. It can be customized for any task by overriding the
required functions or operations, as long as the correct formats are followed. For example, you can support your own
custom model and dataloader by just overriding these functions (see the sketch below):

* `get_model(cfg, weights)` - The function that builds the model to be trained
* `get_dataloader()` - The function that builds the dataloader
More details and source code can be found in [`BaseTrainer` Reference](../reference/base_trainer.md)
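
For instance, a sketch of a dataloader override (the exact `get_dataloader` signature and the `build_custom_loader`
helper are illustrative assumptions):

```python
from ultralytics.yolo.v8.detect import DetectionTrainer


class CustomDataTrainer(DetectionTrainer):
    def get_dataloader(self, dataset_path, batch_size=16, rank=0, mode="train"):
        # Return any torch DataLoader that yields batches in the expected format;
        # build_custom_loader is a hypothetical helper for your own dataset
        return build_custom_loader(dataset_path, batch_size=batch_size, shuffle=mode == "train")
```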
## DetectionTrainer
Here's how you can use the YOLOv8 `DetectionTrainer` and customize it.
```python
from ultralytics.yolo.v8.detect import DetectionTrainer
trainer = DetectionTrainer(overrides={...})
trainer.train()
trained_model = trainer.best # get best model
```
### Customizing the DetectionTrainer
Let's customize the trainer **to train a custom detection model** that is not supported directly. You can do this by
simply overriding the existing `get_model` functionality:
```python
from ultralytics.yolo.v8.detect import DetectionTrainer
class CustomTrainer(DetectionTrainer):
def get_model(self, cfg, weights):
...
trainer = CustomTrainer(overrides={...})
trainer.train()
```
You now realize that you need to customize the trainer further to:

* Customize the loss function.
* Add a callback that uploads the model to your Google Drive after every 10 epochs.

Here's how you can do it:
```python
from ultralytics.yolo.v8.detect import DetectionTrainer


class CustomTrainer(DetectionTrainer):
    def get_model(self, cfg, weights):
        ...

    def criterion(self, preds, batch):
        # get ground truth
        imgs = batch["imgs"]
        bboxes = batch["bboxes"]
        ...
        return loss, loss_items  # see Reference -> Trainer for details on the expected format


# callback to upload model weights
def log_model(trainer):
    last_weight_path = trainer.last
    ...


trainer = CustomTrainer(overrides={...})
trainer.add_callback("on_train_epoch_end", log_model)  # Adds to existing callbacks
trainer.train()
```
To learn more about callback triggering events and entry points, check out our [Callbacks guide](callbacks.md).
## Other engine components
There are other components that can be customized similarly, like `Validators` and `Predictors`.
See the Reference section for more information on these.

docs/usage/python.md Normal file (+166)

@@ -0,0 +1,166 @@
This is the simplest way of using YOLOv8 directly in a Python environment.
!!! example "Train"
=== "From pretrained(recommended)"
```python
from ultralytics import YOLO
model = YOLO("yolov8n.pt") # pass any model type
model.train(epochs=5)
```
=== "From scratch"
```python
from ultralytics import YOLO
model = YOLO("yolov8n.yaml")
model.train(data="coco128.yaml", epochs=5)
```
=== "Resume"
```python
# TODO: Resume feature is under development and should be released soon.
model = YOLO("last.pt")
model.train(resume=True)
```
!!! example "Val"
=== "Val after training"
```python
from ultralytics import YOLO
model = YOLO("yolov8n.yaml")
model.train(data="coco128.yaml", epochs=5)
model.val()  # It'll automatically evaluate the data you trained on.
```
=== "Val independently"
```python
from ultralytics import YOLO
model = YOLO("model.pt")
# It'll use the data yaml file in model.pt if you don't set data.
model.val()
# or you can set the data you want to val
model.val(data="coco128.yaml")
```
!!! example "Predict"
=== "From source"
```python
from ultralytics import YOLO
from PIL import Image
import cv2
model = YOLO("model.pt")
# accepts all formats - image/dir/Path/URL/video/PIL/ndarray. 0 for webcam
results = model.predict(source="0")
results = model.predict(source="folder", show=True) # Display preds. Accepts all YOLO predict arguments
# from PIL
im1 = Image.open("bus.jpg")
results = model.predict(source=im1, save=True) # save plotted images
# from ndarray
im2 = cv2.imread("bus.jpg")
results = model.predict(source=im2, save=True, save_txt=True) # save predictions as labels
# from list of PIL/ndarray
results = model.predict(source=[im1, im2])
```
=== "Results usage"
```python
# By default, results is a list of Results objects containing all the predictions,
# but be careful: it can occupy a lot of memory when there are many images,
# especially when the task is segmentation.
# 1. return as a list
results = model.predict(source="folder")
# By setting stream=True, results is a generator, which is more memory-friendly
# 2. return as a generator
results = model.predict(source=0, stream=True)
for result in results:
# detection
result.boxes.xyxy # box with xyxy format, (N, 4)
result.boxes.xywh # box with xywh format, (N, 4)
result.boxes.xyxyn # box with xyxy format but normalized, (N, 4)
result.boxes.xywhn # box with xywh format but normalized, (N, 4)
result.boxes.conf # confidence score, (N, 1)
result.boxes.cls # cls, (N, 1)
# segmentation
result.masks.masks # masks, (N, H, W)
result.masks.segments # bounding coordinates of masks, List[segment] * N
# classification
result.probs # cls prob, (num_class, )
# Each result is composed of torch.Tensors by default,
# so you can easily use the following functionality:
result = result.cuda()
result = result.cpu()
result = result.to("cpu")
result = result.numpy()
```
!!! note "Export and Deployment"
=== "Export, Fuse & info"
```python
from ultralytics import YOLO
model = YOLO("model.pt")
model.fuse()
model.info(verbose=True) # Print model information
model.export(format="onnx")  # export the model, e.g. to ONNX format
```
=== "Deployment"
More functionality coming soon
To learn more about using `YOLO` models, refer to the Model class Reference:
[Model reference](../reference/model.md){ .md-button .md-button--primary}
---
### Using Trainers
The `YOLO` model class is a high-level wrapper around the Trainer classes. Each YOLO task has its own trainer that
inherits from `BaseTrainer`.
!!! tip "Detection Trainer Example"
```python
from ultralytics.yolo.v8 import DetectionTrainer, DetectionValidator, DetectionPredictor

# trainer
overrides = {}
trainer = DetectionTrainer(overrides=overrides)
trainer.train()
trained_model = trainer.best

# validator
val = DetectionValidator(args=...)
val(model=trained_model)

# predictor
pred = DetectionPredictor(overrides={})
pred(source=SOURCE, model=trained_model)

# resume from last weights
overrides["resume"] = trainer.last
trainer = DetectionTrainer(overrides=overrides)
```
You can easily customize Trainers to support custom tasks or explore R&D ideas.
Learn more about customizing `Trainers`, `Validators` and `Predictors` to suit your project needs in the Customization
section.
[Customization tutorials](engine.md){ .md-button .md-button--primary}