Update tracker docs (#4044)

Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
Co-authored-by: Burhan <62214284+Burhan-Q@users.noreply.github.com>
Glenn Jocher 1 year ago committed by GitHub
parent a02b7e6273
commit e6d18cc944

.gitignore

@ -123,6 +123,7 @@ venv.bak/
# mkdocs documentation
/site
mkdocs_github_authors.yaml

# mypy
.mypy_cache/

@ -145,61 +145,61 @@ The rows index the label files, each corresponding to an image in your dataset,
2. The dataset has now been split into `k` folds, each having a list of `train` and `val` indices. We will construct a DataFrame to display these results more clearly.

    ```python
    folds = [f'split_{n}' for n in range(1, ksplit + 1)]
    folds_df = pd.DataFrame(index=indx, columns=folds)

    for idx, (train, val) in enumerate(kfolds, start=1):
        folds_df[f'split_{idx}'].loc[labels_df.iloc[train].index] = 'train'
        folds_df[f'split_{idx}'].loc[labels_df.iloc[val].index] = 'val'
    ```
3. Now we will calculate the distribution of class labels for each fold as a ratio of the classes present in `val` to those present in `train`.

    ```python
    fold_lbl_distrb = pd.DataFrame(index=folds, columns=cls_idx)

    for n, (train_indices, val_indices) in enumerate(kfolds, start=1):
        train_totals = labels_df.iloc[train_indices].sum()
        val_totals = labels_df.iloc[val_indices].sum()

        # To avoid division by zero, we add a small value (1E-7) to the denominator
        ratio = val_totals / (train_totals + 1E-7)
        fold_lbl_distrb.loc[f'split_{n}'] = ratio
    ```

    The ideal scenario is for all class ratios to be reasonably similar for each split and across classes. This, however, will be subject to the specifics of your dataset.
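    A quick way to eyeball that balance is to print the distribution table and the per-class spread across splits (a minimal sketch; the columns are the `cls_idx` classes used above):

    ```python
    # Inspect the val/train ratio per class and per split
    print(fold_lbl_distrb)

    # Spread of each class ratio across the k splits (smaller means more balanced)
    print(fold_lbl_distrb.astype(float).std())
    ```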
4. Next, we create the directories and dataset YAML files for each split.

    ```python
    save_path = Path(dataset_path / f'{datetime.date.today().isoformat()}_{ksplit}-Fold_Cross-val')
    save_path.mkdir(parents=True, exist_ok=True)

    images = sorted((dataset_path / 'images').rglob("*.jpg"))  # change file extension as needed
    ds_yamls = []

    for split in folds_df.columns:
        # Create directories
        split_dir = save_path / split
        split_dir.mkdir(parents=True, exist_ok=True)
        (split_dir / 'train' / 'images').mkdir(parents=True, exist_ok=True)
        (split_dir / 'train' / 'labels').mkdir(parents=True, exist_ok=True)
        (split_dir / 'val' / 'images').mkdir(parents=True, exist_ok=True)
        (split_dir / 'val' / 'labels').mkdir(parents=True, exist_ok=True)

        # Create dataset YAML files
        dataset_yaml = split_dir / f'{split}_dataset.yaml'
        ds_yamls.append(dataset_yaml)

        with open(dataset_yaml, 'w') as ds_y:
            yaml.safe_dump({
                'path': save_path.as_posix(),
                'train': 'train',
                'val': 'val',
                'names': classes
            }, ds_y)
    ```

5. Lastly, copy images and labels into the respective directory ('train' or 'val') for each split.
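    One way to sketch this copy step (assuming `folds_df` is indexed by the image/label stems and that label files live in `dataset_path / 'labels'` with matching `.txt` names):

    ```python
    import shutil

    for image in images:
        label = dataset_path / 'labels' / f'{image.stem}.txt'  # assumed label location

        for split, k_split in folds_df.loc[image.stem].items():
            # k_split is 'train' or 'val' for this image in this particular split
            img_to_path = save_path / split / k_split / 'images'
            lbl_to_path = save_path / split / k_split / 'labels'

            shutil.copy(image, img_to_path / image.name)
            shutil.copy(label, lbl_to_path / label.name)
    ```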
@ -246,8 +246,6 @@ fold_lbl_distrb.to_csv(save_path / "kfold_label_distribution.csv")
    results[k] = model.metrics  # save output metrics for further analysis
```
## Conclusion

In this guide, we have explored the process of using K-Fold cross-validation for training the YOLO object detection model. We learned how to split our dataset into K partitions, ensuring a balanced class distribution across the different folds.
@ -260,4 +258,4 @@ Finally, we implemented the actual model training using each split in a loop, sa
This technique of K-Fold cross-validation is a robust way of making the most out of your available data, and it helps to ensure that your model performance is reliable and consistent across different data subsets. This results in a more generalizable and reliable model that is less likely to overfit to specific data patterns.

Remember that although we used YOLO in this guide, these steps are mostly transferable to other machine learning models. Understanding these steps allows you to apply cross-validation effectively in your own machine learning projects. Happy coding!

@ -57,20 +57,20 @@ the benchmarks to their specific needs and compare the performance of different
Benchmarks will attempt to run automatically on all possible export formats below.

| Format | `format` Argument | Model | Metadata | Arguments |
|--------------------------------------------------------------------|-------------------|---------------------------|----------|-----------------------------------------------------|
| [PyTorch](https://pytorch.org/) | - | `yolov8n.pt` | ✅ | - |
| [TorchScript](https://pytorch.org/docs/stable/jit.html) | `torchscript` | `yolov8n.torchscript` | ✅ | `imgsz`, `optimize` |
| [ONNX](https://onnx.ai/) | `onnx` | `yolov8n.onnx` | ✅ | `imgsz`, `half`, `dynamic`, `simplify`, `opset` |
| [OpenVINO](https://docs.openvino.ai/latest/index.html) | `openvino` | `yolov8n_openvino_model/` | ✅ | `imgsz`, `half` |
| [TensorRT](https://developer.nvidia.com/tensorrt) | `engine` | `yolov8n.engine` | ✅ | `imgsz`, `half`, `dynamic`, `simplify`, `workspace` |
| [CoreML](https://github.com/apple/coremltools) | `coreml` | `yolov8n.mlmodel` | ✅ | `imgsz`, `half`, `int8`, `nms` |
| [TF SavedModel](https://www.tensorflow.org/guide/saved_model) | `saved_model` | `yolov8n_saved_model/` | ✅ | `imgsz`, `keras` |
| [TF GraphDef](https://www.tensorflow.org/api_docs/python/tf/Graph) | `pb` | `yolov8n.pb` | ❌ | `imgsz` |
| [TF Lite](https://www.tensorflow.org/lite) | `tflite` | `yolov8n.tflite` | ✅ | `imgsz`, `half`, `int8` |
| [TF Edge TPU](https://coral.ai/docs/edgetpu/models-intro/) | `edgetpu` | `yolov8n_edgetpu.tflite` | ✅ | `imgsz` |
| [TF.js](https://www.tensorflow.org/js) | `tfjs` | `yolov8n_web_model/` | ✅ | `imgsz` |
| [PaddlePaddle](https://github.com/PaddlePaddle) | `paddle` | `yolov8n_paddle_model/` | ✅ | `imgsz` |
| [ncnn](https://github.com/Tencent/ncnn) | `ncnn` | `yolov8n_ncnn_model/` | ✅ | `imgsz`, `half` |

See full `export` details in the [Export](https://docs.ultralytics.com/modes/export/) page.
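The snippet below is a rough sketch of kicking off such a benchmark run from Python; it assumes the `benchmark` helper in `ultralytics.utils.benchmarks` and that a small dataset such as `coco8.yaml` is available in your environment.

```python
from ultralytics.utils.benchmarks import benchmark

# Benchmark YOLOv8n across the export formats listed above (sketch; adjust data/device to your setup)
benchmark(model='yolov8n.pt', data='coco8.yaml', imgsz=640, half=False, device=0)
```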

@ -483,7 +483,7 @@ masks, classification probabilities, etc.) found in the results object
## Streaming Source `for`-loop

Here's a Python script using OpenCV (`cv2`) and YOLOv8 to run inference on video frames. This script assumes you have already installed the necessary packages (`opencv-python` and `ultralytics`).

!!! example "Streaming for-loop"
@ -524,3 +524,5 @@ Here's a Python script using OpenCV (cv2) and YOLOv8 to run inference on video f
    cap.release()
    cv2.destroyAllWindows()
    ```
This script will run predictions on each frame of the video, visualize the results, and display them in a window. The loop can be exited by pressing 'q'.
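If you want to use the predictions programmatically rather than only display them, the per-frame detections can be read from the same `results` object; a minimal sketch (placed inside the loop after the `results = model(frame)` call, and assuming the standard `Boxes` attributes `xyxy`, `conf` and `cls`):

```python
# Inspect detections on the current frame (sketch)
boxes = results[0].boxes
for xyxy, conf, cls in zip(boxes.xyxy.tolist(), boxes.conf.tolist(), boxes.cls.tolist()):
    x1, y1, x2, y2 = xyxy
    print(f"{model.names[int(cls)]}: {conf:.2f} at ({x1:.0f}, {y1:.0f}, {x2:.0f}, {y2:.0f})")
```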

@ -6,23 +6,22 @@ keywords: Ultralytics, YOLO, object tracking, video streams, BoT-SORT, ByteTrack
<img width="1024" src="https://user-images.githubusercontent.com/26833433/243418637-1d6250fd-1515-4c10-a844-a32818ae6d46.png">

Object tracking is a task that involves identifying the location and class of objects, then assigning a unique ID to that detection in video streams.

The output of the tracker is the same as detection, with an added object ID.

## Available Trackers

Ultralytics YOLO supports the following tracking algorithms. They can be enabled by passing the relevant YAML configuration file such as `tracker=tracker_type.yaml`:

* [BoT-SORT](https://github.com/NirAharon/BoT-SORT) - Use `botsort.yaml` to enable this tracker.
* [ByteTrack](https://github.com/ifzhang/ByteTrack) - Use `bytetrack.yaml` to enable this tracker.

The default tracker is BoT-SORT.

## Tracking

To run the tracker on video streams, use a trained Detect, Segment or Pose model such as YOLOv8n, YOLOv8n-seg and YOLOv8n-pose.
!!! example ""
@ -31,34 +30,38 @@ Use a trained YOLOv8n/YOLOv8n-seg model to run tracker on video streams.
        ```python
        from ultralytics import YOLO

        # Load an official or custom model
        model = YOLO('yolov8n.pt')  # Load an official Detect model
        model = YOLO('yolov8n-seg.pt')  # Load an official Segment model
        model = YOLO('yolov8n-pose.pt')  # Load an official Pose model
        model = YOLO('path/to/best.pt')  # Load a custom trained model

        # Perform tracking with the model
        results = model.track(source="https://youtu.be/Zgi9g1ksQHc", show=True)  # Tracking with default tracker
        results = model.track(source="https://youtu.be/Zgi9g1ksQHc", show=True, tracker="bytetrack.yaml")  # Tracking with ByteTrack tracker
        ```

    === "CLI"

        ```bash
        # Perform tracking with various models using the command line interface
        yolo track model=yolov8n.pt source="https://youtu.be/Zgi9g1ksQHc"  # Official Detect model
        yolo track model=yolov8n-seg.pt source="https://youtu.be/Zgi9g1ksQHc"  # Official Segment model
        yolo track model=yolov8n-pose.pt source="https://youtu.be/Zgi9g1ksQHc"  # Official Pose model
        yolo track model=path/to/best.pt source="https://youtu.be/Zgi9g1ksQHc"  # Custom trained model

        # Track using ByteTrack tracker
        yolo track model=path/to/best.pt tracker="bytetrack.yaml"
        ```

As can be seen in the above usage, tracking is available for all Detect, Segment and Pose models run on videos or streaming sources.
## Configuration

### Tracking Arguments

Tracking configuration shares properties with Predict mode, such as `conf`, `iou`, and `show`. For further configurations, refer to the [Predict](https://docs.ultralytics.com/modes/predict/) model page.
!!! example ""

    === "Python"
@ -66,21 +69,22 @@ to [predict page](https://docs.ultralytics.com/modes/predict/).
        ```python
        from ultralytics import YOLO

        # Configure the tracking parameters and run the tracker
        model = YOLO('yolov8n.pt')
        results = model.track(source="https://youtu.be/Zgi9g1ksQHc", conf=0.3, iou=0.5, show=True)
        ```

    === "CLI"

        ```bash
        # Configure tracking parameters and run the tracker using the command line interface
        yolo track model=yolov8n.pt source="https://youtu.be/Zgi9g1ksQHc" conf=0.3 iou=0.5 show
        ```
### Tracker Selection

Ultralytics also allows you to use a modified tracker configuration file. To do this, simply make a copy of a tracker config file (for example, `custom_tracker.yaml`) from [ultralytics/cfg/trackers](https://github.com/ultralytics/ultralytics/tree/main/ultralytics/cfg/trackers) and modify any configurations (except the `tracker_type`) as per your needs.
!!! example ""

    === "Python"
@ -88,14 +92,126 @@ any configurations(expect the `tracker_type`) you need to.
        ```python
        from ultralytics import YOLO

        # Load the model and run the tracker with a custom configuration file
        model = YOLO('yolov8n.pt')
        results = model.track(source="https://youtu.be/Zgi9g1ksQHc", tracker='custom_tracker.yaml')
        ```

    === "CLI"

        ```bash
        # Load the model and run the tracker with a custom configuration file using the command line interface
        yolo track model=yolov8n.pt source="https://youtu.be/Zgi9g1ksQHc" tracker='custom_tracker.yaml'
        ```
For a comprehensive list of tracking arguments, refer to the [ultralytics/cfg/trackers](https://github.com/ultralytics/ultralytics/tree/main/ultralytics/cfg/trackers) page.
## Python Examples
### Persisting Tracks Loop
Here is a Python script using OpenCV (`cv2`) and YOLOv8 to run object tracking on video frames. This script still assumes you have already installed the necessary packages (`opencv-python` and `ultralytics`).
!!! example "Streaming for-loop with tracking"
    ```python
    import cv2
    from ultralytics import YOLO

    # Load the YOLOv8 model
    model = YOLO('yolov8n.pt')

    # Open the video file
    video_path = "path/to/your/video/file.mp4"
    cap = cv2.VideoCapture(video_path)

    # Loop through the video frames
    while cap.isOpened():
        # Read a frame from the video
        success, frame = cap.read()

        if success:
            # Run YOLOv8 tracking on the frame, persisting tracks between frames
            results = model.track(frame, persist=True)

            # Visualize the results on the frame
            annotated_frame = results[0].plot()

            # Display the annotated frame
            cv2.imshow("YOLOv8 Tracking", annotated_frame)

            # Break the loop if 'q' is pressed
            if cv2.waitKey(1) & 0xFF == ord("q"):
                break
        else:
            # Break the loop if the end of the video is reached
            break

    # Release the video capture object and close the display window
    cap.release()
    cv2.destroyAllWindows()
    ```
Please note the change from `model(frame)` to `model.track(frame)`, which enables object tracking instead of simple detection. This modified script will run the tracker on each frame of the video, visualize the results, and display them in a window. The loop can be exited by pressing 'q'.
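If you need the numeric track IDs for downstream logic such as counting or drawing trajectories, they can be read from the result boxes; a minimal sketch, assuming a `results` list returned by `model.track(frame, persist=True)` and that `results[0].boxes.id` is populated (it can be `None` when nothing is tracked):

```python
# Read boxes and their track IDs from a tracking result (sketch)
boxes = results[0].boxes.xywh.cpu()
track_ids = results[0].boxes.id

if track_ids is not None:
    for (x, y, w, h), track_id in zip(boxes.tolist(), track_ids.int().cpu().tolist()):
        print(f"id={track_id}: center=({x:.0f}, {y:.0f}), size={w:.0f}x{h:.0f}")
```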
### Multithreaded Tracking
Multithreaded tracking provides the capability to run object tracking on multiple video streams simultaneously. This is particularly useful when handling multiple video inputs, such as from multiple surveillance cameras, where concurrent processing can greatly enhance efficiency and performance.
In the provided Python script, we make use of Python's `threading` module to run multiple instances of the tracker concurrently. Each thread is responsible for running the tracker on one video file, and all the threads run simultaneously in the background.
To ensure that each thread receives the correct parameters (the video file and the model to use), we define a function `run_tracker_in_thread` that accepts these parameters and contains the main tracking loop. This function reads the video frame by frame, runs the tracker, and displays the results.
Two different models are used in this example: `yolov8n.pt` and `yolov8n-seg.pt`, each tracking objects in a different video file. The video files are specified in `video_file1` and `video_file2`.
The `daemon=True` parameter in `threading.Thread` means that these threads will be closed as soon as the main program finishes. We then start the threads with `start()` and use `join()` to make the main thread wait until both tracker threads have finished.
Finally, after all threads have completed their task, the windows displaying the results are closed using `cv2.destroyAllWindows()`.
!!! example "Streaming for-loop with tracking"
    ```python
    import threading

    import cv2
    from ultralytics import YOLO


    def run_tracker_in_thread(filename, model):
        video = cv2.VideoCapture(filename)
        frames = int(video.get(cv2.CAP_PROP_FRAME_COUNT))
        for _ in range(frames):
            ret, frame = video.read()
            if ret:
                results = model.track(source=frame, persist=True)
                res_plotted = results[0].plot()
                cv2.imshow('p', res_plotted)
                if cv2.waitKey(1) == ord('q'):
                    break


    # Load the models
    model1 = YOLO('yolov8n.pt')
    model2 = YOLO('yolov8n-seg.pt')

    # Define the video files for the trackers
    video_file1 = 'path/to/video1.mp4'
    video_file2 = 'path/to/video2.mp4'

    # Create the tracker threads
    tracker_thread1 = threading.Thread(target=run_tracker_in_thread, args=(video_file1, model1), daemon=True)
    tracker_thread2 = threading.Thread(target=run_tracker_in_thread, args=(video_file2, model2), daemon=True)

    # Start the tracker threads
    tracker_thread1.start()
    tracker_thread2.start()

    # Wait for the tracker threads to finish
    tracker_thread1.join()
    tracker_thread2.join()

    # Clean up and close windows
    cv2.destroyAllWindows()
    ```
This example can easily be extended to handle more video files and models by creating more threads and applying the same methodology.
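For instance, a minimal sketch of that extension, reusing `run_tracker_in_thread` (and the imports) from the block above with a hypothetical list of (video, weights) pairs:

```python
# Hypothetical list of video/model pairs; one tracker thread per pair
video_model_pairs = [
    ('path/to/video1.mp4', 'yolov8n.pt'),
    ('path/to/video2.mp4', 'yolov8n-seg.pt'),
    ('path/to/video3.mp4', 'yolov8n-pose.pt'),
]

threads = []
for video_file, weights in video_model_pairs:
    thread = threading.Thread(target=run_tracker_in_thread, args=(video_file, YOLO(weights)), daemon=True)
    thread.start()
    threads.append(thread)

# Wait for all tracker threads to finish, then close the display windows
for thread in threads:
    thread.join()
cv2.destroyAllWindows()
```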

@ -165,23 +165,21 @@ Export a YOLOv8n model to a different format like ONNX, CoreML, etc.
Available YOLOv8 export formats are in the table below. You can export to any format using the `format` argument,
i.e. `format='onnx'` or `format='engine'`.

| Format | `format` Argument | Model | Metadata | Arguments |
|--------------------------------------------------------------------|-------------------|---------------------------|----------|-----------------------------------------------------|
| [PyTorch](https://pytorch.org/) | - | `yolov8n.pt` | ✅ | - |
| [TorchScript](https://pytorch.org/docs/stable/jit.html) | `torchscript` | `yolov8n.torchscript` | ✅ | `imgsz`, `optimize` |
| [ONNX](https://onnx.ai/) | `onnx` | `yolov8n.onnx` | ✅ | `imgsz`, `half`, `dynamic`, `simplify`, `opset` |
| [OpenVINO](https://docs.openvino.ai/latest/index.html) | `openvino` | `yolov8n_openvino_model/` | ✅ | `imgsz`, `half` |
| [TensorRT](https://developer.nvidia.com/tensorrt) | `engine` | `yolov8n.engine` | ✅ | `imgsz`, `half`, `dynamic`, `simplify`, `workspace` |
| [CoreML](https://github.com/apple/coremltools) | `coreml` | `yolov8n.mlmodel` | ✅ | `imgsz`, `half`, `int8`, `nms` |
| [TF SavedModel](https://www.tensorflow.org/guide/saved_model) | `saved_model` | `yolov8n_saved_model/` | ✅ | `imgsz`, `keras` |
| [TF GraphDef](https://www.tensorflow.org/api_docs/python/tf/Graph) | `pb` | `yolov8n.pb` | ❌ | `imgsz` |
| [TF Lite](https://www.tensorflow.org/lite) | `tflite` | `yolov8n.tflite` | ✅ | `imgsz`, `half`, `int8` |
| [TF Edge TPU](https://coral.ai/docs/edgetpu/models-intro/) | `edgetpu` | `yolov8n_edgetpu.tflite` | ✅ | `imgsz` |
| [TF.js](https://www.tensorflow.org/js) | `tfjs` | `yolov8n_web_model/` | ✅ | `imgsz` |
| [PaddlePaddle](https://github.com/PaddlePaddle) | `paddle` | `yolov8n_paddle_model/` | ✅ | `imgsz` |
| [ncnn](https://github.com/Tencent/ncnn) | `ncnn` | `yolov8n_ncnn_model/` | ✅ | `imgsz`, `half` |
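As a quick illustration, exporting from Python with a couple of the arguments in the table might look like this (a minimal sketch using the `YOLO.export` call):

```python
from ultralytics import YOLO

# Load a pretrained model and export it to ONNX at 640 pixels (sketch)
model = YOLO('yolov8n.pt')
model.export(format='onnx', imgsz=640)
```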
## Overriding default arguments
@ -207,8 +205,6 @@ Default arguments can be overridden by simply passing them as arguments in the C
yolo detect val model=yolov8n.pt data=coco128.yaml batch=1 imgsz=640
```
## Overriding default config file

You can override the `default.yaml` config file entirely by passing a new file with the `cfg` arguments,

@ -148,7 +148,7 @@ The 3 exported models will be saved alongside the original PyTorch model:
```bash
python detect.py --weights yolov5s.pt                 # PyTorch
                           yolov5s.torchscript        # TorchScript
                           yolov5s.onnx               # ONNX Runtime or OpenCV DNN with dnn=True
                           yolov5s_openvino_model     # OpenVINO
                           yolov5s.engine             # TensorRT
                           yolov5s.mlmodel            # CoreML (macOS only)
@ -164,7 +164,7 @@ python detect.py --weights yolov5s.pt # PyTorch
```bash
python val.py --weights yolov5s.pt                 # PyTorch
                        yolov5s.torchscript        # TorchScript
                        yolov5s.onnx               # ONNX Runtime or OpenCV DNN with dnn=True
                        yolov5s_openvino_model     # OpenVINO
                        yolov5s.engine             # TensorRT
                        yolov5s.mlmodel            # CoreML (macOS Only)

@ -59,21 +59,21 @@
      "colab": {
        "base_uri": "https://localhost:8080/"
      },
      "outputId": "27ca383c-0a97-4679-f1c5-ba843f033de7"
    },
    "source": [
      "%pip install ultralytics\n",
      "import ultralytics\n",
      "ultralytics.checks()"
    ],
    "execution_count": 1,
    "outputs": [
      {
        "output_type": "stream",
        "name": "stderr",
        "text": [
          "Ultralytics YOLOv8.0.145 🚀 Python-3.10.6 torch-2.0.1+cu118 CUDA:0 (Tesla T4, 15102MiB)\n",
          "Setup complete ✅ (2 CPUs, 12.7 GB RAM, 24.2/78.2 GB disk)\n"
        ]
      }
    ]
@ -96,27 +96,27 @@
      "colab": {
        "base_uri": "https://localhost:8080/"
      },
      "outputId": "64489d1f-e71a-44b5-92f6-2088781ca096"
    },
    "source": [
      "# Run inference on an image with YOLOv8n\n",
      "!yolo predict model=yolov8n.pt source='https://ultralytics.com/images/zidane.jpg'"
    ],
    "execution_count": 2,
    "outputs": [
      {
        "output_type": "stream",
        "name": "stdout",
        "text": [
          "Downloading https://github.com/ultralytics/assets/releases/download/v0.0.0/yolov8n.pt to 'yolov8n.pt'...\n",
          "100% 6.23M/6.23M [00:00<00:00, 77.2MB/s]\n",
          "Ultralytics YOLOv8.0.145 🚀 Python-3.10.6 torch-2.0.1+cu118 CUDA:0 (Tesla T4, 15102MiB)\n",
          "YOLOv8n summary (fused): 168 layers, 3151904 parameters, 0 gradients\n",
          "\n",
          "Downloading https://ultralytics.com/images/zidane.jpg to 'zidane.jpg'...\n",
          "100% 165k/165k [00:00<00:00, 7.46MB/s]\n",
          "image 1/1 /content/zidane.jpg: 384x640 2 persons, 1 tie, 365.8ms\n",
          "Speed: 13.7ms preprocess, 365.8ms inference, 431.7ms postprocess per image at shape (1, 3, 384, 640)\n",
          "Results saved to \u001b[1mruns/detect/predict\u001b[0m\n"
        ]
      }
    ]
@ -139,7 +139,7 @@
    },
    "source": [
      "# 2. Val\n",
      "Validate a model's accuracy on the [COCO](https://docs.ultralytics.com/datasets/detect/coco/) dataset's `val` or `test` splits. The latest YOLOv8 [models](https://github.com/ultralytics/ultralytics#models) are downloaded automatically the first time they are used. See [YOLOv8 Val Docs](https://docs.ultralytics.com/modes/val/) for more information."
    ]
  },
  {
@ -160,108 +160,43 @@
    "cell_type": "code",
    "metadata": {
      "id": "X58w8JLpMnjH",
      "outputId": "e3aacd98-ceca-49b7-e112-a0c25979ad6c",
      "colab": {
        "base_uri": "https://localhost:8080/"
      }
    },
    "source": [
      "# Validate YOLOv8n on COCO8 val\n",
      "!yolo val model=yolov8n.pt data=coco8.yaml"
    ],
    "execution_count": 3,
    "outputs": [
      {
        "output_type": "stream",
        "name": "stdout",
        "text": [
          "Ultralytics YOLOv8.0.145 🚀 Python-3.10.6 torch-2.0.1+cu118 CUDA:0 (Tesla T4, 15102MiB)\n",
          "YOLOv8n summary (fused): 168 layers, 3151904 parameters, 0 gradients\n",
          "\n",
          "Dataset 'coco8.yaml' images not found ⚠️, missing path '/content/datasets/coco8/images/val'\n",
          "Downloading https://ultralytics.com/assets/coco8.zip to '/content/datasets/coco8.zip'...\n",
          "100% 433k/433k [00:00<00:00, 12.4MB/s]\n",
          "Unzipping /content/datasets/coco8.zip to /content/datasets...\n",
          "Dataset download success ✅ (0.7s), saved to \u001b[1m/content/datasets\u001b[0m\n",
          "\n",
          "Downloading https://ultralytics.com/assets/Arial.ttf to '/root/.config/Ultralytics/Arial.ttf'...\n",
          "100% 755k/755k [00:00<00:00, 17.5MB/s]\n",
          "\u001b[34m\u001b[1mval: \u001b[0mScanning /content/datasets/coco8/labels/val... 4 images, 0 backgrounds, 0 corrupt: 100% 4/4 [00:00<00:00, 276.04it/s]\n",
          "\u001b[34m\u001b[1mval: \u001b[0mNew cache created: /content/datasets/coco8/labels/val.cache\n",
          " Class Images Instances Box(P R mAP50 mAP50-95): 100% 1/1 [00:03<00:00, 3.84s/it]\n",
          " all 4 17 0.621 0.833 0.888 0.63\n",
          " person 4 10 0.721 0.5 0.519 0.269\n",
          " dog 4 1 0.37 1 0.995 0.597\n",
          " horse 4 2 0.751 1 0.995 0.631\n",
          " elephant 4 2 0.505 0.5 0.828 0.394\n",
          " umbrella 4 1 0.564 1 0.995 0.995\n",
          " potted plant 4 1 0.814 1 0.995 0.895\n",
          "Speed: 0.3ms preprocess, 78.7ms inference, 0.0ms loss, 65.4ms postprocess per image\n",
          "Results saved to \u001b[1mruns/detect/val\u001b[0m\n"
        ]
      }
@ -284,160 +219,98 @@
    "cell_type": "code",
    "metadata": {
      "id": "1NcFxRcFdJ_O",
      "outputId": "b750f2fe-c4d9-4764-b8d5-ed7bd920697b",
      "colab": {
        "base_uri": "https://localhost:8080/"
      }
    },
    "source": [
      "# Train YOLOv8n on COCO8 for 3 epochs\n",
      "!yolo train model=yolov8n.pt data=coco8.yaml epochs=3 imgsz=640"
    ],
    "execution_count": 4,
    "outputs": [
      {
        "output_type": "stream",
        "name": "stdout",
        "text": [
          "Ultralytics YOLOv8.0.145 🚀 Python-3.10.6 torch-2.0.1+cu118 CUDA:0 (Tesla T4, 15102MiB)\n",
          "\u001b[34m\u001b[1mengine/trainer: \u001b[0mtask=detect, mode=train, model=yolov8n.pt, data=coco8.yaml, epochs=3, patience=50, batch=16, imgsz=640, save=True, save_period=-1, cache=False, device=None, workers=8, project=None, name=None, exist_ok=False, pretrained=True, optimizer=auto, verbose=True, seed=0, deterministic=True, single_cls=False, rect=False, cos_lr=False, close_mosaic=10, resume=False, amp=True, fraction=1.0, profile=False, overlap_mask=True, mask_ratio=4, dropout=0.0, val=True, split=val, save_json=False, save_hybrid=False, conf=None, iou=0.7, max_det=300, half=False, dnn=False, plots=True, source=None, show=False, save_txt=False, save_conf=False, save_crop=False, show_labels=True, show_conf=True, vid_stride=1, line_width=None, visualize=False, augment=False, agnostic_nms=False, classes=None, retina_masks=False, boxes=True, format=torchscript, keras=False, optimize=False, int8=False, dynamic=False, simplify=False, opset=None, workspace=4, nms=False, lr0=0.01, lrf=0.01, momentum=0.937, weight_decay=0.0005, warmup_epochs=3.0, warmup_momentum=0.8, warmup_bias_lr=0.1, box=7.5, cls=0.5, dfl=1.5, pose=12.0, kobj=1.0, label_smoothing=0.0, nbs=64, hsv_h=0.015, hsv_s=0.7, hsv_v=0.4, degrees=0.0, translate=0.1, scale=0.5, shear=0.0, perspective=0.0, flipud=0.0, fliplr=0.5, mosaic=1.0, mixup=0.0, copy_paste=0.0, cfg=None, tracker=botsort.yaml, save_dir=runs/detect/train\n",
          "\n",
          " from n params module arguments \n",
          " 0 -1 1 464 ultralytics.nn.modules.conv.Conv [3, 16, 3, 2] \n",
          " 1 -1 1 4672 ultralytics.nn.modules.conv.Conv [16, 32, 3, 2] \n",
          " 2 -1 1 7360 ultralytics.nn.modules.block.C2f [32, 32, 1, True] \n",
          " 3 -1 1 18560 ultralytics.nn.modules.conv.Conv [32, 64, 3, 2] \n",
          " 4 -1 2 49664 ultralytics.nn.modules.block.C2f [64, 64, 2, True] \n",
          " 5 -1 1 73984 ultralytics.nn.modules.conv.Conv [64, 128, 3, 2] \n",
          " 6 -1 2 197632 ultralytics.nn.modules.block.C2f [128, 128, 2, True] \n",
          " 7 -1 1 295424 ultralytics.nn.modules.conv.Conv [128, 256, 3, 2] \n",
          " 8 -1 1 460288 ultralytics.nn.modules.block.C2f [256, 256, 1, True] \n",
          " 9 -1 1 164608 ultralytics.nn.modules.block.SPPF [256, 256, 5] \n",
          " 10 -1 1 0 torch.nn.modules.upsampling.Upsample [None, 2, 'nearest'] \n",
          " 11 [-1, 6] 1 0 ultralytics.nn.modules.conv.Concat [1] \n",
          " 12 -1 1 148224 ultralytics.nn.modules.block.C2f [384, 128, 1] \n",
          " 13 -1 1 0 torch.nn.modules.upsampling.Upsample [None, 2, 'nearest'] \n",
          " 14 [-1, 4] 1 0 ultralytics.nn.modules.conv.Concat [1] \n",
          " 15 -1 1 37248 ultralytics.nn.modules.block.C2f [192, 64, 1] \n",
          " 16 -1 1 36992 ultralytics.nn.modules.conv.Conv [64, 64, 3, 2] \n",
          " 17 [-1, 12] 1 0 ultralytics.nn.modules.conv.Concat [1] \n",
          " 18 -1 1 123648 ultralytics.nn.modules.block.C2f [192, 128, 1] \n",
          " 19 -1 1 147712 ultralytics.nn.modules.conv.Conv [128, 128, 3, 2] \n",
          " 20 [-1, 9] 1 0 ultralytics.nn.modules.conv.Concat [1] \n",
          " 21 -1 1 493056 ultralytics.nn.modules.block.C2f [384, 256, 1] \n",
          " 22 [15, 18, 21] 1 897664 ultralytics.nn.modules.head.Detect [80, [64, 128, 256]] \n",
          "Model summary: 225 layers, 3157200 parameters, 3157184 gradients\n",
          "\n",
          "Transferred 355/355 items from pretrained weights\n",
          "\u001b[34m\u001b[1mTensorBoard: \u001b[0mStart with 'tensorboard --logdir runs/detect/train', view at http://localhost:6006/\n",
          "\u001b[34m\u001b[1mAMP: \u001b[0mrunning Automatic Mixed Precision (AMP) checks with YOLOv8n...\n",
          "\u001b[34m\u001b[1mAMP: \u001b[0mchecks passed ✅\n",
          "\u001b[34m\u001b[1mtrain: \u001b[0mScanning /content/datasets/coco8/labels/train... 4 images, 0 backgrounds, 0 corrupt: 100% 4/4 [00:00<00:00, 860.11it/s]\n",
          "\u001b[34m\u001b[1mtrain: \u001b[0mNew cache created: /content/datasets/coco8/labels/train.cache\n",
          "\u001b[34m\u001b[1malbumentations: \u001b[0mBlur(p=0.01, blur_limit=(3, 7)), MedianBlur(p=0.01, blur_limit=(3, 7)), ToGray(p=0.01), CLAHE(p=0.01, clip_limit=(1, 4.0), tile_grid_size=(8, 8))\n",
          "\u001b[34m\u001b[1mval: \u001b[0mScanning /content/datasets/coco8/labels/val.cache... 4 images, 0 backgrounds, 0 corrupt: 100% 4/4 [00:00<?, ?it/s]\n",
          "Plotting labels to runs/detect/train/labels.jpg... \n",
          "\u001b[34m\u001b[1moptimizer:\u001b[0m AdamW(lr=0.000119, momentum=0.9) with parameter groups 57 weight(decay=0.0), 64 weight(decay=0.0005), 63 bias(decay=0.0)\n",
          "Image sizes 640 train, 640 val\n",
          "Using 2 dataloader workers\n",
          "Logging results to \u001b[1mruns/detect/train\u001b[0m\n",
          "Starting training for 3 epochs...\n",
          "\n",
          " Epoch GPU_mem box_loss cls_loss dfl_loss Instances Size\n",
          " 1/3 0.761G 0.9273 3.155 1.291 32 640: 100% 1/1 [00:01<00:00, 1.23s/it]\n",
          "/usr/local/lib/python3.10/dist-packages/torch/optim/lr_scheduler.py:139: UserWarning: Detected call of `lr_scheduler.step()` before `optimizer.step()`. In PyTorch 1.1.0 and later, you should call them in the opposite order: `optimizer.step()` before `lr_scheduler.step()`. Failure to do this will result in PyTorch skipping the first value of the learning rate schedule. See more details at https://pytorch.org/docs/stable/optim.html#how-to-adjust-learning-rate\n",
          " warnings.warn(\"Detected call of `lr_scheduler.step()` before `optimizer.step()`. \"\n",
          " Class Images Instances Box(P R mAP50 mAP50-95): 100% 1/1 [00:00<00:00, 2.21it/s]\n",
          " all 4 17 0.613 0.899 0.888 0.621\n",
          "\n",
          " Epoch GPU_mem box_loss cls_loss dfl_loss Instances Size\n",
          " 2/3 0.78G 1.161 3.126 1.517 33 640: 100% 1/1 [00:00<00:00, 9.06it/s]\n",
          " Class Images Instances Box(P R mAP50 mAP50-95): 100% 1/1 [00:00<00:00, 7.18it/s]\n",
          " all 4 17 0.601 0.896 0.888 0.613\n",
          "\n",
          " Epoch GPU_mem box_loss cls_loss dfl_loss Instances Size\n",
          " 3/3 0.757G 0.9264 2.508 1.254 17 640: 100% 1/1 [00:00<00:00, 7.32it/s]\n",
          " Class Images Instances Box(P R mAP50 mAP50-95): 100% 1/1 [00:00<00:00, 5.26it/s]\n",
          " all 4 17 0.598 0.892 0.886 0.613\n",
          "\n",
          "3 epochs completed in 0.003 hours.\n",
          "Optimizer stripped from runs/detect/train/weights/last.pt, 6.5MB\n",
          "Optimizer stripped from runs/detect/train/weights/best.pt, 6.5MB\n",
          "\n",
          "Validating runs/detect/train/weights/best.pt...\n",
          "Ultralytics YOLOv8.0.145 🚀 Python-3.10.6 torch-2.0.1+cu118 CUDA:0 (Tesla T4, 15102MiB)\n",
          "Model summary (fused): 168 layers, 3151904 parameters, 0 gradients\n",
          " Class Images Instances Box(P R mAP50 mAP50-95): 100% 1/1 [00:00<00:00, 16.58it/s]\n",
          " all 4 17 0.613 0.898 0.888 0.621\n",
          " person 4 10 0.661 0.5 0.52 0.285\n",
          " dog 4 1 0.337 1 0.995 0.597\n",
          " horse 4 2 0.723 1 0.995 0.631\n",
          " elephant 4 2 0.629 0.886 0.828 0.319\n",
          " umbrella 4 1 0.55 1 0.995 0.995\n",
          " potted plant 4 1 0.776 1 0.995 0.895\n",
          "Speed: 0.2ms preprocess, 4.6ms inference, 0.0ms loss, 1.1ms postprocess per image\n",
          "Results saved to \u001b[1mruns/detect/train\u001b[0m\n"
        ]
      }
@ -454,21 +327,21 @@
      "- 💡 ProTip: Export to [TensorRT](https://developer.nvidia.com/tensorrt) for up to 5x GPU speedup.\n",
      "\n",
      "\n",
      "| Format | `format` Argument | Model | Metadata | Arguments |\n",
      "|--------------------------------------------------------------------|-------------------|---------------------------|----------|-----------------------------------------------------|\n",
      "| [PyTorch](https://pytorch.org/) | - | `yolov8n.pt` | ✅ | - |\n",
      "| [TorchScript](https://pytorch.org/docs/stable/jit.html) | `torchscript` | `yolov8n.torchscript` | ✅ | `imgsz`, `optimize` |\n",
      "| [ONNX](https://onnx.ai/) | `onnx` | `yolov8n.onnx` | ✅ | `imgsz`, `half`, `dynamic`, `simplify`, `opset` |\n",
      "| [OpenVINO](https://docs.openvino.ai/latest/index.html) | `openvino` | `yolov8n_openvino_model/` | ✅ | `imgsz`, `half` |\n",
      "| [TensorRT](https://developer.nvidia.com/tensorrt) | `engine` | `yolov8n.engine` | ✅ | `imgsz`, `half`, `dynamic`, `simplify`, `workspace` |\n",
      "| [CoreML](https://github.com/apple/coremltools) | `coreml` | `yolov8n.mlmodel` | ✅ | `imgsz`, `half`, `int8`, `nms` |\n",
      "| [TF SavedModel](https://www.tensorflow.org/guide/saved_model) | `saved_model` | `yolov8n_saved_model/` | ✅ | `imgsz`, `keras` |\n",
      "| [TF GraphDef](https://www.tensorflow.org/api_docs/python/tf/Graph) | `pb` | `yolov8n.pb` | ❌ | `imgsz` |\n",
      "| [TF Lite](https://www.tensorflow.org/lite) | `tflite` | `yolov8n.tflite` | ✅ | `imgsz`, `half`, `int8` |\n",
      "| [TF Edge TPU](https://coral.ai/docs/edgetpu/models-intro/) | `edgetpu` | `yolov8n_edgetpu.tflite` | ✅ | `imgsz` |\n",
      "| [TF.js](https://www.tensorflow.org/js) | `tfjs` | `yolov8n_web_model/` | ✅ | `imgsz` |\n",
      "| [PaddlePaddle](https://github.com/PaddlePaddle) | `paddle` | `yolov8n_paddle_model/` | ✅ | `imgsz` |\n",
      "| [ncnn](https://github.com/Tencent/ncnn) | `ncnn` | `yolov8n_ncnn_model/` | ✅ | `imgsz`, `half` |\n"
    ],
    "metadata": {
      "id": "nPZZeNrLCQG6"
@ -484,26 +357,26 @@
        "base_uri": "https://localhost:8080/"
      },
      "id": "CYIjW4igCjqD",
      "outputId": "2b65e381-717b-4a6f-d6f5-5254c867f3a4"
    },
    "execution_count": 5,
    "outputs": [
      {
        "output_type": "stream",
        "name": "stdout",
        "text": [
          "Ultralytics YOLOv8.0.145 🚀 Python-3.10.6 torch-2.0.1+cu118 CPU (Intel Xeon 2.30GHz)\n",
          "YOLOv8n summary (fused): 168 layers, 3151904 parameters, 0 gradients\n",
          "\n",
          "\u001b[34m\u001b[1mPyTorch:\u001b[0m starting from 'yolov8n.pt' with input shape (1, 3, 640, 640) BCHW and output shape(s) (1, 84, 8400) (6.2 MB)\n",
          "\n",
          "\u001b[34m\u001b[1mTorchScript:\u001b[0m starting export with torch 2.0.1+cu118...\n",
          "\u001b[34m\u001b[1mTorchScript:\u001b[0m export success ✅ 2.8s, saved as 'yolov8n.torchscript' (12.4 MB)\n",
          "\n",
          "Export complete (4.6s)\n",
          "Results saved to \u001b[1m/content\u001b[0m\n",
          "Predict: yolo predict task=detect model=yolov8n.torchscript imgsz=640 \n",
          "Validate: yolo val task=detect model=yolov8n.torchscript imgsz=640 data=None \n",
          "Visualize: https://netron.app\n"
        ]
      }

@ -32,7 +32,7 @@ CLI:
Inference:
    $ yolo predict model=yolov8n.pt                 # PyTorch
                         yolov8n.torchscript        # TorchScript
                         yolov8n.onnx               # ONNX Runtime or OpenCV DNN with dnn=True
                         yolov8n_openvino_model     # OpenVINO
                         yolov8n.engine             # TensorRT
                         yolov8n.mlmodel            # CoreML (macOS-only)
