`ultralytics 8.0.93` HUB docs and JSON2YOLO converter (#2431)

Co-authored-by: Ayush Chaurasia <ayush.chaurarsia@gmail.com>
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
Co-authored-by: 李际朝 <tubkninght@gmail.com>
Co-authored-by: Danny Kim <imbird0312@gmail.com>

@ -1,116 +0,0 @@
---
comments: true
---
# Ultralytics HUB
<a href="https://bit.ly/ultralytics_hub" target="_blank">
<img width="100%" src="https://github.com/ultralytics/assets/raw/main/im/ultralytics-hub.png"></a>
<br>
<div align="center">
<a href="https://github.com/ultralytics" style="text-decoration:none;">
<img src="https://github.com/ultralytics/assets/raw/main/social/logo-social-github.png" width="2%" alt="" /></a>
<img src="https://github.com/ultralytics/assets/raw/main/social/logo-transparent.png" width="2%" alt="" />
<a href="https://www.linkedin.com/company/ultralytics" style="text-decoration:none;">
<img src="https://github.com/ultralytics/assets/raw/main/social/logo-social-linkedin.png" width="2%" alt="" /></a>
<img src="https://github.com/ultralytics/assets/raw/main/social/logo-transparent.png" width="2%" alt="" />
<a href="https://twitter.com/ultralytics" style="text-decoration:none;">
<img src="https://github.com/ultralytics/assets/raw/main/social/logo-social-twitter.png" width="2%" alt="" /></a>
<img src="https://github.com/ultralytics/assets/raw/main/social/logo-transparent.png" width="2%" alt="" />
<a href="https://youtube.com/ultralytics" style="text-decoration:none;">
<img src="https://github.com/ultralytics/assets/raw/main/social/logo-social-youtube.png" width="2%" alt="" /></a>
<img src="https://github.com/ultralytics/assets/raw/main/social/logo-transparent.png" width="2%" alt="" />
<a href="https://www.tiktok.com/@ultralytics" style="text-decoration:none;">
<img src="https://github.com/ultralytics/assets/raw/main/social/logo-social-tiktok.png" width="2%" alt="" /></a>
<img src="https://github.com/ultralytics/assets/raw/main/social/logo-transparent.png" width="2%" alt="" />
<a href="https://www.instagram.com/ultralytics/" style="text-decoration:none;">
<img src="https://github.com/ultralytics/assets/raw/main/social/logo-social-instagram.png" width="2%" alt="" /></a>
<br>
<br>
<a href="https://github.com/ultralytics/hub/actions/workflows/ci.yaml">
<img src="https://github.com/ultralytics/hub/actions/workflows/ci.yaml/badge.svg" alt="CI CPU"></a>
<a href="https://colab.research.google.com/github/ultralytics/hub/blob/master/hub.ipynb">
<img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"></a>
</div>
[Ultralytics HUB](https://hub.ultralytics.com) is a new no-code online tool developed
by [Ultralytics](https://ultralytics.com), the creators of the popular [YOLOv5](https://github.com/ultralytics/yolov5)
object detection and image segmentation models. With Ultralytics HUB, users can easily train and deploy YOLO models
without any coding or technical expertise.
Ultralytics HUB is designed to be user-friendly and intuitive, with a drag-and-drop interface that allows users to
easily upload their data and select their model configurations. It also offers a range of pre-trained models and
templates to choose from, making it easy for users to get started with training their own models. Once a model is
trained, it can be easily deployed and used for real-time object detection and image segmentation tasks. Overall,
Ultralytics HUB is an essential tool for anyone looking to use YOLO for their object detection and image segmentation
projects.
**[Get started now](https://hub.ultralytics.com)** and experience the power and simplicity of Ultralytics HUB for
yourself. Sign up for a free account and start building, training, and deploying YOLOv5 and YOLOv8 models today.
## 1. Upload a Dataset
Ultralytics HUB datasets are just like YOLOv5 🚀 datasets, they use the same structure and the same label formats to keep
everything simple.
When you upload a dataset to Ultralytics HUB, make sure to **place your dataset YAML inside the dataset root directory**
as in the example shown below, and then zip for upload to https://hub.ultralytics.com/. Your **dataset YAML, directory
and zip** should all share the same name. For example, if your dataset is called 'coco6' as in our
example [ultralytics/hub/coco6.zip](https://github.com/ultralytics/hub/blob/master/coco6.zip), then you should have a
coco6.yaml inside your coco6/ directory, which should zip to create coco6.zip for upload:
```bash
zip -r coco6.zip coco6
```
The example [coco6.zip](https://github.com/ultralytics/hub/blob/master/coco6.zip) dataset in this repository can be
downloaded and unzipped to see exactly how to structure your custom dataset.
<p align="center">
<img width="80%" src="https://user-images.githubusercontent.com/26833433/201424843-20fa081b-ad4b-4d6c-a095-e810775908d8.png" title="COCO6" />
</p>
The dataset YAML is the same standard YOLOv5 YAML format. See
the [YOLOv5 Train Custom Data tutorial](https://docs.ultralytics.com/yolov5/tutorials/train_custom_data) for full details.
```yaml
# Train/val/test sets as 1) dir: path/to/imgs, 2) file: path/to/imgs.txt, or 3) list: [path/to/imgs1, path/to/imgs2, ..]
path: # dataset root dir (leave empty for HUB)
train: images/train # train images (relative to 'path') 8 images
val: images/val # val images (relative to 'path') 8 images
test: # test images (optional)
# Classes
names:
0: person
1: bicycle
2: car
3: motorcycle
...
```
After zipping your dataset, sign in to [Ultralytics HUB](https://bit.ly/ultralytics_hub) and click the Datasets tab.
Click 'Upload Dataset' to upload, scan and visualize your new dataset before training new YOLOv5 models on it!
<img width="100%" alt="HUB Dataset Upload" src="https://user-images.githubusercontent.com/26833433/216763338-9a8812c8-a4e5-4362-8102-40dad7818396.png">
## 2. Train a Model
Connect to the Ultralytics HUB notebook and use your model API key to begin training!
<a href="https://colab.research.google.com/github/ultralytics/hub/blob/master/hub.ipynb" target="_blank">
<img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"></a>
## 3. Deploy to Real World
Export your model to 13 different formats, including TensorFlow, ONNX, OpenVINO, CoreML, Paddle and many others. Run
models directly on your [iOS](https://apps.apple.com/xk/app/ultralytics/id1583935240) or
[Android](https://play.google.com/store/apps/details?id=com.ultralytics.ultralytics_app) mobile device by downloading
the [Ultralytics App](https://ultralytics.com/app_install)!
## ❓ Issues
If you are a new [Ultralytics HUB](https://bit.ly/ultralytics_hub) user and have questions or comments, you are in the
right place! Please raise a [New Issue](https://github.com/ultralytics/hub/issues/new/choose) and let us know what we
can do to make your life better 😃!

@ -0,0 +1,7 @@
---
comments: true
---
# 🚧 Page Under Construction ⚒
This page is currently under construction! 👷 Please check back later for updates. 😃🔜

@ -2,7 +2,7 @@
comments: true
---
# Ultralytics HUB App
<a href="https://bit.ly/ultralytics_hub" target="_blank">
<img width="100%" src="https://github.com/ultralytics/assets/raw/main/im/ultralytics-hub.png"></a>

@ -0,0 +1,7 @@
---
comments: true
---
# 🚧 Page Under Construction ⚒
This page is currently under construction! 👷 Please check back later for updates. 😃🔜

@ -0,0 +1,51 @@
---
comments: true
---
# HUB Datasets
## Upload a Dataset
Ultralytics HUB datasets use the same structure and the same label formats as YOLOv5 and YOLOv8 🚀 datasets, to keep
everything simple.
When you upload a dataset to Ultralytics HUB, make sure to **place your dataset YAML inside the dataset root directory**
as in the example shown below, and then zip it for upload to https://hub.ultralytics.com/. Your **dataset YAML, directory
and zip** should all share the same name. For example, if your dataset is called 'coco6', as in our
example [ultralytics/hub/coco6.zip](https://github.com/ultralytics/hub/blob/master/coco6.zip), then you should have a
coco6.yaml inside your coco6/ directory, which zips to create coco6.zip for upload:
```bash
zip -r coco6.zip coco6
```
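If you prefer to stay in Python, the same archive can be created with the standard library. This is a minimal sketch, assuming your `coco6/` directory (with `coco6.yaml` inside it) sits in the current working directory:
```python
from pathlib import Path
import shutil

dataset_dir = Path("coco6")  # dataset root containing coco6.yaml, images/ and labels/
assert (dataset_dir / "coco6.yaml").exists(), "dataset YAML must sit inside the dataset root"

# Equivalent to `zip -r coco6.zip coco6`: creates coco6.zip containing the coco6/ folder
shutil.make_archive(str(dataset_dir), "zip", root_dir=".", base_dir=dataset_dir.name)
```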
The example [coco6.zip](https://github.com/ultralytics/hub/blob/master/coco6.zip) dataset in this repository can be
downloaded and unzipped to see exactly how to structure your custom dataset.
<p align="center">
<img width="80%" src="https://user-images.githubusercontent.com/26833433/201424843-20fa081b-ad4b-4d6c-a095-e810775908d8.png" title="COCO6" />
</p>
The dataset YAML uses the same standard YOLOv5 and YOLOv8 YAML format. See
the [YOLOv5 and YOLOv8 Train Custom Data tutorial](https://docs.ultralytics.com/yolov5/tutorials/train_custom_data/) for full details.
```yaml
# Train/val/test sets as 1) dir: path/to/imgs, 2) file: path/to/imgs.txt, or 3) list: [path/to/imgs1, path/to/imgs2, ..]
path: # dataset root dir (leave empty for HUB)
train: images/train # train images (relative to 'path') 8 images
val: images/val # val images (relative to 'path') 8 images
test: # test images (optional)
# Classes
names:
  0: person
  1: bicycle
  2: car
  3: motorcycle
  ...
```
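Before zipping, it can also help to check that the YAML parses cleanly and contains the required keys. A small sketch, assuming PyYAML is installed (`pip install pyyaml`); this is not something HUB itself requires:
```python
import yaml

with open("coco6/coco6.yaml") as f:
    data = yaml.safe_load(f)

missing = [k for k in ("train", "val", "names") if k not in data]
assert not missing, f"dataset YAML is missing keys: {missing}"
print(f"{len(data['names'])} classes found, e.g. {list(data['names'].values())[:3]}")
```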
After zipping your dataset, sign in to [Ultralytics HUB](https://bit.ly/ultralytics_hub) and click the Datasets tab.
Click 'Upload Dataset' to upload, scan and visualize your new dataset before training new YOLOv5 or YOLOv8 models on it!
<img width="100%" alt="HUB Dataset Upload" src="https://user-images.githubusercontent.com/26833433/216763338-9a8812c8-a4e5-4362-8102-40dad7818396.png">

@ -0,0 +1,41 @@
---
comments: true
---
# Ultralytics HUB
<a href="https://bit.ly/ultralytics_hub" target="_blank">
<img width="100%" src="https://github.com/ultralytics/assets/raw/main/im/ultralytics-hub.png"></a>
<br>
<br>
<div align="center">
<a href="https://github.com/ultralytics/hub/actions/workflows/ci.yaml">
<img src="https://github.com/ultralytics/hub/actions/workflows/ci.yaml/badge.svg" alt="CI CPU"></a>
<a href="https://colab.research.google.com/github/ultralytics/hub/blob/master/hub.ipynb">
<img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"></a>
</div>
<br>
👋 Hello from the [Ultralytics](https://ultralytics.com/) Team! We've been working hard these last few months to
launch [Ultralytics HUB](https://bit.ly/ultralytics_hub), a new web tool for training and deploying all your YOLOv5 and YOLOv8 🚀
models from one spot!
## Introduction
HUB is designed to be user-friendly and intuitive, with a drag-and-drop interface that lets users upload their data and
train new models quickly. It offers a range of pre-trained models and templates to choose from, so users can get
started training their own models right away. Once a model is
trained, it can be deployed and used for real-time object detection, instance segmentation and classification tasks.
We hope that the resources here will help you get the most out of HUB. Please browse the HUB <a href="https://docs.ultralytics.com/hub">Docs</a> for details, raise an issue on <a href="https://github.com/ultralytics/hub/issues/new/choose">GitHub</a> for support, and join our <a href="https://discord.gg/n6cFeSPZdD">Discord</a> community for questions and discussions!
- [**Quickstart**](./quickstart.md). Start training and deploying YOLO models with HUB in seconds.
- [**Datasets: Preparing and Uploading**](./datasets.md). Learn how to prepare and upload your datasets to HUB in YOLO format.
- [**Projects: Creating and Managing**](./projects.md). Group your models into projects for improved organization.
- [**Models: Training and Exporting**](./models.md). Train YOLOv5 and YOLOv8 models on your custom datasets and export them to various formats for deployment.
- [**Integrations: Options**](./integrations.md). Explore different integration options for your trained models, such as TensorFlow, ONNX, OpenVINO, CoreML, and PaddlePaddle.
- [**Ultralytics HUB App**](./app/index.md). Learn about the Ultralytics App for iOS and Android, which allows you to run models directly on your mobile device.
- [**iOS**](./app/ios.md)
- [**Android**](./app/android.md)
- [**Inference API**](./inference_api.md). Understand how to use the Inference API for running your trained models in the cloud to generate predictions.

@ -0,0 +1,460 @@
---
comments: true
---
# 🚧 Page Under Construction ⚒
This page is currently under construction! 👷 Please check back later for updates. 😃🔜
# YOLO Inference API
The YOLO Inference API lets you run YOLOv8 models via a RESTful API, so you can generate predictions on images without installing and setting up a YOLOv8 environment locally.
## API URL
The API URL is the address used to access the YOLO Inference API. In this case, the base URL is:
```
https://api.ultralytics.com/v1/predict
```
## Example Usage in Python
To access the YOLO Inference API with the specified model and API key using Python, you can use the following code:
```python
import requests

# API URL, use actual MODEL_ID
url = "https://api.ultralytics.com/v1/predict/MODEL_ID"

# Headers, use actual API_KEY
headers = {"x-api-key": "API_KEY"}

# Inference arguments (optional)
data = {"size": 640, "confidence": 0.25, "iou": 0.45}

# Load image and send request
with open("path/to/image.jpg", "rb") as image_file:
    files = {"image": image_file}
    response = requests.post(url, headers=headers, files=files, data=data)

print(response.json())
```
In this example, replace `API_KEY` with your actual API key, `MODEL_ID` with the desired model ID, and `path/to/image.jpg` with the path to the image you want to analyze.
## Example Usage with CLI
You can use the YOLO Inference API with the command-line interface (CLI) by utilizing the `curl` command. Replace `API_KEY` with your actual API key, `MODEL_ID` with the desired model ID, and `image.jpg` with the path to the image you want to analyze:
```commandline
curl -X POST "https://api.ultralytics.com/v1/predict/MODEL_ID" \
-H "x-api-key: API_KEY" \
-F "image=@/path/to/image.jpg" \
-F "size=640" \
-F "confidence=0.25" \
-F "iou=0.45"
```
## Passing Arguments
The `curl` command above sends a POST request to the YOLO Inference API with the specified `MODEL_ID` in the URL and your `API_KEY` in the request headers, along with the image file specified by `@/path/to/image.jpg`.
Here's an example of passing the `size`, `confidence`, and `iou` arguments via the API URL using the `requests` library in Python:
```python
import requests

# API URL, use actual MODEL_ID
url = "https://api.ultralytics.com/v1/predict/MODEL_ID"

# Headers, use actual API_KEY
headers = {"x-api-key": "API_KEY"}

# Inference arguments (optional)
data = {"size": 640, "confidence": 0.25, "iou": 0.45}

# Load image and send request
with open("path/to/image.jpg", "rb") as image_file:
    files = {"image": image_file}
    response = requests.post(url, headers=headers, files=files, data=data)

print(response.json())
```
In this example, the `data` dictionary contains the query arguments `size`, `confidence`, and `iou`, which tell the API to run inference at image size 640 with confidence and IoU thresholds of 0.25 and 0.45.
This will send the query parameters along with the file in the POST request. See the table below for a full list of available inference arguments.
| Argument | Default | Type | Notes |
|--------------|---------|---------|-----------------------------------------|
| `size` | `640` | `int` | allowable range is `32` - `1280` pixels |
| `confidence` | `0.25` | `float` | allowable range is `0.01` - `1.0` |
| `iou` | `0.45` | `float` | allowable range is `0.0` - `0.95` |
| `url` | `''` | `str` | |
| `normalize` | `False` | `bool` | |
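For instance, the optional `url` and `normalize` arguments can be passed in the same `data` dictionary. The sketch below is illustrative only: it assumes `url` accepts a publicly reachable image URL in place of a file upload, and that `normalize` requests coordinates in the 0-1 range, since the table above does not spell out their behaviour.
```python
import requests

# API URL and headers, use actual MODEL_ID and API_KEY
api_url = "https://api.ultralytics.com/v1/predict/MODEL_ID"
headers = {"x-api-key": "API_KEY"}

# Hypothetical usage: send an image by URL instead of uploading a file
data = {"url": "https://ultralytics.com/images/bus.jpg", "normalize": True, "size": 640}

response = requests.post(api_url, headers=headers, data=data)
print(response.json())
```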
## Return JSON format
The YOLO Inference API returns a JSON response with the detection results. The `data` field of the response uses the same format as the output produced locally by the `results[0].tojson()` command.
The JSON contains information about the detected objects, their coordinates, classes, and confidence scores.
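As a quick illustration of consuming that response in Python (field names are taken from the example responses below; converting the normalized `box` coordinates back to pixels assumes you know the original image dimensions):
```python
result = response.json()  # response from the requests.post() call above

img_w, img_h = 1280, 720  # assumed original image size in pixels
for det in result["data"]:
    box = det["box"]
    x1, y1 = box["x1"] * img_w, box["y1"] * img_h
    x2, y2 = box["x2"] * img_w, box["y2"] * img_h
    print(f"{det['name']} {det['confidence']:.2f}: ({x1:.0f}, {y1:.0f}) -> ({x2:.0f}, {y2:.0f})")
```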
### Detect Model Format
YOLO detection models, such as `yolov8n.pt`, can return JSON responses from local inference, CLI API inference, and Python API inference. All of these methods produce the same JSON response format.
!!! example "Detect Model JSON Response"
=== "Local"
```python
from ultralytics import YOLO
# Load model
model = YOLO('yolov8n.pt')
# Run inference
results = model('image.jpg')
# Print image.jpg results in JSON format
print(results[0].tojson())
```
=== "CLI API"
```commandline
curl -X POST "https://api.ultralytics.com/v1/predict/MODEL_ID" \
-H "x-api-key: API_KEY" \
-F "image=@/path/to/image.jpg" \
-F "size=640" \
-F "confidence=0.25" \
-F "iou=0.45"
```
=== "Python API"
```python
import requests
# API URL, use actual MODEL_ID
url = f"https://api.ultralytics.com/v1/predict/MODEL_ID"
# Headers, use actual API_KEY
headers = {"x-api-key": "API_KEY"}
# Inference arguments (optional)
data = {"size": 640, "confidence": 0.25, "iou": 0.45}
# Load image and send request
with open("path/to/image.jpg", "rb") as image_file:
files = {"image": image_file}
response = requests.post(url, headers=headers, files=files, data=data)
print(response.json())
```
=== "JSON Response"
```json
{
"success": True,
"message": "Inference complete.",
"data": [
{
"name": "person",
"class": 0,
"confidence": 0.8359682559967041,
"box": {
"x1": 0.08974208831787109,
"y1": 0.27418340047200523,
"x2": 0.8706787109375,
"y2": 0.9887352837456598
}
},
{
"name": "person",
"class": 0,
"confidence": 0.8189555406570435,
"box": {
"x1": 0.5847355842590332,
"y1": 0.05813225640190972,
"x2": 0.8930277824401855,
"y2": 0.9903111775716146
}
},
{
"name": "tie",
"class": 27,
"confidence": 0.2909725308418274,
"box": {
"x1": 0.3433395862579346,
"y1": 0.6070465511745877,
"x2": 0.40964522361755373,
"y2": 0.9849439832899306
}
}
]
}
```
### Segment Model Format
YOLO segmentation models, such as `yolov8n-seg.pt`, can return JSON responses from local inference, CLI API inference, and Python API inference. All of these methods produce the same JSON response format.
!!! example "Segment Model JSON Response"
=== "Local"
```python
from ultralytics import YOLO
# Load model
model = YOLO('yolov8n-seg.pt')
# Run inference
results = model('image.jpg')
# Print image.jpg results in JSON format
print(results[0].tojson())
```
=== "CLI API"
```commandline
curl -X POST "https://api.ultralytics.com/v1/predict/MODEL_ID" \
-H "x-api-key: API_KEY" \
-F "image=@/path/to/image.jpg" \
-F "size=640" \
-F "confidence=0.25" \
-F "iou=0.45"
```
=== "Python API"
```python
import requests
# API URL, use actual MODEL_ID
url = f"https://api.ultralytics.com/v1/predict/MODEL_ID"
# Headers, use actual API_KEY
headers = {"x-api-key": "API_KEY"}
# Inference arguments (optional)
data = {"size": 640, "confidence": 0.25, "iou": 0.45}
# Load image and send request
with open("path/to/image.jpg", "rb") as image_file:
files = {"image": image_file}
response = requests.post(url, headers=headers, files=files, data=data)
print(response.json())
```
=== "JSON Response"
Note `segments` `x` and `y` lengths may vary from one object to another. Larger or more complex objects may have more segment points.
```json
{
"success": True,
"message": "Inference complete.",
"data": [
{
"name": "person",
"class": 0,
"confidence": 0.856913149356842,
"box": {
"x1": 0.1064866065979004,
"y1": 0.2798851860894097,
"x2": 0.8738358497619629,
"y2": 0.9894873725043403
},
"segments": {
"x": [
0.421875,
0.4203124940395355,
0.41718751192092896
...
],
"y": [
0.2888889014720917,
0.2916666567325592,
0.2916666567325592
...
]
}
},
{
"name": "person",
"class": 0,
"confidence": 0.8512625694274902,
"box": {
"x1": 0.5757311820983887,
"y1": 0.053943040635850696,
"x2": 0.8960096359252929,
"y2": 0.985154045952691
},
"segments": {
"x": [
0.7515624761581421,
0.75,
0.7437499761581421
...
],
"y": [
0.0555555559694767,
0.05833333358168602,
0.05833333358168602
...
]
}
},
{
"name": "tie",
"class": 27,
"confidence": 0.6485961675643921,
"box": {
"x1": 0.33911995887756347,
"y1": 0.6057066175672743,
"x2": 0.4081430912017822,
"y2": 0.9916408962673611
},
"segments": {
"x": [
0.37187498807907104,
0.37031251192092896,
0.3687500059604645
...
],
"y": [
0.6111111044883728,
0.6138888597488403,
0.6138888597488403
...
]
}
}
]
}
```
### Pose Model Format
YOLO pose models, such as `yolov8n-pose.pt`, can return JSON responses from local inference, CLI API inference, and Python API inference. All of these methods produce the same JSON response format.
!!! example "Pose Model JSON Response"
=== "Local"
```python
from ultralytics import YOLO
# Load model
model = YOLO('yolov8n-pose.pt')
# Run inference
results = model('image.jpg')
# Print image.jpg results in JSON format
print(results[0].tojson())
```
=== "CLI API"
```commandline
curl -X POST "https://api.ultralytics.com/v1/predict/MODEL_ID" \
-H "x-api-key: API_KEY" \
-F "image=@/path/to/image.jpg" \
-F "size=640" \
-F "confidence=0.25" \
-F "iou=0.45"
```
=== "Python API"
```python
import requests
# API URL, use actual MODEL_ID
url = f"https://api.ultralytics.com/v1/predict/MODEL_ID"
# Headers, use actual API_KEY
headers = {"x-api-key": "API_KEY"}
# Inference arguments (optional)
data = {"size": 640, "confidence": 0.25, "iou": 0.45}
# Load image and send request
with open("path/to/image.jpg", "rb") as image_file:
files = {"image": image_file}
response = requests.post(url, headers=headers, files=files, data=data)
print(response.json())
```
=== "JSON Response"
Note that COCO-keypoints pretrained models return 17 human keypoints. The `visible` value for each keypoint indicates whether it is visible or obscured. Obscured keypoints may fall outside the image or simply not be visible, e.g. a person's eyes when facing away from the camera.
```json
{
"success": True,
"message": "Inference complete.",
"data": [
{
"name": "person",
"class": 0,
"confidence": 0.8439509868621826,
"box": {
"x1": 0.1125,
"y1": 0.28194444444444444,
"x2": 0.7953125,
"y2": 0.9902777777777778
},
"keypoints": {
"x": [
0.5058594942092896,
0.5103894472122192,
0.4920862317085266
...
],
"y": [
0.48964157700538635,
0.4643048942089081,
0.4465252459049225
...
],
"visible": [
0.8726999163627625,
0.653947651386261,
0.9130823612213135
...
]
}
},
{
"name": "person",
"class": 0,
"confidence": 0.7474289536476135,
"box": {
"x1": 0.58125,
"y1": 0.0625,
"x2": 0.8859375,
"y2": 0.9888888888888889
},
"keypoints": {
"x": [
0.778544008731842,
0.7976160049438477,
0.7530890107154846
...
],
"y": [
0.27595141530036926,
0.2378823608160019,
0.23644638061523438
...
],
"visible": [
0.8900790810585022,
0.789978563785553,
0.8974530100822449
...
]
}
}
]
}
```

@ -0,0 +1,7 @@
---
comments: true
---
# 🚧 Page Under Construction ⚒
This page is currently under construction! 👷 Please check back later for updates. 😃🔜

@ -0,0 +1,20 @@
---
comments: true
---
# HUB Models
## Train a Model
Connect to the Ultralytics HUB notebook and use your model API key to begin training!
<a href="https://colab.research.google.com/github/ultralytics/hub/blob/master/hub.ipynb" target="_blank">
<img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"></a>
## Deploy to Real World
Export your model to 13 different formats, including TensorFlow, ONNX, OpenVINO, CoreML, Paddle and many others. Run
models directly on your [iOS](https://apps.apple.com/xk/app/ultralytics/id1583935240) or
[Android](https://play.google.com/store/apps/details?id=com.ultralytics.ultralytics_app) mobile device by downloading
the [Ultralytics App](https://ultralytics.com/app_install)!
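For reference, exporting a trained model from Python uses the standard `ultralytics` export API; a short sketch (substitute your own trained weights for `yolov8n.pt`):
```python
from ultralytics import YOLO

model = YOLO("yolov8n.pt")  # or the weights downloaded from your HUB model page

# Export to ONNX; other formats include 'engine', 'coreml', 'tflite', 'openvino', ...
model.export(format="onnx")
```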

@ -0,0 +1,7 @@
---
comments: true
---
# 🚧 Page Under Construction ⚒
This page is currently under construction! 👷 Please check back later for updates. 😃🔜

@ -1,429 +0,0 @@
---
comments: true
---
# YOLO Inference API (UNDER CONSTRUCTION)
The YOLO Inference API allows you to access the YOLOv8 object detection capabilities via a RESTful API. This enables you to run object detection on images without the need to install and set up the YOLOv8 environment locally.
## API URL
The API URL is the address used to access the YOLO Inference API. In this case, the base URL is:
```
https://api.ultralytics.com/inference/v1
```
To access the API with a specific model and your API key, you can include them as query parameters in the API URL. The `model` parameter refers to the `MODEL_ID` you want to use for inference, and the `key` parameter corresponds to your `API_KEY`.
The complete API URL with the model and API key parameters would be:
```
https://api.ultralytics.com/inference/v1?model=MODEL_ID&key=API_KEY
```
Replace `MODEL_ID` with the ID of the model you want to use and `API_KEY` with your actual API key from [https://hub.ultralytics.com/settings?tab=api+keys](https://hub.ultralytics.com/settings?tab=api+keys).
## Example Usage in Python
To access the YOLO Inference API with the specified model and API key using Python, you can use the following code:
```python
import requests
api_key = "API_KEY"
model_id = "MODEL_ID"
url = f"https://api.ultralytics.com/inference/v1?model={model_id}&key={api_key}"
image_path = "image.jpg"
with open(image_path, "rb") as image_file:
files = {"image": image_file}
response = requests.post(url, files=files)
print(response.text)
```
In this example, replace `API_KEY` with your actual API key, `MODEL_ID` with the desired model ID, and `image.jpg` with the path to the image you want to analyze.
## Example Usage with CLI
You can use the YOLO Inference API with the command-line interface (CLI) by utilizing the `curl` command. Replace `API_KEY` with your actual API key, `MODEL_ID` with the desired model ID, and `image.jpg` with the path to the image you want to analyze:
```commandline
curl -X POST -F image=@image.jpg "https://api.ultralytics.com/inference/v1?model=MODEL_ID&key=API_KEY"
```
## Passing Arguments
This command sends a POST request to the YOLO Inference API with the specified `model` and `key` parameters in the URL, along with the image file specified by `@image.jpg`.
Here's an example of passing the `model`, `key`, and `normalize` arguments via the API URL using the `requests` library in Python:
```python
import requests
api_key = "API_KEY"
model_id = "MODEL_ID"
url = "https://api.ultralytics.com/inference/v1"
# Define your query parameters
params = {
"key": api_key,
"model": model_id,
"normalize": "True"
}
image_path = "image.jpg"
with open(image_path, "rb") as image_file:
files = {"image": image_file}
response = requests.post(url, files=files, params=params)
print(response.text)
```
In this example, the `params` dictionary contains the query parameters `key`, `model`, and `normalize`, which tells the API to return all values in normalized image coordinates from 0 to 1. The `normalize` parameter is set to `"True"` as a string since query parameters should be passed as strings. These query parameters are then passed to the `requests.post()` function.
This will send the query parameters along with the file in the POST request. Make sure to consult the API documentation for the list of available arguments and their expected values.
## Return JSON format
The YOLO Inference API returns a JSON list with the detection results. The format of the JSON list will be the same as the one produced locally by the `results[0].tojson()` command.
The JSON list contains information about the detected objects, their coordinates, classes, and confidence scores.
### Detect Model Format
YOLO detection models, such as `yolov8n.pt`, can return JSON responses from local inference, CLI API inference, and Python API inference. All of these methods produce the same JSON response format.
!!! example "Detect Model JSON Response"
=== "Local"
```python
from ultralytics import YOLO
# Load model
model = YOLO('yolov8n.pt')
# Run inference
results = model('image.jpg')
# Print image.jpg results in JSON format
print(results[0].tojson())
```
=== "CLI API"
```commandline
curl -X POST -F image=@image.jpg https://api.ultralytics.com/inference/v1?model=MODEL_ID,key=API_KEY
```
=== "Python API"
```python
import requests
api_key = "API_KEY"
model_id = "MODEL_ID"
url = "https://api.ultralytics.com/inference/v1"
# Define your query parameters
params = {
"key": api_key,
"model": model_id,
}
image_path = "image.jpg"
with open(image_path, "rb") as image_file:
files = {"image": image_file}
response = requests.post(url, files=files, params=params)
print(response.text)
```
=== "JSON Response"
```json
[
{
"name": "person",
"class": 0,
"confidence": 0.8359682559967041,
"box": {
"x1": 0.08974208831787109,
"y1": 0.27418340047200523,
"x2": 0.8706787109375,
"y2": 0.9887352837456598
}
},
{
"name": "person",
"class": 0,
"confidence": 0.8189555406570435,
"box": {
"x1": 0.5847355842590332,
"y1": 0.05813225640190972,
"x2": 0.8930277824401855,
"y2": 0.9903111775716146
}
},
{
"name": "tie",
"class": 27,
"confidence": 0.2909725308418274,
"box": {
"x1": 0.3433395862579346,
"y1": 0.6070465511745877,
"x2": 0.40964522361755373,
"y2": 0.9849439832899306
}
}
]
```
### Segment Model Format
YOLO segmentation models, such as `yolov8n-seg.pt`, can return JSON responses from local inference, CLI API inference, and Python API inference. All of these methods produce the same JSON response format.
!!! example "Segment Model JSON Response"
=== "Local"
```python
from ultralytics import YOLO
# Load model
model = YOLO('yolov8n-seg.pt')
# Run inference
results = model('image.jpg')
# Print image.jpg results in JSON format
print(results[0].tojson())
```
=== "CLI API"
```commandline
curl -X POST -F image=@image.jpg https://api.ultralytics.com/inference/v1?model=MODEL_ID,key=API_KEY
```
=== "Python API"
```python
import requests
api_key = "API_KEY"
model_id = "MODEL_ID"
url = "https://api.ultralytics.com/inference/v1"
# Define your query parameters
params = {
"key": api_key,
"model": model_id,
}
image_path = "image.jpg"
with open(image_path, "rb") as image_file:
files = {"image": image_file}
response = requests.post(url, files=files, params=params)
print(response.text)
```
=== "JSON Response"
Note `segments` `x` and `y` lengths may vary from one object to another. Larger or more complex objects may have more segment points.
```json
[
{
"name": "person",
"class": 0,
"confidence": 0.856913149356842,
"box": {
"x1": 0.1064866065979004,
"y1": 0.2798851860894097,
"x2": 0.8738358497619629,
"y2": 0.9894873725043403
},
"segments": {
"x": [
0.421875,
0.4203124940395355,
0.41718751192092896
...
],
"y": [
0.2888889014720917,
0.2916666567325592,
0.2916666567325592
...
]
}
},
{
"name": "person",
"class": 0,
"confidence": 0.8512625694274902,
"box": {
"x1": 0.5757311820983887,
"y1": 0.053943040635850696,
"x2": 0.8960096359252929,
"y2": 0.985154045952691
},
"segments": {
"x": [
0.7515624761581421,
0.75,
0.7437499761581421
...
],
"y": [
0.0555555559694767,
0.05833333358168602,
0.05833333358168602
...
]
}
},
{
"name": "tie",
"class": 27,
"confidence": 0.6485961675643921,
"box": {
"x1": 0.33911995887756347,
"y1": 0.6057066175672743,
"x2": 0.4081430912017822,
"y2": 0.9916408962673611
},
"segments": {
"x": [
0.37187498807907104,
0.37031251192092896,
0.3687500059604645
...
],
"y": [
0.6111111044883728,
0.6138888597488403,
0.6138888597488403
...
]
}
}
]
```
### Pose Model Format
YOLO pose models, such as `yolov8n-pose.pt`, can return JSON responses from local inference, CLI API inference, and Python API inference. All of these methods produce the same JSON response format.
!!! example "Pose Model JSON Response"
=== "Local"
```python
from ultralytics import YOLO
# Load model
model = YOLO('yolov8n-seg.pt')
# Run inference
results = model('image.jpg')
# Print image.jpg results in JSON format
print(results[0].tojson())
```
=== "CLI API"
```commandline
curl -X POST -F image=@image.jpg https://api.ultralytics.com/inference/v1?model=MODEL_ID,key=API_KEY
```
=== "Python API"
```python
import requests
api_key = "API_KEY"
model_id = "MODEL_ID"
url = "https://api.ultralytics.com/inference/v1"
# Define your query parameters
params = {
"key": api_key,
"model": model_id,
}
image_path = "image.jpg"
with open(image_path, "rb") as image_file:
files = {"image": image_file}
response = requests.post(url, files=files, params=params)
print(response.text)
```
=== "JSON Response"
Note COCO-keypoints pretrained models will have 17 human keypoints. The `visible` part of the keypoints indicates whether a keypoint is visible or obscured. Obscured keypoints may be outside the image or may not be visible, i.e. a person's eyes facing away from the camera.
```json
[
{
"name": "person",
"class": 0,
"confidence": 0.8439509868621826,
"box": {
"x1": 0.1125,
"y1": 0.28194444444444444,
"x2": 0.7953125,
"y2": 0.9902777777777778
},
"keypoints": {
"x": [
0.5058594942092896,
0.5103894472122192,
0.4920862317085266
...
],
"y": [
0.48964157700538635,
0.4643048942089081,
0.4465252459049225
...
],
"visible": [
0.8726999163627625,
0.653947651386261,
0.9130823612213135
...
]
}
},
{
"name": "person",
"class": 0,
"confidence": 0.7474289536476135,
"box": {
"x1": 0.58125,
"y1": 0.0625,
"x2": 0.8859375,
"y2": 0.9888888888888889
},
"keypoints": {
"x": [
0.778544008731842,
0.7976160049438477,
0.7530890107154846
...
],
"y": [
0.27595141530036926,
0.2378823608160019,
0.23644638061523438
...
],
"visible": [
0.8900790810585022,
0.789978563785553,
0.8974530100822449
...
]
}
}
]
```

@ -58,8 +58,8 @@ the intended use case and can be used effectively in the target environment.
| `optimize` | `False` | TorchScript: optimize for mobile |
| `half` | `False` | FP16 quantization |
| `int8` | `False` | INT8 quantization |
| `dynamic` | `False` | ONNX/TensorRT: dynamic axes |
| `simplify` | `False` | ONNX/TensorRT: simplify model |
| `opset` | `None` | ONNX: opset version (optional, defaults to latest) |
| `workspace` | `4` | TensorRT: workspace size (GB) |
| `nms` | `False` | CoreML: add NMS |
@ -69,17 +69,17 @@ the intended use case and can be used effectively in the target environment.
Available YOLOv8 export formats are in the table below. You can export to any format using the `format` argument,
i.e. `format='onnx'` or `format='engine'`.
| Format | `format` Argument | Model | Metadata | Arguments |
|--------|-------------------|-------|----------|-----------|
| [PyTorch](https://pytorch.org/) | - | `yolov8n.pt` | ✅ | - |
| [TorchScript](https://pytorch.org/docs/stable/jit.html) | `torchscript` | `yolov8n.torchscript` | ✅ | `imgsz`, `optimize` |
| [ONNX](https://onnx.ai/) | `onnx` | `yolov8n.onnx` | ✅ | `imgsz`, `half`, `dynamic`, `simplify`, `opset` |
| [OpenVINO](https://docs.openvino.ai/latest/index.html) | `openvino` | `yolov8n_openvino_model/` | ✅ | `imgsz`, `half` |
| [TensorRT](https://developer.nvidia.com/tensorrt) | `engine` | `yolov8n.engine` | ✅ | `imgsz`, `half`, `dynamic`, `simplify`, `workspace` |
| [CoreML](https://github.com/apple/coremltools) | `coreml` | `yolov8n.mlmodel` | ✅ | `imgsz`, `half`, `int8`, `nms` |
| [TF SavedModel](https://www.tensorflow.org/guide/saved_model) | `saved_model` | `yolov8n_saved_model/` | ✅ | `imgsz`, `keras` |
| [TF GraphDef](https://www.tensorflow.org/api_docs/python/tf/Graph) | `pb` | `yolov8n.pb` | ❌ | `imgsz` |
| [TF Lite](https://www.tensorflow.org/lite) | `tflite` | `yolov8n.tflite` | ✅ | `imgsz`, `half`, `int8` |
| [TF Edge TPU](https://coral.ai/docs/edgetpu/models-intro/) | `edgetpu` | `yolov8n_edgetpu.tflite` | ✅ | `imgsz` |
| [TF.js](https://www.tensorflow.org/js) | `tfjs` | `yolov8n_web_model/` | ✅ | `imgsz` |
| [PaddlePaddle](https://github.com/PaddlePaddle) | `paddle` | `yolov8n_paddle_model/` | ✅ | `imgsz` |
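As a quick illustration of how the format-specific arguments in the table combine in practice, a hedged Python sketch (argument values are examples only; TensorRT export additionally requires a CUDA device):
```python
from ultralytics import YOLO

model = YOLO("yolov8n.pt")

# ONNX export with dynamic input shapes and graph simplification
model.export(format="onnx", dynamic=True, simplify=True)

# TensorRT export at FP16 with a 4 GB builder workspace
model.export(format="engine", half=True, workspace=4)
```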

@ -69,7 +69,7 @@ whether each source can be used in streaming mode with `stream=True` ✅ and an
All supported arguments:
| Key | Value | Description |
|----------------|------------------------|--------------------------------------------------------------------------------|
| `source` | `'ultralytics/assets'` | source directory for images or videos |
| `conf` | `0.25` | object confidence threshold for detection |
| `iou` | `0.7` | intersection over union (IoU) threshold for NMS |
@ -84,7 +84,7 @@ All supported arguments:
| `hide_conf` | `False` | hide confidence scores |
| `max_det` | `300` | maximum number of detections per image |
| `vid_stride` | `False` | video frame-rate stride |
| `line_width` | `None` | The line width of the bounding boxes. If None, it is scaled to the image size. |
| `visualize` | `False` | visualize model features |
| `augment` | `False` | apply image augmentation to prediction sources |
| `agnostic_nms` | `False` | class-agnostic NMS |
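For context, the renamed `line_width` argument is passed to `predict()` like any other setting. A short sketch (the source URL and values are illustrative):
```python
from ultralytics import YOLO

model = YOLO("yolov8n.pt")

# Draw 2 px boxes instead of the auto-scaled default (line_width=None)
results = model.predict(source="https://ultralytics.com/images/bus.jpg", conf=0.25, line_width=2)
```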
@ -221,9 +221,9 @@ masks, classification logits, etc.) found in the results object
cv2.imshow("result", res_plotted) cv2.imshow("result", res_plotted)
``` ```
| Argument | Description | | Argument | Description |
|--------------------------------|----------------------------------------------------------------------------------------| |-------------------------------|----------------------------------------------------------------------------------------|
| `conf (bool)` | Whether to plot the detection confidence score. | | `conf (bool)` | Whether to plot the detection confidence score. |
| `line_width (float, optional)` | The line width of the bounding boxes. If None, it is scaled to the image size. | | `line_width (int, optional)` | The line width of the bounding boxes. If None, it is scaled to the image size. |
| `font_size (float, optional)` | The font size of the text. If None, it is scaled to the image size. | | `font_size (float, optional)` | The font size of the text. If None, it is scaled to the image size. |
| `font (str)` | The font to use for the text. | | `font (str)` | The font to use for the text. |
| `pil (bool)` | Whether to use PIL for image plotting. | | `pil (bool)` | Whether to use PIL for image plotting. |
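A short sketch of how these `plot()` arguments are used on a prediction result (the image path is illustrative):
```python
import cv2
from ultralytics import YOLO

model = YOLO("yolov8n.pt")
results = model("image.jpg")

# Render labels and confidences with 2 px box lines; returns a BGR numpy array
res_plotted = results[0].plot(conf=True, line_width=2)
cv2.imshow("result", res_plotted)
cv2.waitKey(0)
```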

@ -74,17 +74,17 @@ validation dataset and to detect and prevent overfitting.
Available YOLOv8 export formats are in the table below. You can export to any format using the `format` argument,
i.e. `format='onnx'` or `format='engine'`.
| Format | `format` Argument | Model | Metadata | Arguments |
|--------|-------------------|-------|----------|-----------|
| [PyTorch](https://pytorch.org/) | - | `yolov8n.pt` | ✅ | - |
| [TorchScript](https://pytorch.org/docs/stable/jit.html) | `torchscript` | `yolov8n.torchscript` | ✅ | `imgsz`, `optimize` |
| [ONNX](https://onnx.ai/) | `onnx` | `yolov8n.onnx` | ✅ | `imgsz`, `half`, `dynamic`, `simplify`, `opset` |
| [OpenVINO](https://docs.openvino.ai/latest/index.html) | `openvino` | `yolov8n_openvino_model/` | ✅ | `imgsz`, `half` |
| [TensorRT](https://developer.nvidia.com/tensorrt) | `engine` | `yolov8n.engine` | ✅ | `imgsz`, `half`, `dynamic`, `simplify`, `workspace` |
| [CoreML](https://github.com/apple/coremltools) | `coreml` | `yolov8n.mlmodel` | ✅ | `imgsz`, `half`, `int8`, `nms` |
| [TF SavedModel](https://www.tensorflow.org/guide/saved_model) | `saved_model` | `yolov8n_saved_model/` | ✅ | `imgsz`, `keras` |
| [TF GraphDef](https://www.tensorflow.org/api_docs/python/tf/Graph) | `pb` | `yolov8n.pb` | ❌ | `imgsz` |
| [TF Lite](https://www.tensorflow.org/lite) | `tflite` | `yolov8n.tflite` | ✅ | `imgsz`, `half`, `int8` |
| [TF Edge TPU](https://coral.ai/docs/edgetpu/models-intro/) | `edgetpu` | `yolov8n_edgetpu.tflite` | ✅ | `imgsz` |
| [TF.js](https://www.tensorflow.org/js) | `tfjs` | `yolov8n_web_model/` | ✅ | `imgsz` |
| [PaddlePaddle](https://github.com/PaddlePaddle) | `paddle` | `yolov8n_paddle_model/` | ✅ | `imgsz` |

@ -0,0 +1,29 @@
# coco91_to_coco80_class
---
:::ultralytics.yolo.data.converter.coco91_to_coco80_class
<br><br>
# convert_coco
---
:::ultralytics.yolo.data.converter.convert_coco
<br><br>
# rle2polygon
---
:::ultralytics.yolo.data.converter.rle2polygon
<br><br>
# min_index
---
:::ultralytics.yolo.data.converter.min_index
<br><br>
# merge_multi_segment
---
:::ultralytics.yolo.data.converter.merge_multi_segment
<br><br>
# delete_dsstore
---
:::ultralytics.yolo.data.converter.delete_dsstore
<br><br>
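Since this reference page is generated from the source, a brief usage sketch of the main COCO JSON → YOLO conversion entry point may help; the keyword arguments shown here are assumptions based on the function's purpose, so check the rendered signature above:
```python
from ultralytics.yolo.data.converter import convert_coco

# Convert COCO-format JSON annotations into YOLO txt labels
# (labels_dir and use_segments are assumed parameter names; see the generated docs above)
convert_coco(labels_dir="../coco/annotations/", use_segments=False)
```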

@ -27,3 +27,8 @@
---
:::ultralytics.yolo.utils.files.get_latest_run
<br><br>
# make_dirs
---
:::ultralytics.yolo.utils.files.make_dirs
<br><br>

@ -171,19 +171,19 @@ Export a YOLOv8n-cls model to a different format like ONNX, CoreML, etc.
Available YOLOv8-cls export formats are in the table below. You can predict or validate directly on exported models,
i.e. `yolo predict model=yolov8n-cls.onnx`. Usage examples are shown for your model after export completes.
| Format | `format` Argument | Model | Metadata | Arguments |
|--------|-------------------|-------|----------|-----------|
| [PyTorch](https://pytorch.org/) | - | `yolov8n-cls.pt` | ✅ | - |
| [TorchScript](https://pytorch.org/docs/stable/jit.html) | `torchscript` | `yolov8n-cls.torchscript` | ✅ | `imgsz`, `optimize` |
| [ONNX](https://onnx.ai/) | `onnx` | `yolov8n-cls.onnx` | ✅ | `imgsz`, `half`, `dynamic`, `simplify`, `opset` |
| [OpenVINO](https://docs.openvino.ai/latest/index.html) | `openvino` | `yolov8n-cls_openvino_model/` | ✅ | `imgsz`, `half` |
| [TensorRT](https://developer.nvidia.com/tensorrt) | `engine` | `yolov8n-cls.engine` | ✅ | `imgsz`, `half`, `dynamic`, `simplify`, `workspace` |
| [CoreML](https://github.com/apple/coremltools) | `coreml` | `yolov8n-cls.mlmodel` | ✅ | `imgsz`, `half`, `int8`, `nms` |
| [TF SavedModel](https://www.tensorflow.org/guide/saved_model) | `saved_model` | `yolov8n-cls_saved_model/` | ✅ | `imgsz`, `keras` |
| [TF GraphDef](https://www.tensorflow.org/api_docs/python/tf/Graph) | `pb` | `yolov8n-cls.pb` | ❌ | `imgsz` |
| [TF Lite](https://www.tensorflow.org/lite) | `tflite` | `yolov8n-cls.tflite` | ✅ | `imgsz`, `half`, `int8` |
| [TF Edge TPU](https://coral.ai/docs/edgetpu/models-intro/) | `edgetpu` | `yolov8n-cls_edgetpu.tflite` | ✅ | `imgsz` |
| [TF.js](https://www.tensorflow.org/js) | `tfjs` | `yolov8n-cls_web_model/` | ✅ | `imgsz` |
| [PaddlePaddle](https://github.com/PaddlePaddle) | `paddle` | `yolov8n-cls_paddle_model/` | ✅ | `imgsz` |
See full `export` details in the [Export](https://docs.ultralytics.com/modes/export/) page.

@ -161,19 +161,19 @@ Export a YOLOv8n model to a different format like ONNX, CoreML, etc.
Available YOLOv8 export formats are in the table below. You can predict or validate directly on exported models,
i.e. `yolo predict model=yolov8n.onnx`. Usage examples are shown for your model after export completes.
| Format | `format` Argument | Model | Metadata | Arguments |
|--------|-------------------|-------|----------|-----------|
| [PyTorch](https://pytorch.org/) | - | `yolov8n.pt` | ✅ | - |
| [TorchScript](https://pytorch.org/docs/stable/jit.html) | `torchscript` | `yolov8n.torchscript` | ✅ | `imgsz`, `optimize` |
| [ONNX](https://onnx.ai/) | `onnx` | `yolov8n.onnx` | ✅ | `imgsz`, `half`, `dynamic`, `simplify`, `opset` |
| [OpenVINO](https://docs.openvino.ai/latest/index.html) | `openvino` | `yolov8n_openvino_model/` | ✅ | `imgsz`, `half` |
| [TensorRT](https://developer.nvidia.com/tensorrt) | `engine` | `yolov8n.engine` | ✅ | `imgsz`, `half`, `dynamic`, `simplify`, `workspace` |
| [CoreML](https://github.com/apple/coremltools) | `coreml` | `yolov8n.mlmodel` | ✅ | `imgsz`, `half`, `int8`, `nms` |
| [TF SavedModel](https://www.tensorflow.org/guide/saved_model) | `saved_model` | `yolov8n_saved_model/` | ✅ | `imgsz`, `keras` |
| [TF GraphDef](https://www.tensorflow.org/api_docs/python/tf/Graph) | `pb` | `yolov8n.pb` | ❌ | `imgsz` |
| [TF Lite](https://www.tensorflow.org/lite) | `tflite` | `yolov8n.tflite` | ✅ | `imgsz`, `half`, `int8` |
| [TF Edge TPU](https://coral.ai/docs/edgetpu/models-intro/) | `edgetpu` | `yolov8n_edgetpu.tflite` | ✅ | `imgsz` |
| [TF.js](https://www.tensorflow.org/js) | `tfjs` | `yolov8n_web_model/` | ✅ | `imgsz` |
| [PaddlePaddle](https://github.com/PaddlePaddle) | `paddle` | `yolov8n_paddle_model/` | ✅ | `imgsz` |
See full `export` details in the [Export](https://docs.ultralytics.com/modes/export/) page.

@ -161,19 +161,19 @@ Export a YOLOv8n Pose model to a different format like ONNX, CoreML, etc.
Available YOLOv8-pose export formats are in the table below. You can predict or validate directly on exported models,
i.e. `yolo predict model=yolov8n-pose.onnx`. Usage examples are shown for your model after export completes.
| Format | `format` Argument | Model | Metadata | Arguments |
|--------|-------------------|-------|----------|-----------|
| [PyTorch](https://pytorch.org/) | - | `yolov8n-pose.pt` | ✅ | - |
| [TorchScript](https://pytorch.org/docs/stable/jit.html) | `torchscript` | `yolov8n-pose.torchscript` | ✅ | `imgsz`, `optimize` |
| [ONNX](https://onnx.ai/) | `onnx` | `yolov8n-pose.onnx` | ✅ | `imgsz`, `half`, `dynamic`, `simplify`, `opset` |
| [OpenVINO](https://docs.openvino.ai/latest/index.html) | `openvino` | `yolov8n-pose_openvino_model/` | ✅ | `imgsz`, `half` |
| [TensorRT](https://developer.nvidia.com/tensorrt) | `engine` | `yolov8n-pose.engine` | ✅ | `imgsz`, `half`, `dynamic`, `simplify`, `workspace` |
| [CoreML](https://github.com/apple/coremltools) | `coreml` | `yolov8n-pose.mlmodel` | ✅ | `imgsz`, `half`, `int8`, `nms` |
| [TF SavedModel](https://www.tensorflow.org/guide/saved_model) | `saved_model` | `yolov8n-pose_saved_model/` | ✅ | `imgsz`, `keras` |
| [TF GraphDef](https://www.tensorflow.org/api_docs/python/tf/Graph) | `pb` | `yolov8n-pose.pb` | ❌ | `imgsz` |
| [TF Lite](https://www.tensorflow.org/lite) | `tflite` | `yolov8n-pose.tflite` | ✅ | `imgsz`, `half`, `int8` |
| [TF Edge TPU](https://coral.ai/docs/edgetpu/models-intro/) | `edgetpu` | `yolov8n-pose_edgetpu.tflite` | ✅ | `imgsz` |
| [TF.js](https://www.tensorflow.org/js) | `tfjs` | `yolov8n-pose_web_model/` | ✅ | `imgsz` |
| [PaddlePaddle](https://github.com/PaddlePaddle) | `paddle` | `yolov8n-pose_paddle_model/` | ✅ | `imgsz` |
See full `export` details in the [Export](https://docs.ultralytics.com/modes/export/) page.

@ -168,19 +168,19 @@ Export a YOLOv8n-seg model to a different format like ONNX, CoreML, etc.
Available YOLOv8-seg export formats are in the table below. You can predict or validate directly on exported models,
i.e. `yolo predict model=yolov8n-seg.onnx`. Usage examples are shown for your model after export completes.

| Format | `format` Argument | Model | Metadata | Arguments |
|--------|-------------------|-------|----------|-----------|
| [PyTorch](https://pytorch.org/) | - | `yolov8n-seg.pt` | ✅ | - |
| [TorchScript](https://pytorch.org/docs/stable/jit.html) | `torchscript` | `yolov8n-seg.torchscript` | ✅ | `imgsz`, `optimize` |
| [ONNX](https://onnx.ai/) | `onnx` | `yolov8n-seg.onnx` | ✅ | `imgsz`, `half`, `dynamic`, `simplify`, `opset` |
| [OpenVINO](https://docs.openvino.ai/latest/index.html) | `openvino` | `yolov8n-seg_openvino_model/` | ✅ | `imgsz`, `half` |
| [TensorRT](https://developer.nvidia.com/tensorrt) | `engine` | `yolov8n-seg.engine` | ✅ | `imgsz`, `half`, `dynamic`, `simplify`, `workspace` |
| [CoreML](https://github.com/apple/coremltools) | `coreml` | `yolov8n-seg.mlmodel` | ✅ | `imgsz`, `half`, `int8`, `nms` |
| [TF SavedModel](https://www.tensorflow.org/guide/saved_model) | `saved_model` | `yolov8n-seg_saved_model/` | ✅ | `imgsz`, `keras` |
| [TF GraphDef](https://www.tensorflow.org/api_docs/python/tf/Graph) | `pb` | `yolov8n-seg.pb` | ❌ | `imgsz` |
| [TF Lite](https://www.tensorflow.org/lite) | `tflite` | `yolov8n-seg.tflite` | ✅ | `imgsz`, `half`, `int8` |
| [TF Edge TPU](https://coral.ai/docs/edgetpu/models-intro/) | `edgetpu` | `yolov8n-seg_edgetpu.tflite` | ✅ | `imgsz` |
| [TF.js](https://www.tensorflow.org/js) | `tfjs` | `yolov8n-seg_web_model/` | ✅ | `imgsz` |
| [PaddlePaddle](https://github.com/PaddlePaddle) | `paddle` | `yolov8n-seg_paddle_model/` | ✅ | `imgsz` |

See full `export` details in the [Export](https://docs.ultralytics.com/modes/export/) page.
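Since exported models can be loaded directly, a minimal sketch (assuming the export has already produced `yolov8n-seg.onnx`):

```python
from ultralytics import YOLO

# Export once, then run inference straight from the exported ONNX file
YOLO('yolov8n-seg.pt').export(format='onnx')  # writes yolov8n-seg.onnx
results = YOLO('yolov8n-seg.onnx').predict('https://ultralytics.com/images/bus.jpg')
print(results[0].boxes)  # detections are returned as the usual Results objects
```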

@ -129,7 +129,7 @@ The training settings for YOLO models encompass various hyperparameters and conf
The prediction settings for YOLO models encompass a range of hyperparameters and configurations that influence the model's performance, speed, and accuracy during inference on new data. Careful tuning and experimentation with these settings are essential to achieve optimal performance for a specific task. Key settings include the confidence threshold, Non-Maximum Suppression (NMS) threshold, and the number of classes considered. Additional factors affecting the prediction process are input data size and format, the presence of supplementary features such as masks or multiple labels per box, and the particular task the model is employed for.

| Key | Value | Description |
|----------------|------------------------|--------------------------------------------------------------------------------|
| `source` | `'ultralytics/assets'` | source directory for images or videos |
| `conf` | `0.25` | object confidence threshold for detection |
| `iou` | `0.7` | intersection over union (IoU) threshold for NMS |
@ -144,7 +144,7 @@ The prediction settings for YOLO models encompass a range of hyperparameters and
| `show_conf` | `True` | show object confidence scores in plots |
| `max_det` | `300` | maximum number of detections per image |
| `vid_stride` | `False` | video frame-rate stride |
| `line_width` | `None` | The line width of the bounding boxes. If None, it is scaled to the image size. |
| `visualize` | `False` | visualize model features |
| `augment` | `False` | apply image augmentation to prediction sources |
| `agnostic_nms` | `False` | class-agnostic NMS |
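A minimal sketch of overriding a few of these settings from Python (assuming the `ultralytics` package; note that `line_thickness` is renamed to `line_width` in this release):

```python
from ultralytics import YOLO

model = YOLO('yolov8n.pt')
# conf/iou/max_det control filtering and NMS; line_width controls plotted box thickness
results = model.predict(source='ultralytics/assets', conf=0.25, iou=0.7, max_det=300, line_width=2)
```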

@ -107,7 +107,7 @@ python detect.py --weights yolov5x.pt yolov5l6.pt --img 640 --source data/images
Output:
```bash
detect: weights=['yolov5x.pt', 'yolov5l6.pt'], source=data/images, imgsz=640, conf_thres=0.25, iou_thres=0.45, max_det=1000, device=, view_img=False, save_txt=False, save_conf=False, save_crop=False, nosave=False, classes=None, agnostic_nms=False, augment=False, update=False, project=runs/detect, name=exp, exist_ok=False, line_width=3, hide_labels=False, hide_conf=False, half=False
YOLOv5 🚀 v5.0-267-g6a3ee7c torch 1.9.0+cu102 CUDA:0 (Tesla P100-PCIE-16GB, 16280.875MB)
Fusing layers...

@ -100,7 +100,7 @@ python detect.py --weights yolov5s.pt --img 832 --source data/images --augment
Output:
```bash
detect: weights=['yolov5s.pt'], source=data/images, imgsz=832, conf_thres=0.25, iou_thres=0.45, max_det=1000, device=, view_img=False, save_txt=False, save_conf=False, save_crop=False, nosave=False, classes=None, agnostic_nms=False, augment=True, update=False, project=runs/detect, name=exp, exist_ok=False, line_width=3, hide_labels=False, hide_conf=False, half=False
YOLOv5 🚀 v5.0-267-g6a3ee7c torch 1.9.0+cu102 CUDA:0 (Tesla P100-PCIE-16GB, 16280.875MB)
Downloading https://github.com/ultralytics/yolov5/releases/download/v5.0/yolov5s.pt to yolov5s.pt...

@ -300,7 +300,7 @@
"name": "stdout", "name": "stdout",
"text": [ "text": [
"Ultralytics YOLOv8.0.71 🚀 Python-3.9.16 torch-2.0.0+cu118 CUDA:0 (Tesla T4, 15102MiB)\n", "Ultralytics YOLOv8.0.71 🚀 Python-3.9.16 torch-2.0.0+cu118 CUDA:0 (Tesla T4, 15102MiB)\n",
"\u001b[34m\u001b[1myolo/engine/trainer: \u001b[0mtask=detect, mode=train, model=yolov8n.pt, data=coco128.yaml, epochs=3, patience=50, batch=16, imgsz=640, save=True, save_period=-1, cache=False, device=None, workers=8, project=None, name=None, exist_ok=False, pretrained=False, optimizer=SGD, verbose=True, seed=0, deterministic=True, single_cls=False, image_weights=False, rect=False, cos_lr=False, close_mosaic=0, resume=False, amp=True, overlap_mask=True, mask_ratio=4, dropout=0.0, val=True, split=val, save_json=False, save_hybrid=False, conf=None, iou=0.7, max_det=300, half=False, dnn=False, plots=True, source=None, show=False, save_txt=False, save_conf=False, save_crop=False, show_labels=True, show_conf=True, vid_stride=1, line_thickness=3, visualize=False, augment=False, agnostic_nms=False, classes=None, retina_masks=False, boxes=True, format=torchscript, keras=False, optimize=False, int8=False, dynamic=False, simplify=False, opset=None, workspace=4, nms=False, lr0=0.01, lrf=0.01, momentum=0.937, weight_decay=0.0005, warmup_epochs=3.0, warmup_momentum=0.8, warmup_bias_lr=0.1, box=7.5, cls=0.5, dfl=1.5, pose=12.0, kobj=1.0, label_smoothing=0.0, nbs=64, hsv_h=0.015, hsv_s=0.7, hsv_v=0.4, degrees=0.0, translate=0.1, scale=0.5, shear=0.0, perspective=0.0, flipud=0.0, fliplr=0.5, mosaic=1.0, mixup=0.0, copy_paste=0.0, cfg=None, v5loader=False, tracker=botsort.yaml, save_dir=runs/detect/train\n", "\u001b[34m\u001b[1myolo/engine/trainer: \u001b[0mtask=detect, mode=train, model=yolov8n.pt, data=coco128.yaml, epochs=3, patience=50, batch=16, imgsz=640, save=True, save_period=-1, cache=False, device=None, workers=8, project=None, name=None, exist_ok=False, pretrained=False, optimizer=SGD, verbose=True, seed=0, deterministic=True, single_cls=False, image_weights=False, rect=False, cos_lr=False, close_mosaic=0, resume=False, amp=True, overlap_mask=True, mask_ratio=4, dropout=0.0, val=True, split=val, save_json=False, save_hybrid=False, conf=None, iou=0.7, max_det=300, half=False, dnn=False, plots=True, source=None, show=False, save_txt=False, save_conf=False, save_crop=False, show_labels=True, show_conf=True, vid_stride=1, line_width=3, visualize=False, augment=False, agnostic_nms=False, classes=None, retina_masks=False, boxes=True, format=torchscript, keras=False, optimize=False, int8=False, dynamic=False, simplify=False, opset=None, workspace=4, nms=False, lr0=0.01, lrf=0.01, momentum=0.937, weight_decay=0.0005, warmup_epochs=3.0, warmup_momentum=0.8, warmup_bias_lr=0.1, box=7.5, cls=0.5, dfl=1.5, pose=12.0, kobj=1.0, label_smoothing=0.0, nbs=64, hsv_h=0.015, hsv_s=0.7, hsv_v=0.4, degrees=0.0, translate=0.1, scale=0.5, shear=0.0, perspective=0.0, flipud=0.0, fliplr=0.5, mosaic=1.0, mixup=0.0, copy_paste=0.0, cfg=None, v5loader=False, tracker=botsort.yaml, save_dir=runs/detect/train\n",
"\n", "\n",
" from n params module arguments \n", " from n params module arguments \n",
" 0 -1 1 464 ultralytics.nn.modules.Conv [3, 16, 3, 2] \n", " 0 -1 1 464 ultralytics.nn.modules.Conv [3, 16, 3, 2] \n",

@ -140,12 +140,6 @@ nav:
- Segment: tasks/segment.md
- Classify: tasks/classify.md
- Pose: tasks/pose.md
- Models:
- models/index.md
- YOLOv3: models/yolov3.md
- YOLOv5: models/yolov5.md
- YOLOv8: models/yolov8.md
- Segment Anything Model (SAM): models/sam.md
- Quickstart: quickstart.md
- Modes:
- modes/index.md
@ -161,14 +155,54 @@ nav:
- Segment: tasks/segment.md
- Classify: tasks/classify.md
- Pose: tasks/pose.md
- Models:
- models/index.md
- YOLOv3: models/yolov3.md
- YOLOv5: models/yolov5.md
- YOLOv8: models/yolov8.md
- Segment Anything Model (SAM): models/sam.md
- Usage:
- CLI: usage/cli.md
- Python: usage/python.md
- Callbacks: usage/callbacks.md
- Configuration: usage/cfg.md
- Advanced Customization: usage/engine.md
- YOLOv5:
- yolov5/index.md
- Quickstart: yolov5/quickstart_tutorial.md
- Environments:
- Amazon Web Services (AWS): yolov5/environments/aws_quickstart_tutorial.md
- Google Cloud (GCP): yolov5/environments/google_cloud_quickstart_tutorial.md
- Docker Image: yolov5/environments/docker_image_quickstart_tutorial.md
- Tutorials:
- Train Custom Data: yolov5/tutorials/train_custom_data.md
- Tips for Best Training Results: yolov5/tutorials/tips_for_best_training_results.md
- Multi-GPU Training: yolov5/tutorials/multi_gpu_training.md
- PyTorch Hub: yolov5/tutorials/pytorch_hub_model_loading.md
- TFLite, ONNX, CoreML, TensorRT Export: yolov5/tutorials/model_export.md
- NVIDIA Jetson Nano Deployment: yolov5/tutorials/running_on_jetson_nano.md
- Test-Time Augmentation (TTA): yolov5/tutorials/test_time_augmentation.md
- Model Ensembling: yolov5/tutorials/model_ensembling.md
- Pruning/Sparsity Tutorial: yolov5/tutorials/model_pruning_and_sparsity.md
- Hyperparameter evolution: yolov5/tutorials/hyperparameter_evolution.md
- Transfer learning with frozen layers: yolov5/tutorials/transfer_learning_with_frozen_layers.md
- Architecture Summary: yolov5/tutorials/architecture_description.md
- Roboflow Datasets: yolov5/tutorials/roboflow_datasets_integration.md
- Neural Magic's DeepSparse: yolov5/tutorials/neural_magic_pruning_quantization.md
- Comet Logging: yolov5/tutorials/comet_logging_integration.md
- Clearml Logging: yolov5/tutorials/clearml_logging_integration.md
- Ultralytics HUB:
- hub/index.md
- Quickstart: hub/quickstart.md
- Datasets: hub/datasets.md
- Projects: hub/projects.md
- Models: hub/models.md
- Integrations: hub/integrations.md
- Ultralytics HUB App:
- hub/app/index.md
- 'iOS': hub/app/ios.md
- 'Android': hub/app/android.md
- Inference API: hub/inference_api.md
- Reference:
- hub:
- auth: reference/hub/auth.md
@ -195,6 +229,7 @@ nav:
- augment: reference/yolo/data/augment.md
- base: reference/yolo/data/base.md
- build: reference/yolo/data/build.md
- converter: reference/yolo/data/converter.md
- dataloaders:
- stream_loaders: reference/yolo/data/dataloaders/stream_loaders.md
- v5augmentations: reference/yolo/data/dataloaders/v5augmentations.md
@ -251,31 +286,6 @@ nav:
- predict: reference/yolo/v8/segment/predict.md
- train: reference/yolo/v8/segment/train.md
- val: reference/yolo/v8/segment/val.md
- YOLOv5:
- yolov5/index.md
- Quickstart: yolov5/quickstart_tutorial.md
- Environments:
- Amazon Web Services (AWS): yolov5/environments/aws_quickstart_tutorial.md
- Google Cloud (GCP): yolov5/environments/google_cloud_quickstart_tutorial.md
- Docker Image: yolov5/environments/docker_image_quickstart_tutorial.md
- Tutorials:
- Train Custom Data: yolov5/tutorials/train_custom_data.md
- Tips for Best Training Results: yolov5/tutorials/tips_for_best_training_results.md
- Multi-GPU Training: yolov5/tutorials/multi_gpu_training.md
- PyTorch Hub: yolov5/tutorials/pytorch_hub_model_loading.md
- TFLite, ONNX, CoreML, TensorRT Export: yolov5/tutorials/model_export.md
- NVIDIA Jetson Nano Deployment: yolov5/tutorials/running_on_jetson_nano.md
- Test-Time Augmentation (TTA): yolov5/tutorials/test_time_augmentation.md
- Model Ensembling: yolov5/tutorials/model_ensembling.md
- Pruning/Sparsity Tutorial: yolov5/tutorials/model_pruning_and_sparsity.md
- Hyperparameter evolution: yolov5/tutorials/hyperparameter_evolution.md
- Transfer learning with frozen layers: yolov5/tutorials/transfer_learning_with_frozen_layers.md
- Architecture Summary: yolov5/tutorials/architecture_description.md
- Roboflow Datasets: yolov5/tutorials/roboflow_datasets_integration.md
- Neural Magic's DeepSparse: yolov5/tutorials/neural_magic_pruning_quantization.md
- Comet Logging: yolov5/tutorials/comet_logging_integration.md
- Clearml Logging: yolov5/tutorials/clearml_logging_integration.md
- Help:
- Help: help/index.md
- Frequently Asked Questions (FAQ): help/FAQ.md
@ -308,6 +318,8 @@ plugins:
predict.md: modes/predict.md
python.md: usage/python.md
quick-start.md: quickstart.md
app.md: hub/app/index.md
sdk.md: index.md
reference/base_pred.md: reference/yolo/engine/predictor.md
reference/base_trainer.md: reference/yolo/engine/trainer.md
reference/exporter.md: reference/yolo/engine/exporter.md
@ -315,7 +327,6 @@ plugins:
reference/nn.md: reference/nn/modules.md
reference/ops.md: reference/yolo/utils/ops.md
reference/results.md: reference/yolo/engine/results.md
sdk.md: index.md
tasks/classification.md: tasks/classify.md
tasks/detection.md: tasks/detect.md
tasks/segmentation.md: tasks/segment.md

@ -1,6 +1,6 @@
# Ultralytics YOLO 🚀, AGPL-3.0 license
__version__ = '8.0.93'
from ultralytics.hub import start
from ultralytics.vit.sam import SAM

@ -68,7 +68,7 @@ CFG_FRACTION_KEYS = ('dropout', 'iou', 'lr0', 'lrf', 'momentum', 'weight_decay',
'label_smoothing', 'hsv_h', 'hsv_s', 'hsv_v', 'translate', 'scale', 'perspective', 'flipud',
'fliplr', 'mosaic', 'mixup', 'copy_paste', 'conf', 'iou') # fractional floats limited to 0.0 - 1.0
CFG_INT_KEYS = ('epochs', 'patience', 'batch', 'workers', 'seed', 'close_mosaic', 'mask_ratio', 'max_det', 'vid_stride',
'line_width', 'workspace', 'nbs', 'save_period')
CFG_BOOL_KEYS = ('save', 'exist_ok', 'verbose', 'deterministic', 'single_cls', 'rect', 'cos_lr', 'overlap_mask', 'val',
'save_json', 'save_hybrid', 'half', 'dnn', 'plots', 'show', 'save_txt', 'save_conf', 'save_crop',
'show_labels', 'show_conf', 'visualize', 'augment', 'agnostic_nms', 'retina_masks', 'boxes', 'keras',
@ -152,6 +152,9 @@ def _handle_deprecation(custom):
if key == 'hide_conf':
deprecation_warn(key, 'show_conf')
custom['show_conf'] = custom.pop('hide_conf') == 'False'
if key == 'line_thickness':
deprecation_warn(key, 'line_width')
custom['line_width'] = custom.pop('line_thickness')
return custom
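To illustrate the shim above, a short sketch (assuming `_handle_deprecation` is importable from `ultralytics.yolo.cfg`, where this hunk lives):

```python
from ultralytics.yolo.cfg import _handle_deprecation

# An override written against the old key is remapped in place
custom = _handle_deprecation({'line_thickness': 2})  # emits a deprecation warning
assert custom == {'line_width': 2}
```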

@ -57,7 +57,7 @@ save_crop: False # save cropped images with results
show_labels: True # show object labels in plots
show_conf: True # show object confidence scores in plots
vid_stride: 1 # video frame-rate stride
line_width: # line width of the bounding boxes
visualize: False # visualize model features
augment: False # apply image augmentation to prediction sources
agnostic_nms: False # class-agnostic NMS

@ -8,7 +8,6 @@ from ultralytics.yolo.utils.torch_utils import select_device
def auto_annotate(data, det_model='yolov8x.pt', sam_model='sam_b.pt', device='', output_dir=None):
"""
Automatically annotates images using a YOLO object detection model and a SAM segmentation model.
Args:
data (str): Path to a folder containing images to be annotated.
det_model (str, optional): Pre-trained YOLO detection model. Defaults to 'yolov8x.pt'.
@ -16,7 +15,6 @@ def auto_annotate(data, det_model='yolov8x.pt', sam_model='sam_b.pt', device='',
device (str, optional): Device to run the models on. Defaults to an empty string (CPU or GPU, if available).
output_dir (str, None, optional): Directory to save the annotated results.
Defaults to a 'labels' folder in the same directory as 'data'.
"""
device = select_device(device)
det_model = YOLO(det_model)
@ -34,6 +32,7 @@ def auto_annotate(data, det_model='yolov8x.pt', sam_model='sam_b.pt', device='',
for result in det_results:
boxes = result.boxes.xyxy # Boxes object for bbox outputs
class_ids = result.boxes.cls.int().tolist() # noqa
if len(class_ids):
prompt_predictor.set_image(result.orig_img)
masks, _, _ = prompt_predictor.predict_torch(
point_coords=None,
@ -45,7 +44,7 @@ def auto_annotate(data, det_model='yolov8x.pt', sam_model='sam_b.pt', device='',
result.update(masks=masks.squeeze(1))
segments = result.masks.xyn # noqa
with open(str(Path(output_dir) / Path(result.path).stem) + '.txt', 'w') as f:
for i in range(len(segments)):
s = segments[i]
if len(s) == 0:
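A minimal usage sketch for `auto_annotate` (the import path assumes the module lives at `ultralytics/yolo/data/annotator.py` in this release; `path/to/images` is a placeholder):

```python
from ultralytics.yolo.data.annotator import auto_annotate

# Detect objects with YOLOv8, then prompt SAM with the boxes to produce segment labels;
# *.txt label files are written to a 'labels' folder next to the source images by default
auto_annotate(data='path/to/images', det_model='yolov8x.pt', sam_model='sam_b.pt')
```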

@ -0,0 +1,230 @@
import json
from collections import defaultdict
from pathlib import Path

import cv2
import numpy as np
from tqdm import tqdm

from ultralytics.yolo.utils.checks import check_requirements
from ultralytics.yolo.utils.files import make_dirs


def coco91_to_coco80_class():
    """Converts 91-index COCO class IDs to 80-index COCO class IDs.

    Returns:
        (list): A list of 91 class IDs where the index represents the 80-index class ID and the value is the
            corresponding 91-index class ID.
    """
    return [
        0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, None, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, None, 24, 25, None,
        None, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, None, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50,
        51, 52, 53, 54, 55, 56, 57, 58, 59, None, 60, None, None, 61, None, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72,
        None, 73, 74, 75, 76, 77, 78, 79, None]


def convert_coco(labels_dir='../coco/annotations/', use_segments=False, use_keypoints=False, cls91to80=True):
    """Converts COCO dataset annotations to a format suitable for training YOLOv5 models.

    Args:
        labels_dir (str, optional): Path to directory containing COCO dataset annotation files.
        use_segments (bool, optional): Whether to include segmentation masks in the output.
        use_keypoints (bool, optional): Whether to include keypoint annotations in the output.
        cls91to80 (bool, optional): Whether to map 91 COCO class IDs to the corresponding 80 COCO class IDs.

    Raises:
        FileNotFoundError: If the labels_dir path does not exist.

    Example Usage:
        convert_coco(labels_dir='../coco/annotations/', use_segments=True, use_keypoints=True, cls91to80=True)

    Output:
        Generates output files in the specified output directory.
    """
    save_dir = make_dirs('yolo_labels')  # output directory
    coco80 = coco91_to_coco80_class()

    # Import json
    for json_file in sorted(Path(labels_dir).resolve().glob('*.json')):
        fn = Path(save_dir) / 'labels' / json_file.stem.replace('instances_', '')  # folder name
        fn.mkdir(parents=True, exist_ok=True)
        with open(json_file) as f:
            data = json.load(f)

        # Create image dict
        images = {'%g' % x['id']: x for x in data['images']}
        # Create image-annotations dict
        imgToAnns = defaultdict(list)
        for ann in data['annotations']:
            imgToAnns[ann['image_id']].append(ann)

        # Write labels file
        for img_id, anns in tqdm(imgToAnns.items(), desc=f'Annotations {json_file}'):
            img = images['%g' % img_id]
            h, w, f = img['height'], img['width'], img['file_name']

            bboxes = []
            segments = []
            keypoints = []
            for ann in anns:
                if ann['iscrowd']:
                    continue
                # The COCO box format is [top left x, top left y, width, height]
                box = np.array(ann['bbox'], dtype=np.float64)
                box[:2] += box[2:] / 2  # xy top-left corner to center
                box[[0, 2]] /= w  # normalize x
                box[[1, 3]] /= h  # normalize y
                if box[2] <= 0 or box[3] <= 0:  # if w <= 0 and h <= 0
                    continue

                cls = coco80[ann['category_id'] - 1] if cls91to80 else ann['category_id'] - 1  # class
                box = [cls] + box.tolist()
                if box not in bboxes:
                    bboxes.append(box)
                if use_segments and ann.get('segmentation') is not None:
                    if len(ann['segmentation']) == 0:
                        segments.append([])
                        continue
                    if isinstance(ann['segmentation'], dict):
                        ann['segmentation'] = rle2polygon(ann['segmentation'])
                    if len(ann['segmentation']) > 1:
                        s = merge_multi_segment(ann['segmentation'])
                        s = (np.concatenate(s, axis=0) / np.array([w, h])).reshape(-1).tolist()
                    else:
                        s = [j for i in ann['segmentation'] for j in i]  # all segments concatenated
                        s = (np.array(s).reshape(-1, 2) / np.array([w, h])).reshape(-1).tolist()
                    s = [cls] + s
                    if s not in segments:
                        segments.append(s)
                if use_keypoints and ann.get('keypoints') is not None:
                    k = (np.array(ann['keypoints']).reshape(-1, 3) / np.array([w, h, 1])).reshape(-1).tolist()
                    k = box + k
                    keypoints.append(k)

            # Write
            with open((fn / f).with_suffix('.txt'), 'a') as file:
                for i in range(len(bboxes)):
                    if use_keypoints:
                        line = *(keypoints[i]),  # cls, box, keypoints
                    else:
                        line = *(segments[i]
                                 if use_segments and len(segments[i]) > 0 else bboxes[i]),  # cls, box or segments
                    file.write(('%g ' * len(line)).rstrip() % line + '\n')


def rle2polygon(segmentation):
    """
    Convert Run-Length Encoding (RLE) mask to polygon coordinates.

    Args:
        segmentation (dict, list): RLE mask representation of the object segmentation.

    Returns:
        (list): A list of lists representing the polygon coordinates for each contour.

    Note:
        Requires the 'pycocotools' package to be installed.
    """
    check_requirements('pycocotools')
    from pycocotools import mask

    m = mask.decode(segmentation)
    m[m > 0] = 255
    contours, _ = cv2.findContours(m, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_TC89_KCOS)
    polygons = []
    for contour in contours:
        epsilon = 0.001 * cv2.arcLength(contour, True)
        contour_approx = cv2.approxPolyDP(contour, epsilon, True)
        polygon = contour_approx.flatten().tolist()
        polygons.append(polygon)
    return polygons


def min_index(arr1, arr2):
    """
    Find a pair of indexes with the shortest distance between two arrays of 2D points.

    Args:
        arr1 (np.array): A NumPy array of shape (N, 2) representing N 2D points.
        arr2 (np.array): A NumPy array of shape (M, 2) representing M 2D points.

    Returns:
        (tuple): A tuple containing the indexes of the points with the shortest distance in arr1 and arr2 respectively.
    """
    dis = ((arr1[:, None, :] - arr2[None, :, :]) ** 2).sum(-1)
    return np.unravel_index(np.argmin(dis, axis=None), dis.shape)


def merge_multi_segment(segments):
    """
    Merge multiple segments into one list by connecting the coordinates with the minimum distance between each segment.
    This function connects these coordinates with a thin line to merge all segments into one.

    Args:
        segments (List[List]): Original segmentations in COCO's JSON file.
            Each element is a list of coordinates, like [segmentation1, segmentation2,...].

    Returns:
        s (List[np.ndarray]): A list of connected segments represented as NumPy arrays.
    """
    s = []
    segments = [np.array(i).reshape(-1, 2) for i in segments]
    idx_list = [[] for _ in range(len(segments))]

    # record the indexes with min distance between each segment
    for i in range(1, len(segments)):
        idx1, idx2 = min_index(segments[i - 1], segments[i])
        idx_list[i - 1].append(idx1)
        idx_list[i].append(idx2)

    # use two round to connect all the segments
    for k in range(2):
        # forward connection
        if k == 0:
            for i, idx in enumerate(idx_list):
                # middle segments have two indexes
                # reverse the index of middle segments
                if len(idx) == 2 and idx[0] > idx[1]:
                    idx = idx[::-1]
                    segments[i] = segments[i][::-1, :]

                segments[i] = np.roll(segments[i], -idx[0], axis=0)
                segments[i] = np.concatenate([segments[i], segments[i][:1]])
                # deal with the first segment and the last one
                if i in [0, len(idx_list) - 1]:
                    s.append(segments[i])
                else:
                    idx = [0, idx[1] - idx[0]]
                    s.append(segments[i][idx[0]:idx[1] + 1])
        else:
            for i in range(len(idx_list) - 1, -1, -1):
                if i not in [0, len(idx_list) - 1]:
                    idx = idx_list[i]
                    nidx = abs(idx[1] - idx[0])
                    s.append(segments[i][nidx:])
    return s


def delete_dsstore(path='../datasets'):
    """Delete Apple .DS_Store files in the specified directory and its subdirectories."""
    from pathlib import Path

    files = list(Path(path).rglob('.DS_store'))
    print(files)
    for f in files:
        f.unlink()


if __name__ == '__main__':
    source = 'COCO'

    if source == 'COCO':
        convert_coco(
            '../datasets/coco/annotations',  # directory with *.json
            use_segments=False,
            use_keypoints=True,
            cls91to80=False)
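Besides the `__main__` block above, the converter can also be called as a library function; a sketch assuming the module is importable as `ultralytics.yolo.data.converter` (the path registered in the docs nav above):

```python
from ultralytics.yolo.data.converter import convert_coco

# Convert COCO instance JSONs to YOLO *.txt labels under a 'yolo_labels' directory
convert_coco(labels_dir='../datasets/coco/annotations/', use_segments=True, cls91to80=True)
```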

@ -146,7 +146,7 @@ class BasePredictor:
log_string += result.verbose()
if self.args.save or self.args.show:  # Add bbox to image
plot_args = dict(line_width=self.args.line_width,
boxes=self.args.boxes,
conf=self.args.show_conf,
labels=self.args.show_labels)
@ -212,7 +212,7 @@ class BasePredictor:
self.model.warmup(imgsz=(1 if self.model.pt or self.model.triton else self.dataset.bs, 3, *self.imgsz))
self.done_warmup = True
self.seen, self.windows, self.batch, profilers = 0, [], None, (ops.Profile(), ops.Profile(), ops.Profile())
self.run_callbacks('on_predict_start')
for batch in self.dataset:
self.run_callbacks('on_predict_batch_start')
@ -222,15 +222,15 @@ class BasePredictor:
mkdir=True) if self.args.visualize and (not self.source_type.tensor) else False
# Preprocess
with profilers[0]:
im = self.preprocess(im0s)
# Inference
with profilers[1]:
preds = self.model(im, augment=self.args.augment, visualize=visualize)
# Postprocess
with profilers[2]:
self.results = self.postprocess(preds, im, im0s)
self.run_callbacks('on_predict_postprocess_end')
@ -238,9 +238,9 @@ class BasePredictor:
n = len(im0s)
for i in range(n):
self.results[i].speed = {
'preprocess': profilers[0].dt * 1E3 / n,
'inference': profilers[1].dt * 1E3 / n,
'postprocess': profilers[2].dt * 1E3 / n}
if self.source_type.tensor:  # skip write, show and plot operations if input is raw tensor
continue
p, im0 = path[i], im0s[i].copy()
@ -259,7 +259,7 @@ class BasePredictor:
# Print time (inference-only)
if self.args.verbose:
LOGGER.info(f'{s}{profilers[1].dt * 1E3:.1f}ms')
# Release assets
if isinstance(self.vid_writer[-1], cv2.VideoWriter):
@ -267,7 +267,7 @@ class BasePredictor:
# Print results
if self.args.verbose and self.seen:
t = tuple(x.t / self.seen * 1E3 for x in profilers)  # speeds per image
LOGGER.info(f'Speed: %.1fms preprocess, %.1fms inference, %.1fms postprocess per image at shape '
f'{(1, 3, *self.imgsz)}' % t)
if self.args.save or self.args.save_txt or self.args.save_crop:
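Because the per-image timings are now attached to each result, they can be read back after prediction; a small sketch:

```python
from ultralytics import YOLO

results = YOLO('yolov8n.pt').predict('https://ultralytics.com/images/bus.jpg')
speed = results[0].speed  # {'preprocess': ms, 'inference': ms, 'postprocess': ms}
print(f"inference: {speed['inference']:.1f} ms per image")
```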

@ -197,6 +197,11 @@ class Results(SimpleClass):
conf = kwargs['show_conf']
assert type(conf) == bool, '`show_conf` should be of boolean type, i.e, show_conf=True/False'
if 'line_thickness' in kwargs:
deprecation_warn('line_thickness', 'line_width')
line_width = kwargs['line_thickness']
assert type(line_width) == int, '`line_width` should be of int type, i.e, line_width=3'
names = self.names
annotator = Annotator(deepcopy(self.orig_img if img is None else img),
line_width,
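The renamed argument also flows through `Results.plot()`; a brief sketch (the deprecated `line_thickness` keyword is still accepted but triggers the warning above):

```python
from ultralytics import YOLO

res = YOLO('yolov8n.pt').predict('https://ultralytics.com/images/bus.jpg')[0]
im = res.plot(line_width=2)  # annotated image array with 2 px boxes
```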

@ -3,6 +3,7 @@
import contextlib
import glob
import os
import shutil
from datetime import datetime
from pathlib import Path
@ -87,3 +88,13 @@ def get_latest_run(search_dir='.'):
"""Return path to most recent 'last.pt' in /runs (i.e. to --resume from).""" """Return path to most recent 'last.pt' in /runs (i.e. to --resume from)."""
last_list = glob.glob(f'{search_dir}/**/last*.pt', recursive=True) last_list = glob.glob(f'{search_dir}/**/last*.pt', recursive=True)
return max(last_list, key=os.path.getctime) if last_list else '' return max(last_list, key=os.path.getctime) if last_list else ''
def make_dirs(dir='new_dir/'):
    # Create folders
    dir = Path(dir)
    if dir.exists():
        shutil.rmtree(dir)  # delete dir
    for p in dir, dir / 'labels', dir / 'images':
        p.mkdir(parents=True, exist_ok=True)  # make dir
    return dir
