`ultralytics 8.0.91` tracker fix and docs comments (#2343)

Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
Co-authored-by: Laughing <61612323+Laughing-q@users.noreply.github.com>
Branch: single_channel
Glenn Jocher committed 2 years ago (via GitHub)
parent 44c7c3514d
commit 3fd317edfd

@ -62,7 +62,7 @@ body:
label: Minimal Reproducible Example
description: >
When asking a question, people will be better able to provide help if you provide code that they can easily understand and use to **reproduce** the problem.
This is referred to by community members as creating a [minimal reproducible example](https://stackoverflow.com/help/minimal-reproducible-example).
This is referred to by community members as creating a [minimal reproducible example](https://docs.ultralytics.com/help/minimum_reproducible_example/).
placeholder: |
```
# Code to reproduce your issue here

@ -56,14 +56,6 @@ jobs:
hub.reset_model(model_id)
model = YOLO('https://hub.ultralytics.com/models/' + model_id)
model.train()
- name: Notify on failure
if: failure() && github.repository == 'ultralytics/ultralytics' && (github.event_name == 'schedule' || github.event_name == 'push')
uses: slackapi/slack-github-action@v1.23.0
with:
payload: |
{"text": "<!channel> GitHub Actions error for ${{ github.workflow }} ❌\n\n\n*Repository:* https://github.com/${{ github.repository }}\n*Action:* https://github.com/${{ github.repository }}/actions/runs/${{ github.run_id }}\n*Author:* ${{ github.actor }}\n*Event:* ${{ github.event_name }}\n*Job Status:* ${{ job.status }}\n"}
env:
SLACK_WEBHOOK_URL: ${{ secrets.SLACK_WEBHOOK_URL_HUBWEB }}
Benchmarks:
runs-on: ${{ matrix.os }}
@ -124,14 +116,6 @@ jobs:
run: |
cat benchmarks.log
echo "$(cat benchmarks.log)" >> $GITHUB_STEP_SUMMARY
- name: Notify on failure
if: failure() && github.repository == 'ultralytics/ultralytics' && (github.event_name == 'schedule' || github.event_name == 'push')
uses: slackapi/slack-github-action@v1.23.0
with:
payload: |
{"text": "<!channel> GitHub Actions error for ${{ github.workflow }} ❌\n\n\n*Repository:* https://github.com/${{ github.repository }}\n*Action:* https://github.com/${{ github.repository }}/actions/runs/${{ github.run_id }}\n*Author:* ${{ github.actor }}\n*Event:* ${{ github.event_name }}\n*Job Status:* ${{ job.status }}\n"}
env:
SLACK_WEBHOOK_URL: ${{ secrets.SLACK_WEBHOOK_URL_YOLO }}
Tests:
timeout-minutes: 60
@ -209,11 +193,17 @@ jobs:
- name: Pytest tests
shell: bash # for Windows compatibility
run: pytest tests
- name: Notify on failure
if: failure() && github.repository == 'ultralytics/ultralytics' && (github.event_name == 'schedule' || github.event_name == 'push')
Summary:
runs-on: ubuntu-latest
needs: [HUB, Benchmarks, Tests] # Add job names that you want to check for failure
if: always() # This ensures the job runs even if previous jobs fail
steps:
- name: Check for failure and notify
if: (needs.HUB.result == 'failure' || needs.Benchmarks.result == 'failure' || needs.Tests.result == 'failure') && github.repository == 'ultralytics/ultralytics' && (github.event_name == 'schedule' || github.event_name == 'push')
uses: slackapi/slack-github-action@v1.23.0
with:
payload: |
{"text": "<!channel> GitHub Actions error for ${{ github.workflow }} ❌\n\n\n*Repository:* https://github.com/${{ github.repository }}\n*Action:* https://github.com/${{ github.repository }}/actions/runs/${{ github.run_id }}\n*Author:* ${{ github.actor }}\n*Event:* ${{ github.event_name }}\n*Job Status:* ${{ job.status }}\n"}
{"text": "<!channel> GitHub Actions error for ${{ github.workflow }} ❌\n\n\n*Repository:* https://github.com/${{ github.repository }}\n*Action:* https://github.com/${{ github.repository }}/actions/runs/${{ github.run_id }}\n*Author:* ${{ github.actor }}\n*Event:* ${{ github.event_name }}\n"}
env:
SLACK_WEBHOOK_URL: ${{ secrets.SLACK_WEBHOOK_URL_YOLO }}

@ -0,0 +1,41 @@
# Ultralytics YOLO 🚀, AGPL-3.0 license
name: "CodeQL"
on:
schedule:
- cron: '0 0 1 * *'
jobs:
analyze:
name: Analyze
runs-on: ${{ 'ubuntu-latest' }}
permissions:
actions: read
contents: read
security-events: write
strategy:
fail-fast: false
matrix:
language: [ 'python' ]
# CodeQL supports [ 'cpp', 'csharp', 'go', 'java', 'javascript', 'python', 'ruby' ]
steps:
- name: Checkout repository
uses: actions/checkout@v3
# Initializes the CodeQL tools for scanning.
- name: Initialize CodeQL
uses: github/codeql-action/init@v2
with:
languages: ${{ matrix.language }}
# If you wish to specify custom queries, you can do so here or in a config file.
# By default, queries listed here will override any specified in a config file.
# Prefix the list here with "+" to use these queries and those in the config file.
# queries: security-extended,security-and-quality
- name: Perform CodeQL Analysis
uses: github/codeql-action/analyze@v2
with:
category: "/language:${{matrix.language}}"

@ -28,7 +28,7 @@ jobs:
issue-message: |
👋 Hello @${{ github.actor }}, thank you for your interest in YOLOv8 🚀! We recommend a visit to the [YOLOv8 Docs](https://docs.ultralytics.com) for new users where you can find many [Python](https://docs.ultralytics.com/usage/python/) and [CLI](https://docs.ultralytics.com/usage/cli/) usage examples and where many of the most common questions may already be answered.
If this is a 🐛 Bug Report, please provide a [minimum reproducible example](https://stackoverflow.com/help/minimal-reproducible-example) to help us debug it.
If this is a 🐛 Bug Report, please provide a [minimum reproducible example](https://docs.ultralytics.com/help/minimum_reproducible_example/) to help us debug it.
If this is a custom training ❓ Question, please provide as much information as possible, including dataset image examples and training logs, and verify you are following our [Tips for Best Training Results](https://docs.ultralytics.com/yolov5/tutorials/tips_for_best_training_results/).

@ -1,13 +1,10 @@
# Ultralytics YOLO 🚀, AGPL-3.0 license
# YOLO Continuous Integration (CI) GitHub Actions tests
# YOLO Continuous Integration (CI) GitHub Actions broken link checker tests
# Accept 429(Instagram, 'too many requests'), 999(LinkedIn, 'unknown status code'), Timeout(Twitter)
name: Check Broken links
on:
push:
branches: [main]
pull_request:
branches: [main]
workflow_dispatch:
schedule:
- cron: '0 0 * * *' # runs at 00:00 UTC every day
@ -18,21 +15,26 @@ jobs:
steps:
- uses: actions/checkout@v3
- name: Test Markdown and HTML links
uses: lycheeverse/lychee-action@v1.7.0
- name: Download and install lychee
run: |
LYCHEE_URL=$(curl -s https://api.github.com/repos/lycheeverse/lychee/releases/latest | grep "browser_download_url" | grep "x86_64-unknown-linux-gnu.tar.gz" | cut -d '"' -f 4)
curl -L $LYCHEE_URL -o lychee.tar.gz
tar xzf lychee.tar.gz
sudo mv lychee /usr/local/bin
- name: Test Markdown and HTML links with retry
uses: nick-invision/retry@v2
with:
fail: true
# accept 429(Instagram, 'too many requests'), 999(LinkedIn, 'unknown status code'), Timeout(Twitter)
args: --accept 429,999 --exclude-loopback --exclude twitter.com --exclude-path '**/ci.yaml' --exclude-mail './**/*.md' './**/*.html'
env:
GITHUB_TOKEN: ${{secrets.GITHUB_TOKEN}}
timeout_minutes: 5
retry_wait_seconds: 60
max_attempts: 3
command: lychee --accept 429,999 --exclude-loopback --exclude twitter.com --exclude-path '**/ci.yaml' --exclude-mail --github-token ${{ secrets.GITHUB_TOKEN }} './**/*.md' './**/*.html'
- name: Test Markdown, HTML, YAML, Python and Notebook links
- name: Test Markdown, HTML, YAML, Python and Notebook links with retry
if: github.event_name == 'workflow_dispatch'
uses: lycheeverse/lychee-action@v1.7.0
uses: nick-invision/retry@v2
with:
fail: true
# accept 429(Instagram, 'too many requests'), 999(LinkedIn, 'unknown status code'), Timeout(Twitter)
args: --accept 429,999 --exclude-loopback --exclude twitter.com,url.com --exclude-path '**/ci.yaml' --exclude-mail './**/*.md' './**/*.html' './**/*.yml' './**/*.yaml' './**/*.py' './**/*.ipynb'
env:
GITHUB_TOKEN: ${{secrets.GITHUB_TOKEN}}
timeout_minutes: 5
retry_wait_seconds: 60
max_attempts: 3
command: lychee --accept 429,999 --exclude-loopback --exclude twitter.com,url.com --exclude-path '**/ci.yaml' --exclude-mail --github-token ${{ secrets.GITHUB_TOKEN }} './**/*.md' './**/*.html' './**/*.yml' './**/*.yaml' './**/*.py' './**/*.ipynb'

@ -88,7 +88,7 @@ short guidelines below to help users provide what we need in order to get starte
When asking a question, people will be better able to provide help if you provide **code** that they can easily
understand and use to **reproduce** the problem. This is referred to by community members as creating
a [minimum reproducible example](https://stackoverflow.com/help/minimal-reproducible-example). Your code that reproduces
a [minimum reproducible example](https://docs.ultralytics.com/help/minimum_reproducible_example/). Your code that reproduces
the problem should be:
- ✅ **Minimal** Use as little code as possible that still produces the same problem
@ -106,7 +106,7 @@ should be:
If you believe your problem meets all of the above criteria, please close this issue and raise a new one using the 🐛
**Bug Report** [template](https://github.com/ultralytics/ultralytics/issues/new/choose) and providing
a [minimum reproducible example](https://stackoverflow.com/help/minimal-reproducible-example) to help us better
a [minimum reproducible example](https://docs.ultralytics.com/help/minimum_reproducible_example/) to help us better
understand and diagnose your problem.
## License

@ -1,3 +1,7 @@
---
comments: true
---
# Ultralytics HUB App for YOLOv8
<a href="https://bit.ly/ultralytics_hub" target="_blank">

@ -1,3 +1,7 @@
---
comments: true
---
# Ultralytics YOLO Frequently Asked Questions (FAQ)
This FAQ section addresses some common questions and issues users might encounter while working with Ultralytics YOLO repositories.

@ -1,3 +1,7 @@
---
comments: true
---
# Ultralytics Contributor Covenant Code of Conduct
## Our Pledge

@ -1,3 +1,7 @@
---
comments: true
---
# Contributing to Ultralytics Open-Source YOLO Repositories
First of all, thank you for your interest in contributing to Ultralytics open-source YOLO repositories! Your contributions will help improve the project and benefit the community. This document provides guidelines and best practices for contributing to Ultralytics YOLO repositories.

@ -1,3 +1,7 @@
---
comments: true
---
Welcome to the Ultralytics Help page! We are committed to providing you with comprehensive resources to make your experience with Ultralytics YOLO repositories as smooth and enjoyable as possible. On this page, you'll find essential links to guides and documents that will help you navigate through common tasks and address any questions you might have while using our repositories.
- [Frequently Asked Questions (FAQ)](FAQ.md): Find answers to common questions and issues faced by users and contributors of Ultralytics YOLO repositories.

@ -1,3 +1,7 @@
---
comments: true
---
# Creating a Minimum Reproducible Example for Bug Reports in Ultralytics YOLO Repositories
When submitting a bug report for Ultralytics YOLO repositories, it's essential to provide a [minimum reproducible example](https://stackoverflow.com/help/minimal-reproducible-example) (MRE). An MRE is a small, self-contained piece of code that demonstrates the problem you're experiencing. Providing an MRE helps maintainers and contributors understand the issue and work on a fix more efficiently. This guide explains how to create an MRE when submitting bug reports to Ultralytics YOLO repositories.
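For illustration, a hedged sketch of what an MRE might look like (the weights, image, and failing call are placeholders, not a real report):
```python
from ultralytics import YOLO

model = YOLO('yolov8n.pt')  # smallest weights that still reproduce the issue
results = model.predict('bus.jpg')  # minimal call that triggers the problem
print(results[0].boxes)  # observed output, with the expected output described in the report
```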

@ -1,3 +1,7 @@
---
comments: true
---
# Ultralytics HUB
<a href="https://bit.ly/ultralytics_hub" target="_blank">

@ -1,3 +1,7 @@
---
comments: true
---
<div align="center">
<p>
<a href="https://github.com/ultralytics/ultralytics" target="_blank">

@ -1,3 +1,7 @@
---
comments: true
---
# YOLO Inference API (UNDER CONSTRUCTION)
The YOLO Inference API allows you to access the YOLOv8 object detection capabilities via a RESTful API. This enables you to run object detection on images without the need to install and set up the YOLOv8 environment locally.
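Since the API is under construction, the sketch below is hypothetical: the endpoint URL, header name, and response shape are assumptions for illustration only.
```python
import requests

# Hypothetical endpoint and key; the real contract is not yet published
url = 'https://api.ultralytics.com/detect/MODEL_ID'
with open('image.jpg', 'rb') as f:
    r = requests.post(url, headers={'x-api-key': 'API_KEY'}, files={'image': f})
print(r.json())  # assumed JSON list of detections
```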

@ -1,14 +1,27 @@
Ultralytics supports many models and architectures with more to come in the future. What to add your model architecture? [Here's](../help/contributing.md) how you can contribute
---
comments: true
---
# Models
Ultralytics supports many models and architectures with more to come in the future. Want to add your model architecture? [Here's](../help/contributing.md) how you can contribute.
In this documentation, we provide information on four major models:
1. [YOLOv3](./yolov3.md): The third iteration of the YOLO model family, known for its efficient real-time object detection capabilities.
2. [YOLOv5](./yolov5.md): An improved version of the YOLO architecture, offering better performance and speed tradeoffs compared to previous versions.
3. [YOLOv8](./yolov8.md): The latest version of the YOLO family, featuring enhanced capabilities such as instance segmentation, pose/keypoints estimation, and classification.
4. [Segment Anything Model (SAM)](./sam.md): Meta's Segment Anything Model (SAM).
You can use these models directly in the Command Line Interface (CLI) or in a Python environment. Below are examples of how to use the models with CLI and Python:
## YOLO
Model *.yaml files may be used directly in the Command Line Interface (CLI) with a yolo command:
## CLI Example
```bash
yolo task=detect mode=train model=yolov8n.yaml data=coco128.yaml epochs=100
```
They may also be used directly in a Python environment, and accept the same arguments as in the CLI example above:
## Python Example
```python
from ultralytics import YOLO
@ -19,137 +32,4 @@ model.info() # display model information
model.train(data="coco128.yaml", epochs=100) # train the model
```
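After training, the same `model` object can be used for inference; a brief hedged sketch (the image path is a placeholder):
```python
results = model('bus.jpg')  # run prediction on an image
annotated = results[0].plot()  # annotated image as a numpy array
```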
## YOLOv8
### About
### Supported Tasks
| Model Type | Pre-trained Weights | Task |
|-------------|------------------------------------------------------------------------------------------------------------------|-----------------------|
| YOLOv8 | `yolov8n.pt`, `yolov8s.pt`, `yolov8m.pt`, `yolov8l.pt`, `yolov8x.pt` | Detection |
| YOLOv8-seg | `yolov8n-seg.pt`, `yolov8s-seg.pt`, `yolov8m-seg.pt`, `yolov8l-seg.pt`, `yolov8x-seg.pt` | Instance Segmentation |
| YOLOv8-pose | `yolov8n-pose.pt`, `yolov8s-pose.pt`, `yolov8m-pose.pt`, `yolov8l-pose.pt`, `yolov8x-pose.pt` ,`yolov8x-pose-p6` | Pose/Keypoints |
| YOLOv8-cls | `yolov8n-cls.pt`, `yolov8s-cls.pt`, `yolov8m-cls.pt`, `yolov8l-cls.pt`, `yolov8x-cls.pt` | Classification |
### Supported Modes
| Mode | Supported |
|------------|--------------------|
| Inference | :heavy_check_mark: |
| Validation | :heavy_check_mark: |
| Training | :heavy_check_mark: |
??? Performance
=== "Detection"
| Model | size<br><sup>(pixels) | mAP<sup>val<br>50-95 | Speed<br><sup>CPU ONNX<br>(ms) | Speed<br><sup>A100 TensorRT<br>(ms) | params<br><sup>(M) | FLOPs<br><sup>(B) |
| ------------------------------------------------------------------------------------ | --------------------- | -------------------- | ------------------------------ | ----------------------------------- | ------------------ | ----------------- |
| [YOLOv8n](https://github.com/ultralytics/assets/releases/download/v0.0.0/yolov8n.pt) | 640 | 37.3 | 80.4 | 0.99 | 3.2 | 8.7 |
| [YOLOv8s](https://github.com/ultralytics/assets/releases/download/v0.0.0/yolov8s.pt) | 640 | 44.9 | 128.4 | 1.20 | 11.2 | 28.6 |
| [YOLOv8m](https://github.com/ultralytics/assets/releases/download/v0.0.0/yolov8m.pt) | 640 | 50.2 | 234.7 | 1.83 | 25.9 | 78.9 |
| [YOLOv8l](https://github.com/ultralytics/assets/releases/download/v0.0.0/yolov8l.pt) | 640 | 52.9 | 375.2 | 2.39 | 43.7 | 165.2 |
| [YOLOv8x](https://github.com/ultralytics/assets/releases/download/v0.0.0/yolov8x.pt) | 640 | 53.9 | 479.1 | 3.53 | 68.2 | 257.8 |
=== "Segmentation"
| Model | size<br><sup>(pixels) | mAP<sup>box<br>50-95 | mAP<sup>mask<br>50-95 | Speed<br><sup>CPU ONNX<br>(ms) | Speed<br><sup>A100 TensorRT<br>(ms) | params<br><sup>(M) | FLOPs<br><sup>(B) |
| -------------------------------------------------------------------------------------------- | --------------------- | -------------------- | --------------------- | ------------------------------ | ----------------------------------- | ------------------ | ----------------- |
| [YOLOv8n-seg](https://github.com/ultralytics/assets/releases/download/v0.0.0/yolov8n-seg.pt) | 640 | 36.7 | 30.5 | 96.1 | 1.21 | 3.4 | 12.6 |
| [YOLOv8s-seg](https://github.com/ultralytics/assets/releases/download/v0.0.0/yolov8s-seg.pt) | 640 | 44.6 | 36.8 | 155.7 | 1.47 | 11.8 | 42.6 |
| [YOLOv8m-seg](https://github.com/ultralytics/assets/releases/download/v0.0.0/yolov8m-seg.pt) | 640 | 49.9 | 40.8 | 317.0 | 2.18 | 27.3 | 110.2 |
| [YOLOv8l-seg](https://github.com/ultralytics/assets/releases/download/v0.0.0/yolov8l-seg.pt) | 640 | 52.3 | 42.6 | 572.4 | 2.79 | 46.0 | 220.5 |
| [YOLOv8x-seg](https://github.com/ultralytics/assets/releases/download/v0.0.0/yolov8x-seg.pt) | 640 | 53.4 | 43.4 | 712.1 | 4.02 | 71.8 | 344.1 |
=== "Classification"
| Model | size<br><sup>(pixels) | acc<br><sup>top1 | acc<br><sup>top5 | Speed<br><sup>CPU ONNX<br>(ms) | Speed<br><sup>A100 TensorRT<br>(ms) | params<br><sup>(M) | FLOPs<br><sup>(B) at 640 |
| -------------------------------------------------------------------------------------------- | --------------------- | ---------------- | ---------------- | ------------------------------ | ----------------------------------- | ------------------ | ------------------------ |
| [YOLOv8n-cls](https://github.com/ultralytics/assets/releases/download/v0.0.0/yolov8n-cls.pt) | 224 | 66.6 | 87.0 | 12.9 | 0.31 | 2.7 | 4.3 |
| [YOLOv8s-cls](https://github.com/ultralytics/assets/releases/download/v0.0.0/yolov8s-cls.pt) | 224 | 72.3 | 91.1 | 23.4 | 0.35 | 6.4 | 13.5 |
| [YOLOv8m-cls](https://github.com/ultralytics/assets/releases/download/v0.0.0/yolov8m-cls.pt) | 224 | 76.4 | 93.2 | 85.4 | 0.62 | 17.0 | 42.7 |
| [YOLOv8l-cls](https://github.com/ultralytics/assets/releases/download/v0.0.0/yolov8l-cls.pt) | 224 | 78.0 | 94.1 | 163.0 | 0.87 | 37.5 | 99.7 |
| [YOLOv8x-cls](https://github.com/ultralytics/assets/releases/download/v0.0.0/yolov8x-cls.pt) | 224 | 78.4 | 94.3 | 232.0 | 1.01 | 57.4 | 154.8 |
=== "Pose"
| Model | size<br><sup>(pixels) | mAP<sup>pose<br>50-95 | mAP<sup>pose<br>50 | Speed<br><sup>CPU ONNX<br>(ms) | Speed<br><sup>A100 TensorRT<br>(ms) | params<br><sup>(M) | FLOPs<br><sup>(B) |
| ---------------------------------------------------------------------------------------------------- | --------------------- | --------------------- | ------------------ | ------------------------------ | ----------------------------------- | ------------------ | ----------------- |
| [YOLOv8n-pose](https://github.com/ultralytics/assets/releases/download/v0.0.0/yolov8n-pose.pt) | 640 | 50.4 | 80.1 | 131.8 | 1.18 | 3.3 | 9.2 |
| [YOLOv8s-pose](https://github.com/ultralytics/assets/releases/download/v0.0.0/yolov8s-pose.pt) | 640 | 60.0 | 86.2 | 233.2 | 1.42 | 11.6 | 30.2 |
| [YOLOv8m-pose](https://github.com/ultralytics/assets/releases/download/v0.0.0/yolov8m-pose.pt) | 640 | 65.0 | 88.8 | 456.3 | 2.00 | 26.4 | 81.0 |
| [YOLOv8l-pose](https://github.com/ultralytics/assets/releases/download/v0.0.0/yolov8l-pose.pt) | 640 | 67.6 | 90.0 | 784.5 | 2.59 | 44.4 | 168.6 |
| [YOLOv8x-pose](https://github.com/ultralytics/assets/releases/download/v0.0.0/yolov8x-pose.pt) | 640 | 69.2 | 90.2 | 1607.1 | 3.73 | 69.4 | 263.2 |
| [YOLOv8x-pose-p6](https://github.com/ultralytics/assets/releases/download/v0.0.0/yolov8x-pose-p6.pt) | 1280 | 71.6 | 91.2 | 4088.7 | 10.04 | 99.1 | 1066.4 |
## YOLOv5u
### About
Anchor-free YOLOv5 models with improved accuracy-speed tradeoff.
### Supported Tasks
| Model Type | Pre-trained Weights | Task |
|------------|-----------------------------------------------------------------------------------------------------------------------------|-----------|
| YOLOv5u | `yolov5nu`, `yolov5su`, `yolov5mu`, `yolov5lu`, `yolov5xu`, `yolov5n6u`, `yolov5s6u`, `yolov5m6u`, `yolov5l6u`, `yolov5x6u` | Detection |
### Supported Modes
| Mode | Supported |
|------------|--------------------|
| Inference | :heavy_check_mark: |
| Validation | :heavy_check_mark: |
| Training | :heavy_check_mark: |
??? Performance
=== "Detection"
| Model | size<br><sup>(pixels) | mAP<sup>val<br>50-95 | Speed<br><sup>CPU ONNX<br>(ms) | Speed<br><sup>A100 TensorRT<br>(ms) | params<br><sup>(M) | FLOPs<br><sup>(B) |
| ---------------------------------------------------------------------------------------- | --------------------- | -------------------- | ------------------------------ | ----------------------------------- | ------------------ | ----------------- |
| [YOLOv5nu](https://github.com/ultralytics/assets/releases/download/v0.0.0/yolov5nu.pt) | 640 | 34.3 | 73.6 | 1.06 | 2.6 | 7.7 |
| [YOLOv5su](https://github.com/ultralytics/assets/releases/download/v0.0.0/yolov5su.pt) | 640 | 43.0 | 120.7 | 1.27 | 9.1 | 24.0 |
| [YOLOv5mu](https://github.com/ultralytics/assets/releases/download/v0.0.0/yolov5mu.pt) | 640 | 49.0 | 233.9 | 1.86 | 25.1 | 64.2 |
| [YOLOv5lu](https://github.com/ultralytics/assets/releases/download/v0.0.0/yolov5lu.pt) | 640 | 52.2 | 408.4 | 2.50 | 53.2 | 135.0 |
| [YOLOv5xu](https://github.com/ultralytics/assets/releases/download/v0.0.0/yolov5xu.pt) | 640 | 53.2 | 763.2 | 3.81 | 97.2 | 246.4 |
| | | | | | | |
| [YOLOv5n6u](https://github.com/ultralytics/assets/releases/download/v0.0.0/yolov5n6u.pt) | 1280 | 42.1 | - | - | 4.3 | 7.8 |
| [YOLOv5s6u](https://github.com/ultralytics/assets/releases/download/v0.0.0/yolov5s6u.pt) | 1280 | 48.6 | - | - | 15.3 | 24.6 |
| [YOLOv5m6u](https://github.com/ultralytics/assets/releases/download/v0.0.0/yolov5m6u.pt) | 1280 | 53.6 | - | - | 41.2 | 65.7 |
| [YOLOv5l6u](https://github.com/ultralytics/assets/releases/download/v0.0.0/yolov5l6u.pt) | 1280 | 55.7 | - | - | 86.1 | 137.4 |
| [YOLOv5x6u](https://github.com/ultralytics/assets/releases/download/v0.0.0/yolov5x6u.pt) | 1280 | 56.8 | - | - | 155.4 | 250.7 |
---
## Vision Transformers
Vit models currently support Python environment:
```python
from ultralytics.vit import SAM
# from ultralytics.vit import MODEL_TYPe
model = SAM("sam_b.pt")
model.info() # display model information
model.predict(...) # train the model
```
## Segment Anything
### About
### Supported Tasks
| Model Type | Pre-trained Weights | Tasks Supported |
|------------|---------------------|-----------------------|
| sam base | `sam_b.pt` | Instance Segmentation |
| sam large | `sam_l.pt` | Instance Segmentation |
### Supported Modes
| Mode | Supported |
|------------|--------------------|
| Inference | :heavy_check_mark: |
| Validation | :x: |
| Training | :x: |
For more details on each model, their supported tasks, modes, and performance, please visit their respective documentation pages linked above.

@ -0,0 +1,36 @@
---
comments: true
---
# Vision Transformers
ViT models are currently supported in the Python environment:
```python
from ultralytics.vit import SAM
# from ultralytics.vit import MODEL_TYPE
model = SAM("sam_b.pt")
model.info()  # display model information
model.predict(...)  # predict with the model
```
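For a concrete call, a hedged sketch assuming SAM accepts a standard image source in `predict` (the weights file and image path are placeholders):
```python
from ultralytics.vit import SAM

model = SAM('sam_b.pt')
results = model.predict('image.jpg')  # segment everything in the image
```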
# Segment Anything
## About
## Supported Tasks
| Model Type | Pre-trained Weights | Tasks Supported |
|------------|---------------------|-----------------------|
| sam base | `sam_b.pt` | Instance Segmentation |
| sam large | `sam_l.pt` | Instance Segmentation |
## Supported Modes
| Mode | Supported |
|------------|--------------------|
| Inference | :heavy_check_mark: |
| Validation | :x: |
| Training | :x: |

@ -0,0 +1,7 @@
---
comments: true
---
# 🚧 Page Under Construction ⚒
This page is currently under construction! 👷 Please check back later for updates. 😃🔜

@ -0,0 +1,41 @@
---
comments: true
---
# YOLOv5u
## About
Anchor-free YOLOv5 models with improved accuracy-speed tradeoff.
## Supported Tasks
| Model Type | Pre-trained Weights | Task |
|------------|-----------------------------------------------------------------------------------------------------------------------------|-----------|
| YOLOv5u | `yolov5nu`, `yolov5su`, `yolov5mu`, `yolov5lu`, `yolov5xu`, `yolov5n6u`, `yolov5s6u`, `yolov5m6u`, `yolov5l6u`, `yolov5x6u` | Detection |
## Supported Modes
| Mode | Supported |
|------------|--------------------|
| Inference | :heavy_check_mark: |
| Validation | :heavy_check_mark: |
| Training | :heavy_check_mark: |
??? Performance
=== "Detection"
| Model | size<br><sup>(pixels) | mAP<sup>val<br>50-95 | Speed<br><sup>CPU ONNX<br>(ms) | Speed<br><sup>A100 TensorRT<br>(ms) | params<br><sup>(M) | FLOPs<br><sup>(B) |
| ---------------------------------------------------------------------------------------- | --------------------- | -------------------- | ------------------------------ | ----------------------------------- | ------------------ | ----------------- |
| [YOLOv5nu](https://github.com/ultralytics/assets/releases/download/v0.0.0/yolov5nu.pt) | 640 | 34.3 | 73.6 | 1.06 | 2.6 | 7.7 |
| [YOLOv5su](https://github.com/ultralytics/assets/releases/download/v0.0.0/yolov5su.pt) | 640 | 43.0 | 120.7 | 1.27 | 9.1 | 24.0 |
| [YOLOv5mu](https://github.com/ultralytics/assets/releases/download/v0.0.0/yolov5mu.pt) | 640 | 49.0 | 233.9 | 1.86 | 25.1 | 64.2 |
| [YOLOv5lu](https://github.com/ultralytics/assets/releases/download/v0.0.0/yolov5lu.pt) | 640 | 52.2 | 408.4 | 2.50 | 53.2 | 135.0 |
| [YOLOv5xu](https://github.com/ultralytics/assets/releases/download/v0.0.0/yolov5xu.pt) | 640 | 53.2 | 763.2 | 3.81 | 97.2 | 246.4 |
| | | | | | | |
| [YOLOv5n6u](https://github.com/ultralytics/assets/releases/download/v0.0.0/yolov5n6u.pt) | 1280 | 42.1 | - | - | 4.3 | 7.8 |
| [YOLOv5s6u](https://github.com/ultralytics/assets/releases/download/v0.0.0/yolov5s6u.pt) | 1280 | 48.6 | - | - | 15.3 | 24.6 |
| [YOLOv5m6u](https://github.com/ultralytics/assets/releases/download/v0.0.0/yolov5m6u.pt) | 1280 | 53.6 | - | - | 41.2 | 65.7 |
| [YOLOv5l6u](https://github.com/ultralytics/assets/releases/download/v0.0.0/yolov5l6u.pt) | 1280 | 55.7 | - | - | 86.1 | 137.4 |
| [YOLOv5x6u](https://github.com/ultralytics/assets/releases/download/v0.0.0/yolov5x6u.pt) | 1280 | 56.8 | - | - | 155.4 | 250.7 |
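A short hedged usage sketch (the weights name is assumed to auto-download like other Ultralytics checkpoints, and the image path is a placeholder):
```python
from ultralytics import YOLO

model = YOLO('yolov5nu.pt')  # anchor-free YOLOv5u nano weights
model.info()  # display model information
results = model('bus.jpg')  # run detection on an image
```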

@ -0,0 +1,67 @@
---
comments: true
---
# YOLOv8
## About
## Supported Tasks
| Model Type | Pre-trained Weights | Task |
|-------------|------------------------------------------------------------------------------------------------------------------|-----------------------|
| YOLOv8 | `yolov8n.pt`, `yolov8s.pt`, `yolov8m.pt`, `yolov8l.pt`, `yolov8x.pt` | Detection |
| YOLOv8-seg | `yolov8n-seg.pt`, `yolov8s-seg.pt`, `yolov8m-seg.pt`, `yolov8l-seg.pt`, `yolov8x-seg.pt` | Instance Segmentation |
| YOLOv8-pose | `yolov8n-pose.pt`, `yolov8s-pose.pt`, `yolov8m-pose.pt`, `yolov8l-pose.pt`, `yolov8x-pose.pt` ,`yolov8x-pose-p6` | Pose/Keypoints |
| YOLOv8-cls | `yolov8n-cls.pt`, `yolov8s-cls.pt`, `yolov8m-cls.pt`, `yolov8l-cls.pt`, `yolov8x-cls.pt` | Classification |
## Supported Modes
| Mode | Supported |
|------------|--------------------|
| Inference | :heavy_check_mark: |
| Validation | :heavy_check_mark: |
| Training | :heavy_check_mark: |
??? Performance
=== "Detection"
| Model | size<br><sup>(pixels) | mAP<sup>val<br>50-95 | Speed<br><sup>CPU ONNX<br>(ms) | Speed<br><sup>A100 TensorRT<br>(ms) | params<br><sup>(M) | FLOPs<br><sup>(B) |
| ------------------------------------------------------------------------------------ | --------------------- | -------------------- | ------------------------------ | ----------------------------------- | ------------------ | ----------------- |
| [YOLOv8n](https://github.com/ultralytics/assets/releases/download/v0.0.0/yolov8n.pt) | 640 | 37.3 | 80.4 | 0.99 | 3.2 | 8.7 |
| [YOLOv8s](https://github.com/ultralytics/assets/releases/download/v0.0.0/yolov8s.pt) | 640 | 44.9 | 128.4 | 1.20 | 11.2 | 28.6 |
| [YOLOv8m](https://github.com/ultralytics/assets/releases/download/v0.0.0/yolov8m.pt) | 640 | 50.2 | 234.7 | 1.83 | 25.9 | 78.9 |
| [YOLOv8l](https://github.com/ultralytics/assets/releases/download/v0.0.0/yolov8l.pt) | 640 | 52.9 | 375.2 | 2.39 | 43.7 | 165.2 |
| [YOLOv8x](https://github.com/ultralytics/assets/releases/download/v0.0.0/yolov8x.pt) | 640 | 53.9 | 479.1 | 3.53 | 68.2 | 257.8 |
=== "Segmentation"
| Model | size<br><sup>(pixels) | mAP<sup>box<br>50-95 | mAP<sup>mask<br>50-95 | Speed<br><sup>CPU ONNX<br>(ms) | Speed<br><sup>A100 TensorRT<br>(ms) | params<br><sup>(M) | FLOPs<br><sup>(B) |
| -------------------------------------------------------------------------------------------- | --------------------- | -------------------- | --------------------- | ------------------------------ | ----------------------------------- | ------------------ | ----------------- |
| [YOLOv8n-seg](https://github.com/ultralytics/assets/releases/download/v0.0.0/yolov8n-seg.pt) | 640 | 36.7 | 30.5 | 96.1 | 1.21 | 3.4 | 12.6 |
| [YOLOv8s-seg](https://github.com/ultralytics/assets/releases/download/v0.0.0/yolov8s-seg.pt) | 640 | 44.6 | 36.8 | 155.7 | 1.47 | 11.8 | 42.6 |
| [YOLOv8m-seg](https://github.com/ultralytics/assets/releases/download/v0.0.0/yolov8m-seg.pt) | 640 | 49.9 | 40.8 | 317.0 | 2.18 | 27.3 | 110.2 |
| [YOLOv8l-seg](https://github.com/ultralytics/assets/releases/download/v0.0.0/yolov8l-seg.pt) | 640 | 52.3 | 42.6 | 572.4 | 2.79 | 46.0 | 220.5 |
| [YOLOv8x-seg](https://github.com/ultralytics/assets/releases/download/v0.0.0/yolov8x-seg.pt) | 640 | 53.4 | 43.4 | 712.1 | 4.02 | 71.8 | 344.1 |
=== "Classification"
| Model | size<br><sup>(pixels) | acc<br><sup>top1 | acc<br><sup>top5 | Speed<br><sup>CPU ONNX<br>(ms) | Speed<br><sup>A100 TensorRT<br>(ms) | params<br><sup>(M) | FLOPs<br><sup>(B) at 640 |
| -------------------------------------------------------------------------------------------- | --------------------- | ---------------- | ---------------- | ------------------------------ | ----------------------------------- | ------------------ | ------------------------ |
| [YOLOv8n-cls](https://github.com/ultralytics/assets/releases/download/v0.0.0/yolov8n-cls.pt) | 224 | 66.6 | 87.0 | 12.9 | 0.31 | 2.7 | 4.3 |
| [YOLOv8s-cls](https://github.com/ultralytics/assets/releases/download/v0.0.0/yolov8s-cls.pt) | 224 | 72.3 | 91.1 | 23.4 | 0.35 | 6.4 | 13.5 |
| [YOLOv8m-cls](https://github.com/ultralytics/assets/releases/download/v0.0.0/yolov8m-cls.pt) | 224 | 76.4 | 93.2 | 85.4 | 0.62 | 17.0 | 42.7 |
| [YOLOv8l-cls](https://github.com/ultralytics/assets/releases/download/v0.0.0/yolov8l-cls.pt) | 224 | 78.0 | 94.1 | 163.0 | 0.87 | 37.5 | 99.7 |
| [YOLOv8x-cls](https://github.com/ultralytics/assets/releases/download/v0.0.0/yolov8x-cls.pt) | 224 | 78.4 | 94.3 | 232.0 | 1.01 | 57.4 | 154.8 |
=== "Pose"
| Model | size<br><sup>(pixels) | mAP<sup>pose<br>50-95 | mAP<sup>pose<br>50 | Speed<br><sup>CPU ONNX<br>(ms) | Speed<br><sup>A100 TensorRT<br>(ms) | params<br><sup>(M) | FLOPs<br><sup>(B) |
| ---------------------------------------------------------------------------------------------------- | --------------------- | --------------------- | ------------------ | ------------------------------ | ----------------------------------- | ------------------ | ----------------- |
| [YOLOv8n-pose](https://github.com/ultralytics/assets/releases/download/v0.0.0/yolov8n-pose.pt) | 640 | 50.4 | 80.1 | 131.8 | 1.18 | 3.3 | 9.2 |
| [YOLOv8s-pose](https://github.com/ultralytics/assets/releases/download/v0.0.0/yolov8s-pose.pt) | 640 | 60.0 | 86.2 | 233.2 | 1.42 | 11.6 | 30.2 |
| [YOLOv8m-pose](https://github.com/ultralytics/assets/releases/download/v0.0.0/yolov8m-pose.pt) | 640 | 65.0 | 88.8 | 456.3 | 2.00 | 26.4 | 81.0 |
| [YOLOv8l-pose](https://github.com/ultralytics/assets/releases/download/v0.0.0/yolov8l-pose.pt) | 640 | 67.6 | 90.0 | 784.5 | 2.59 | 44.4 | 168.6 |
| [YOLOv8x-pose](https://github.com/ultralytics/assets/releases/download/v0.0.0/yolov8x-pose.pt) | 640 | 69.2 | 90.2 | 1607.1 | 3.73 | 69.4 | 263.2 |
| [YOLOv8x-pose-p6](https://github.com/ultralytics/assets/releases/download/v0.0.0/yolov8x-pose-p6.pt) | 1280 | 71.6 | 91.2 | 4088.7 | 10.04 | 99.1 | 1066.4 |
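A hedged validation sketch matching the Supported Modes table above (the dataset YAML is an assumption for illustration):
```python
from ultralytics import YOLO

model = YOLO('yolov8n.pt')
metrics = model.val(data='coco128.yaml')  # evaluate on COCO128
print(metrics.box.map)  # mAP50-95
```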

@ -1,3 +1,7 @@
---
comments: true
---
<img width="1024" src="https://github.com/ultralytics/assets/raw/main/yolov8/banner-integrations.png">
**Benchmark mode** is used to profile the speed and accuracy of various export formats for YOLOv8. The benchmarks

@ -1,3 +1,7 @@
---
comments: true
---
<img width="1024" src="https://github.com/ultralytics/assets/raw/main/yolov8/banner-integrations.png">
**Export mode** is used for exporting a YOLOv8 model to a format that can be used for deployment. In this mode, the

@ -1,3 +1,7 @@
---
comments: true
---
# Ultralytics YOLOv8 Modes
<img width="1024" src="https://github.com/ultralytics/assets/raw/main/yolov8/banner-integrations.png">

@ -1,3 +1,7 @@
---
comments: true
---
<img width="1024" src="https://github.com/ultralytics/assets/raw/main/yolov8/banner-integrations.png">
YOLOv8 **predict mode** can generate predictions for various tasks, returning either a list of `Results` objects or a

@ -1,3 +1,7 @@
---
comments: true
---
<img width="1024" src="https://github.com/ultralytics/assets/raw/main/yolov8/banner-integrations.png">
Object tracking is a task that involves identifying the location and class of objects, then assigning a unique ID to

@ -1,3 +1,11 @@
---
comments: true
---
<img width="1024" src="https://github.com/ultralytics/assets/raw/main/yolov8/banner-integrations.png">
**Train mode** is used for training a YOLOv8 model on a custom dataset. In this mode, the model is trained using the

@ -1,3 +1,7 @@
---
comments: true
---
<img width="1024" src="https://github.com/ultralytics/assets/raw/main/yolov8/banner-integrations.png">
**Val mode** is used for validating a YOLOv8 model after it has been trained. In this mode, the model is evaluated on a

@ -0,0 +1,50 @@
{% if page.meta.comments %}
<h2 id="__comments">{{ lang.t("meta.comments") }}</h2>
<!-- Insert Giscus code snippet from https://giscus.app/ here -->
<script src="https://giscus.app/client.js"
data-repo="ultralytics/ultralytics"
data-repo-id="R_kgDOH-jzvQ"
data-category="Docs"
data-category-id="DIC_kwDOH-jzvc4CWLkL"
data-mapping="title"
data-strict="0"
data-reactions-enabled="1"
data-emit-metadata="0"
data-input-position="bottom"
data-theme="preferred_color_scheme"
data-lang="en"
crossorigin="anonymous"
async>
</script>
<!-- Synchronize Giscus theme with palette -->
<script>
var giscus = document.querySelector("script[src*=giscus]")
/* Set palette on initial load */
var palette = __md_get("__palette")
if (palette && typeof palette.color === "object") {
var theme = palette.color.scheme === "slate" ? "dark" : "light"
giscus.setAttribute("data-theme", theme)
}
/* Register event handlers after the document has loaded */
document.addEventListener("DOMContentLoaded", function() {
var ref = document.querySelector("[data-md-component=palette]")
ref.addEventListener("change", function() {
var palette = __md_get("__palette")
if (palette && typeof palette.color === "object") {
var theme = palette.color.scheme === "slate" ? "dark" : "light"
/* Instruct Giscus to change theme */
var frame = document.querySelector(".giscus-frame")
frame.contentWindow.postMessage(
{ giscus: { setConfig: { theme } } },
"https://giscus.app"
)
}
})
})
</script>
{% endif %}

@ -1,3 +1,7 @@
---
comments: true
---
## Install
Install YOLOv8 via the `ultralytics` pip package for the latest stable release or by cloning

@ -1,3 +1,7 @@
---
comments: true
---
Image classification is the simplest of the three tasks and involves classifying an entire image into one of a set of
predefined classes.

@ -1,3 +1,7 @@
---
comments: true
---
Object detection is a task that involves identifying the location and class of objects in an image or video stream.
<img width="1024" src="https://user-images.githubusercontent.com/26833433/212094133-6bb8c21c-3d47-41df-a512-81c5931054ae.png">

@ -1,3 +1,7 @@
---
comments: true
---
# Ultralytics YOLOv8 Tasks
YOLOv8 is an AI framework that supports multiple computer vision **tasks**. The framework can be used to

@ -1,3 +1,7 @@
---
comments: true
---
Pose estimation is a task that involves identifying the location of specific points in an image, usually referred
to as keypoints. The keypoints can represent various parts of the object such as joints, landmarks, or other distinctive
features. The locations of the keypoints are usually represented as a set of 2D `[x, y]` or 3D `[x, y, visible]` coordinates.
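A hedged sketch of reading those keypoints from a prediction (weights and image are placeholders):
```python
from ultralytics import YOLO

model = YOLO('yolov8n-pose.pt')
results = model('person.jpg')
print(results[0].keypoints)  # per-person keypoint coordinates
```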

@ -1,3 +1,7 @@
---
comments: true
---
Instance segmentation goes a step further than object detection and involves identifying individual objects in an image
and segmenting them from the rest of the image.

@ -1,3 +1,7 @@
---
comments: true
---
## Callbacks
Ultralytics framework supports callbacks as entry points in strategic stages of train, val, export, and predict modes.
@ -13,7 +17,7 @@ In this example, we want to return the original frame with each result object. H
```python
def on_predict_batch_end(predictor):
    # Retrieve the batch data
    _, _, im0s, _, _ = predictor.batch
    _, im0s, _, _ = predictor.batch
    # Ensure that im0s is a list
    im0s = im0s if isinstance(im0s, list) else [im0s]
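    # Combine the prediction results with the corresponding frames
    # (assumed continuation; the original hunk ends just above this line)
    predictor.results = zip(predictor.results, im0s)

# Hedged usage sketch (registration step assumed, not shown in this diff):
# model.add_callback('on_predict_batch_end', on_predict_batch_end)
# for result, frame in model.predict(source='video.mp4', stream=True):
#     ...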

@ -1,3 +1,7 @@
---
comments: true
---
YOLO settings and hyperparameters play a critical role in the model's performance, speed, and accuracy. These settings
and hyperparameters can affect the model's behavior at various stages of the model development process, including
training, validation, and prediction.

@ -1,3 +1,7 @@
---
comments: true
---
# Command Line Interface Usage
The YOLO command line interface (CLI) allows for simple single-line commands without the need for a Python environment.

@ -1,3 +1,7 @@
---
comments: true
---
Both the Ultralytics YOLO command-line and Python interfaces are simply a high-level abstraction on the base engine
executors. Let's take a look at the Trainer engine.

@ -1,3 +1,7 @@
---
comments: true
---
# Hyperparameter Tuning with Ray Tune and YOLOv8
Hyperparameter tuning (or hyperparameter optimization) is the process of determining the right combination of hyperparameters that maximizes model performance. It works by running multiple trials in a single training process, evaluating the performance of each trial, and selecting the best hyperparameter values based on the evaluation results.

@ -1,3 +1,7 @@
---
comments: true
---
# Python Usage
Welcome to the YOLOv8 Python Usage documentation! This guide is designed to help you seamlessly integrate YOLOv8 into

@ -1,3 +1,7 @@
---
comments: true
---
# YOLOv5 🚀 on AWS Deep Learning Instance: A Comprehensive Guide
This guide will help new users run YOLOv5 on an Amazon Web Services (AWS) Deep Learning instance. AWS offers a [Free Tier](https://aws.amazon.com/free/) and a [credit program](https://aws.amazon.com/activate/) for a quick and affordable start.

@ -1,3 +1,7 @@
---
comments: true
---
# Get Started with YOLOv5 🚀 in Docker
This tutorial will guide you through the process of setting up and running YOLOv5 in a Docker container.

@ -1,3 +1,7 @@
---
comments: true
---
# Run YOLOv5 🚀 on Google Cloud Platform (GCP) Deep Learning Virtual Machine (VM) ⭐
This tutorial will guide you through the process of setting up and running YOLOv5 on a GCP Deep Learning VM. New GCP users are eligible for a [$300 free credit offer](https://cloud.google.com/free/docs/gcp-free-tier#free-trial).

@ -1,4 +1,8 @@
# YOLOv5 Docs
---
comments: true
---
# Ultralytics YOLOv5
<div align="center">
<p>

@ -1,3 +1,7 @@
---
comments: true
---
# YOLOv5 Quickstart
See below for quickstart examples.

@ -1,3 +1,7 @@
---
comments: true
---
## 1. Model Structure
YOLOv5 (v6.0/6.1) consists of:

@ -1,3 +1,7 @@
---
comments: true
---
# ClearML Integration
<img align="center" src="https://github.com/thepycoder/clearml_screenshots/raw/main/logos_dark.png#gh-light-mode-only" alt="Clear|ML"><img align="center" src="https://github.com/thepycoder/clearml_screenshots/raw/main/logos_light.png#gh-dark-mode-only" alt="Clear|ML">

@ -1,3 +1,7 @@
---
comments: true
---
<img src="https://cdn.comet.ml/img/notebook_logo.png">
# YOLOv5 with Comet

@ -1,3 +1,7 @@
---
comments: true
---
📚 This guide explains **hyperparameter evolution** for YOLOv5 🚀. Hyperparameter evolution is a method of [Hyperparameter Optimization](https://en.wikipedia.org/wiki/Hyperparameter_optimization) using a [Genetic Algorithm](https://en.wikipedia.org/wiki/Genetic_algorithm) (GA) for optimization. UPDATED 25 September 2022.
Hyperparameters in ML control various aspects of training, and finding optimal values for them can be a challenge. Traditional methods like grid searches can quickly become intractable due to 1) the high dimensional search space 2) unknown correlations among the dimensions, and 3) expensive nature of evaluating the fitness at each point, making GA a suitable candidate for hyperparameter searches.

@ -1,3 +1,7 @@
---
comments: true
---
📚 This guide explains how to use YOLOv5 🚀 **model ensembling** during testing and inference for improved mAP and Recall.
UPDATED 25 September 2022.

@ -1,3 +1,7 @@
---
comments: true
---
# TFLite, ONNX, CoreML, TensorRT Export
📚 This guide explains how to export a trained YOLOv5 🚀 model from PyTorch to ONNX and TorchScript formats.

@ -1,3 +1,7 @@
---
comments: true
---
📚 This guide explains how to apply **pruning** to YOLOv5 🚀 models.
UPDATED 25 September 2022.

@ -1,3 +1,7 @@
---
comments: true
---
📚 This guide explains how to properly use **multiple** GPUs to train a dataset with YOLOv5 🚀 on single or multiple machine(s).
UPDATED 25 December 2022.

@ -1,3 +1,7 @@
---
comments: true
---
<!--
Copyright (c) 2021 - present / Neuralmagic, Inc. All Rights Reserved.
@ -30,8 +34,6 @@ Put simply, DeepSparse gives you the performance of GPUs and the simplicity of s
- **Infinite Scalability**: Scale vertically to 100s of cores, out with standard Kubernetes, or fully-abstracted with Serverless
- **Easy Integration**: Clean APIs for integrating your model into an application and monitoring it in production
**[Start your 90 day Free Trial](https://neuralmagic.com/deepsparse-free-trial/?utm_campaign=free_trial&utm_source=ultralytics_github).**
### How Does DeepSparse Achieve GPU-Class Performance?
DeepSparse takes advantage of model sparsity to gain its performance speedup.
@ -256,5 +258,3 @@ deepsparse.benchmark zoo:cv/detection/yolov5-s/pytorch/ultralytics/coco/pruned35
## Get Started With DeepSparse
**Research or Testing?** DeepSparse Community is free for research and testing. Get started with our [Documentation](https://docs.neuralmagic.com/).
**Want to Try DeepSparse Enterprise?** [Start your 90 day free trial](https://neuralmagic.com/deepsparse-free-trial/?utm_campaign=free_trial&utm_source=ultralytics_github).

@ -1,3 +1,7 @@
---
comments: true
---
📚 This guide explains how to load YOLOv5 🚀 from PyTorch Hub at [https://pytorch.org/hub/ultralytics_yolov5](https://pytorch.org/hub/ultralytics_yolov5).
UPDATED 26 March 2023.

@ -1,3 +1,7 @@
---
comments: true
---
# Roboflow Datasets
You can now use Roboflow to organize, label, prepare, version, and host your datasets for training YOLOv5 🚀 models. Roboflow is free to use with YOLOv5 if you make your workspace public.

@ -1,3 +1,7 @@
---
comments: true
---
# Deploy on NVIDIA Jetson using TensorRT and DeepStream SDK
📚 This guide explains how to deploy a trained model into NVIDIA Jetson Platform and perform inference using TensorRT and DeepStream SDK. Here we use TensorRT to maximize the inference performance on the Jetson platform.

@ -1,3 +1,7 @@
---
comments: true
---
# Test-Time Augmentation (TTA)
📚 This guide explains how to use Test Time Augmentation (TTA) during testing and inference for improved mAP and Recall with YOLOv5 🚀.

@ -1,3 +1,7 @@
---
comments: true
---
📚 This guide explains how to produce the best mAP and training results with YOLOv5 🚀.
UPDATED 25 May 2022.

@ -1,3 +1,7 @@
---
comments: true
---
📚 This guide explains how to train your own **custom dataset** with [YOLOv5](https://github.com/ultralytics/yolov5) 🚀.
UPDATED 26 March 2023.

@ -1,3 +1,7 @@
---
comments: true
---
📚 This guide explains how to **freeze** YOLOv5 🚀 layers when **transfer learning**. Transfer learning is a useful way to quickly retrain a model on new data without having to retrain the entire network. Instead, part of the initial weights are frozen in place, and the rest of the weights are used to compute loss and are updated by the optimizer. This requires less resources than normal training and allows for faster training times, though it may also result in reductions to final trained accuracy.
UPDATED 25 September 2022.

@ -8,7 +8,8 @@ repo_name: ultralytics/ultralytics
remote_name: https://github.com/ultralytics/docs
theme:
name: "material"
name: material
custom_dir: docs/overrides
logo: https://github.com/ultralytics/assets/raw/main/logo/Ultralytics_Logotype_Reverse.svg
favicon: assets/favicon.ico
font:
@ -110,8 +111,6 @@ markdown_extensions:
- pymdownx.emoji:
emoji_index: !!python/name:materialx.emoji.twemoji # noqa
emoji_generator: !!python/name:materialx.emoji.to_svg
# Content tabs
- pymdownx.tabbed:
alternate_style: true
@ -122,69 +121,7 @@ markdown_extensions:
- pymdownx.mark
- pymdownx.tilde
plugins:
- mkdocstrings
- search
- redirects:
redirect_maps:
callbacks.md: usage/callbacks.md
cfg.md: usage/cfg.md
cli.md: usage/cli.md
config.md: usage/cfg.md
engine.md: usage/engine.md
environments/AWS-Quickstart.md: yolov5/environments/aws_quickstart_tutorial.md
environments/Docker-Quickstart.md: yolov5/environments/docker_image_quickstart_tutorial.md
environments/GCP-Quickstart.md: yolov5/environments/google_cloud_quickstart_tutorial.md
FAQ/augmentation.md: yolov5/tutorials/tips_for_best_training_results.md
package-framework.md: index.md
package-framework/mock_detector.md: index.md
predict.md: modes/predict.md
python.md: usage/python.md
quick-start.md: quickstart.md
reference/base_pred.md: reference/yolo/engine/predictor.md
reference/base_trainer.md: reference/yolo/engine/trainer.md
reference/exporter.md: reference/yolo/engine/exporter.md
reference/model.md: reference/yolo/engine/model.md
reference/nn.md: reference/nn/modules.md
reference/ops.md: reference/yolo/utils/ops.md
reference/results.md: reference/yolo/engine/results.md
sdk.md: index.md
tasks/classification.md: tasks/classify.md
tasks/detection.md: tasks/detect.md
tasks/segmentation.md: tasks/segment.md
tasks/keypoints.md: tasks/pose.md
tasks/tracking.md: modes/track.md
tutorials/architecture-summary.md: yolov5/tutorials/architecture_description.md
tutorials/clearml-logging.md: yolov5/tutorials/clearml_logging_integration.md
tutorials/comet-logging.md: yolov5/tutorials/comet_logging_integration.md
tutorials/hyperparameter-evolution.md: yolov5/tutorials/hyperparameter_evolution.md
tutorials/model-ensembling.md: yolov5/tutorials/model_ensembling.md
tutorials/multi-gpu-training.md: yolov5/tutorials/multi_gpu_training.md
tutorials/nvidia-jetson.md: yolov5/tutorials/running_on_jetson_nano.md
tutorials/pruning-sparsity.md: yolov5/tutorials/model_pruning_and_sparsity.md
tutorials/pytorch-hub.md: yolov5/tutorials/pytorch_hub_model_loading.md
tutorials/roboflow.md: yolov5/tutorials/roboflow_datasets_integration.md
tutorials/test-time-augmentation.md: yolov5/tutorials/test_time_augmentation.md
tutorials/torchscript-onnx-coreml-export.md: yolov5/tutorials/model_export.md
tutorials/train-custom-datasets.md: yolov5/tutorials/train_custom_data.md
tutorials/training-tips-best-results.md: yolov5/tutorials/tips_for_best_training_results.md
tutorials/transfer-learning-froze-layers.md: yolov5/tutorials/transfer_learning_with_frozen_layers.md
tutorials/weights-and-biasis-logging.md: yolov5/tutorials/comet_logging_integration.md
yolov5/pytorch_hub.md: yolov5/tutorials/pytorch_hub_model_loading.md
yolov5/hyp_evolution.md: yolov5/tutorials/hyperparameter_evolution.md
yolov5/pruning_sparsity.md: yolov5/tutorials/model_pruning_and_sparsity.md
yolov5/comet.md: yolov5/tutorials/comet_logging_integration.md
yolov5/tta.md: yolov5/tutorials/test_time_augmentation.md
yolov5/multi_gpu_training.md: yolov5/tutorials/multi_gpu_training.md
yolov5/ensemble.md: yolov5/tutorials/model_ensembling.md
yolov5/jetson_nano.md: yolov5/tutorials/running_on_jetson_nano.md
yolov5/transfer_learn_frozen.md: yolov5/tutorials/transfer_learning_with_frozen_layers.md
yolov5/neural_magic.md: yolov5/tutorials/neural_magic_pruning_quantization.md
yolov5/train_custom_data.md: yolov5/tutorials/train_custom_data.md
yolov5/architecture.md: yolov5/tutorials/architecture_description.md
yolov5/export.md: yolov5/tutorials/model_export.md
# Primary navigation
# Primary navigation ---------------------------------------------------------------------------------------------------
nav:
- Home:
- Home: index.md
@ -204,7 +141,11 @@ nav:
- Classify: tasks/classify.md
- Pose: tasks/pose.md
- Models:
- models/index.md
- models/index.md
- YOLOv3: models/yolov3.md
- YOLOv5: models/yolov5.md
- YOLOv8: models/yolov8.md
- Segment Anything Model (SAM): models/sam.md
- Quickstart: quickstart.md
- Modes:
- modes/index.md
@ -343,3 +284,70 @@ nav:
- Minimum Reproducible Example (MRE) Guide: help/minimum_reproducible_example.md
- Code of Conduct: help/code_of_conduct.md
- Security Policy: SECURITY.md
# Plugins including 301 redirects navigation ---------------------------------------------------------------------------
plugins:
- mkdocstrings
- search
- redirects:
redirect_maps:
callbacks.md: usage/callbacks.md
cfg.md: usage/cfg.md
cli.md: usage/cli.md
config.md: usage/cfg.md
engine.md: usage/engine.md
environments/AWS-Quickstart.md: yolov5/environments/aws_quickstart_tutorial.md
environments/Docker-Quickstart.md: yolov5/environments/docker_image_quickstart_tutorial.md
environments/GCP-Quickstart.md: yolov5/environments/google_cloud_quickstart_tutorial.md
FAQ/augmentation.md: yolov5/tutorials/tips_for_best_training_results.md
package-framework.md: index.md
package-framework/mock_detector.md: index.md
predict.md: modes/predict.md
python.md: usage/python.md
quick-start.md: quickstart.md
reference/base_pred.md: reference/yolo/engine/predictor.md
reference/base_trainer.md: reference/yolo/engine/trainer.md
reference/exporter.md: reference/yolo/engine/exporter.md
reference/model.md: reference/yolo/engine/model.md
reference/nn.md: reference/nn/modules.md
reference/ops.md: reference/yolo/utils/ops.md
reference/results.md: reference/yolo/engine/results.md
sdk.md: index.md
tasks/classification.md: tasks/classify.md
tasks/detection.md: tasks/detect.md
tasks/segmentation.md: tasks/segment.md
tasks/keypoints.md: tasks/pose.md
tasks/tracking.md: modes/track.md
tutorials/architecture-summary.md: yolov5/tutorials/architecture_description.md
tutorials/clearml-logging.md: yolov5/tutorials/clearml_logging_integration.md
tutorials/comet-logging.md: yolov5/tutorials/comet_logging_integration.md
tutorials/hyperparameter-evolution.md: yolov5/tutorials/hyperparameter_evolution.md
tutorials/model-ensembling.md: yolov5/tutorials/model_ensembling.md
tutorials/multi-gpu-training.md: yolov5/tutorials/multi_gpu_training.md
tutorials/nvidia-jetson.md: yolov5/tutorials/running_on_jetson_nano.md
tutorials/pruning-sparsity.md: yolov5/tutorials/model_pruning_and_sparsity.md
tutorials/pytorch-hub.md: yolov5/tutorials/pytorch_hub_model_loading.md
tutorials/roboflow.md: yolov5/tutorials/roboflow_datasets_integration.md
tutorials/test-time-augmentation.md: yolov5/tutorials/test_time_augmentation.md
tutorials/torchscript-onnx-coreml-export.md: yolov5/tutorials/model_export.md
tutorials/train-custom-datasets.md: yolov5/tutorials/train_custom_data.md
tutorials/training-tips-best-results.md: yolov5/tutorials/tips_for_best_training_results.md
tutorials/transfer-learning-froze-layers.md: yolov5/tutorials/transfer_learning_with_frozen_layers.md
tutorials/weights-and-biasis-logging.md: yolov5/tutorials/comet_logging_integration.md
yolov5/pytorch_hub.md: yolov5/tutorials/pytorch_hub_model_loading.md
yolov5/hyp_evolution.md: yolov5/tutorials/hyperparameter_evolution.md
yolov5/pruning_sparsity.md: yolov5/tutorials/model_pruning_and_sparsity.md
yolov5/comet.md: yolov5/tutorials/comet_logging_integration.md
yolov5/tta.md: yolov5/tutorials/test_time_augmentation.md
yolov5/multi_gpu_training.md: yolov5/tutorials/multi_gpu_training.md
yolov5/ensemble.md: yolov5/tutorials/model_ensembling.md
yolov5/jetson_nano.md: yolov5/tutorials/running_on_jetson_nano.md
yolov5/transfer_learn_frozen.md: yolov5/tutorials/transfer_learning_with_frozen_layers.md
yolov5/neural_magic.md: yolov5/tutorials/neural_magic_pruning_quantization.md
yolov5/train_custom_data.md: yolov5/tutorials/train_custom_data.md
yolov5/architecture.md: yolov5/tutorials/architecture_description.md
yolov5/export.md: yolov5/tutorials/model_export.md
yolov5/tips_for_best_training_results.md: yolov5/tutorials/tips_for_best_training_results.md
yolov5/tutorials/yolov5_neural_magic_tutorial.md: yolov5/tutorials/neural_magic_pruning_quantization.md
yolov5/tutorials/model_ensembling_tutorial.md: yolov5/tutorials/model_ensembling.md
yolov5/tutorials/pytorch_hub_tutorial.md: yolov5/tutorials/pytorch_hub_model_loading.md

@ -236,3 +236,13 @@ def test_result():
    res[0].plot()
    res[0] = res[0].cpu().numpy()
    print(res[0].path)


def test_track():
    """Verify tracking runs end-to-end for detect, segment and pose models."""
    im = cv2.imread(str(SOURCE))
    model = YOLO(MODEL)
    seg_model = YOLO('yolov8n-seg.pt')
    pose_model = YOLO('yolov8n-pose.pt')
    model.track(source=im)
    seg_model.track(source=im)
    pose_model.track(source=im)

@ -1,6 +1,6 @@
# Ultralytics YOLO 🚀, AGPL-3.0 license
__version__ = '8.0.90'
__version__ = '8.0.91'
from ultralytics.hub import start
from ultralytics.vit.sam import SAM

@ -39,8 +39,7 @@ def on_predict_start(predictor, persist=False):
def on_predict_postprocess_end(predictor):
"""Postprocess detected boxes and update with object tracking."""
bs = predictor.dataset.bs
im0s = predictor.batch[2]
im0s = im0s if isinstance(im0s, list) else [im0s]
im0s = predictor.batch[1]
for i in range(bs):
det = predictor.results[i].boxes.cpu().numpy()
if len(det) == 0:
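For context, the `persist` flag accepted by `on_predict_start` above is what lets repeated single-frame calls keep tracker state between frames; a hedged usage sketch (weights and video path are placeholders):
```python
import cv2
from ultralytics import YOLO

model = YOLO('yolov8n.pt')  # placeholder weights
cap = cv2.VideoCapture('video.mp4')  # placeholder source
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    # persist=True tells the tracker this frame continues the previous ones
    results = model.track(frame, persist=True)
cap.release()
```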
