`ultralytics 8.0.136` refactor and simplify package (#3748)

Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
Co-authored-by: Glenn Jocher <glenn.jocher@ultralytics.com>
single_channel
Laughing 1 year ago committed by GitHub
parent 8ebe94d1e9
commit 620f3eb218

@ -111,22 +111,22 @@ jobs:
- name: Benchmark DetectionModel
shell: python
run: |
- from ultralytics.yolo.utils.benchmarks import benchmark
+ from ultralytics.utils.benchmarks import benchmark
benchmark(model='${{ matrix.model }}.pt', imgsz=160, half=False, hard_fail=0.26)
- name: Benchmark SegmentationModel
shell: python
run: |
- from ultralytics.yolo.utils.benchmarks import benchmark
+ from ultralytics.utils.benchmarks import benchmark
benchmark(model='${{ matrix.model }}-seg.pt', imgsz=160, half=False, hard_fail=0.30)
- name: Benchmark ClassificationModel
shell: python
run: |
- from ultralytics.yolo.utils.benchmarks import benchmark
+ from ultralytics.utils.benchmarks import benchmark
benchmark(model='${{ matrix.model }}-cls.pt', imgsz=160, half=False, hard_fail=0.36)
- name: Benchmark PoseModel
shell: python
run: |
- from ultralytics.yolo.utils.benchmarks import benchmark
+ from ultralytics.utils.benchmarks import benchmark
benchmark(model='${{ matrix.model }}-pose.pt', imgsz=160, half=False, hard_fail=0.17)
- name: Benchmark Summary
run: |
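For reference, the relocated benchmark utility can also be run locally outside CI. A minimal sketch using the new import path shown above; the model name, image size and threshold are taken from the hunk and are illustrative:

```python
# Minimal local sketch of the relocated benchmark entry point
# (ultralytics.utils.benchmarks.benchmark, as introduced by this refactor).
from ultralytics.utils.benchmarks import benchmark

# Runs export + inference benchmarks; hard_fail raises if accuracy drops below the threshold
benchmark(model='yolov8n.pt', imgsz=160, half=False, hard_fail=0.26)
```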

@ -40,7 +40,7 @@ jobs:
import os
import pkg_resources as pkg
import ultralytics
- from ultralytics.yolo.utils.checks import check_latest_pypi_version
+ from ultralytics.utils.checks import check_latest_pypi_version
v_local = pkg.parse_version(ultralytics.__version__).release
v_pypi = pkg.parse_version(check_latest_pypi_version()).release

@ -100,7 +100,7 @@ results = model("https://ultralytics.com/images/bus.jpg") # predict on an image
path = model.export(format="onnx") # export the model to ONNX format
```
- [Models](https://github.com/ultralytics/ultralytics/tree/main/ultralytics/models) download automatically from the latest Ultralytics [release](https://github.com/ultralytics/assets/releases). See YOLOv8 [Python Docs](https://docs.ultralytics.com/usage/python) for more examples.
+ [Models](https://github.com/ultralytics/ultralytics/tree/main/ultralytics/cfg/models) download automatically from the latest Ultralytics [release](https://github.com/ultralytics/assets/releases). See YOLOv8 [Python Docs](https://docs.ultralytics.com/usage/python) for more examples.
</details>
@ -110,7 +110,7 @@ YOLOv8 [Detect](https://docs.ultralytics.com/tasks/detect), [Segment](https://do
<img width="1024" src="https://raw.githubusercontent.com/ultralytics/assets/main/im/banner-tasks.png">
- All [Models](https://github.com/ultralytics/ultralytics/tree/main/ultralytics/models) download automatically from the latest Ultralytics [release](https://github.com/ultralytics/assets/releases) on first use.
+ All [Models](https://github.com/ultralytics/ultralytics/tree/main/ultralytics/cfg/models) download automatically from the latest Ultralytics [release](https://github.com/ultralytics/assets/releases) on first use.
<details open><summary>Detection</summary>

@ -100,7 +100,7 @@ results = model("https://ultralytics.com/images/bus.jpg") # predict on an image
success = model.export(format="onnx") # export the model to ONNX format
```
- [Models](https://github.com/ultralytics/ultralytics/tree/main/ultralytics/models) download automatically from the latest Ultralytics [release](https://github.com/ultralytics/assets/releases). See the YOLOv8 [Python Docs](https://docs.ultralytics.com/usage/python) for more examples.
+ [Models](https://github.com/ultralytics/ultralytics/tree/main/ultralytics/cfg/models) download automatically from the latest Ultralytics [release](https://github.com/ultralytics/assets/releases). See the YOLOv8 [Python Docs](https://docs.ultralytics.com/usage/python) for more examples.
</details>
@ -110,7 +110,7 @@ success = model.export(format="onnx") # export the model to ONNX format
<img width="1024" src="https://raw.githubusercontent.com/ultralytics/assets/main/im/banner-tasks.png">
- All [Models](https://github.com/ultralytics/ultralytics/tree/main/ultralytics/models) download automatically from the latest Ultralytics [release](https://github.com/ultralytics/assets/releases) on first use.
+ All [Models](https://github.com/ultralytics/ultralytics/tree/main/ultralytics/cfg/models) download automatically from the latest Ultralytics [release](https://github.com/ultralytics/assets/releases) on first use.
<details open><summary>Detection</summary>

@ -10,7 +10,7 @@ import os
import re
from collections import defaultdict
from pathlib import Path
- from ultralytics.yolo.utils import ROOT
+ from ultralytics.utils import ROOT
NEW_YAML_DIR = ROOT.parent
CODE_DIR = ROOT
@ -39,7 +39,7 @@ def create_markdown(py_filepath, module_path, classes, functions):
with open(md_filepath, 'r') as file:
existing_content = file.read()
header_parts = existing_content.split('---', 2)
- if len(header_parts) >= 3:
+ if 'description:' in header_parts or 'comments:' in header_parts and len(header_parts) >= 3:
header_content = f"{header_parts[0]}---{header_parts[1]}---\n\n"
module_path = module_path.replace('.__init__', '')

@ -29,12 +29,12 @@ The Argoverse dataset is widely used for training and evaluating deep learning m
## Dataset YAML
- A YAML (Yet Another Markup Language) file is used to define the dataset configuration. It contains information about the dataset's paths, classes, and other relevant information. For the case of the Argoverse dataset, the `Argoverse.yaml` file is maintained at [https://github.com/ultralytics/ultralytics/blob/main/ultralytics/datasets/Argoverse.yaml](https://github.com/ultralytics/ultralytics/blob/main/ultralytics/datasets/Argoverse.yaml).
+ A YAML (Yet Another Markup Language) file is used to define the dataset configuration. It contains information about the dataset's paths, classes, and other relevant information. For the case of the Argoverse dataset, the `Argoverse.yaml` file is maintained at [https://github.com/ultralytics/ultralytics/blob/main/ultralytics/cfg/datasets/Argoverse.yaml](https://github.com/ultralytics/ultralytics/blob/main/ultralytics/cfg/datasets/Argoverse.yaml).
- !!! example "ultralytics/datasets/Argoverse.yaml"
+ !!! example "ultralytics/cfg/datasets/Argoverse.yaml"
```yaml
- --8<-- "ultralytics/datasets/Argoverse.yaml"
+ --8<-- "ultralytics/cfg/datasets/Argoverse.yaml"
```
## Usage

@ -29,12 +29,12 @@ The COCO dataset is widely used for training and evaluating deep learning models
## Dataset YAML
- A YAML (Yet Another Markup Language) file is used to define the dataset configuration. It contains information about the dataset's paths, classes, and other relevant information. In the case of the COCO dataset, the `coco.yaml` file is maintained at [https://github.com/ultralytics/ultralytics/blob/main/ultralytics/datasets/coco.yaml](https://github.com/ultralytics/ultralytics/blob/main/ultralytics/datasets/coco.yaml).
+ A YAML (Yet Another Markup Language) file is used to define the dataset configuration. It contains information about the dataset's paths, classes, and other relevant information. In the case of the COCO dataset, the `coco.yaml` file is maintained at [https://github.com/ultralytics/ultralytics/blob/main/ultralytics/cfg/datasets/coco.yaml](https://github.com/ultralytics/ultralytics/blob/main/ultralytics/cfg/datasets/coco.yaml).
- !!! example "ultralytics/datasets/coco.yaml"
+ !!! example "ultralytics/cfg/datasets/coco.yaml"
```yaml
- --8<-- "ultralytics/datasets/coco.yaml"
+ --8<-- "ultralytics/cfg/datasets/coco.yaml"
```
## Usage

@ -19,12 +19,12 @@ and [YOLOv8](https://github.com/ultralytics/ultralytics).
## Dataset YAML
- A YAML (Yet Another Markup Language) file is used to define the dataset configuration. It contains information about the dataset's paths, classes, and other relevant information. In the case of the COCO8 dataset, the `coco8.yaml` file is maintained at [https://github.com/ultralytics/ultralytics/blob/main/ultralytics/datasets/coco8.yaml](https://github.com/ultralytics/ultralytics/blob/main/ultralytics/datasets/coco8.yaml).
+ A YAML (Yet Another Markup Language) file is used to define the dataset configuration. It contains information about the dataset's paths, classes, and other relevant information. In the case of the COCO8 dataset, the `coco8.yaml` file is maintained at [https://github.com/ultralytics/ultralytics/blob/main/ultralytics/cfg/datasets/coco8.yaml](https://github.com/ultralytics/ultralytics/blob/main/ultralytics/cfg/datasets/coco8.yaml).
- !!! example "ultralytics/datasets/coco8.yaml"
+ !!! example "ultralytics/cfg/datasets/coco8.yaml"
```yaml
- --8<-- "ultralytics/datasets/coco8.yaml"
+ --8<-- "ultralytics/cfg/datasets/coco8.yaml"
```
## Usage
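For orientation, a minimal, hedged sketch of how such a dataset YAML is typically consumed for training; the model, epoch count and image size below are illustrative and not part of this diff:

```python
# Hedged usage sketch: train on the coco8 dataset defined by its dataset YAML
# (hyperparameters are illustrative).
from ultralytics import YOLO

model = YOLO('yolov8n.pt')
model.train(data='coco8.yaml', epochs=3, imgsz=640)
```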

@ -28,12 +28,12 @@ The Global Wheat Head Dataset is widely used for training and evaluating deep le
## Dataset YAML
- A YAML (Yet Another Markup Language) file is used to define the dataset configuration. It contains information about the dataset's paths, classes, and other relevant information. For the case of the Global Wheat Head Dataset, the `GlobalWheat2020.yaml` file is maintained at [https://github.com/ultralytics/ultralytics/blob/main/ultralytics/datasets/GlobalWheat2020.yaml](https://github.com/ultralytics/ultralytics/blob/main/ultralytics/datasets/GlobalWheat2020.yaml).
+ A YAML (Yet Another Markup Language) file is used to define the dataset configuration. It contains information about the dataset's paths, classes, and other relevant information. For the case of the Global Wheat Head Dataset, the `GlobalWheat2020.yaml` file is maintained at [https://github.com/ultralytics/ultralytics/blob/main/ultralytics/cfg/datasets/GlobalWheat2020.yaml](https://github.com/ultralytics/ultralytics/blob/main/ultralytics/cfg/datasets/GlobalWheat2020.yaml).
- !!! example "ultralytics/datasets/GlobalWheat2020.yaml"
+ !!! example "ultralytics/cfg/datasets/GlobalWheat2020.yaml"
```yaml
- --8<-- "ultralytics/datasets/GlobalWheat2020.yaml"
+ --8<-- "ultralytics/cfg/datasets/GlobalWheat2020.yaml"
```
## Usage

@ -93,7 +93,7 @@ If you have your own dataset and would like to use it for training detection mod
You can easily convert labels from the popular COCO dataset format to the YOLO format using the following code snippet:
```python
- from ultralytics.yolo.data.converter import convert_coco
+ from ultralytics.data.converter import convert_coco
convert_coco(labels_dir='../coco/annotations/')
```

@ -28,12 +28,12 @@ The Objects365 dataset is widely used for training and evaluating deep learning
## Dataset YAML
- A YAML (Yet Another Markup Language) file is used to define the dataset configuration. It contains information about the dataset's paths, classes, and other relevant information. For the case of the Objects365 Dataset, the `Objects365.yaml` file is maintained at [https://github.com/ultralytics/ultralytics/blob/main/ultralytics/datasets/Objects365.yaml](https://github.com/ultralytics/ultralytics/blob/main/ultralytics/datasets/Objects365.yaml).
+ A YAML (Yet Another Markup Language) file is used to define the dataset configuration. It contains information about the dataset's paths, classes, and other relevant information. For the case of the Objects365 Dataset, the `Objects365.yaml` file is maintained at [https://github.com/ultralytics/ultralytics/blob/main/ultralytics/cfg/datasets/Objects365.yaml](https://github.com/ultralytics/ultralytics/blob/main/ultralytics/cfg/datasets/Objects365.yaml).
- !!! example "ultralytics/datasets/Objects365.yaml"
+ !!! example "ultralytics/cfg/datasets/Objects365.yaml"
```yaml
- --8<-- "ultralytics/datasets/Objects365.yaml"
+ --8<-- "ultralytics/cfg/datasets/Objects365.yaml"
```
## Usage

@ -30,12 +30,12 @@ The SKU-110k dataset is widely used for training and evaluating deep learning mo
## Dataset YAML
- A YAML (Yet Another Markup Language) file is used to define the dataset configuration. It contains information about the dataset's paths, classes, and other relevant information. For the case of the SKU-110K dataset, the `SKU-110K.yaml` file is maintained at [https://github.com/ultralytics/ultralytics/blob/main/ultralytics/datasets/SKU-110K.yaml](https://github.com/ultralytics/ultralytics/blob/main/ultralytics/datasets/SKU-110K.yaml).
+ A YAML (Yet Another Markup Language) file is used to define the dataset configuration. It contains information about the dataset's paths, classes, and other relevant information. For the case of the SKU-110K dataset, the `SKU-110K.yaml` file is maintained at [https://github.com/ultralytics/ultralytics/blob/main/ultralytics/cfg/datasets/SKU-110K.yaml](https://github.com/ultralytics/ultralytics/blob/main/ultralytics/cfg/datasets/SKU-110K.yaml).
- !!! example "ultralytics/datasets/SKU-110K.yaml"
+ !!! example "ultralytics/cfg/datasets/SKU-110K.yaml"
```yaml
- --8<-- "ultralytics/datasets/SKU-110K.yaml"
+ --8<-- "ultralytics/cfg/datasets/SKU-110K.yaml"
```
## Usage

@ -26,12 +26,12 @@ The VisDrone dataset is widely used for training and evaluating deep learning mo
## Dataset YAML
- A YAML (Yet Another Markup Language) file is used to define the dataset configuration. It contains information about the dataset's paths, classes, and other relevant information. In the case of the Visdrone dataset, the `VisDrone.yaml` file is maintained at [https://github.com/ultralytics/ultralytics/blob/main/ultralytics/datasets/VisDrone.yaml](https://github.com/ultralytics/ultralytics/blob/main/ultralytics/datasets/VisDrone.yaml).
+ A YAML (Yet Another Markup Language) file is used to define the dataset configuration. It contains information about the dataset's paths, classes, and other relevant information. In the case of the Visdrone dataset, the `VisDrone.yaml` file is maintained at [https://github.com/ultralytics/ultralytics/blob/main/ultralytics/cfg/datasets/VisDrone.yaml](https://github.com/ultralytics/ultralytics/blob/main/ultralytics/cfg/datasets/VisDrone.yaml).
- !!! example "ultralytics/datasets/VisDrone.yaml"
+ !!! example "ultralytics/cfg/datasets/VisDrone.yaml"
```yaml
- --8<-- "ultralytics/datasets/VisDrone.yaml"
+ --8<-- "ultralytics/cfg/datasets/VisDrone.yaml"
```
## Usage

@ -29,12 +29,12 @@ The VOC dataset is widely used for training and evaluating deep learning models
## Dataset YAML
- A YAML (Yet Another Markup Language) file is used to define the dataset configuration. It contains information about the dataset's paths, classes, and other relevant information. In the case of the VOC dataset, the `VOC.yaml` file is maintained at [https://github.com/ultralytics/ultralytics/blob/main/ultralytics/datasets/VOC.yaml](https://github.com/ultralytics/ultralytics/blob/main/ultralytics/datasets/VOC.yaml).
+ A YAML (Yet Another Markup Language) file is used to define the dataset configuration. It contains information about the dataset's paths, classes, and other relevant information. In the case of the VOC dataset, the `VOC.yaml` file is maintained at [https://github.com/ultralytics/ultralytics/blob/main/ultralytics/cfg/datasets/VOC.yaml](https://github.com/ultralytics/ultralytics/blob/main/ultralytics/cfg/datasets/VOC.yaml).
- !!! example "ultralytics/datasets/VOC.yaml"
+ !!! example "ultralytics/cfg/datasets/VOC.yaml"
```yaml
- --8<-- "ultralytics/datasets/VOC.yaml"
+ --8<-- "ultralytics/cfg/datasets/VOC.yaml"
```
## Usage

@ -32,12 +32,12 @@ The xView dataset is widely used for training and evaluating deep learning model
## Dataset YAML
- A YAML (Yet Another Markup Language) file is used to define the dataset configuration. It contains information about the dataset's paths, classes, and other relevant information. In the case of the xView dataset, the `xView.yaml` file is maintained at [https://github.com/ultralytics/ultralytics/blob/main/ultralytics/datasets/xView.yaml](https://github.com/ultralytics/ultralytics/blob/main/ultralytics/datasets/xView.yaml).
+ A YAML (Yet Another Markup Language) file is used to define the dataset configuration. It contains information about the dataset's paths, classes, and other relevant information. In the case of the xView dataset, the `xView.yaml` file is maintained at [https://github.com/ultralytics/ultralytics/blob/main/ultralytics/cfg/datasets/xView.yaml](https://github.com/ultralytics/ultralytics/blob/main/ultralytics/cfg/datasets/xView.yaml).
- !!! example "ultralytics/datasets/xView.yaml"
+ !!! example "ultralytics/cfg/datasets/xView.yaml"
```yaml
- --8<-- "ultralytics/datasets/xView.yaml"
+ --8<-- "ultralytics/cfg/datasets/xView.yaml"
```
## Usage

@ -30,12 +30,12 @@ The COCO-Pose dataset is specifically used for training and evaluating deep lear
## Dataset YAML
- A YAML (Yet Another Markup Language) file is used to define the dataset configuration. It contains information about the dataset's paths, classes, and other relevant information. In the case of the COCO-Pose dataset, the `coco-pose.yaml` file is maintained at [https://github.com/ultralytics/ultralytics/blob/main/ultralytics/datasets/coco-pose.yaml](https://github.com/ultralytics/ultralytics/blob/main/ultralytics/datasets/coco-pose.yaml).
+ A YAML (Yet Another Markup Language) file is used to define the dataset configuration. It contains information about the dataset's paths, classes, and other relevant information. In the case of the COCO-Pose dataset, the `coco-pose.yaml` file is maintained at [https://github.com/ultralytics/ultralytics/blob/main/ultralytics/cfg/datasets/coco-pose.yaml](https://github.com/ultralytics/ultralytics/blob/main/ultralytics/cfg/datasets/coco-pose.yaml).
- !!! example "ultralytics/datasets/coco-pose.yaml"
+ !!! example "ultralytics/cfg/datasets/coco-pose.yaml"
```yaml
- --8<-- "ultralytics/datasets/coco-pose.yaml"
+ --8<-- "ultralytics/cfg/datasets/coco-pose.yaml"
```
## Usage

@ -19,12 +19,12 @@ and [YOLOv8](https://github.com/ultralytics/ultralytics).
## Dataset YAML
- A YAML (Yet Another Markup Language) file is used to define the dataset configuration. It contains information about the dataset's paths, classes, and other relevant information. In the case of the COCO8-Pose dataset, the `coco8-pose.yaml` file is maintained at [https://github.com/ultralytics/ultralytics/blob/main/ultralytics/datasets/coco8-pose.yaml](https://github.com/ultralytics/ultralytics/blob/main/ultralytics/datasets/coco8-pose.yaml).
+ A YAML (Yet Another Markup Language) file is used to define the dataset configuration. It contains information about the dataset's paths, classes, and other relevant information. In the case of the COCO8-Pose dataset, the `coco8-pose.yaml` file is maintained at [https://github.com/ultralytics/ultralytics/blob/main/ultralytics/cfg/datasets/coco8-pose.yaml](https://github.com/ultralytics/ultralytics/blob/main/ultralytics/cfg/datasets/coco8-pose.yaml).
- !!! example "ultralytics/datasets/coco8-pose.yaml"
+ !!! example "ultralytics/cfg/datasets/coco8-pose.yaml"
```yaml
- --8<-- "ultralytics/datasets/coco8-pose.yaml"
+ --8<-- "ultralytics/cfg/datasets/coco8-pose.yaml"
```
## Usage

@ -120,7 +120,7 @@ If you have your own dataset and would like to use it for training pose estimati
Ultralytics provides a convenient conversion tool to convert labels from the popular COCO dataset format to YOLO format:
```python
- from ultralytics.yolo.data.converter import convert_coco
+ from ultralytics.data.converter import convert_coco
convert_coco(labels_dir='../coco/annotations/', use_keypoints=True)
```

@ -29,12 +29,12 @@ COCO-Seg is widely used for training and evaluating deep learning models in inst
## Dataset YAML
- A YAML (Yet Another Markup Language) file is used to define the dataset configuration. It contains information about the dataset's paths, classes, and other relevant information. In the case of the COCO-Seg dataset, the `coco.yaml` file is maintained at [https://github.com/ultralytics/ultralytics/blob/main/ultralytics/datasets/coco.yaml](https://github.com/ultralytics/ultralytics/blob/main/ultralytics/datasets/coco.yaml).
+ A YAML (Yet Another Markup Language) file is used to define the dataset configuration. It contains information about the dataset's paths, classes, and other relevant information. In the case of the COCO-Seg dataset, the `coco.yaml` file is maintained at [https://github.com/ultralytics/ultralytics/blob/main/ultralytics/cfg/datasets/coco.yaml](https://github.com/ultralytics/ultralytics/blob/main/ultralytics/cfg/datasets/coco.yaml).
- !!! example "ultralytics/datasets/coco.yaml"
+ !!! example "ultralytics/cfg/datasets/coco.yaml"
```yaml
- --8<-- "ultralytics/datasets/coco.yaml"
+ --8<-- "ultralytics/cfg/datasets/coco.yaml"
```
## Usage

@ -19,12 +19,12 @@ and [YOLOv8](https://github.com/ultralytics/ultralytics).
## Dataset YAML
- A YAML (Yet Another Markup Language) file is used to define the dataset configuration. It contains information about the dataset's paths, classes, and other relevant information. In the case of the COCO8-Seg dataset, the `coco8-seg.yaml` file is maintained at [https://github.com/ultralytics/ultralytics/blob/main/ultralytics/datasets/coco8-seg.yaml](https://github.com/ultralytics/ultralytics/blob/main/ultralytics/datasets/coco8-seg.yaml).
+ A YAML (Yet Another Markup Language) file is used to define the dataset configuration. It contains information about the dataset's paths, classes, and other relevant information. In the case of the COCO8-Seg dataset, the `coco8-seg.yaml` file is maintained at [https://github.com/ultralytics/ultralytics/blob/main/ultralytics/cfg/datasets/coco8-seg.yaml](https://github.com/ultralytics/ultralytics/blob/main/ultralytics/cfg/datasets/coco8-seg.yaml).
- !!! example "ultralytics/datasets/coco8-seg.yaml"
+ !!! example "ultralytics/cfg/datasets/coco8-seg.yaml"
```yaml
- --8<-- "ultralytics/datasets/coco8-seg.yaml"
+ --8<-- "ultralytics/cfg/datasets/coco8-seg.yaml"
```
## Usage

@ -104,7 +104,7 @@ If you have your own dataset and would like to use it for training segmentation
You can easily convert labels from the popular COCO dataset format to the YOLO format using the following code snippet:
```python
- from ultralytics.yolo.data.converter import convert_coco
+ from ultralytics.data.converter import convert_coco
convert_coco(labels_dir='../coco/annotations/', use_segments=True)
```
@ -122,7 +122,7 @@ Auto-annotation is an essential feature that allows you to generate a segmentati
To auto-annotate your dataset using the Ultralytics framework, you can use the `auto_annotate` function as shown below:
```python
- from ultralytics.yolo.data.annotator import auto_annotate
+ from ultralytics.data.annotator import auto_annotate
auto_annotate(data="path/to/images", det_model="yolov8x.pt", sam_model='sam_b.pt')
```

@ -34,7 +34,7 @@ The dataset YAML is the same standard YOLOv5 and YOLOv8 YAML format.
!!! example "coco8.yaml"
```yaml
- --8<-- "ultralytics/datasets/coco8.yaml"
+ --8<-- "ultralytics/cfg/datasets/coco8.yaml"
```
After zipping your dataset, you should validate it before uploading it to Ultralytics HUB. Ultralytics HUB conducts the dataset validation check post-upload, so by ensuring your dataset is correctly formatted and error-free ahead of time, you can forestall any setbacks due to dataset rejection.

@ -42,7 +42,7 @@ To perform object detection on an image, use the `predict` method as shown below
```python
from ultralytics import FastSAM
- from ultralytics.yolo.fastsam import FastSAMPrompt
+ from ultralytics.models.fastsam import FastSAMPrompt
# Define image path and inference device
IMAGE_PATH = 'ultralytics/assets/bus.jpg'

@ -6,9 +6,9 @@ keywords: MobileSAM, Faster Segment Anything, Segment Anything, Segment Anything
![MobileSAM Logo](https://github.com/ChaoningZhang/MobileSAM/blob/master/assets/logo2.png?raw=true)
- # Faster Segment Anything (MobileSAM)
+ # Mobile Segment Anything (MobileSAM)
- The MobileSAM paper is now available on [ResearchGate](https://www.researchgate.net/publication/371851844_Faster_Segment_Anything_Towards_Lightweight_SAM_for_Mobile_Applications) and [arXiv](https://arxiv.org/pdf/2306.14289.pdf). The most recent version will initially appear on ResearchGate due to the delayed content update on arXiv.
+ The MobileSAM paper is now available on [arXiv](https://arxiv.org/pdf/2306.14289.pdf).
A demonstration of MobileSAM running on a CPU can be accessed at this [demo link](https://huggingface.co/spaces/dhkim2810/MobileSAM). The performance on a Mac i5 CPU takes approximately 3 seconds. On the Hugging Face demo, the interface and lower-performance CPUs contribute to a slower response, but it continues to function effectively.

@ -88,7 +88,7 @@ The Segment Anything Model can be employed for a multitude of downstream tasks t
=== "Prompt inference" === "Prompt inference"
```python ```python
from ultralytics.vit.sam import Predictor as SAMPredictor from ultralytics.models.sam import Predictor as SAMPredictor
# Create SAMPredictor # Create SAMPredictor
overrides = dict(conf=0.25, task='segment', mode='predict', imgsz=1024, model="mobile_sam.pt") overrides = dict(conf=0.25, task='segment', mode='predict', imgsz=1024, model="mobile_sam.pt")
@ -108,7 +108,7 @@ The Segment Anything Model can be employed for a multitude of downstream tasks t
=== "Segment everything" === "Segment everything"
```python ```python
from ultralytics.vit.sam import Predictor as SAMPredictor from ultralytics.models.sam import Predictor as SAMPredictor
# Create SAMPredictor # Create SAMPredictor
overrides = dict(conf=0.25, task='segment', mode='predict', imgsz=1024, model="mobile_sam.pt") overrides = dict(conf=0.25, task='segment', mode='predict', imgsz=1024, model="mobile_sam.pt")
@ -119,7 +119,7 @@ The Segment Anything Model can be employed for a multitude of downstream tasks t
```
- - More additional args for `Segment everything` see [`Predictor/generate` Reference](../reference/vit/sam/predict.md).
+ - More additional args for `Segment everything` see [`Predictor/generate` Reference](../reference/models/sam/predict.md).
## Available Models and Supported Tasks
@ -184,7 +184,7 @@ Auto-annotation is a key feature of SAM, allowing users to generate a [segmentat
To auto-annotate your dataset with the Ultralytics framework, use the `auto_annotate` function as shown below:
```python
- from ultralytics.yolo.data.annotator import auto_annotate
+ from ultralytics.data.annotator import auto_annotate
auto_annotate(data="path/to/images", det_model="yolov8x.pt", sam_model='sam_b.pt')
```

@ -27,7 +27,7 @@ full list of export arguments.
=== "Python" === "Python"
```python ```python
from ultralytics.yolo.utils.benchmarks import benchmark from ultralytics.utils.benchmarks import benchmark
# Benchmark on GPU # Benchmark on GPU
benchmark(model='yolov8n.pt', imgsz=640, half=False, device=0) benchmark(model='yolov8n.pt', imgsz=640, half=False, device=0)

@ -316,7 +316,7 @@ All supported arguments:
## Image and Video Formats
- YOLOv8 supports various image and video formats, as specified in [yolo/data/utils.py](https://github.com/ultralytics/ultralytics/blob/main/ultralytics/yolo/data/utils.py). See the tables below for the valid suffixes and example predict commands.
+ YOLOv8 supports various image and video formats, as specified in [data/utils.py](https://github.com/ultralytics/ultralytics/blob/main/ultralytics/data/utils.py). See the tables below for the valid suffixes and example predict commands.
### Image Suffixes
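As a rough sketch of how these suffix lists can be consulted programmatically, assuming the `IMG_FORMATS` and `VID_FORMATS` constants exported by the relocated `ultralytics/data/utils.py` (an assumption, not shown in this diff):

```python
# Hedged sketch: check whether a file is a supported image or video source,
# assuming IMG_FORMATS / VID_FORMATS exist in ultralytics.data.utils.
from pathlib import Path

from ultralytics.data.utils import IMG_FORMATS, VID_FORMATS

path = Path('ultralytics/assets/bus.jpg')
suffix = path.suffix.lower().lstrip('.')
if suffix in IMG_FORMATS:
    print(f'{path} is a supported image source')
elif suffix in VID_FORMATS:
    print(f'{path} is a supported video source')
else:
    print(f'{path} is not a supported source')
```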
@ -451,7 +451,7 @@ operations are cached, meaning they're only calculated once per object, and thos
keypoints.data # raw keypoints tensor
```
- Class reference documentation for `Results` module and its components can be found [here](../reference/yolo/engine/results.md)
+ Class reference documentation for `Results` module and its components can be found [here](../reference/engine/results.md)
## Plotting results
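A brief sketch of how these cached `Results` attributes are typically read after prediction; the model and image below are illustrative:

```python
# Hedged sketch: read boxes and keypoints from prediction Results
# (yolov8n-pose.pt and the bus image are illustrative inputs).
from ultralytics import YOLO

model = YOLO('yolov8n-pose.pt')
results = model('https://ultralytics.com/images/bus.jpg')

for r in results:
    print(r.boxes.xyxy)      # detection boxes in xyxy format
    print(r.keypoints.data)  # raw keypoints tensor for each detection
```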

@ -79,7 +79,7 @@ to [predict page](https://docs.ultralytics.com/modes/predict/).
### Tracker
We also support using a modified tracker config file; just copy a config file, i.e. `custom_tracker.yaml`,
- from [ultralytics/tracker/cfg](https://github.com/ultralytics/ultralytics/tree/main/ultralytics/tracker/cfg) and modify
+ from [ultralytics/cfg/trackers](https://github.com/ultralytics/ultralytics/tree/main/ultralytics/cfg/trackers) and modify
any configurations (except the `tracker_type`) you need to.
!!! example ""
@ -97,5 +97,5 @@ any configurations (except the `tracker_type`) you need to.
yolo track model=yolov8n.pt source="https://youtu.be/Zgi9g1ksQHc" tracker='custom_tracker.yaml'
```
- Please refer to [ultralytics/tracker/cfg](https://github.com/ultralytics/ultralytics/tree/main/ultralytics/tracker/cfg)
+ Please refer to [ultralytics/cfg/trackers](https://github.com/ultralytics/ultralytics/tree/main/ultralytics/cfg/trackers)
page
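The Python API offers the same option; a minimal sketch, assuming a `custom_tracker.yaml` copied and edited as described above:

```python
# Hedged sketch: track with a customized tracker configuration file
# ('custom_tracker.yaml' is a user-edited copy from ultralytics/cfg/trackers).
from ultralytics import YOLO

model = YOLO('yolov8n.pt')
model.track(source="https://youtu.be/Zgi9g1ksQHc", tracker='custom_tracker.yaml')
```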

@ -0,0 +1,44 @@
## cfg2dict
---
### ::: ultralytics.cfg.cfg2dict
<br><br>
## get_cfg
---
### ::: ultralytics.cfg.get_cfg
<br><br>
## _handle_deprecation
---
### ::: ultralytics.cfg._handle_deprecation
<br><br>
## check_cfg_mismatch
---
### ::: ultralytics.cfg.check_cfg_mismatch
<br><br>
## merge_equals_args
---
### ::: ultralytics.cfg.merge_equals_args
<br><br>
## handle_yolo_hub
---
### ::: ultralytics.cfg.handle_yolo_hub
<br><br>
## handle_yolo_settings
---
### ::: ultralytics.cfg.handle_yolo_settings
<br><br>
## entrypoint
---
### ::: ultralytics.cfg.entrypoint
<br><br>
## copy_default_cfg
---
### ::: ultralytics.cfg.copy_default_cfg
<br><br>

@ -0,0 +1,4 @@
## auto_annotate
---
### ::: ultralytics.data.annotator.auto_annotate
<br><br>

@ -0,0 +1,94 @@
## BaseTransform
---
### ::: ultralytics.data.augment.BaseTransform
<br><br>
## Compose
---
### ::: ultralytics.data.augment.Compose
<br><br>
## BaseMixTransform
---
### ::: ultralytics.data.augment.BaseMixTransform
<br><br>
## Mosaic
---
### ::: ultralytics.data.augment.Mosaic
<br><br>
## MixUp
---
### ::: ultralytics.data.augment.MixUp
<br><br>
## RandomPerspective
---
### ::: ultralytics.data.augment.RandomPerspective
<br><br>
## RandomHSV
---
### ::: ultralytics.data.augment.RandomHSV
<br><br>
## RandomFlip
---
### ::: ultralytics.data.augment.RandomFlip
<br><br>
## LetterBox
---
### ::: ultralytics.data.augment.LetterBox
<br><br>
## CopyPaste
---
### ::: ultralytics.data.augment.CopyPaste
<br><br>
## Albumentations
---
### ::: ultralytics.data.augment.Albumentations
<br><br>
## Format
---
### ::: ultralytics.data.augment.Format
<br><br>
## ClassifyLetterBox
---
### ::: ultralytics.data.augment.ClassifyLetterBox
<br><br>
## CenterCrop
---
### ::: ultralytics.data.augment.CenterCrop
<br><br>
## ToTensor
---
### ::: ultralytics.data.augment.ToTensor
<br><br>
## v8_transforms
---
### ::: ultralytics.data.augment.v8_transforms
<br><br>
## classify_transforms
---
### ::: ultralytics.data.augment.classify_transforms
<br><br>
## hsv2colorjitter
---
### ::: ultralytics.data.augment.hsv2colorjitter
<br><br>
## classify_albumentations
---
### ::: ultralytics.data.augment.classify_albumentations
<br><br>

@ -0,0 +1,4 @@
## BaseDataset
---
### ::: ultralytics.data.base.BaseDataset
<br><br>

@ -0,0 +1,34 @@
## InfiniteDataLoader
---
### ::: ultralytics.data.build.InfiniteDataLoader
<br><br>
## _RepeatSampler
---
### ::: ultralytics.data.build._RepeatSampler
<br><br>
## seed_worker
---
### ::: ultralytics.data.build.seed_worker
<br><br>
## build_yolo_dataset
---
### ::: ultralytics.data.build.build_yolo_dataset
<br><br>
## build_dataloader
---
### ::: ultralytics.data.build.build_dataloader
<br><br>
## check_source
---
### ::: ultralytics.data.build.check_source
<br><br>
## load_inference_source
---
### ::: ultralytics.data.build.load_inference_source
<br><br>

@ -0,0 +1,29 @@
## coco91_to_coco80_class
---
### ::: ultralytics.data.converter.coco91_to_coco80_class
<br><br>
## convert_coco
---
### ::: ultralytics.data.converter.convert_coco
<br><br>
## rle2polygon
---
### ::: ultralytics.data.converter.rle2polygon
<br><br>
## min_index
---
### ::: ultralytics.data.converter.min_index
<br><br>
## merge_multi_segment
---
### ::: ultralytics.data.converter.merge_multi_segment
<br><br>
## delete_dsstore
---
### ::: ultralytics.data.converter.delete_dsstore
<br><br>

@ -0,0 +1,14 @@
## YOLODataset
---
### ::: ultralytics.data.dataset.YOLODataset
<br><br>
## ClassificationDataset
---
### ::: ultralytics.data.dataset.ClassificationDataset
<br><br>
## SemanticDataset
---
### ::: ultralytics.data.dataset.SemanticDataset
<br><br>

@ -0,0 +1,39 @@
## SourceTypes
---
### ::: ultralytics.data.loaders.SourceTypes
<br><br>
## LoadStreams
---
### ::: ultralytics.data.loaders.LoadStreams
<br><br>
## LoadScreenshots
---
### ::: ultralytics.data.loaders.LoadScreenshots
<br><br>
## LoadImages
---
### ::: ultralytics.data.loaders.LoadImages
<br><br>
## LoadPilAndNumpy
---
### ::: ultralytics.data.loaders.LoadPilAndNumpy
<br><br>
## LoadTensor
---
### ::: ultralytics.data.loaders.LoadTensor
<br><br>
## autocast_list
---
### ::: ultralytics.data.loaders.autocast_list
<br><br>
## get_best_youtube_url
---
### ::: ultralytics.data.loaders.get_best_youtube_url
<br><br>

@ -0,0 +1,69 @@
## HUBDatasetStats
---
### ::: ultralytics.data.utils.HUBDatasetStats
<br><br>
## img2label_paths
---
### ::: ultralytics.data.utils.img2label_paths
<br><br>
## get_hash
---
### ::: ultralytics.data.utils.get_hash
<br><br>
## exif_size
---
### ::: ultralytics.data.utils.exif_size
<br><br>
## verify_image_label
---
### ::: ultralytics.data.utils.verify_image_label
<br><br>
## polygon2mask
---
### ::: ultralytics.data.utils.polygon2mask
<br><br>
## polygons2masks
---
### ::: ultralytics.data.utils.polygons2masks
<br><br>
## polygons2masks_overlap
---
### ::: ultralytics.data.utils.polygons2masks_overlap
<br><br>
## check_det_dataset
---
### ::: ultralytics.data.utils.check_det_dataset
<br><br>
## check_cls_dataset
---
### ::: ultralytics.data.utils.check_cls_dataset
<br><br>
## compress_one_image
---
### ::: ultralytics.data.utils.compress_one_image
<br><br>
## delete_dsstore
---
### ::: ultralytics.data.utils.delete_dsstore
<br><br>
## zip_directory
---
### ::: ultralytics.data.utils.zip_directory
<br><br>
## autosplit
---
### ::: ultralytics.data.utils.autosplit
<br><br>

@ -0,0 +1,29 @@
## Exporter
---
### ::: ultralytics.engine.exporter.Exporter
<br><br>
## iOSDetectModel
---
### ::: ultralytics.engine.exporter.iOSDetectModel
<br><br>
## export_formats
---
### ::: ultralytics.engine.exporter.export_formats
<br><br>
## gd_outputs
---
### ::: ultralytics.engine.exporter.gd_outputs
<br><br>
## try_export
---
### ::: ultralytics.engine.exporter.try_export
<br><br>
## export
---
### ::: ultralytics.engine.exporter.export
<br><br>

@ -0,0 +1,4 @@
## YOLO
---
### ::: ultralytics.engine.model.YOLO
<br><br>

@ -0,0 +1,4 @@
## BasePredictor
---
### ::: ultralytics.engine.predictor.BasePredictor
<br><br>

@ -0,0 +1,29 @@
## BaseTensor
---
### ::: ultralytics.engine.results.BaseTensor
<br><br>
## Results
---
### ::: ultralytics.engine.results.Results
<br><br>
## Boxes
---
### ::: ultralytics.engine.results.Boxes
<br><br>
## Masks
---
### ::: ultralytics.engine.results.Masks
<br><br>
## Keypoints
---
### ::: ultralytics.engine.results.Keypoints
<br><br>
## Probs
---
### ::: ultralytics.engine.results.Probs
<br><br>

@ -0,0 +1,4 @@
## BaseTrainer
---
### ::: ultralytics.engine.trainer.BaseTrainer
<br><br>

@ -0,0 +1,4 @@
## BaseValidator
---
### ::: ultralytics.engine.validator.BaseValidator
<br><br>

@ -1,8 +1,3 @@
---
description: Access Ultralytics HUB, manage API keys, train models, and export in various formats with ease using the HUB API.
keywords: Ultralytics, YOLO, Docs HUB, API, login, logout, reset model, export model, check dataset, HUBDatasetStats, YOLO training, YOLO model
---
## login
---
### ::: ultralytics.hub.login

@ -1,8 +1,3 @@
---
description: Learn how to use Ultralytics hub authentication in your projects with examples and guidelines from the Auth page on Ultralytics Docs.
keywords: Ultralytics, ultralytics hub, api keys, authentication, collab accounts, requests, hub management, monitoring
---
## Auth
---
### ::: ultralytics.hub.auth.Auth

@ -1,8 +1,3 @@
---
description: Accelerate your AI development with the Ultralytics HUB Training Session. High-performance training of object detection models.
keywords: YOLOv5, object detection, HUBTrainingSession, custom models, Ultralytics Docs
---
## HUBTrainingSession
---
### ::: ultralytics.hub.session.HUBTrainingSession

@ -1,8 +1,3 @@
---
description: Explore Ultralytics events, including 'request_with_credentials' and 'smart_request', to improve your project's performance and efficiency.
keywords: Ultralytics, Hub Utils, API Documentation, Python, requests_with_progress, Events, classes, usage, examples
---
## Events
---
### ::: ultralytics.hub.utils.Events

@ -0,0 +1,4 @@
## FastSAM
---
### ::: ultralytics.models.fastsam.model.FastSAM
<br><br>

@ -0,0 +1,4 @@
## FastSAMPredictor
---
### ::: ultralytics.models.fastsam.predict.FastSAMPredictor
<br><br>

@ -0,0 +1,4 @@
## FastSAMPrompt
---
### ::: ultralytics.models.fastsam.prompt.FastSAMPrompt
<br><br>

@ -0,0 +1,9 @@
## adjust_bboxes_to_image_border
---
### ::: ultralytics.models.fastsam.utils.adjust_bboxes_to_image_border
<br><br>
## bbox_iou
---
### ::: ultralytics.models.fastsam.utils.bbox_iou
<br><br>

@ -0,0 +1,4 @@
## FastSAMValidator
---
### ::: ultralytics.models.fastsam.val.FastSAMValidator
<br><br>

@ -0,0 +1,4 @@
## NAS
---
### ::: ultralytics.models.nas.model.NAS
<br><br>

@ -0,0 +1,4 @@
## NASPredictor
---
### ::: ultralytics.models.nas.predict.NASPredictor
<br><br>

@ -0,0 +1,4 @@
## NASValidator
---
### ::: ultralytics.models.nas.val.NASValidator
<br><br>

@ -0,0 +1,4 @@
## RTDETR
---
### ::: ultralytics.models.rtdetr.model.RTDETR
<br><br>

@ -0,0 +1,4 @@
## RTDETRPredictor
---
### ::: ultralytics.models.rtdetr.predict.RTDETRPredictor
<br><br>

@ -0,0 +1,9 @@
## RTDETRTrainer
---
### ::: ultralytics.models.rtdetr.train.RTDETRTrainer
<br><br>
## train
---
### ::: ultralytics.models.rtdetr.train.train
<br><br>

@ -0,0 +1,9 @@
## RTDETRDataset
---
### ::: ultralytics.models.rtdetr.val.RTDETRDataset
<br><br>
## RTDETRValidator
---
### ::: ultralytics.models.rtdetr.val.RTDETRValidator
<br><br>

@ -0,0 +1,84 @@
## MaskData
---
### ::: ultralytics.models.sam.amg.MaskData
<br><br>
## is_box_near_crop_edge
---
### ::: ultralytics.models.sam.amg.is_box_near_crop_edge
<br><br>
## box_xyxy_to_xywh
---
### ::: ultralytics.models.sam.amg.box_xyxy_to_xywh
<br><br>
## batch_iterator
---
### ::: ultralytics.models.sam.amg.batch_iterator
<br><br>
## mask_to_rle_pytorch
---
### ::: ultralytics.models.sam.amg.mask_to_rle_pytorch
<br><br>
## rle_to_mask
---
### ::: ultralytics.models.sam.amg.rle_to_mask
<br><br>
## area_from_rle
---
### ::: ultralytics.models.sam.amg.area_from_rle
<br><br>
## calculate_stability_score
---
### ::: ultralytics.models.sam.amg.calculate_stability_score
<br><br>
## build_point_grid
---
### ::: ultralytics.models.sam.amg.build_point_grid
<br><br>
## build_all_layer_point_grids
---
### ::: ultralytics.models.sam.amg.build_all_layer_point_grids
<br><br>
## generate_crop_boxes
---
### ::: ultralytics.models.sam.amg.generate_crop_boxes
<br><br>
## uncrop_boxes_xyxy
---
### ::: ultralytics.models.sam.amg.uncrop_boxes_xyxy
<br><br>
## uncrop_points
---
### ::: ultralytics.models.sam.amg.uncrop_points
<br><br>
## uncrop_masks
---
### ::: ultralytics.models.sam.amg.uncrop_masks
<br><br>
## remove_small_regions
---
### ::: ultralytics.models.sam.amg.remove_small_regions
<br><br>
## coco_encode_rle
---
### ::: ultralytics.models.sam.amg.coco_encode_rle
<br><br>
## batched_mask_to_box
---
### ::: ultralytics.models.sam.amg.batched_mask_to_box
<br><br>

@ -0,0 +1,29 @@
## build_sam_vit_h
---
### ::: ultralytics.models.sam.build.build_sam_vit_h
<br><br>
## build_sam_vit_l
---
### ::: ultralytics.models.sam.build.build_sam_vit_l
<br><br>
## build_sam_vit_b
---
### ::: ultralytics.models.sam.build.build_sam_vit_b
<br><br>
## build_mobile_sam
---
### ::: ultralytics.models.sam.build.build_mobile_sam
<br><br>
## _build_sam
---
### ::: ultralytics.models.sam.build._build_sam
<br><br>
## build_sam
---
### ::: ultralytics.models.sam.build.build_sam
<br><br>

@ -0,0 +1,4 @@
## SAM
---
### ::: ultralytics.models.sam.model.SAM
<br><br>

@ -0,0 +1,9 @@
## MaskDecoder
---
### ::: ultralytics.models.sam.modules.decoders.MaskDecoder
<br><br>
## MLP
---
### ::: ultralytics.models.sam.modules.decoders.MLP
<br><br>

@ -0,0 +1,49 @@
## ImageEncoderViT
---
### ::: ultralytics.models.sam.modules.encoders.ImageEncoderViT
<br><br>
## PromptEncoder
---
### ::: ultralytics.models.sam.modules.encoders.PromptEncoder
<br><br>
## PositionEmbeddingRandom
---
### ::: ultralytics.models.sam.modules.encoders.PositionEmbeddingRandom
<br><br>
## Block
---
### ::: ultralytics.models.sam.modules.encoders.Block
<br><br>
## Attention
---
### ::: ultralytics.models.sam.modules.encoders.Attention
<br><br>
## PatchEmbed
---
### ::: ultralytics.models.sam.modules.encoders.PatchEmbed
<br><br>
## window_partition
---
### ::: ultralytics.models.sam.modules.encoders.window_partition
<br><br>
## window_unpartition
---
### ::: ultralytics.models.sam.modules.encoders.window_unpartition
<br><br>
## get_rel_pos
---
### ::: ultralytics.models.sam.modules.encoders.get_rel_pos
<br><br>
## add_decomposed_rel_pos
---
### ::: ultralytics.models.sam.modules.encoders.add_decomposed_rel_pos
<br><br>

@ -0,0 +1,4 @@
## Sam
---
### ::: ultralytics.models.sam.modules.sam.Sam
<br><br>

@ -0,0 +1,54 @@
## Conv2d_BN
---
### ::: ultralytics.models.sam.modules.tiny_encoder.Conv2d_BN
<br><br>
## PatchEmbed
---
### ::: ultralytics.models.sam.modules.tiny_encoder.PatchEmbed
<br><br>
## MBConv
---
### ::: ultralytics.models.sam.modules.tiny_encoder.MBConv
<br><br>
## PatchMerging
---
### ::: ultralytics.models.sam.modules.tiny_encoder.PatchMerging
<br><br>
## ConvLayer
---
### ::: ultralytics.models.sam.modules.tiny_encoder.ConvLayer
<br><br>
## Mlp
---
### ::: ultralytics.models.sam.modules.tiny_encoder.Mlp
<br><br>
## Attention
---
### ::: ultralytics.models.sam.modules.tiny_encoder.Attention
<br><br>
## TinyViTBlock
---
### ::: ultralytics.models.sam.modules.tiny_encoder.TinyViTBlock
<br><br>
## BasicLayer
---
### ::: ultralytics.models.sam.modules.tiny_encoder.BasicLayer
<br><br>
## LayerNorm2d
---
### ::: ultralytics.models.sam.modules.tiny_encoder.LayerNorm2d
<br><br>
## TinyViT
---
### ::: ultralytics.models.sam.modules.tiny_encoder.TinyViT
<br><br>

@ -0,0 +1,14 @@
## TwoWayTransformer
---
### ::: ultralytics.models.sam.modules.transformer.TwoWayTransformer
<br><br>
## TwoWayAttentionBlock
---
### ::: ultralytics.models.sam.modules.transformer.TwoWayAttentionBlock
<br><br>
## Attention
---
### ::: ultralytics.models.sam.modules.transformer.Attention
<br><br>

@ -0,0 +1,4 @@
## Predictor
---
### ::: ultralytics.models.sam.predict.Predictor
<br><br>

@ -0,0 +1,9 @@
## DETRLoss
---
### ::: ultralytics.models.utils.loss.DETRLoss
<br><br>
## RTDETRDetectionLoss
---
### ::: ultralytics.models.utils.loss.RTDETRDetectionLoss
<br><br>

@ -0,0 +1,14 @@
## HungarianMatcher
---
### ::: ultralytics.models.utils.ops.HungarianMatcher
<br><br>
## get_cdn_group
---
### ::: ultralytics.models.utils.ops.get_cdn_group
<br><br>
## inverse_sigmoid
---
### ::: ultralytics.models.utils.ops.inverse_sigmoid
<br><br>

@ -0,0 +1,9 @@
## ClassificationPredictor
---
### ::: ultralytics.models.yolo.classify.predict.ClassificationPredictor
<br><br>
## predict
---
### ::: ultralytics.models.yolo.classify.predict.predict
<br><br>

@ -0,0 +1,9 @@
## ClassificationTrainer
---
### ::: ultralytics.models.yolo.classify.train.ClassificationTrainer
<br><br>
## train
---
### ::: ultralytics.models.yolo.classify.train.train
<br><br>

@ -0,0 +1,9 @@
## ClassificationValidator
---
### ::: ultralytics.models.yolo.classify.val.ClassificationValidator
<br><br>
## val
---
### ::: ultralytics.models.yolo.classify.val.val
<br><br>

@ -0,0 +1,9 @@
## DetectionPredictor
---
### ::: ultralytics.models.yolo.detect.predict.DetectionPredictor
<br><br>
## predict
---
### ::: ultralytics.models.yolo.detect.predict.predict
<br><br>

@ -0,0 +1,9 @@
## DetectionTrainer
---
### ::: ultralytics.models.yolo.detect.train.DetectionTrainer
<br><br>
## train
---
### ::: ultralytics.models.yolo.detect.train.train
<br><br>

@ -0,0 +1,9 @@
## DetectionValidator
---
### ::: ultralytics.models.yolo.detect.val.DetectionValidator
<br><br>
## val
---
### ::: ultralytics.models.yolo.detect.val.val
<br><br>

@ -0,0 +1,9 @@
## PosePredictor
---
### ::: ultralytics.models.yolo.pose.predict.PosePredictor
<br><br>
## predict
---
### ::: ultralytics.models.yolo.pose.predict.predict
<br><br>

@ -0,0 +1,9 @@
## PoseTrainer
---
### ::: ultralytics.models.yolo.pose.train.PoseTrainer
<br><br>
## train
---
### ::: ultralytics.models.yolo.pose.train.train
<br><br>

@ -0,0 +1,9 @@
## PoseValidator
---
### ::: ultralytics.models.yolo.pose.val.PoseValidator
<br><br>
## val
---
### ::: ultralytics.models.yolo.pose.val.val
<br><br>

@ -0,0 +1,9 @@
## SegmentationPredictor
---
### ::: ultralytics.models.yolo.segment.predict.SegmentationPredictor
<br><br>
## predict
---
### ::: ultralytics.models.yolo.segment.predict.predict
<br><br>

@ -0,0 +1,9 @@
## SegmentationTrainer
---
### ::: ultralytics.models.yolo.segment.train.SegmentationTrainer
<br><br>
## train
---
### ::: ultralytics.models.yolo.segment.train.train
<br><br>

@ -0,0 +1,9 @@
## SegmentationValidator
---
### ::: ultralytics.models.yolo.segment.val.SegmentationValidator
<br><br>
## val
---
### ::: ultralytics.models.yolo.segment.val.val
<br><br>

@ -1,8 +1,3 @@
---
description: Ensure class names match filenames for easy imports. Use AutoBackend to automatically rename and refactor model files.
keywords: AutoBackend, ultralytics, nn, autobackend, check class names, neural network
---
## AutoBackend
---
### ::: ultralytics.nn.autobackend.AutoBackend

@ -1,8 +1,3 @@
---
description: Explore ultralytics.nn.modules.block to build powerful YOLO object detection models. Master DFL, HGStem, SPP, CSP components and more.
keywords: Ultralytics, NN Modules, Blocks, DFL, HGStem, SPP, C1, C2f, C3x, C3TR, GhostBottleneck, BottleneckCSP, Computer Vision
---
## DFL
---
### ::: ultralytics.nn.modules.block.DFL

@ -1,8 +1,3 @@
---
description: Explore convolutional neural network modules & techniques such as LightConv, DWConv, ConvTranspose, GhostConv, CBAM & autopad with Ultralytics Docs.
keywords: Ultralytics, Convolutional Neural Network, Conv2, DWConv, ConvTranspose, GhostConv, ChannelAttention, CBAM, autopad
---
## Conv
---
### ::: ultralytics.nn.modules.conv.Conv

@ -1,8 +1,3 @@
---
description: 'Learn about Ultralytics YOLO modules: Segment, Classify, and RTDETRDecoder. Optimize object detection and classification in your project.'
keywords: Ultralytics, YOLO, object detection, pose estimation, RTDETRDecoder, modules, classes, documentation
---
## Detect
---
### ::: ultralytics.nn.modules.head.Detect

@ -1,8 +1,3 @@
---
description: Explore the Ultralytics nn modules pages on Transformer and MLP blocks, LayerNorm2d, and Deformable Transformer Decoder Layer.
keywords: Ultralytics, NN Modules, TransformerEncoderLayer, TransformerLayer, MLPBlock, LayerNorm2d, DeformableTransformerDecoderLayer, examples, code snippets, tutorials
---
## TransformerEncoderLayer
---
### ::: ultralytics.nn.modules.transformer.TransformerEncoderLayer

@ -1,8 +1,3 @@
---
description: 'Learn about Ultralytics NN modules: get_clones, linear_init_, and multi_scale_deformable_attn_pytorch. Code examples and usage tips.'
keywords: Ultralytics, NN Utils, Docs, PyTorch, bias initialization, linear initialization, multi-scale deformable attention
---
## _get_clones
---
### ::: ultralytics.nn.modules.utils._get_clones

@ -1,8 +1,3 @@
---
description: Learn how to work with Ultralytics YOLO Detection, Segmentation & Classification Models, load weights and parse models in PyTorch.
keywords: neural network, deep learning, computer vision, object detection, image segmentation, image classification, model ensemble, PyTorch
---
## BaseModel
---
### ::: ultralytics.nn.tasks.BaseModel
@ -38,6 +33,11 @@ keywords: neural network, deep learning, computer vision, object detection, imag
### ::: ultralytics.nn.tasks.Ensemble
<br><br>
## temporary_modules
---
### ::: ultralytics.nn.tasks.temporary_modules
<br><br>
## torch_safe_load
---
### ::: ultralytics.nn.tasks.torch_safe_load

@ -1,19 +0,0 @@
---
description: Learn how to register custom event-tracking and track predictions with Ultralytics YOLO via on_predict_start and register_tracker methods.
keywords: Ultralytics YOLO, tracker registration, on_predict_start, object detection
---
## on_predict_start
---
### ::: ultralytics.tracker.track.on_predict_start
<br><br>
## on_predict_postprocess_end
---
### ::: ultralytics.tracker.track.on_predict_postprocess_end
<br><br>
## register_tracker
---
### ::: ultralytics.tracker.track.register_tracker
<br><br>

@ -1,14 +0,0 @@
---
description: 'TrackState: A comprehensive guide to Ultralytics tracker''s BaseTrack for monitoring model performance. Improve your tracking capabilities now!'
keywords: object detection, object tracking, Ultralytics YOLO, TrackState, workflow improvement
---
## TrackState
---
### ::: ultralytics.tracker.trackers.basetrack.TrackState
<br><br>
## BaseTrack
---
### ::: ultralytics.tracker.trackers.basetrack.BaseTrack
<br><br>

@ -1,14 +0,0 @@
---
description: '"Optimize tracking with Ultralytics BOTrack. Easily sort and track bots with BOTSORT. Streamline data collection for improved performance."'
keywords: BOTrack, Ultralytics YOLO Docs, features, usage
---
## BOTrack
---
### ::: ultralytics.tracker.trackers.bot_sort.BOTrack
<br><br>
## BOTSORT
---
### ::: ultralytics.tracker.trackers.bot_sort.BOTSORT
<br><br>

@ -1,14 +0,0 @@
---
description: Learn how to track ByteAI model sizes and tips for model optimization with STrack, a byte tracking tool from Ultralytics.
keywords: Byte Tracker, Ultralytics STrack, application monitoring, bytes sent, bytes received, code examples, setup instructions
---
## STrack
---
### ::: ultralytics.tracker.trackers.byte_tracker.STrack
<br><br>
## BYTETracker
---
### ::: ultralytics.tracker.trackers.byte_tracker.BYTETracker
<br><br>

@ -1,9 +0,0 @@
---
description: '"Track Google Marketing Campaigns in GMC with Ultralytics Tracker. Learn to set up and use GMC for detailed analytics. Get started now."'
keywords: Ultralytics, YOLO, object detection, tracker, optimization, models, documentation
---
## GMC
---
### ::: ultralytics.tracker.utils.gmc.GMC
<br><br>

@ -1,14 +0,0 @@
---
description: Improve object tracking with KalmanFilterXYAH in Ultralytics YOLO - an efficient and accurate algorithm for state estimation.
keywords: KalmanFilterXYAH, Ultralytics Docs, Kalman filter algorithm, object tracking, computer vision, YOLO
---
## KalmanFilterXYAH
---
### ::: ultralytics.tracker.utils.kalman_filter.KalmanFilterXYAH
<br><br>
## KalmanFilterXYWH
---
### ::: ultralytics.tracker.utils.kalman_filter.KalmanFilterXYWH
<br><br>

@ -1,64 +0,0 @@
---
description: Learn how to match and fuse object detections for accurate target tracking using Ultralytics' YOLO merge_matches, iou_distance, and embedding_distance.
keywords: Ultralytics, multi-object tracking, object tracking, detection, recognition, matching, indices, iou distance, gate cost matrix, fuse iou, bbox ious
---
## merge_matches
---
### ::: ultralytics.tracker.utils.matching.merge_matches
<br><br>
## _indices_to_matches
---
### ::: ultralytics.tracker.utils.matching._indices_to_matches
<br><br>
## linear_assignment
---
### ::: ultralytics.tracker.utils.matching.linear_assignment
<br><br>
## ious
---
### ::: ultralytics.tracker.utils.matching.ious
<br><br>
## iou_distance
---
### ::: ultralytics.tracker.utils.matching.iou_distance
<br><br>
## v_iou_distance
---
### ::: ultralytics.tracker.utils.matching.v_iou_distance
<br><br>
## embedding_distance
---
### ::: ultralytics.tracker.utils.matching.embedding_distance
<br><br>
## gate_cost_matrix
---
### ::: ultralytics.tracker.utils.matching.gate_cost_matrix
<br><br>
## fuse_motion
---
### ::: ultralytics.tracker.utils.matching.fuse_motion
<br><br>
## fuse_iou
---
### ::: ultralytics.tracker.utils.matching.fuse_iou
<br><br>
## fuse_score
---
### ::: ultralytics.tracker.utils.matching.fuse_score
<br><br>
## bbox_ious
---
### ::: ultralytics.tracker.utils.matching.bbox_ious
<br><br>

@ -0,0 +1,9 @@
## TrackState
---
### ::: ultralytics.trackers.basetrack.TrackState
<br><br>
## BaseTrack
---
### ::: ultralytics.trackers.basetrack.BaseTrack
<br><br>

@ -0,0 +1,9 @@
## BOTrack
---
### ::: ultralytics.trackers.bot_sort.BOTrack
<br><br>
## BOTSORT
---
### ::: ultralytics.trackers.bot_sort.BOTSORT
<br><br>

@ -0,0 +1,9 @@
## STrack
---
### ::: ultralytics.trackers.byte_tracker.STrack
<br><br>
## BYTETracker
---
### ::: ultralytics.trackers.byte_tracker.BYTETracker
<br><br>

@ -0,0 +1,14 @@
## on_predict_start
---
### ::: ultralytics.trackers.track.on_predict_start
<br><br>
## on_predict_postprocess_end
---
### ::: ultralytics.trackers.track.on_predict_postprocess_end
<br><br>
## register_tracker
---
### ::: ultralytics.trackers.track.register_tracker
<br><br>

Some files were not shown because too many files have changed in this diff.