diff --git a/.github/workflows/publish.yml b/.github/workflows/publish.yml index 0612585..38f92d2 100644 --- a/.github/workflows/publish.yml +++ b/.github/workflows/publish.yml @@ -23,6 +23,8 @@ jobs: steps: - name: Checkout code uses: actions/checkout@v3 + with: + fetch-depth: "0" # pulls all commits (needed for correct last-updated dates in Docs) - name: Set up Python environment uses: actions/setup-python@v4 with: diff --git a/docs/README.md b/docs/README.md index 3b3e306..de9d2a1 100644 --- a/docs/README.md +++ b/docs/README.md @@ -1,3 +1,7 @@ +--- +description: Learn how to install the Ultralytics package in developer mode and build/serve locally using MkDocs. Deploy your project to your host easily. +--- + # Ultralytics Docs Ultralytics Docs are deployed to [https://docs.ultralytics.com](https://docs.ultralytics.com). @@ -82,4 +86,4 @@ for your repository and updating the "Custom domain" field in the "GitHub Pages" ![196814117-fc16e711-d2be-4722-9536-b7c6d78fd167](https://user-images.githubusercontent.com/26833433/210150206-9e86dcd7-10af-43e4-9eb2-9518b3799eac.png) For more information on deploying your MkDocs documentation site, see -the [MkDocs documentation](https://www.mkdocs.org/user-guide/deploying-your-docs/). +the [MkDocs documentation](https://www.mkdocs.org/user-guide/deploying-your-docs/). \ No newline at end of file diff --git a/docs/SECURITY.md b/docs/SECURITY.md index 0b2dd7b..afbbf28 100644 --- a/docs/SECURITY.md +++ b/docs/SECURITY.md @@ -1,3 +1,7 @@ +--- +description: Learn how Ultralytics prioritizes security. Get insights into Snyk and GitHub CodeQL scans, and how to report security issues in YOLOv8. +--- + # Security Policy At [Ultralytics](https://ultralytics.com), the security of our users' data and systems is of utmost importance. To @@ -25,4 +29,4 @@ reach out to us directly via our [contact form](https://ultralytics.com/contact) via [security@ultralytics.com](mailto:security@ultralytics.com). Our security team will investigate and respond as soon as possible. -We appreciate your help in keeping the YOLOv8 repository secure and safe for everyone. +We appreciate your help in keeping the YOLOv8 repository secure and safe for everyone. \ No newline at end of file diff --git a/docs/datasets/classify/index.md b/docs/datasets/classify/index.md index 90342df..0b20667 100644 --- a/docs/datasets/classify/index.md +++ b/docs/datasets/classify/index.md @@ -1,5 +1,6 @@ --- comments: true +description: Learn how torchvision organizes classification image datasets. Use this code to create and train models. CLI and Python code shown. --- # Image Classification Datasets Overview @@ -77,6 +78,7 @@ cifar-10-/ In this example, the `train` directory contains subdirectories for each class in the dataset, and each class subdirectory contains all the images for that class. The `test` directory has a similar structure. The `root` directory also contains other files that are part of the CIFAR10 dataset. ## Usage + !!! example "" === "Python" @@ -98,4 +100,5 @@ In this example, the `train` directory contains subdirectories for each class in ``` ## Supported Datasets + TODO \ No newline at end of file diff --git a/docs/datasets/detect/coco.md b/docs/datasets/detect/coco.md index e516a39..ffd4703 100644 --- a/docs/datasets/detect/coco.md +++ b/docs/datasets/detect/coco.md @@ -1,5 +1,6 @@ --- comments: true +description: Learn about the COCO dataset, designed to encourage research on object detection, segmentation, and captioning with standardized evaluation metrics.
--- # COCO Dataset diff --git a/docs/datasets/detect/index.md b/docs/datasets/detect/index.md index a2c3425..ac31cb6 100644 --- a/docs/datasets/detect/index.md +++ b/docs/datasets/detect/index.md @@ -1,5 +1,6 @@ --- comments: true +description: Learn about supported dataset formats for training YOLO detection models, including Ultralytics YOLO and COCO, in this Object Detection Datasets Overview. --- # Object Detection Datasets Overview @@ -15,11 +16,12 @@ The dataset format used for training YOLO detection models is as follows: 1. One text file per image: Each image in the dataset has a corresponding text file with the same name as the image file and the ".txt" extension. 2. One row per object: Each row in the text file corresponds to one object instance in the image. 3. Object information per row: Each row contains the following information about the object instance: - - Object class index: An integer representing the class of the object (e.g., 0 for person, 1 for car, etc.). - - Object center coordinates: The x and y coordinates of the center of the object, normalized to be between 0 and 1. - - Object width and height: The width and height of the object, normalized to be between 0 and 1. - + - Object class index: An integer representing the class of the object (e.g., 0 for person, 1 for car, etc.). + - Object center coordinates: The x and y coordinates of the center of the object, normalized to be between 0 and 1. + - Object width and height: The width and height of the object, normalized to be between 0 and 1. + The format for a single row in the detection dataset file is as follows: + ``` <object-class> <x> <y> <width> <height> ``` @@ -55,6 +57,7 @@ The `names` field is a list of the names of the object classes. The order of the NOTE: Either `nc` or `names` must be defined. Defining both is not necessary. Alternatively, you can directly define class names like this: + ```yaml names: 0: person @@ -72,6 +75,7 @@ names: ['person', 'car'] ``` ## Usage + !!! example "" === "Python" @@ -93,6 +97,7 @@ names: ['person', 'car'] ``` ## Supported Datasets + TODO ## Port or Convert label formats @@ -103,4 +108,4 @@ TODO from ultralytics.yolo.data.converter import convert_coco convert_coco(labels_dir='../coco/annotations/') -``` +``` \ No newline at end of file diff --git a/docs/datasets/index.md b/docs/datasets/index.md index c03d490..6a384c5 100644 --- a/docs/datasets/index.md +++ b/docs/datasets/index.md @@ -1,5 +1,6 @@ --- comments: true +description: Ultralytics provides support for various datasets to facilitate multiple computer vision tasks. Check out our list of main datasets and their summaries. --- # Datasets Overview @@ -10,48 +11,48 @@ Ultralytics provides support for various datasets to facilitate computer vision Bounding box object detection is a computer vision technique that involves detecting and localizing objects in an image by drawing a bounding box around each object. - * [Argoverse](detect/argoverse.md): A dataset containing 3D tracking and motion forecasting data from urban environments with rich annotations. - * [COCO](detect/coco.md): A large-scale dataset designed for object detection, segmentation, and captioning with over 200K labeled images. - * [COCO8](detect/coco8.md): Contains the first 4 images from COCO train and COCO val, suitable for quick tests. - * [Global Wheat 2020](detect/globalwheat2020.md): A dataset of wheat head images collected from around the world for object detection and localization tasks.
- * [Objects365](detect/objects365.md): A high-quality, large-scale dataset for object detection with 365 object categories and over 600K annotated images. - * [SKU-110K](detect/sku-110k.md): A dataset featuring dense object detection in retail environments with over 11K images and 1.7 million bounding boxes. - * [VisDrone](detect/visdrone.md): A dataset containing object detection and multi-object tracking data from drone-captured imagery with over 10K images and video sequences. - * [VOC](detect/voc.md): The Pascal Visual Object Classes (VOC) dataset for object detection and segmentation with 20 object classes and over 11K images. - * [xView](detect/xview.md): A dataset for object detection in overhead imagery with 60 object categories and over 1 million annotated objects. +* [Argoverse](detect/argoverse.md): A dataset containing 3D tracking and motion forecasting data from urban environments with rich annotations. +* [COCO](detect/coco.md): A large-scale dataset designed for object detection, segmentation, and captioning with over 200K labeled images. +* [COCO8](detect/coco8.md): Contains the first 4 images from COCO train and COCO val, suitable for quick tests. +* [Global Wheat 2020](detect/globalwheat2020.md): A dataset of wheat head images collected from around the world for object detection and localization tasks. +* [Objects365](detect/objects365.md): A high-quality, large-scale dataset for object detection with 365 object categories and over 600K annotated images. +* [SKU-110K](detect/sku-110k.md): A dataset featuring dense object detection in retail environments with over 11K images and 1.7 million bounding boxes. +* [VisDrone](detect/visdrone.md): A dataset containing object detection and multi-object tracking data from drone-captured imagery with over 10K images and video sequences. +* [VOC](detect/voc.md): The Pascal Visual Object Classes (VOC) dataset for object detection and segmentation with 20 object classes and over 11K images. +* [xView](detect/xview.md): A dataset for object detection in overhead imagery with 60 object categories and over 1 million annotated objects. ## [Instance Segmentation Datasets](segment/index.md) Instance segmentation is a computer vision technique that involves identifying and localizing objects in an image at the pixel level. - * [COCO](segment/coco.md): A large-scale dataset designed for object detection, segmentation, and captioning tasks with over 200K labeled images. - * [COCO8-seg](segment/coco8-seg.md): A smaller dataset for instance segmentation tasks, containing a subset of 8 COCO images with segmentation annotations. +* [COCO](segment/coco.md): A large-scale dataset designed for object detection, segmentation, and captioning tasks with over 200K labeled images. +* [COCO8-seg](segment/coco8-seg.md): A smaller dataset for instance segmentation tasks, containing a subset of 8 COCO images with segmentation annotations. ## [Pose Estimation](pose/index.md) Pose estimation is a technique used to determine the pose of the object relative to the camera or the world coordinate system. - * [COCO](pose/coco.md): A large-scale dataset with human pose annotations designed for pose estimation tasks. - * [COCO8-pose](pose/coco8-pose.md): A smaller dataset for pose estimation tasks, containing a subset of 8 COCO images with human pose annotations. +* [COCO](pose/coco.md): A large-scale dataset with human pose annotations designed for pose estimation tasks. 
+* [COCO8-pose](pose/coco8-pose.md): A smaller dataset for pose estimation tasks, containing a subset of 8 COCO images with human pose annotations. ## [Classification](classify/index.md) Image classification is a computer vision task that involves categorizing an image into one or more predefined classes or categories based on its visual content. - * [Caltech 101](classify/caltech101.md): A dataset containing images of 101 object categories for image classification tasks. - * [Caltech 256](classify/caltech256.md): An extended version of Caltech 101 with 256 object categories and more challenging images. - * [CIFAR-10](classify/cifar10.md): A dataset of 60K 32x32 color images in 10 classes, with 6K images per class. - * [CIFAR-100](classify/cifar100.md): An extended version of CIFAR-10 with 100 object categories and 600 images per class. - * [Fashion-MNIST](classify/fashion-mnist.md): A dataset consisting of 70,000 grayscale images of 10 fashion categories for image classification tasks. - * [ImageNet](classify/imagenet.md): A large-scale dataset for object detection and image classification with over 14 million images and 20,000 categories. - * [ImageNet-10](classify/imagenet10.md): A smaller subset of ImageNet with 10 categories for faster experimentation and testing. - * [Imagenette](classify/imagenette.md): A smaller subset of ImageNet that contains 10 easily distinguishable classes for quicker training and testing. - * [Imagewoof](classify/imagewoof.md): A more challenging subset of ImageNet containing 10 dog breed categories for image classification tasks. - * [MNIST](classify/mnist.md): A dataset of 70,000 grayscale images of handwritten digits for image classification tasks. +* [Caltech 101](classify/caltech101.md): A dataset containing images of 101 object categories for image classification tasks. +* [Caltech 256](classify/caltech256.md): An extended version of Caltech 101 with 256 object categories and more challenging images. +* [CIFAR-10](classify/cifar10.md): A dataset of 60K 32x32 color images in 10 classes, with 6K images per class. +* [CIFAR-100](classify/cifar100.md): An extended version of CIFAR-10 with 100 object categories and 600 images per class. +* [Fashion-MNIST](classify/fashion-mnist.md): A dataset consisting of 70,000 grayscale images of 10 fashion categories for image classification tasks. +* [ImageNet](classify/imagenet.md): A large-scale dataset for object detection and image classification with over 14 million images and 20,000 categories. +* [ImageNet-10](classify/imagenet10.md): A smaller subset of ImageNet with 10 categories for faster experimentation and testing. +* [Imagenette](classify/imagenette.md): A smaller subset of ImageNet that contains 10 easily distinguishable classes for quicker training and testing. +* [Imagewoof](classify/imagewoof.md): A more challenging subset of ImageNet containing 10 dog breed categories for image classification tasks. +* [MNIST](classify/mnist.md): A dataset of 70,000 grayscale images of handwritten digits for image classification tasks. ## [Multi-Object Tracking](track/index.md) Multi-object tracking is a computer vision technique that involves detecting and tracking multiple objects over time in a video sequence. * [Argoverse](detect/argoverse.md): A dataset containing 3D tracking and motion forecasting data from urban environments with rich annotations for multi-object tracking tasks. 
-* [VisDrone](detect/visdrone.md): A dataset containing object detection and multi-object tracking data from drone-captured imagery with over 10K images and video sequences. +* [VisDrone](detect/visdrone.md): A dataset containing object detection and multi-object tracking data from drone-captured imagery with over 10K images and video sequences. \ No newline at end of file diff --git a/docs/datasets/pose/index.md b/docs/datasets/pose/index.md index 16cd8a2..6dee62d 100644 --- a/docs/datasets/pose/index.md +++ b/docs/datasets/pose/index.md @@ -1,5 +1,6 @@ --- comments: true +description: Learn how to format your dataset for training YOLO models with Ultralytics YOLO format using our concise tutorial and example YAML files. --- # Pose Estimation Datasets Overview @@ -15,26 +16,26 @@ The dataset format used for training YOLO segmentation models is as follows: 1. One text file per image: Each image in the dataset has a corresponding text file with the same name as the image file and the ".txt" extension. 2. One row per object: Each row in the text file corresponds to one object instance in the image. 3. Object information per row: Each row contains the following information about the object instance: - - Object class index: An integer representing the class of the object (e.g., 0 for person, 1 for car, etc.). - - Object center coordinates: The x and y coordinates of the center of the object, normalized to be between 0 and 1. - - Object width and height: The width and height of the object, normalized to be between 0 and 1. - - Object keypoint coordinates: The keypoints of the object, normalized to be between 0 and 1. + - Object class index: An integer representing the class of the object (e.g., 0 for person, 1 for car, etc.). + - Object center coordinates: The x and y coordinates of the center of the object, normalized to be between 0 and 1. + - Object width and height: The width and height of the object, normalized to be between 0 and 1. + - Object keypoint coordinates: The keypoints of the object, normalized to be between 0 and 1. Here is an example of the label format for the pose estimation task: Format with Dim = 2 ``` <class-index> <x> <y> <width> <height> <px1> <py1> <px2> <py2> ... <pxn> <pyn> ``` + Format with Dim = 3 ``` <class-index> <x> <y> <width> <height> <px1> <py1> <p1-visibility> <px2> <py2> <p2-visibility> ... <pxn> <pyn> <pn-visibility> ``` -In this format, `<class-index>` is the index of the class for the object, `<x> <y> <width> <height>` are the coordinates of the bounding box, and `<px1> <py1> <pxn> <pyn>` are the pixel coordinates of the keypoints. The coordinates are separated by spaces. - +In this format, `<class-index>` is the index of the class for the object, `<x> <y> <width> <height>` are the coordinates of the bounding box, and `<px1> <py1> ... <pxn> <pyn>` are the pixel coordinates of the keypoints. The coordinates are separated by spaces. ** Dataset file format ** @@ -62,6 +63,7 @@ The `names` field is a list of the names of the object classes. The order of the NOTE: Either `nc` or `names` must be defined. Defining both is not necessary. Alternatively, you can directly define class names like this: + ``` names: 0: person @@ -69,7 +71,7 @@ names: ``` (Optional) If the points are symmetric (like the left-right sides of a human or face), the `flip_idx` field is also needed.
-For example, let's say there are five keypoints of a facial landmark: [left eye, right eye, nose, left point of mouth, right point of mouth], and the original index is [0, 1, 2, 3, 4], then flip_idx is [1, 0, 2, 4, 3] (just exchange the left-right indices, i.e. 0-1 and 3-4, and do not modify others like the nose in this example). +For example, let's say there are five keypoints of a facial landmark: [left eye, right eye, nose, left point of mouth, right point of mouth], and the original index is [0, 1, 2, 3, 4], then flip_idx is [1, 0, 2, 4, 3] (just exchange the left-right indices, i.e. 0-1 and 3-4, and do not modify others like the nose in this example). ** Example ** @@ -86,6 +88,7 @@ flip_idx: [0, 2, 1, 4, 3, 6, 5, 8, 7, 10, 9, 12, 11, 14, 13, 16, 15] ``` ## Usage + !!! example "" === "Python" @@ -107,6 +110,7 @@ flip_idx: [0, 2, 1, 4, 3, 6, 5, 8, 7, 10, 9, 12, 11, 14, 13, 16, 15] ``` ## Supported Datasets + TODO ## Port or Convert label formats @@ -117,4 +121,4 @@ TODO from ultralytics.yolo.data.converter import convert_coco convert_coco(labels_dir='../coco/annotations/', use_keypoints=True) -``` +``` \ No newline at end of file diff --git a/docs/datasets/segment/index.md b/docs/datasets/segment/index.md index d925483..713beb5 100644 --- a/docs/datasets/segment/index.md +++ b/docs/datasets/segment/index.md @@ -1,5 +1,6 @@ --- comments: true +description: Learn about the Ultralytics YOLO dataset format for segmentation models. Use YAML to train Detection Models. Convert COCO to YOLO format using Python. --- # Instance Segmentation Datasets Overview @@ -15,8 +16,8 @@ The dataset format used for training YOLO segmentation models is as follows: 1. One text file per image: Each image in the dataset has a corresponding text file with the same name as the image file and the ".txt" extension. 2. One row per object: Each row in the text file corresponds to one object instance in the image. 3. Object information per row: Each row contains the following information about the object instance: - - Object class index: An integer representing the class of the object (e.g., 0 for person, 1 for car, etc.). - - Object bounding coordinates: The bounding coordinates around the mask area, normalized to be between 0 and 1. + - Object class index: An integer representing the class of the object (e.g., 0 for person, 1 for car, etc.). + - Object bounding coordinates: The bounding coordinates around the mask area, normalized to be between 0 and 1. The format for a single row in the segmentation dataset file is as follows: ``` <class-index> <x1> <y1> <x2> <y2> ... <xn> <yn> ``` -In this format, `<class-index>` is the index of the class for the object, and `<x1> <y1> <x2> <y2> ... <xn> <yn>` are the bounding coordinates of the object's segmentation mask. The coordinates are separated by spaces. +In this format, `<class-index>` is the index of the class for the object, and `<x1> <y1> <x2> <y2> ... <xn> <yn>` are the bounding coordinates of the object's segmentation mask. The coordinates are separated by spaces. Here is an example of the YOLO dataset format for a single image with two object instances: ``` 0 0.6812 0.48541 0.67 0.4875 0.67656 0.487 0.675 0.489 0.66 1 0.5046 0.0 0.5015 0.004 0.4984 0.00416 0.4937 0.010 0.492 0.0104 ``` + Note: The length of each row does not have to be equal. ** Dataset file format ** @@ -56,6 +58,7 @@ The `names` field is a list of the names of the object classes. The order of the NOTE: Either `nc` or `names` must be defined.
Defining both is not necessary. Alternatively, you can directly define class names like this: + ```yaml names: 0: person 1: car ``` @@ -73,6 +76,7 @@ names: ['person', 'car'] ``` ## Usage + !!! example "" === "Python" @@ -103,4 +107,4 @@ names: ['person', 'car'] from ultralytics.yolo.data.converter import convert_coco convert_coco(labels_dir='../coco/annotations/', use_segments=True) -``` +``` \ No newline at end of file diff --git a/docs/datasets/track/index.md b/docs/datasets/track/index.md index 82c7d96..e16e8f7 100644 --- a/docs/datasets/track/index.md +++ b/docs/datasets/track/index.md @@ -1,5 +1,6 @@ --- comments: true +description: Discover the datasets compatible with multi-object tracking. Train your trackers and make your detections more efficient with Ultralytics YOLO. --- # Multi-object Tracking Datasets Overview @@ -25,5 +26,4 @@ Support for training trackers alone is coming soon ```bash yolo track model=yolov8n.pt source="https://youtu.be/Zgi9g1ksQHc" conf=0.3 iou=0.5 show - ``` - + ``` \ No newline at end of file diff --git a/docs/help/CLA.md b/docs/help/CLA.md index c8dd717..e998bb7 100644 --- a/docs/help/CLA.md +++ b/docs/help/CLA.md @@ -1,3 +1,7 @@ +--- +description: Individual Contributor License Agreement. Settle Intellectual Property issues for Contributions made to anything open source released by Ultralytics. +--- + # Ultralytics Individual Contributor License Agreement Thank you for your interest in contributing to open source software projects (β€œProjects”) made available by Ultralytics @@ -62,4 +66,4 @@ that any of the provisions of this Agreement shall be held by a court or other t to be unenforceable, the remaining portions hereof shall remain in full force and effect. **Assignment.** You agree that Ultralytics may assign this Agreement, and all of its rights, obligations and licenses -hereunder. +hereunder. \ No newline at end of file diff --git a/docs/help/FAQ.md b/docs/help/FAQ.md index e4caa83..0a0e70a 100644 --- a/docs/help/FAQ.md +++ b/docs/help/FAQ.md @@ -1,5 +1,6 @@ --- comments: true +description: 'Get quick answers to common Ultralytics YOLO questions: Hardware requirements, fine-tuning, conversion, real-time detection, and accuracy tips.' --- # Ultralytics YOLO Frequently Asked Questions (FAQ) diff --git a/docs/help/code_of_conduct.md b/docs/help/code_of_conduct.md index ba574f1..2915810 100644 --- a/docs/help/code_of_conduct.md +++ b/docs/help/code_of_conduct.md @@ -1,5 +1,6 @@ --- comments: true +description: Read the Ultralytics Contributor Covenant Code of Conduct. Learn ways to create a welcoming community & consequences for inappropriate conduct. --- # Ultralytics Contributor Covenant Code of Conduct @@ -110,7 +111,7 @@ Violating these terms may lead to a permanent ban. ### 4. Permanent Ban **Community Impact**: Demonstrating a pattern of violation of community -standards, including sustained inappropriate behavior, harassment of an +standards, including sustained inappropriate behavior, harassment of an individual, or aggression toward or disparagement of classes of individuals. **Consequence**: A permanent ban from any sort of public interaction within @@ -129,4 +130,4 @@ For answers to common questions about this code of conduct, see the FAQ at https://www.contributor-covenant.org/faq. Translations are available at https://www.contributor-covenant.org/translations.
-[homepage]: https://www.contributor-covenant.org +[homepage]: https://www.contributor-covenant.org \ No newline at end of file diff --git a/docs/help/contributing.md b/docs/help/contributing.md index 4aced9b..b26f6ff 100644 --- a/docs/help/contributing.md +++ b/docs/help/contributing.md @@ -1,5 +1,6 @@ --- comments: true +description: Learn how to contribute to Ultralytics Open-Source YOLO Repositories with contribution guidelines, pull request requirements, and GitHub CI tests. --- # Contributing to Ultralytics Open-Source YOLO Repositories @@ -10,11 +11,11 @@ First of all, thank you for your interest in contributing to Ultralytics open-so - [Code of Conduct](#code-of-conduct) - [Pull Requests](#pull-requests) - - [CLA Signing](#cla-signing) - - [Google-Style Docstrings](#google-style-docstrings) - - [GitHub Actions CI Tests](#github-actions-ci-tests) + - [CLA Signing](#cla-signing) + - [Google-Style Docstrings](#google-style-docstrings) + - [GitHub Actions CI Tests](#github-actions-ci-tests) - [Bug Reports](#bug-reports) - - [Minimum Reproducible Example](#minimum-reproducible-example) + - [Minimum Reproducible Example](#minimum-reproducible-example) - [License and Copyright](#license-and-copyright) ## Code of Conduct diff --git a/docs/help/index.md b/docs/help/index.md index ed4ab10..25e3ebc 100644 --- a/docs/help/index.md +++ b/docs/help/index.md @@ -1,5 +1,6 @@ --- comments: true +description: Get comprehensive resources for Ultralytics YOLO repositories. Find guides, FAQs, MRE creation, CLA & more. Join the supportive community now! --- Welcome to the Ultralytics Help page! We are committed to providing you with comprehensive resources to make your experience with Ultralytics YOLO repositories as smooth and enjoyable as possible. On this page, you'll find essential links to guides and documents that will help you navigate through common tasks and address any questions you might have while using our repositories. diff --git a/docs/help/minimum_reproducible_example.md b/docs/help/minimum_reproducible_example.md index b547f1b..0333543 100644 --- a/docs/help/minimum_reproducible_example.md +++ b/docs/help/minimum_reproducible_example.md @@ -1,5 +1,6 @@ --- comments: true +description: Learn how to create a Minimum Reproducible Example (MRE) for Ultralytics YOLO bug reports to help maintainers and contributors understand your issue better. --- # Creating a Minimum Reproducible Example for Bug Reports in Ultralytics YOLO Repositories diff --git a/docs/hub/app/android.md b/docs/hub/app/android.md index 79c88ac..bcb95c0 100644 --- a/docs/hub/app/android.md +++ b/docs/hub/app/android.md @@ -1,5 +1,6 @@ --- comments: true +description: Run YOLO models on your Android device for real-time object detection with Ultralytics Android App. Utilizes TensorFlow Lite and hardware delegates. --- # Ultralytics Android App: Real-time Object Detection with YOLO Models @@ -19,7 +20,7 @@ FP16 (or half-precision) quantization converts the model's 32-bit floating-point INT8 (or 8-bit integer) quantization further reduces the model's size and computation requirements by converting its 32-bit floating-point numbers to 8-bit integers. This quantization method can result in a significant speedup, but it may lead to a slight reduction in mean average precision (mAP) due to the lower numerical precision. !!! tip "mAP Reduction in INT8 Models" - + The reduced numerical precision in INT8 models can lead to some loss of information during the quantization process, which may result in a slight decrease in mAP.
However, this trade-off is often acceptable considering the substantial performance gains offered by INT8 quantization. ## Delegates and Performance Variability @@ -61,4 +62,4 @@ To get started with the Ultralytics Android App, follow these steps: 6. Explore the app's settings to adjust the detection threshold, enable or disable specific object classes, and more. -With the Ultralytics Android App, you now have the power of real-time object detection using YOLO models right at your fingertips. Enjoy exploring the app's features and optimizing its settings to suit your specific use cases. +With the Ultralytics Android App, you now have the power of real-time object detection using YOLO models right at your fingertips. Enjoy exploring the app's features and optimizing its settings to suit your specific use cases. \ No newline at end of file diff --git a/docs/hub/app/index.md b/docs/hub/app/index.md index efc7ead..8fb977e 100644 --- a/docs/hub/app/index.md +++ b/docs/hub/app/index.md @@ -1,5 +1,6 @@ --- comments: true +description: Experience the power of YOLOv5 and YOLOv8 models with Ultralytics HUB app. Download from Google Play and App Store now. --- # Ultralytics HUB App diff --git a/docs/hub/app/ios.md b/docs/hub/app/ios.md index 084fd87..2c134f6 100644 --- a/docs/hub/app/ios.md +++ b/docs/hub/app/ios.md @@ -1,5 +1,6 @@ --- comments: true +description: Get started with the Ultralytics iOS app and run YOLO models in real-time for object detection on your iPhone or iPad with the Apple Neural Engine. --- # Ultralytics iOS App: Real-time Object Detection with YOLO Models @@ -33,7 +34,6 @@ By combining quantized YOLO models with the Apple Neural Engine, the Ultralytics | 2021 | [iPhone 13](https://en.wikipedia.org/wiki/IPhone_13) | [A15 Bionic](https://en.wikipedia.org/wiki/Apple_A15) | 5 nm | 15.8 | | 2022 | [iPhone 14](https://en.wikipedia.org/wiki/IPhone_14) | [A16 Bionic](https://en.wikipedia.org/wiki/Apple_A16) | 4 nm | 17.0 | - Please note that this list only includes iPhone models from 2017 onwards, and the ANE TOPs values are approximate. ## Getting Started with the Ultralytics iOS App @@ -52,4 +52,4 @@ To get started with the Ultralytics iOS App, follow these steps: 6. Explore the app's settings to adjust the detection threshold, enable or disable specific object classes, and more. -With the Ultralytics iOS App, you can now leverage the power of YOLO models for real-time object detection on your iPhone or iPad, powered by the Apple Neural Engine and optimized with FP16 or INT8 quantization. +With the Ultralytics iOS App, you can now leverage the power of YOLO models for real-time object detection on your iPhone or iPad, powered by the Apple Neural Engine and optimized with FP16 or INT8 quantization. \ No newline at end of file diff --git a/docs/hub/datasets.md b/docs/hub/datasets.md index e8ba0e6..c4f0658 100644 --- a/docs/hub/datasets.md +++ b/docs/hub/datasets.md @@ -1,5 +1,6 @@ --- comments: true +description: Upload custom datasets to Ultralytics HUB for YOLOv5 and YOLOv8 models. Follow YAML structure, zip and upload. Scan & train new models. --- # HUB Datasets @@ -46,4 +47,4 @@ names: After zipping your dataset, sign in to [Ultralytics HUB](https://bit.ly/ultralytics_hub) and click the Datasets tab. Click 'Upload Dataset' to upload, scan and visualize your new dataset before training new YOLOv5 or YOLOv8 models on it! 
-HUB Dataset Upload +HUB Dataset Upload \ No newline at end of file diff --git a/docs/hub/index.md b/docs/hub/index.md index 7b9fad2..5ff95f1 100644 --- a/docs/hub/index.md +++ b/docs/hub/index.md @@ -1,5 +1,6 @@ --- comments: true +description: 'Ultralytics HUB: Train & deploy YOLO models from one spot! Use drag-and-drop interface with templates & pre-trained models. Check quickstart, datasets, and more.' --- # Ultralytics HUB @@ -20,7 +21,6 @@ comments: true launch [Ultralytics HUB](https://bit.ly/ultralytics_hub), a new web tool for training and deploying all your YOLOv5 and YOLOv8 πŸš€ models from one spot! - ## Introduction HUB is designed to be user-friendly and intuitive, with a drag-and-drop interface that allows users to diff --git a/docs/hub/inference_api.md b/docs/hub/inference_api.md index ad13ce6..b69b13a 100644 --- a/docs/hub/inference_api.md +++ b/docs/hub/inference_api.md @@ -6,7 +6,6 @@ comments: true This page is currently under construction! πŸ‘· Please check back later for updates. πŸ˜ƒπŸ”œ - # YOLO Inference API The YOLO Inference API allows you to access the YOLOv8 object detection capabilities via a RESTful API. This enables you to run object detection on images without the need to install and set up the YOLOv8 environment locally. @@ -45,7 +44,6 @@ print(response.json()) In this example, replace `API_KEY` with your actual API key, `MODEL_ID` with the desired model ID, and `path/to/image.jpg` with the path to the image you want to analyze. - ## Example Usage with CLI You can use the YOLO Inference API with the command-line interface (CLI) by utilizing the `curl` command. Replace `API_KEY` with your actual API key, `MODEL_ID` with the desired model ID, and `image.jpg` with the path to the image you want to analyze: @@ -334,7 +332,6 @@ YOLO segmentation models, such as `yolov8n-seg.pt`, can return JSON responses fr } ``` - ### Pose Model Format YOLO pose models, such as `yolov8n-pose.pt`, can return JSON responses from local inference, CLI API inference, and Python API inference. All of these methods produce the same JSON response format. diff --git a/docs/hub/models.md b/docs/hub/models.md index b40d29e..1d73d45 100644 --- a/docs/hub/models.md +++ b/docs/hub/models.md @@ -1,5 +1,6 @@ --- comments: true +description: Train and Deploy your Model to 13 different formats, including TensorFlow, ONNX, OpenVINO, CoreML, Paddle or directly on Mobile. --- # HUB Models @@ -11,7 +12,6 @@ Connect to the Ultralytics HUB notebook and use your model API key to begin trai Open In Colab - ## Deploy to Real World Export your model to 13 different formats, including TensorFlow, ONNX, OpenVINO, CoreML, Paddle and many others. Run diff --git a/docs/index.md b/docs/index.md index e45b022..73212a7 100644 --- a/docs/index.md +++ b/docs/index.md @@ -1,5 +1,6 @@ --- comments: true +description: Explore Ultralytics YOLOv8, a cutting-edge real-time object detection and image segmentation model for various applications and hardware platforms. ---
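For context, the Inference API hunk above trims the Python example down to `print(response.json())`. A hypothetical minimal client is sketched below; the endpoint URL, header name, and form field are assumptions for illustration, not values confirmed by this diff:

```python
import requests

# Hypothetical values -- substitute your own API_KEY and MODEL_ID
url = "https://api.ultralytics.com/v1/predict/MODEL_ID"  # assumed endpoint
headers = {"x-api-key": "API_KEY"}  # assumed header name

# Send the image as multipart form data and print the JSON detections
with open("path/to/image.jpg", "rb") as f:
    response = requests.post(url, headers=headers, files={"image": f})
print(response.json())
```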
@@ -23,7 +24,7 @@ Explore the YOLOv8 Docs, a comprehensive resource designed to help you understan ## Where to Start - **Install** `ultralytics` with pip and get up and running in minutes   [:material-clock-fast: Get Started](quickstart.md){ .md-button } -- **Predict** new images and videos with YOLOv8   [:octicons-image-16: Predict on Images](modes/predict.md){ .md-button } +- **Predict** new images and videos with YOLOv8   [:octicons-image-16: Predict on Images](modes/predict.md){ .md-button } - **Train** a new YOLOv8 model on your own custom dataset   [:fontawesome-solid-brain: Train a Model](modes/train.md){ .md-button } - **Explore** YOLOv8 tasks like segment, classify, pose and track   [:material-magnify-expand: Explore Tasks](tasks/index.md){ .md-button } @@ -37,4 +38,4 @@ Explore the YOLOv8 Docs, a comprehensive resource designed to help you understan - [YOLOv5](https://github.com/ultralytics/yolov5) further improved the model's performance and added new features such as hyperparameter optimization, integrated experiment tracking and automatic export to popular export formats. - [YOLOv6](https://github.com/meituan/YOLOv6) was open-sourced by [Meituan](https://about.meituan.com/) in 2022 and is in use in many of the company's autonomous delivery robots. - [YOLOv7](https://github.com/WongKinYiu/yolov7) added additional tasks such as pose estimation on the COCO keypoints dataset. -- [YOLOv8](https://github.com/ultralytics/ultralytics) is the latest version of YOLO by Ultralytics. As a cutting-edge, state-of-the-art (SOTA) model, YOLOv8 builds on the success of previous versions, introducing new features and improvements for enhanced performance, flexibility, and efficiency. YOLOv8 supports a full range of vision AI tasks, including [detection](tasks/detect.md), [segmentation](tasks/segment.md), [pose estimation](tasks/pose.md), [tracking](modes/track.md), and [classification](tasks/classify.md). This versatility allows users to leverage YOLOv8's capabilities across diverse applications and domains. +- [YOLOv8](https://github.com/ultralytics/ultralytics) is the latest version of YOLO by Ultralytics. As a cutting-edge, state-of-the-art (SOTA) model, YOLOv8 builds on the success of previous versions, introducing new features and improvements for enhanced performance, flexibility, and efficiency. YOLOv8 supports a full range of vision AI tasks, including [detection](tasks/detect.md), [segmentation](tasks/segment.md), [pose estimation](tasks/pose.md), [tracking](modes/track.md), and [classification](tasks/classify.md). This versatility allows users to leverage YOLOv8's capabilities across diverse applications and domains. \ No newline at end of file diff --git a/docs/models/index.md b/docs/models/index.md index a10ea2e..f594c05 100644 --- a/docs/models/index.md +++ b/docs/models/index.md @@ -1,5 +1,6 @@ --- comments: true +description: Learn about the supported models and architectures, such as YOLOv3, YOLOv5, and YOLOv8, and how to contribute your own model to Ultralytics. --- # Models diff --git a/docs/models/sam.md b/docs/models/sam.md index e503fe9..9f22963 100644 --- a/docs/models/sam.md +++ b/docs/models/sam.md @@ -1,5 +1,6 @@ --- comments: true +description: Learn about the Vision Transformer (ViT) and segment anything with SAM models. Train and use pre-trained models with Python API. 
--- # Vision Transformers ViT models are currently supported in the Python environment: ```python from ultralytics.vit import SAM -# from ultralytics.vit import MODEL_TYPe +# from ultralytics.vit import MODEL_TYPE model = SAM("sam_b.pt") model.info() # display model information -model.predict(...) # train the model +model.predict(...) # predict ``` # Segment Anything @@ -33,4 +34,4 @@ model.predict(...) # train the model |------------|--------------------| | Inference | :heavy_check_mark: | | Validation | :x: | -| Training | :x: | +| Training | :x: | \ No newline at end of file diff --git a/docs/models/yolov5.md b/docs/models/yolov5.md index 5419025..ae8b866 100644 --- a/docs/models/yolov5.md +++ b/docs/models/yolov5.md @@ -1,5 +1,6 @@ --- comments: true +description: Detect objects faster and more accurately using Ultralytics YOLOv5u. Find pre-trained models for each task, including Inference, Validation and Training. --- # YOLOv5u @@ -38,4 +39,4 @@ Anchor-free YOLOv5 models with improved accuracy-speed tradeoff. | [YOLOv5s6u](https://github.com/ultralytics/assets/releases/download/v0.0.0/yolov5s6u.pt) | 1280 | 48.6 | - | - | 15.3 | 24.6 | | [YOLOv5m6u](https://github.com/ultralytics/assets/releases/download/v0.0.0/yolov5m6u.pt) | 1280 | 53.6 | - | - | 41.2 | 65.7 | | [YOLOv5l6u](https://github.com/ultralytics/assets/releases/download/v0.0.0/yolov5l6u.pt) | 1280 | 55.7 | - | - | 86.1 | 137.4 | - | [YOLOv5x6u](https://github.com/ultralytics/assets/releases/download/v0.0.0/yolov5x6u.pt) | 1280 | 56.8 | - | - | 155.4 | 250.7 | + | [YOLOv5x6u](https://github.com/ultralytics/assets/releases/download/v0.0.0/yolov5x6u.pt) | 1280 | 56.8 | - | - | 155.4 | 250.7 | \ No newline at end of file diff --git a/docs/models/yolov8.md b/docs/models/yolov8.md index 86faf57..d14d6fb 100644 --- a/docs/models/yolov8.md +++ b/docs/models/yolov8.md @@ -1,5 +1,6 @@ --- comments: true +description: Learn about YOLOv8's pre-trained weights supporting detection, instance segmentation, pose, and classification tasks. Get performance details. --- # YOLOv8 @@ -64,4 +65,4 @@ comments: true | [YOLOv8m-pose](https://github.com/ultralytics/assets/releases/download/v0.0.0/yolov8m-pose.pt) | 640 | 65.0 | 88.8 | 456.3 | 2.00 | 26.4 | 81.0 | | [YOLOv8l-pose](https://github.com/ultralytics/assets/releases/download/v0.0.0/yolov8l-pose.pt) | 640 | 67.6 | 90.0 | 784.5 | 2.59 | 44.4 | 168.6 | | [YOLOv8x-pose](https://github.com/ultralytics/assets/releases/download/v0.0.0/yolov8x-pose.pt) | 640 | 69.2 | 90.2 | 1607.1 | 3.73 | 69.4 | 263.2 | - | [YOLOv8x-pose-p6](https://github.com/ultralytics/assets/releases/download/v0.0.0/yolov8x-pose-p6.pt) | 1280 | 71.6 | 91.2 | 4088.7 | 10.04 | 99.1 | 1066.4 | + | [YOLOv8x-pose-p6](https://github.com/ultralytics/assets/releases/download/v0.0.0/yolov8x-pose-p6.pt) | 1280 | 71.6 | 91.2 | 4088.7 | 10.04 | 99.1 | 1066.4 | \ No newline at end of file diff --git a/docs/modes/benchmark.md b/docs/modes/benchmark.md index 2a9ac9b..a7600e0 100644 --- a/docs/modes/benchmark.md +++ b/docs/modes/benchmark.md @@ -1,5 +1,6 @@ --- comments: true +description: Benchmark mode compares speed and accuracy of various YOLOv8 export formats like ONNX or OpenVINO. Optimize formats for speed or accuracy. --- diff --git a/docs/modes/export.md b/docs/modes/export.md index ddde395..2352cf4 100644 --- a/docs/modes/export.md +++ b/docs/modes/export.md @@ -1,5 +1,6 @@ --- comments: true +description: 'Export mode: Create a deployment-ready YOLOv8 model by converting it to various formats.
Export to ONNX or OpenVINO for up to 3x CPU speedup.' --- @@ -82,4 +83,4 @@ i.e. `format='onnx'` or `format='engine'`. | [TF Lite](https://www.tensorflow.org/lite) | `tflite` | `yolov8n.tflite` | βœ… | `imgsz`, `half`, `int8` | | [TF Edge TPU](https://coral.ai/docs/edgetpu/models-intro/) | `edgetpu` | `yolov8n_edgetpu.tflite` | βœ… | `imgsz` | | [TF.js](https://www.tensorflow.org/js) | `tfjs` | `yolov8n_web_model/` | βœ… | `imgsz` | -| [PaddlePaddle](https://github.com/PaddlePaddle) | `paddle` | `yolov8n_paddle_model/` | βœ… | `imgsz` | +| [PaddlePaddle](https://github.com/PaddlePaddle) | `paddle` | `yolov8n_paddle_model/` | βœ… | `imgsz` | \ No newline at end of file diff --git a/docs/modes/index.md b/docs/modes/index.md index 8292fdd..8e729ba 100644 --- a/docs/modes/index.md +++ b/docs/modes/index.md @@ -1,5 +1,6 @@ --- comments: true +description: Use Ultralytics YOLOv8 Modes (Train, Val, Predict, Export, Track, Benchmark) to train, validate, predict, track, export or benchmark. --- # Ultralytics YOLOv8 Modes @@ -63,4 +64,4 @@ or `accuracy_top5` metrics (for classification), and the inference time in milli formats like ONNX, OpenVINO, TensorRT and others. This information can help users choose the optimal export format for their specific use case based on their requirements for speed and accuracy. -[Benchmark Examples](benchmark.md){ .md-button .md-button--primary} +[Benchmark Examples](benchmark.md){ .md-button .md-button--primary} \ No newline at end of file diff --git a/docs/modes/predict.md b/docs/modes/predict.md index de359d0..ac04cde 100644 --- a/docs/modes/predict.md +++ b/docs/modes/predict.md @@ -1,5 +1,6 @@ --- comments: true +description: Get started with YOLOv8 Predict mode and input sources. Accepts various input sources such as images, videos, and directories. --- @@ -58,10 +59,11 @@ whether each source can be used in streaming mode with `stream=True` βœ… and an | YouTube βœ… | `'https://youtu.be/Zgi9g1ksQHc'` | `str` | | | stream βœ… | `'rtsp://example.com/media.mp4'` | `str` | RTSP, RTMP, HTTP | - ## Arguments + `model.predict` accepts multiple arguments that control the prediction operation. These arguments can be passed directly to `model.predict`: !!! example + ``` model.predict(source, save=True, imgsz=320, conf=0.5) ``` @@ -220,6 +222,7 @@ masks, classification logits, etc.) found in the results object res_plotted = res[0].plot() cv2.imshow("result", res_plotted) ``` + | Argument | Description | |-------------------------------|----------------------------------------------------------------------------------------| | `conf (bool)` | Whether to plot the detection confidence score. | @@ -234,7 +237,6 @@ masks, classification logits, etc.) found in the results object | `masks (bool)` | Whether to plot the masks. | | `probs (bool)` | Whether to plot classification probability. | - ## Streaming Source `for`-loop Here's a Python script using OpenCV (cv2) and YOLOv8 to run inference on video frames. This script assumes you have already installed the necessary packages (opencv-python and ultralytics).
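The hunk that follows shows only the tail of that script. A minimal sketch of the full loop, assuming a local `video.mp4` (the exact script in the docs may differ):

```python
import cv2
from ultralytics import YOLO

# Load a pretrained model and open the video source
model = YOLO("yolov8n.pt")
cap = cv2.VideoCapture("video.mp4")  # a camera index such as 0 also works

while cap.isOpened():
    success, frame = cap.read()
    if not success:
        break
    results = model(frame)         # run inference on the frame
    annotated = results[0].plot()  # draw boxes/masks on a copy of the frame
    cv2.imshow("YOLOv8 Inference", annotated)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

# Release the video capture object and close the display window
cap.release()
cv2.destroyAllWindows()
```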
@@ -277,4 +279,4 @@ Here's a Python script using OpenCV (cv2) and YOLOv8 to run inference on video f # Release the video capture object and close the display window cap.release() cv2.destroyAllWindows() - ``` + ``` \ No newline at end of file diff --git a/docs/modes/val.md b/docs/modes/val.md index c47e965..bc294ec 100644 --- a/docs/modes/val.md +++ b/docs/modes/val.md @@ -1,5 +1,6 @@ --- comments: true +description: Validate and improve YOLOv8n model accuracy on COCO128 and other datasets using hyperparameter & configuration tuning, in Val mode. --- @@ -87,4 +88,4 @@ i.e. `format='onnx'` or `format='engine'`. | [TF Lite](https://www.tensorflow.org/lite) | `tflite` | `yolov8n.tflite` | βœ… | `imgsz`, `half`, `int8` | | [TF Edge TPU](https://coral.ai/docs/edgetpu/models-intro/) | `edgetpu` | `yolov8n_edgetpu.tflite` | βœ… | `imgsz` | | [TF.js](https://www.tensorflow.org/js) | `tfjs` | `yolov8n_web_model/` | βœ… | `imgsz` | -| [PaddlePaddle](https://github.com/PaddlePaddle) | `paddle` | `yolov8n_paddle_model/` | βœ… | `imgsz` | +| [PaddlePaddle](https://github.com/PaddlePaddle) | `paddle` | `yolov8n_paddle_model/` | βœ… | `imgsz` | \ No newline at end of file diff --git a/docs/overrides/partials/source-file.html b/docs/overrides/partials/source-file.html index 95cc605..84e2ab1 100644 --- a/docs/overrides/partials/source-file.html +++ b/docs/overrides/partials/source-file.html @@ -5,22 +5,22 @@ https://github.com/squidfunk/mkdocs-material/blob/master/src/partials/source-fil
-      {% if page.meta.git_revision_date_localized %}
-        πŸ“… {{ lang.t("source.file.date.updated") }}:
-        {{ page.meta.git_revision_date_localized }}
-        {% if page.meta.git_creation_date_localized %}
+    {% if page.meta.git_revision_date_localized %}
+      πŸ“… {{ lang.t("source.file.date.updated") }}:
+      {{ page.meta.git_revision_date_localized }}
+      {% if page.meta.git_creation_date_localized %}
        πŸŽ‚ {{ lang.t("source.file.date.created") }}:
        {{ page.meta.git_creation_date_localized }}
-        {% endif %}
+      {% endif %}
-      {% elif page.meta.revision_date %}
-        πŸ“… {{ lang.t("source.file.date.updated") }}:
-        {{ page.meta.revision_date }}
-      {% endif %}
+    {% elif page.meta.revision_date %}
+      πŸ“… {{ lang.t("source.file.date.updated") }}:
+      {{ page.meta.revision_date }}
+    {% endif %}
diff --git a/docs/quickstart.md b/docs/quickstart.md index 25cebf6..b1fe2af 100644 --- a/docs/quickstart.md +++ b/docs/quickstart.md @@ -1,5 +1,6 @@ --- comments: true +description: Install and use YOLOv8 via CLI or Python. Run single-line commands or integrate with Python projects for object detection, segmentation, and classification. --- ## Install @@ -32,13 +33,11 @@ See the `ultralytics` [requirements.txt](https://github.com/ultralytics/ultralyt PyTorch Installation Instructions - ## Use with CLI The YOLO command line interface (CLI) allows for simple single-line commands without the need for a Python environment. CLI requires no customization or Python code. You can simply run all tasks from the terminal with the `yolo` command. Check out the [CLI Guide](usage/cli.md) to learn more about using YOLOv8 from the command line. - !!! example === "Syntax" @@ -93,7 +92,6 @@ CLI requires no customization or Python code. You can simply run all tasks from yolo cfg ``` - !!! warning "Warning" Arguments must be passed as `arg=val` pairs, split by an equals `=` sign and delimited by spaces ` ` between pairs. Do not use `--` argument prefixes or commas `,` between arguments. @@ -134,4 +132,4 @@ For example, users can load a model, train it, evaluate its performance on a val success = model.export(format='onnx') ``` -[Python Guide](usage/python.md){.md-button .md-button--primary} +[Python Guide](usage/python.md){.md-button .md-button--primary} \ No newline at end of file diff --git a/docs/reference/hub/auth.md b/docs/reference/hub/auth.md index c8e5f8e..9daa042 100644 --- a/docs/reference/hub/auth.md +++ b/docs/reference/hub/auth.md @@ -1,4 +1,8 @@ +--- +description: Learn how to use Ultralytics hub authentication in your projects with examples and guidelines from the Auth page on Ultralytics Docs. +--- + # Auth --- :::ultralytics.hub.auth.Auth -
<br><br> +<br><br>
\ No newline at end of file diff --git a/docs/reference/hub/session.md b/docs/reference/hub/session.md index d945729..1d4eafa 100644 --- a/docs/reference/hub/session.md +++ b/docs/reference/hub/session.md @@ -1,4 +1,8 @@ +--- +description: Accelerate your AI development with the Ultralytics HUB Training Session. High-performance training of object detection models. +--- + # HUBTrainingSession --- :::ultralytics.hub.session.HUBTrainingSession -
<br><br> +<br><br>
\ No newline at end of file diff --git a/docs/reference/hub/utils.md b/docs/reference/hub/utils.md index 82dba74..9fc7c0c 100644 --- a/docs/reference/hub/utils.md +++ b/docs/reference/hub/utils.md @@ -1,3 +1,7 @@ +--- +description: Explore Ultralytics events, including 'request_with_credentials' and 'smart_request', to improve your project's performance and efficiency. +--- + # Events --- :::ultralytics.hub.utils.Events @@ -16,4 +20,4 @@ # smart_request --- :::ultralytics.hub.utils.smart_request -
<br><br> +<br><br>
\ No newline at end of file diff --git a/docs/reference/nn/autobackend.md b/docs/reference/nn/autobackend.md index 9b93b73..3a06df0 100644 --- a/docs/reference/nn/autobackend.md +++ b/docs/reference/nn/autobackend.md @@ -1,3 +1,7 @@ +--- +description: Learn how AutoBackend dynamically selects an inference backend for exported model formats, and how check_class_names validates class names. +--- + # AutoBackend --- :::ultralytics.nn.autobackend.AutoBackend @@ -6,4 +10,4 @@ # check_class_names --- :::ultralytics.nn.autobackend.check_class_names -<br><br> +<br><br>
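In practice, AutoBackend is what lets the same `YOLO` API run weights from any export format; a short sketch, assuming `yolov8n.onnx` was produced beforehand with `yolo export model=yolov8n.pt format=onnx`:

```python
from ultralytics import YOLO

# AutoBackend picks the inference backend from the file suffix,
# so an exported model loads exactly like a .pt checkpoint
model = YOLO("yolov8n.onnx")
results = model.predict("https://ultralytics.com/images/bus.jpg")
```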
\ No newline at end of file diff --git a/docs/reference/nn/autoshape.md b/docs/reference/nn/autoshape.md index d432ac7..3a145a6 100644 --- a/docs/reference/nn/autoshape.md +++ b/docs/reference/nn/autoshape.md @@ -1,3 +1,7 @@ +--- +description: Detect 80+ object categories with bounding box coordinates and class probabilities using AutoShape in Ultralytics YOLO. Explore Detections now. +--- + # AutoShape --- :::ultralytics.nn.autoshape.AutoShape @@ -6,4 +10,4 @@ # Detections --- :::ultralytics.nn.autoshape.Detections -
<br><br> +<br><br>
\ No newline at end of file diff --git a/docs/reference/nn/modules.md b/docs/reference/nn/modules.md index 9ec6eaf..9bfbdcf 100644 --- a/docs/reference/nn/modules.md +++ b/docs/reference/nn/modules.md @@ -1,3 +1,7 @@ +--- +description: Explore Ultralytics neural network modules for convolution, attention, detection, pose, and classification in PyTorch. +--- + # Conv --- :::ultralytics.nn.modules.Conv @@ -166,4 +170,4 @@ # autopad --- :::ultralytics.nn.modules.autopad -
<br><br> +<br><br>
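These modules are ordinary PyTorch layers and can be used directly. A small sketch with the `Conv` block (the `c1, c2, k, s` argument order is assumed from its signature):

```python
import torch
from ultralytics.nn.modules import Conv

# Conv is Conv2d + BatchNorm2d + SiLU activation with automatic 'same' padding
layer = Conv(3, 16, k=3, s=2)         # 3 input channels, 16 output channels, stride 2
y = layer(torch.zeros(1, 3, 64, 64))  # -> torch.Size([1, 16, 32, 32])
print(y.shape)
```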
\ No newline at end of file diff --git a/docs/reference/nn/tasks.md b/docs/reference/nn/tasks.md index 45a231a..977ed65 100644 --- a/docs/reference/nn/tasks.md +++ b/docs/reference/nn/tasks.md @@ -1,3 +1,7 @@ +--- +description: Learn how to work with Ultralytics YOLO Detection, Segmentation & Classification Models, load weights and parse models in PyTorch. +--- + # BaseModel --- :::ultralytics.nn.tasks.BaseModel @@ -56,4 +60,4 @@ # guess_model_task --- :::ultralytics.nn.tasks.guess_model_task -
<br><br> +<br><br>
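A quick illustration of `guess_model_task` surfacing through the public API -- the task is inferred from the loaded weights (which auto-download if missing):

```python
from ultralytics import YOLO

print(YOLO("yolov8n.pt").task)      # -> 'detect'
print(YOLO("yolov8n-seg.pt").task)  # -> 'segment'
```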
\ No newline at end of file diff --git a/docs/reference/tracker/track.md b/docs/reference/tracker/track.md index 59a2ee2..75feaae 100644 --- a/docs/reference/tracker/track.md +++ b/docs/reference/tracker/track.md @@ -1,3 +1,7 @@ +--- +description: Learn how to register custom event-tracking and track predictions with Ultralytics YOLO via on_predict_start and register_tracker methods. +--- + # on_predict_start --- :::ultralytics.tracker.track.on_predict_start @@ -11,4 +15,4 @@ # register_tracker --- :::ultralytics.tracker.track.register_tracker -
<br><br> +<br><br>
\ No newline at end of file diff --git a/docs/reference/tracker/trackers/basetrack.md b/docs/reference/tracker/trackers/basetrack.md index 902da78..486e589 100644 --- a/docs/reference/tracker/trackers/basetrack.md +++ b/docs/reference/tracker/trackers/basetrack.md @@ -1,3 +1,7 @@ +--- +description: 'TrackState and BaseTrack: the base classes Ultralytics trackers use to represent and manage individual object tracks.' +--- + # TrackState --- :::ultralytics.tracker.trackers.basetrack.TrackState @@ -6,4 +10,4 @@ # BaseTrack --- :::ultralytics.tracker.trackers.basetrack.BaseTrack -<br><br> +<br><br>
\ No newline at end of file diff --git a/docs/reference/tracker/trackers/bot_sort.md b/docs/reference/tracker/trackers/bot_sort.md index 0f299c8..896b5a9 100644 --- a/docs/reference/tracker/trackers/bot_sort.md +++ b/docs/reference/tracker/trackers/bot_sort.md @@ -1,3 +1,7 @@ +--- +description: Learn about BOTrack and BOTSORT, the Ultralytics implementation of the BoT-SORT multi-object tracking algorithm. +--- + # BOTrack --- :::ultralytics.tracker.trackers.bot_sort.BOTrack @@ -6,4 +10,4 @@ # BOTSORT --- :::ultralytics.tracker.trackers.bot_sort.BOTSORT -<br><br> +<br><br>
\ No newline at end of file diff --git a/docs/reference/tracker/trackers/byte_tracker.md b/docs/reference/tracker/trackers/byte_tracker.md index 3799975..5df8cfc 100644 --- a/docs/reference/tracker/trackers/byte_tracker.md +++ b/docs/reference/tracker/trackers/byte_tracker.md @@ -1,3 +1,7 @@ +--- +description: Learn about STrack and BYTETracker, the Ultralytics implementation of the ByteTrack multi-object tracking algorithm. +--- + # STrack --- :::ultralytics.tracker.trackers.byte_tracker.STrack @@ -6,4 +10,4 @@ # BYTETracker --- :::ultralytics.tracker.trackers.byte_tracker.BYTETracker -<br><br> +<br><br>
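Both trackers documented above are normally selected through the `tracker` argument of track mode rather than instantiated directly; a minimal sketch, assuming a local `video.mp4`:

```python
from ultralytics import YOLO

model = YOLO("yolov8n.pt")

# bytetrack.yaml configures BYTETracker; botsort.yaml selects BOTSORT instead
results = model.track(source="video.mp4", tracker="bytetrack.yaml")
```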
\ No newline at end of file diff --git a/docs/reference/tracker/utils/gmc.md b/docs/reference/tracker/utils/gmc.md index 63ae5d5..9702f03 100644 --- a/docs/reference/tracker/utils/gmc.md +++ b/docs/reference/tracker/utils/gmc.md @@ -1,4 +1,8 @@ +--- +description: Learn about the GMC (Global Motion Compensation) module used by Ultralytics trackers to estimate and compensate for camera motion. +--- + # GMC --- :::ultralytics.tracker.utils.gmc.GMC -<br><br> +<br><br>
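For reference, GMC here is Global Motion Compensation: it estimates camera motion between frames so tracks can be warped accordingly. A hypothetical sketch of direct use, assuming a `method` constructor argument and an `apply()` call (signatures are not confirmed by this diff):

```python
import cv2
from ultralytics.tracker.utils.gmc import GMC

gmc = GMC(method="sparseOptFlow")  # assumed method name
frame = cv2.imread("frame.jpg")    # placeholder image path
warp = gmc.apply(frame)            # assumed call; expected to return a warp matrix
print(warp)
```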
\ No newline at end of file diff --git a/docs/reference/tracker/utils/kalman_filter.md b/docs/reference/tracker/utils/kalman_filter.md index 0203c02..e51582e 100644 --- a/docs/reference/tracker/utils/kalman_filter.md +++ b/docs/reference/tracker/utils/kalman_filter.md @@ -1,3 +1,7 @@ +--- +description: Improve object tracking with KalmanFilterXYAH in Ultralytics YOLO - an efficient and accurate algorithm for state estimation. +--- + # KalmanFilterXYAH --- :::ultralytics.tracker.utils.kalman_filter.KalmanFilterXYAH @@ -6,4 +10,4 @@ # KalmanFilterXYWH --- :::ultralytics.tracker.utils.kalman_filter.KalmanFilterXYWH -
<br><br> +<br><br>
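A sketch of the predict/update cycle these filters expose, assuming the DeepSORT-style `initiate`/`predict`/`update` interface (not confirmed by this diff):

```python
import numpy as np
from ultralytics.tracker.utils.kalman_filter import KalmanFilterXYAH

kf = KalmanFilterXYAH()

# The XYAH state is (center x, center y, aspect ratio, height) plus velocities
mean, cov = kf.initiate(np.array([320.0, 240.0, 0.5, 80.0]))
mean, cov = kf.predict(mean, cov)  # propagate the state one frame ahead
mean, cov = kf.update(mean, cov, np.array([322.0, 241.0, 0.5, 81.0]))
print(mean[:4])  # corrected box estimate
```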
\ No newline at end of file diff --git a/docs/reference/tracker/utils/matching.md b/docs/reference/tracker/utils/matching.md index bd24450..63585f1 100644 --- a/docs/reference/tracker/utils/matching.md +++ b/docs/reference/tracker/utils/matching.md @@ -1,3 +1,7 @@ +--- +description: Learn how to match and fuse object detections for accurate target tracking using Ultralytics' YOLO merge_matches, iou_distance, and embedding_distance. +--- + # merge_matches --- :::ultralytics.tracker.utils.matching.merge_matches @@ -56,4 +60,4 @@ # bbox_ious --- :::ultralytics.tracker.utils.matching.bbox_ious -
<br><br> +<br><br>
\ No newline at end of file diff --git a/docs/reference/yolo/data/annotator.md b/docs/reference/yolo/data/annotator.md index ec61aca..189f061 100644 --- a/docs/reference/yolo/data/annotator.md +++ b/docs/reference/yolo/data/annotator.md @@ -1,4 +1,8 @@ +--- +description: Learn how to use auto_annotate in Ultralytics YOLO to generate annotations automatically for your dataset. Simplify object detection workflows. +--- + # auto_annotate --- :::ultralytics.yolo.data.annotator.auto_annotate -
<br><br> +<br><br>
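As documented elsewhere in the Ultralytics docs, `auto_annotate` pairs a YOLOv8 detector with SAM to generate segmentation labels; a short sketch (the image directory path is a placeholder):

```python
from ultralytics.yolo.data.annotator import auto_annotate

# Detect objects with YOLOv8, then prompt SAM with the boxes to produce masks
auto_annotate(data="path/to/images", det_model="yolov8x.pt", sam_model="sam_b.pt")
```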
\ No newline at end of file diff --git a/docs/reference/yolo/data/augment.md b/docs/reference/yolo/data/augment.md index 59c9a6e..cb982ca 100644 --- a/docs/reference/yolo/data/augment.md +++ b/docs/reference/yolo/data/augment.md @@ -1,3 +1,7 @@ +--- +description: Use Ultralytics YOLO Data Augmentation transforms with Base, MixUp, and Albumentations for object detection and classification. +--- + # BaseTransform --- :::ultralytics.yolo.data.augment.BaseTransform @@ -86,4 +90,4 @@ # classify_albumentations --- :::ultralytics.yolo.data.augment.classify_albumentations -
<br><br> +<br><br>
\ No newline at end of file diff --git a/docs/reference/yolo/data/base.md b/docs/reference/yolo/data/base.md index 3a84e74..2a7fd06 100644 --- a/docs/reference/yolo/data/base.md +++ b/docs/reference/yolo/data/base.md @@ -1,4 +1,8 @@ +--- +description: Learn about BaseDataset in Ultralytics YOLO, a flexible dataset class for object detection. Maximize your YOLO performance with custom datasets. +--- + # BaseDataset --- :::ultralytics.yolo.data.base.BaseDataset -
<br><br> +<br><br>
\ No newline at end of file diff --git a/docs/reference/yolo/data/build.md b/docs/reference/yolo/data/build.md index d333d9d..2ab2ada 100644 --- a/docs/reference/yolo/data/build.md +++ b/docs/reference/yolo/data/build.md @@ -1,3 +1,7 @@ +--- +description: Maximize YOLO performance with Ultralytics' InfiniteDataLoader, seed_worker, build_dataloader, and load_inference_source functions. +--- + # InfiniteDataLoader --- :::ultralytics.yolo.data.build.InfiniteDataLoader @@ -31,4 +35,4 @@ # load_inference_source --- :::ultralytics.yolo.data.build.load_inference_source -

+

\ No newline at end of file diff --git a/docs/reference/yolo/data/converter.md b/docs/reference/yolo/data/converter.md index 79b184b..07ff991 100644 --- a/docs/reference/yolo/data/converter.md +++ b/docs/reference/yolo/data/converter.md @@ -1,3 +1,7 @@ +--- +description: Convert COCO-91 to COCO-80 class, RLE to polygon, and merge multi-segment images with Ultralytics YOLO data converter. Improve your object detection. +--- + # coco91_to_coco80_class --- :::ultralytics.yolo.data.converter.coco91_to_coco80_class @@ -26,4 +30,4 @@ # delete_dsstore --- :::ultralytics.yolo.data.converter.delete_dsstore -

+

\ No newline at end of file diff --git a/docs/reference/yolo/data/dataloaders/stream_loaders.md b/docs/reference/yolo/data/dataloaders/stream_loaders.md index 7af3b90..0612323 100644 --- a/docs/reference/yolo/data/dataloaders/stream_loaders.md +++ b/docs/reference/yolo/data/dataloaders/stream_loaders.md @@ -1,3 +1,7 @@ +--- +description: 'Ultralytics YOLO Docs: Learn about stream loaders for image and tensor data, as well as autocasting techniques. Check out SourceTypes and more.' +--- + # SourceTypes --- :::ultralytics.yolo.data.dataloaders.stream_loaders.SourceTypes @@ -31,4 +35,4 @@ # autocast_list --- :::ultralytics.yolo.data.dataloaders.stream_loaders.autocast_list -

+

\ No newline at end of file diff --git a/docs/reference/yolo/data/dataloaders/v5augmentations.md b/docs/reference/yolo/data/dataloaders/v5augmentations.md index 67b31e8..4c2a8c3 100644 --- a/docs/reference/yolo/data/dataloaders/v5augmentations.md +++ b/docs/reference/yolo/data/dataloaders/v5augmentations.md @@ -1,3 +1,7 @@ +--- +description: Enhance image data with Albumentations CenterCrop, normalize, augment_hsv, replicate, random_perspective, cutout, & box_candidates. +--- + # Albumentations --- :::ultralytics.yolo.data.dataloaders.v5augmentations.Albumentations @@ -81,4 +85,4 @@ # classify_transforms --- :::ultralytics.yolo.data.dataloaders.v5augmentations.classify_transforms -

+

\ No newline at end of file diff --git a/docs/reference/yolo/data/dataloaders/v5loader.md b/docs/reference/yolo/data/dataloaders/v5loader.md index 90df8c1..2496830 100644 --- a/docs/reference/yolo/data/dataloaders/v5loader.md +++ b/docs/reference/yolo/data/dataloaders/v5loader.md @@ -1,3 +1,7 @@ +--- +description: Efficiently load images and labels to models using Ultralytics YOLO's InfiniteDataLoader, LoadScreenshots, and LoadStreams. +--- + # InfiniteDataLoader --- :::ultralytics.yolo.data.dataloaders.v5loader.InfiniteDataLoader @@ -86,4 +90,4 @@ # create_classification_dataloader --- :::ultralytics.yolo.data.dataloaders.v5loader.create_classification_dataloader -

+

\ No newline at end of file diff --git a/docs/reference/yolo/data/dataset.md index c0de822..4d722aa 100644 --- a/docs/reference/yolo/data/dataset.md +++ b/docs/reference/yolo/data/dataset.md @@ -1,3 +1,7 @@ +--- +description: Create custom YOLO datasets with Ultralytics YOLODataset and SemanticDataset. Streamline your object detection and segmentation projects. +--- + # YOLODataset --- :::ultralytics.yolo.data.dataset.YOLODataset @@ -11,4 +15,4 @@ # SemanticDataset --- :::ultralytics.yolo.data.dataset.SemanticDataset -

+

\ No newline at end of file diff --git a/docs/reference/yolo/data/dataset_wrappers.md index 59f9f93..5b954de 100644 --- a/docs/reference/yolo/data/dataset_wrappers.md +++ b/docs/reference/yolo/data/dataset_wrappers.md @@ -1,4 +1,8 @@ +--- +description: Wrap datasets with Ultralytics YOLO's MixAndRectDataset to apply mixup-style augmentations and rectangular training batches. +--- + # MixAndRectDataset --- :::ultralytics.yolo.data.dataset_wrappers.MixAndRectDataset -

+

\ No newline at end of file diff --git a/docs/reference/yolo/data/utils.md b/docs/reference/yolo/data/utils.md index a7067ec..f6d7b25 100644 --- a/docs/reference/yolo/data/utils.md +++ b/docs/reference/yolo/data/utils.md @@ -1,3 +1,7 @@ +--- +description: Efficiently handle data in YOLO with Ultralytics. Utilize HUBDatasetStats and customize dataset with these data utility functions. +--- + # HUBDatasetStats --- :::ultralytics.yolo.data.utils.HUBDatasetStats @@ -61,4 +65,4 @@ # zip_directory --- :::ultralytics.yolo.data.utils.zip_directory -

+

\ No newline at end of file diff --git a/docs/reference/yolo/engine/exporter.md index 6e30466..81d7059 100644 --- a/docs/reference/yolo/engine/exporter.md +++ b/docs/reference/yolo/engine/exporter.md @@ -1,3 +1,7 @@ +--- +description: Learn how to export your YOLO model in various formats using Ultralytics' exporter package - ONNX, CoreML, TensorRT, and more. +--- + # Exporter --- :::ultralytics.yolo.engine.exporter.Exporter @@ -26,4 +30,4 @@ # export --- :::ultralytics.yolo.engine.exporter.export -

+

\ No newline at end of file diff --git a/docs/reference/yolo/engine/model.md b/docs/reference/yolo/engine/model.md index 6b2b318..a6463b0 100644 --- a/docs/reference/yolo/engine/model.md +++ b/docs/reference/yolo/engine/model.md @@ -1,4 +1,8 @@ +--- +description: Discover the YOLO model of Ultralytics engine to simplify your object detection tasks with state-of-the-art models. +--- + # YOLO --- :::ultralytics.yolo.engine.model.YOLO -

+

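The `YOLO` class documented above is the main user-facing entry point of the package. A minimal sketch of typical usage, following the standard examples from the Ultralytics docs:

```python
from ultralytics import YOLO

model = YOLO('yolov8n.pt')                                 # load a pretrained detection model
results = model('https://ultralytics.com/images/bus.jpg')  # run inference on an image
model.export(format='onnx')                                # export the model to ONNX
```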
\ No newline at end of file diff --git a/docs/reference/yolo/engine/predictor.md index 2617160..a3e066d 100644 --- a/docs/reference/yolo/engine/predictor.md +++ b/docs/reference/yolo/engine/predictor.md @@ -1,4 +1,8 @@ +--- +description: The BasePredictor class in the Ultralytics YOLO engine runs object detection inference on images and videos. Learn to implement YOLO with ease. +--- + # BasePredictor --- :::ultralytics.yolo.engine.predictor.BasePredictor -

+

\ No newline at end of file diff --git a/docs/reference/yolo/engine/results.md b/docs/reference/yolo/engine/results.md index aa25373..b558e99 100644 --- a/docs/reference/yolo/engine/results.md +++ b/docs/reference/yolo/engine/results.md @@ -1,3 +1,7 @@ +--- +description: Learn about BaseTensor & Boxes in Ultralytics YOLO Engine. Check out Ultralytics Docs for quality tutorials and resources on object detection. +--- + # BaseTensor --- :::ultralytics.yolo.engine.results.BaseTensor @@ -16,4 +20,4 @@ # Masks --- :::ultralytics.yolo.engine.results.Masks -

+

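`BaseTensor`, `Boxes`, and `Masks` are the containers inference returns. A short sketch of inspecting a `Results` object from a detection model:

```python
from ultralytics import YOLO

results = YOLO('yolov8n.pt')('https://ultralytics.com/images/bus.jpg')
for r in results:
    print(r.boxes.xyxy)  # bounding boxes in xyxy pixel coordinates
    print(r.boxes.conf)  # per-box confidence scores
    print(r.boxes.cls)   # per-box class indices
```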
\ No newline at end of file diff --git a/docs/reference/yolo/engine/trainer.md index 9112838..e5a9ea5 100644 --- a/docs/reference/yolo/engine/trainer.md +++ b/docs/reference/yolo/engine/trainer.md @@ -1,3 +1,7 @@ +--- +description: Train faster with mixed precision. Learn how to use BaseTrainer with Automatic Mixed Precision (AMP) to optimize YOLOv8 models. +--- + # BaseTrainer --- :::ultralytics.yolo.engine.trainer.BaseTrainer @@ -6,4 +10,4 @@ # check_amp --- :::ultralytics.yolo.engine.trainer.check_amp -

+

\ No newline at end of file diff --git a/docs/reference/yolo/engine/validator.md index 4a15794..6c1ecd4 100644 --- a/docs/reference/yolo/engine/validator.md +++ b/docs/reference/yolo/engine/validator.md @@ -1,4 +1,8 @@ +--- +description: Ensure YOLOv8 models meet constraints and standards with the BaseValidator class. Learn how to use it here. +--- + # BaseValidator --- :::ultralytics.yolo.engine.validator.BaseValidator -

+

\ No newline at end of file diff --git a/docs/reference/yolo/utils/autobatch.md index a9e075b..a5051c1 100644 --- a/docs/reference/yolo/utils/autobatch.md +++ b/docs/reference/yolo/utils/autobatch.md @@ -1,3 +1,7 @@ +--- +description: Dynamically estimate the optimal batch size to make best use of GPU memory during training. Learn how to use check_train_batch_size with Ultralytics YOLO. +--- + # check_train_batch_size --- :::ultralytics.yolo.utils.autobatch.check_train_batch_size @@ -6,4 +10,4 @@ # autobatch --- :::ultralytics.yolo.utils.autobatch.autobatch -

+

\ No newline at end of file diff --git a/docs/reference/yolo/utils/benchmarks.md index 6333b0b..aa47189 100644 --- a/docs/reference/yolo/utils/benchmarks.md +++ b/docs/reference/yolo/utils/benchmarks.md @@ -1,4 +1,8 @@ +--- +description: Improve your YOLO model's performance and measure its speed. Benchmark utility for YOLOv8. +--- + # benchmark --- :::ultralytics.yolo.utils.benchmarks.benchmark -

+

\ No newline at end of file diff --git a/docs/reference/yolo/utils/callbacks/base.md b/docs/reference/yolo/utils/callbacks/base.md index 210a981..caa2cd7 100644 --- a/docs/reference/yolo/utils/callbacks/base.md +++ b/docs/reference/yolo/utils/callbacks/base.md @@ -1,3 +1,7 @@ +--- +description: Learn about YOLO's callback functions from on_train_start to add_integration_callbacks. See how these callbacks modify and save models. +--- + # on_pretrain_routine_start --- :::ultralytics.yolo.utils.callbacks.base.on_pretrain_routine_start @@ -131,4 +135,4 @@ # add_integration_callbacks --- :::ultralytics.yolo.utils.callbacks.base.add_integration_callbacks -

+

\ No newline at end of file diff --git a/docs/reference/yolo/utils/callbacks/clearml.md index ab0df6e..ae6fb3a 100644 --- a/docs/reference/yolo/utils/callbacks/clearml.md +++ b/docs/reference/yolo/utils/callbacks/clearml.md @@ -1,3 +1,7 @@ +--- +description: Improve your YOLOv8 model training with callbacks from ClearML. Learn about log debug samples, pre-training routines, validation and more. +--- + # _log_debug_samples --- :::ultralytics.yolo.utils.callbacks.clearml._log_debug_samples @@ -31,4 +35,4 @@ # on_train_end --- :::ultralytics.yolo.utils.callbacks.clearml.on_train_end -

+

\ No newline at end of file diff --git a/docs/reference/yolo/utils/callbacks/comet.md b/docs/reference/yolo/utils/callbacks/comet.md index 6265e15..7ed6f57 100644 --- a/docs/reference/yolo/utils/callbacks/comet.md +++ b/docs/reference/yolo/utils/callbacks/comet.md @@ -1,3 +1,7 @@ +--- +description: Learn about YOLO callbacks using the Comet.ml platform, enhancing object detection training and testing with custom logging and visualizations. +--- + # _get_comet_mode --- :::ultralytics.yolo.utils.callbacks.comet._get_comet_mode @@ -116,4 +120,4 @@ # on_train_end --- :::ultralytics.yolo.utils.callbacks.comet.on_train_end -

+

\ No newline at end of file diff --git a/docs/reference/yolo/utils/callbacks/hub.md index d6d35c2..2b66e6b 100644 --- a/docs/reference/yolo/utils/callbacks/hub.md +++ b/docs/reference/yolo/utils/callbacks/hub.md @@ -1,3 +1,7 @@ +--- +description: Improve YOLOv8 model training with Ultralytics HUB callbacks covering pretrain routine end, model save, and train/predict start events. +--- + # on_pretrain_routine_end --- :::ultralytics.yolo.utils.callbacks.hub.on_pretrain_routine_end @@ -36,4 +40,4 @@ # on_export_start --- :::ultralytics.yolo.utils.callbacks.hub.on_export_start -

+

\ No newline at end of file diff --git a/docs/reference/yolo/utils/callbacks/mlflow.md index a0b3a2a..5660a0d 100644 --- a/docs/reference/yolo/utils/callbacks/mlflow.md +++ b/docs/reference/yolo/utils/callbacks/mlflow.md @@ -1,3 +1,7 @@ +--- +description: Track model performance and metrics with MLflow in YOLOv8. Use callbacks like on_pretrain_routine_end or on_train_end to log information. +--- + # on_pretrain_routine_end --- :::ultralytics.yolo.utils.callbacks.mlflow.on_pretrain_routine_end @@ -11,4 +15,4 @@ # on_train_end --- :::ultralytics.yolo.utils.callbacks.mlflow.on_train_end -

+

\ No newline at end of file diff --git a/docs/reference/yolo/utils/callbacks/neptune.md index bd17a2e..833be67 100644 --- a/docs/reference/yolo/utils/callbacks/neptune.md +++ b/docs/reference/yolo/utils/callbacks/neptune.md @@ -1,3 +1,7 @@ +--- +description: Improve YOLOv8 training with Neptune, a powerful logging tool. Track metrics like images, plots, and epochs for better model performance. +--- + # _log_scalars --- :::ultralytics.yolo.utils.callbacks.neptune._log_scalars @@ -36,4 +40,4 @@ # on_train_end --- :::ultralytics.yolo.utils.callbacks.neptune.on_train_end -

+

\ No newline at end of file diff --git a/docs/reference/yolo/utils/callbacks/raytune.md index ba5ca79..1a7653d 100644 --- a/docs/reference/yolo/utils/callbacks/raytune.md +++ b/docs/reference/yolo/utils/callbacks/raytune.md @@ -1,4 +1,8 @@ +--- +description: Improve YOLO model performance with the on_fit_epoch_end callback. Learn to integrate with Ray Tune for hyperparameter tuning. Ultralytics YOLO docs. +--- + # on_fit_epoch_end --- :::ultralytics.yolo.utils.callbacks.raytune.on_fit_epoch_end -

+

\ No newline at end of file diff --git a/docs/reference/yolo/utils/callbacks/tensorboard.md b/docs/reference/yolo/utils/callbacks/tensorboard.md index 7362d39..34e1f37 100644 --- a/docs/reference/yolo/utils/callbacks/tensorboard.md +++ b/docs/reference/yolo/utils/callbacks/tensorboard.md @@ -1,3 +1,7 @@ +--- +description: Learn how to monitor the training process with Tensorboard using Ultralytics YOLO's "_log_scalars" and "on_batch_end" methods. +--- + # _log_scalars --- :::ultralytics.yolo.utils.callbacks.tensorboard._log_scalars @@ -16,4 +20,4 @@ # on_fit_epoch_end --- :::ultralytics.yolo.utils.callbacks.tensorboard.on_fit_epoch_end -

+

\ No newline at end of file diff --git a/docs/reference/yolo/utils/callbacks/wb.md b/docs/reference/yolo/utils/callbacks/wb.md index f62af85..49fab9e 100644 --- a/docs/reference/yolo/utils/callbacks/wb.md +++ b/docs/reference/yolo/utils/callbacks/wb.md @@ -1,3 +1,7 @@ +--- +description: Learn how to use Ultralytics YOLO's built-in callbacks `on_pretrain_routine_start` and `on_train_epoch_end` for improved training performance. +--- + # on_pretrain_routine_start --- :::ultralytics.yolo.utils.callbacks.wb.on_pretrain_routine_start @@ -16,4 +20,4 @@ # on_train_end --- :::ultralytics.yolo.utils.callbacks.wb.on_train_end -

+

\ No newline at end of file diff --git a/docs/reference/yolo/utils/checks.md b/docs/reference/yolo/utils/checks.md index b943ff1..dd5f145 100644 --- a/docs/reference/yolo/utils/checks.md +++ b/docs/reference/yolo/utils/checks.md @@ -1,3 +1,7 @@ +--- +description: 'Check functions for YOLO utils: image size, version, font, requirements, filename suffix, YAML file, YOLO, and Git version.' +--- + # is_ascii --- :::ultralytics.yolo.utils.checks.is_ascii @@ -76,4 +80,4 @@ # print_args --- :::ultralytics.yolo.utils.checks.print_args -

+

\ No newline at end of file diff --git a/docs/reference/yolo/utils/dist.md b/docs/reference/yolo/utils/dist.md index 33b8b23..33576ab 100644 --- a/docs/reference/yolo/utils/dist.md +++ b/docs/reference/yolo/utils/dist.md @@ -1,3 +1,7 @@ +--- +description: Learn how to find free network port and generate DDP (Distributed Data Parallel) command in Ultralytics YOLO with easy examples. +--- + # find_free_network_port --- :::ultralytics.yolo.utils.dist.find_free_network_port @@ -16,4 +20,4 @@ # ddp_cleanup --- :::ultralytics.yolo.utils.dist.ddp_cleanup -

+

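As a quick illustration of the DDP helpers above, `find_free_network_port` returns an open localhost port, which is useful when setting up `torch.distributed` without hand-picking a `MASTER_PORT`:

```python
from ultralytics.yolo.utils.dist import find_free_network_port

port = find_free_network_port()  # an available port on localhost
print(f'Initializing DDP on port {port}')
```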
\ No newline at end of file diff --git a/docs/reference/yolo/utils/downloads.md b/docs/reference/yolo/utils/downloads.md index 58b8d9a..2f25de3 100644 --- a/docs/reference/yolo/utils/downloads.md +++ b/docs/reference/yolo/utils/downloads.md @@ -1,3 +1,7 @@ +--- +description: Download and unzip YOLO pretrained models. Ultralytics YOLO docs utils.downloads.unzip_file, checks disk space, downloads and attempts assets. +--- + # is_url --- :::ultralytics.yolo.utils.downloads.is_url @@ -26,4 +30,4 @@ # download --- :::ultralytics.yolo.utils.downloads.download -

+

\ No newline at end of file diff --git a/docs/reference/yolo/utils/errors.md b/docs/reference/yolo/utils/errors.md index b0b3c79..d193fd3 100644 --- a/docs/reference/yolo/utils/errors.md +++ b/docs/reference/yolo/utils/errors.md @@ -1,4 +1,8 @@ +--- +description: Learn about HUBModelError in Ultralytics YOLO Docs. Resolve the error and get the most out of your YOLO model. +--- + # HUBModelError --- :::ultralytics.yolo.utils.errors.HUBModelError -

+

\ No newline at end of file diff --git a/docs/reference/yolo/utils/files.md b/docs/reference/yolo/utils/files.md index 722c9ae..2e465a2 100644 --- a/docs/reference/yolo/utils/files.md +++ b/docs/reference/yolo/utils/files.md @@ -1,3 +1,7 @@ +--- +description: 'Learn about Ultralytics YOLO files and directory utilities: WorkingDirectory, file_age, file_size, and make_dirs.' +--- + # WorkingDirectory --- :::ultralytics.yolo.utils.files.WorkingDirectory @@ -31,4 +35,4 @@ # make_dirs --- :::ultralytics.yolo.utils.files.make_dirs -

+

\ No newline at end of file diff --git a/docs/reference/yolo/utils/instance.md b/docs/reference/yolo/utils/instance.md index 33fb9b8..c3dabe7 100644 --- a/docs/reference/yolo/utils/instance.md +++ b/docs/reference/yolo/utils/instance.md @@ -1,3 +1,7 @@ +--- +description: Learn about Bounding Boxes (Bboxes) and _ntuple in Ultralytics YOLO for object detection. Improve accuracy and speed with these powerful tools. +--- + # Bboxes --- :::ultralytics.yolo.utils.instance.Bboxes @@ -11,4 +15,4 @@ # _ntuple --- :::ultralytics.yolo.utils.instance._ntuple -

+

\ No newline at end of file diff --git a/docs/reference/yolo/utils/loss.md b/docs/reference/yolo/utils/loss.md index 2e6822a..a2b361a 100644 --- a/docs/reference/yolo/utils/loss.md +++ b/docs/reference/yolo/utils/loss.md @@ -1,3 +1,7 @@ +--- +description: Learn about Varifocal Loss and Keypoint Loss in Ultralytics YOLO for advanced bounding box and pose estimation. Visit our docs for more. +--- + # VarifocalLoss --- :::ultralytics.yolo.utils.loss.VarifocalLoss @@ -11,4 +15,4 @@ # KeypointLoss --- :::ultralytics.yolo.utils.loss.KeypointLoss -

+

\ No newline at end of file diff --git a/docs/reference/yolo/utils/metrics.md b/docs/reference/yolo/utils/metrics.md index 067595a..4b97cc1 100644 --- a/docs/reference/yolo/utils/metrics.md +++ b/docs/reference/yolo/utils/metrics.md @@ -1,3 +1,7 @@ +--- +description: Explore Ultralytics YOLO's FocalLoss, DetMetrics, PoseMetrics, ClassifyMetrics, and more with Ultralytics Metrics documentation. +--- + # FocalLoss --- :::ultralytics.yolo.utils.metrics.FocalLoss @@ -91,4 +95,4 @@ # ap_per_class --- :::ultralytics.yolo.utils.metrics.ap_per_class -

+

\ No newline at end of file diff --git a/docs/reference/yolo/utils/ops.md b/docs/reference/yolo/utils/ops.md index d357d29..f3af7a0 100644 --- a/docs/reference/yolo/utils/ops.md +++ b/docs/reference/yolo/utils/ops.md @@ -1,3 +1,7 @@ +--- +description: Learn about various utility functions in Ultralytics YOLO, including x, y, width, height conversions, non-max suppression, and more. +--- + # Profile --- :::ultralytics.yolo.utils.ops.Profile @@ -131,4 +135,4 @@ # clean_str --- :::ultralytics.yolo.utils.ops.clean_str -

+

\ No newline at end of file diff --git a/docs/reference/yolo/utils/plotting.md b/docs/reference/yolo/utils/plotting.md index a84e17a..0f2f7e8 100644 --- a/docs/reference/yolo/utils/plotting.md +++ b/docs/reference/yolo/utils/plotting.md @@ -1,3 +1,7 @@ +--- +description: 'Discover the power of YOLO''s plotting functions: Colors, Labels and Images. Code examples to output targets and visualize features. Check it now.' +--- + # Colors --- :::ultralytics.yolo.utils.plotting.Colors @@ -36,4 +40,4 @@ # feature_visualization --- :::ultralytics.yolo.utils.plotting.feature_visualization -

+

\ No newline at end of file diff --git a/docs/reference/yolo/utils/tal.md b/docs/reference/yolo/utils/tal.md index 71171f5..6cfb2cd 100644 --- a/docs/reference/yolo/utils/tal.md +++ b/docs/reference/yolo/utils/tal.md @@ -1,3 +1,7 @@ +--- +description: Improve your YOLO models with Ultralytics' TaskAlignedAssigner, select_highest_overlaps, and dist2bbox utilities. Streamline your workflow today. +--- + # TaskAlignedAssigner --- :::ultralytics.yolo.utils.tal.TaskAlignedAssigner @@ -26,4 +30,4 @@ # bbox2dist --- :::ultralytics.yolo.utils.tal.bbox2dist -

+

\ No newline at end of file diff --git a/docs/reference/yolo/utils/torch_utils.md b/docs/reference/yolo/utils/torch_utils.md index 67add55..c3cf42e 100644 --- a/docs/reference/yolo/utils/torch_utils.md +++ b/docs/reference/yolo/utils/torch_utils.md @@ -1,3 +1,7 @@ +--- +description: Optimize your PyTorch models with Ultralytics YOLO's torch_utils functions such as ModelEMA, select_device, and is_parallel. +--- + # ModelEMA --- :::ultralytics.yolo.utils.torch_utils.ModelEMA @@ -116,4 +120,4 @@ # profile --- :::ultralytics.yolo.utils.torch_utils.profile -

+

\ No newline at end of file diff --git a/docs/reference/yolo/v8/classify/predict.md b/docs/reference/yolo/v8/classify/predict.md index 3b72459..b2a083c 100644 --- a/docs/reference/yolo/v8/classify/predict.md +++ b/docs/reference/yolo/v8/classify/predict.md @@ -1,3 +1,7 @@ +--- +description: Learn how to use ClassificationPredictor in Ultralytics YOLOv8 for object classification tasks in a simple and efficient way. +--- + # ClassificationPredictor --- :::ultralytics.yolo.v8.classify.predict.ClassificationPredictor @@ -6,4 +10,4 @@ # predict --- :::ultralytics.yolo.v8.classify.predict.predict -

+

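In practice most users reach `ClassificationPredictor` through the high-level `YOLO` API. A minimal sketch (the image path is a placeholder):

```python
from ultralytics import YOLO

model = YOLO('yolov8n-cls.pt')  # pretrained classification model
results = model('image.jpg')    # classify an image
print(results[0].probs)         # class probabilities
```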
\ No newline at end of file diff --git a/docs/reference/yolo/v8/classify/train.md b/docs/reference/yolo/v8/classify/train.md index 365249e..013b5b4 100644 --- a/docs/reference/yolo/v8/classify/train.md +++ b/docs/reference/yolo/v8/classify/train.md @@ -1,3 +1,7 @@ +--- +description: Train a custom image classification model using Ultralytics YOLOv8 with ClassificationTrainer. Boost accuracy and efficiency today. +--- + # ClassificationTrainer --- :::ultralytics.yolo.v8.classify.train.ClassificationTrainer @@ -6,4 +10,4 @@ # train --- :::ultralytics.yolo.v8.classify.train.train -

+

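`ClassificationTrainer` is likewise usually driven via `YOLO.train`. A minimal sketch using the small MNIST160 dataset referenced elsewhere in these docs:

```python
from ultralytics import YOLO

model = YOLO('yolov8n-cls.pt')
model.train(data='mnist160', epochs=3, imgsz=64)  # fine-tune the classifier
```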
\ No newline at end of file diff --git a/docs/reference/yolo/v8/classify/val.md b/docs/reference/yolo/v8/classify/val.md index 0c53d6f..18d5683 100644 --- a/docs/reference/yolo/v8/classify/val.md +++ b/docs/reference/yolo/v8/classify/val.md @@ -1,3 +1,7 @@ +--- +description: Ensure model classification accuracy with Ultralytics YOLO's ClassificationValidator. Validate and improve your model with ease. +--- + # ClassificationValidator --- :::ultralytics.yolo.v8.classify.val.ClassificationValidator @@ -6,4 +10,4 @@ # val --- :::ultralytics.yolo.v8.classify.val.val -

+

\ No newline at end of file diff --git a/docs/reference/yolo/v8/detect/predict.md b/docs/reference/yolo/v8/detect/predict.md index df40e4f..6286396 100644 --- a/docs/reference/yolo/v8/detect/predict.md +++ b/docs/reference/yolo/v8/detect/predict.md @@ -1,3 +1,7 @@ +--- +description: Detect and predict objects in images and videos using the Ultralytics YOLO v8 model with DetectionPredictor. +--- + # DetectionPredictor --- :::ultralytics.yolo.v8.detect.predict.DetectionPredictor @@ -6,4 +10,4 @@ # predict --- :::ultralytics.yolo.v8.detect.predict.predict -

+

\ No newline at end of file diff --git a/docs/reference/yolo/v8/detect/train.md b/docs/reference/yolo/v8/detect/train.md index 96606c1..9ddcdf6 100644 --- a/docs/reference/yolo/v8/detect/train.md +++ b/docs/reference/yolo/v8/detect/train.md @@ -1,3 +1,7 @@ +--- +description: Train and optimize custom object detection models with Ultralytics DetectionTrainer and train functions. Get started with YOLO v8 today. +--- + # DetectionTrainer --- :::ultralytics.yolo.v8.detect.train.DetectionTrainer @@ -11,4 +15,4 @@ # train --- :::ultralytics.yolo.v8.detect.train.train -

+

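A minimal sketch of launching `DetectionTrainer` through the high-level API, using the standard COCO128 example dataset:

```python
from ultralytics import YOLO

model = YOLO('yolov8n.pt')
model.train(data='coco128.yaml', epochs=3, imgsz=640)  # train the detector
```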
\ No newline at end of file diff --git a/docs/reference/yolo/v8/detect/val.md index 665f073..f7aca2c 100644 --- a/docs/reference/yolo/v8/detect/val.md +++ b/docs/reference/yolo/v8/detect/val.md @@ -1,3 +1,7 @@ +--- +description: Validate YOLOv8 detections using this PyTorch module. Ensure model accuracy with NMS IOU threshold tuning and label mapping. +--- + # DetectionValidator --- :::ultralytics.yolo.v8.detect.val.DetectionValidator @@ -6,4 +10,4 @@ # val --- :::ultralytics.yolo.v8.detect.val.val -

+

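`DetectionValidator` is normally invoked via `YOLO.val`, which returns the detection metrics:

```python
from ultralytics import YOLO

metrics = YOLO('yolov8n.pt').val(data='coco128.yaml')
print(metrics.box.map)    # mAP50-95
print(metrics.box.map50)  # mAP50
```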
\ No newline at end of file diff --git a/docs/reference/yolo/v8/pose/predict.md index a95ba1a..50620b0 100644 --- a/docs/reference/yolo/v8/pose/predict.md +++ b/docs/reference/yolo/v8/pose/predict.md @@ -1,3 +1,7 @@ +--- +description: Predict human pose coordinates and confidence scores using YOLOv8. Use on real-time video streams or static images. +--- + # PosePredictor --- :::ultralytics.yolo.v8.pose.predict.PosePredictor @@ -6,4 +10,4 @@ # predict --- :::ultralytics.yolo.v8.pose.predict.predict -

+

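A minimal sketch of running `PosePredictor` through the `YOLO` API (the image path is a placeholder); keypoints come back on the results object:

```python
from ultralytics import YOLO

results = YOLO('yolov8n-pose.pt')('image.jpg')
print(results[0].keypoints)  # per-person keypoint coordinates and confidences
```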
\ No newline at end of file diff --git a/docs/reference/yolo/v8/pose/train.md b/docs/reference/yolo/v8/pose/train.md index f22e347..be3cc59 100644 --- a/docs/reference/yolo/v8/pose/train.md +++ b/docs/reference/yolo/v8/pose/train.md @@ -1,3 +1,7 @@ +--- +description: Boost posture detection using PoseTrainer and train models using train() API. Learn PoseLoss for ultra-fast and accurate pose detection with Ultralytics YOLO. +--- + # PoseTrainer --- :::ultralytics.yolo.v8.pose.train.PoseTrainer @@ -11,4 +15,4 @@ # train --- :::ultralytics.yolo.v8.pose.train.train -

+

\ No newline at end of file diff --git a/docs/reference/yolo/v8/pose/val.md b/docs/reference/yolo/v8/pose/val.md index 323624d..d00aecb 100644 --- a/docs/reference/yolo/v8/pose/val.md +++ b/docs/reference/yolo/v8/pose/val.md @@ -1,3 +1,7 @@ +--- +description: Ensure proper human poses in images with YOLOv8 Pose Validation, part of the Ultralytics YOLO v8 suite. +--- + # PoseValidator --- :::ultralytics.yolo.v8.pose.val.PoseValidator @@ -6,4 +10,4 @@ # val --- :::ultralytics.yolo.v8.pose.val.val -

+

\ No newline at end of file diff --git a/docs/reference/yolo/v8/segment/predict.md index e632ae6..61ebf04 100644 --- a/docs/reference/yolo/v8/segment/predict.md +++ b/docs/reference/yolo/v8/segment/predict.md @@ -1,3 +1,7 @@ +--- +description: Use SegmentationPredictor in YOLOv8 for efficient object detection and segmentation. Explore Ultralytics YOLO Docs for more information. +--- + # SegmentationPredictor --- :::ultralytics.yolo.v8.segment.predict.SegmentationPredictor @@ -6,4 +10,4 @@ # predict --- :::ultralytics.yolo.v8.segment.predict.predict -

+

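A minimal sketch of using `SegmentationPredictor` via the `YOLO` API (the image path is a placeholder):

```python
from ultralytics import YOLO

results = YOLO('yolov8n-seg.pt')('image.jpg')
masks = results[0].masks      # segmentation masks, or None if nothing was detected
print(results[0].boxes.xyxy)  # matching boxes for each instance
```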
\ No newline at end of file diff --git a/docs/reference/yolo/v8/segment/train.md b/docs/reference/yolo/v8/segment/train.md index 208dc5b..e7dd9ff 100644 --- a/docs/reference/yolo/v8/segment/train.md +++ b/docs/reference/yolo/v8/segment/train.md @@ -1,3 +1,7 @@ +--- +description: Learn about SegmentationTrainer and Train in Ultralytics YOLO v8 for efficient object detection models. Improve your training with Ultralytics Docs. +--- + # SegmentationTrainer --- :::ultralytics.yolo.v8.segment.train.SegmentationTrainer @@ -11,4 +15,4 @@ # train --- :::ultralytics.yolo.v8.segment.train.train -

+

\ No newline at end of file diff --git a/docs/reference/yolo/v8/segment/val.md b/docs/reference/yolo/v8/segment/val.md index dbe4e22..87ad502 100644 --- a/docs/reference/yolo/v8/segment/val.md +++ b/docs/reference/yolo/v8/segment/val.md @@ -1,3 +1,7 @@ +--- +description: Ensure segmentation quality on large datasets with SegmentationValidator. Review and visualize results with ease. Learn more at Ultralytics Docs. +--- + # SegmentationValidator --- :::ultralytics.yolo.v8.segment.val.SegmentationValidator @@ -6,4 +10,4 @@ # val --- :::ultralytics.yolo.v8.segment.val.val -

+

\ No newline at end of file diff --git a/docs/tasks/classify.md b/docs/tasks/classify.md index 3230234..8411e2b 100644 --- a/docs/tasks/classify.md +++ b/docs/tasks/classify.md @@ -1,5 +1,6 @@ --- comments: true +description: Check YOLO class label with only one class for the whole image, using image classification. Get strategies for training and validation models. --- Image classification is the simplest of the three tasks and involves classifying an entire image into one of a set of @@ -74,7 +75,9 @@ see the [Configuration](../usage/cfg.md) page. ``` ### Dataset format + The YOLO classification dataset format is same as the torchvision format. Each class of images has its own folder and you have to simply pass the path of the dataset folder, i.e, `yolo classify train data="path/to/dataset"` + ``` dataset/ β”œβ”€β”€ train/ @@ -88,6 +91,7 @@ dataset/ β”œβ”€β”€β”€β”€ class3/ β”œβ”€β”€β”€β”€ ... ``` + ## Val Validate trained YOLOv8n-cls model accuracy on the MNIST160 dataset. No argument need to passed as the `model` retains @@ -171,19 +175,19 @@ Export a YOLOv8n-cls model to a different format like ONNX, CoreML, etc. Available YOLOv8-cls export formats are in the table below. You can predict or validate directly on exported models, i.e. `yolo predict model=yolov8n-cls.onnx`. Usage examples are shown for your model after export completes. -| Format | `format` Argument | Model | Metadata | Arguments | -|--------------------------------------------------------------------|-------------------|------------------------------|----------|-----------------------------------------------------| -| [PyTorch](https://pytorch.org/) | - | `yolov8n-cls.pt` | βœ… | - | -| [TorchScript](https://pytorch.org/docs/stable/jit.html) | `torchscript` | `yolov8n-cls.torchscript` | βœ… | `imgsz`, `optimize` | -| [ONNX](https://onnx.ai/) | `onnx` | `yolov8n-cls.onnx` | βœ… | `imgsz`, `half`, `dynamic`, `simplify`, `opset` | +| Format | `format` Argument | Model | Metadata | Arguments | +|--------------------------------------------------------------------|-------------------|-------------------------------|----------|-----------------------------------------------------| +| [PyTorch](https://pytorch.org/) | - | `yolov8n-cls.pt` | βœ… | - | +| [TorchScript](https://pytorch.org/docs/stable/jit.html) | `torchscript` | `yolov8n-cls.torchscript` | βœ… | `imgsz`, `optimize` | +| [ONNX](https://onnx.ai/) | `onnx` | `yolov8n-cls.onnx` | βœ… | `imgsz`, `half`, `dynamic`, `simplify`, `opset` | | [OpenVINO](https://docs.openvino.ai/latest/index.html) | `openvino` | `yolov8n-cls_openvino_model/` | βœ… | `imgsz`, `half` | -| [TensorRT](https://developer.nvidia.com/tensorrt) | `engine` | `yolov8n-cls.engine` | βœ… | `imgsz`, `half`, `dynamic`, `simplify`, `workspace` | -| [CoreML](https://github.com/apple/coremltools) | `coreml` | `yolov8n-cls.mlmodel` | βœ… | `imgsz`, `half`, `int8`, `nms` | -| [TF SavedModel](https://www.tensorflow.org/guide/saved_model) | `saved_model` | `yolov8n-cls_saved_model/` | βœ… | `imgsz`, `keras` | -| [TF GraphDef](https://www.tensorflow.org/api_docs/python/tf/Graph) | `pb` | `yolov8n-cls.pb` | ❌ | `imgsz` | -| [TF Lite](https://www.tensorflow.org/lite) | `tflite` | `yolov8n-cls.tflite` | βœ… | `imgsz`, `half`, `int8` | -| [TF Edge TPU](https://coral.ai/docs/edgetpu/models-intro/) | `edgetpu` | `yolov8n-cls_edgetpu.tflite` | βœ… | `imgsz` | -| [TF.js](https://www.tensorflow.org/js) | `tfjs` | `yolov8n-cls_web_model/` | βœ… | `imgsz` | -| [PaddlePaddle](https://github.com/PaddlePaddle) | `paddle` | 
`yolov8n-cls_paddle_model/` | βœ… | `imgsz` | - -See full `export` details in the [Export](https://docs.ultralytics.com/modes/export/) page. +| [TensorRT](https://developer.nvidia.com/tensorrt) | `engine` | `yolov8n-cls.engine` | βœ… | `imgsz`, `half`, `dynamic`, `simplify`, `workspace` | +| [CoreML](https://github.com/apple/coremltools) | `coreml` | `yolov8n-cls.mlmodel` | βœ… | `imgsz`, `half`, `int8`, `nms` | +| [TF SavedModel](https://www.tensorflow.org/guide/saved_model) | `saved_model` | `yolov8n-cls_saved_model/` | βœ… | `imgsz`, `keras` | +| [TF GraphDef](https://www.tensorflow.org/api_docs/python/tf/Graph) | `pb` | `yolov8n-cls.pb` | ❌ | `imgsz` | +| [TF Lite](https://www.tensorflow.org/lite) | `tflite` | `yolov8n-cls.tflite` | βœ… | `imgsz`, `half`, `int8` | +| [TF Edge TPU](https://coral.ai/docs/edgetpu/models-intro/) | `edgetpu` | `yolov8n-cls_edgetpu.tflite` | βœ… | `imgsz` | +| [TF.js](https://www.tensorflow.org/js) | `tfjs` | `yolov8n-cls_web_model/` | βœ… | `imgsz` | +| [PaddlePaddle](https://github.com/PaddlePaddle) | `paddle` | `yolov8n-cls_paddle_model/` | βœ… | `imgsz` | + +See full `export` details in the [Export](https://docs.ultralytics.com/modes/export/) page. \ No newline at end of file diff --git a/docs/tasks/detect.md b/docs/tasks/detect.md index 8ed02e0..a3e5728 100644 --- a/docs/tasks/detect.md +++ b/docs/tasks/detect.md @@ -1,5 +1,6 @@ --- comments: true +description: Learn how to use YOLOv8, an object detection model pre-trained with COCO and about the different YOLOv8 models and how to train and export them. --- Object detection is a task that involves identifying the location and class of objects in an image or video stream. @@ -166,4 +167,4 @@ Available YOLOv8 export formats are in the table below. You can predict or valid | [TF.js](https://www.tensorflow.org/js) | `tfjs` | `yolov8n_web_model/` | βœ… | `imgsz` | | [PaddlePaddle](https://github.com/PaddlePaddle) | `paddle` | `yolov8n_paddle_model/` | βœ… | `imgsz` | -See full `export` details in the [Export](https://docs.ultralytics.com/modes/export/) page. +See full `export` details in the [Export](https://docs.ultralytics.com/modes/export/) page. \ No newline at end of file diff --git a/docs/tasks/index.md b/docs/tasks/index.md index 2077118..0931a3f 100644 --- a/docs/tasks/index.md +++ b/docs/tasks/index.md @@ -1,5 +1,6 @@ --- comments: true +description: Learn how Ultralytics YOLOv8 AI framework supports detection, segmentation, classification, and pose/keypoint estimation tasks. --- # Ultralytics YOLOv8 Tasks @@ -35,7 +36,7 @@ images based on their content. It uses a variant of the EfficientNet architectur ## [Pose](pose.md) -Pose/keypoint detection is a task that involves detecting specific points in an image or video frame. These points are +Pose/keypoint detection is a task that involves detecting specific points in an image or video frame. These points are referred to as keypoints and are used to track movement or pose estimation. YOLOv8 can detect keypoints in an image or video frame with high accuracy and speed. diff --git a/docs/tasks/pose.md b/docs/tasks/pose.md index 8156834..6c43d6c 100644 --- a/docs/tasks/pose.md +++ b/docs/tasks/pose.md @@ -1,5 +1,6 @@ --- comments: true +description: Learn how to use YOLOv8 pose estimation models to identify the position of keypoints on objects in an image, and how to train, validate, predict, and export these models for use with various formats such as ONNX or CoreML. 
--- Pose estimation is a task that involves identifying the location of specific points in an image, usually referred @@ -28,7 +29,7 @@ the [ImageNet](https://github.com/ultralytics/ultralytics/blob/main/ultralytics/ Ultralytics [release](https://github.com/ultralytics/assets/releases) on first use. | Model | size<br><sup>(pixels) | mAP<sup>pose<br>50-95 | mAP<sup>pose<br>50 | Speed<br><sup>CPU ONNX<br>(ms) | Speed<br><sup>A100 TensorRT<br>(ms) | params<br><sup>(M) | FLOPs<br><sup>
(B) | -| ---------------------------------------------------------------------------------------------------- | --------------------- |-----------------------|--------------------| ------------------------------ | ----------------------------------- | ------------------ | ----------------- | +|------------------------------------------------------------------------------------------------------|-----------------------|-----------------------|--------------------|--------------------------------|-------------------------------------|--------------------|-------------------| | [YOLOv8n-pose](https://github.com/ultralytics/assets/releases/download/v0.0.0/yolov8n-pose.pt) | 640 | 50.4 | 80.1 | 131.8 | 1.18 | 3.3 | 9.2 | | [YOLOv8s-pose](https://github.com/ultralytics/assets/releases/download/v0.0.0/yolov8s-pose.pt) | 640 | 60.0 | 86.2 | 233.2 | 1.42 | 11.6 | 30.2 | | [YOLOv8m-pose](https://github.com/ultralytics/assets/releases/download/v0.0.0/yolov8m-pose.pt) | 640 | 65.0 | 88.8 | 456.3 | 2.00 | 26.4 | 81.0 | @@ -161,19 +162,19 @@ Export a YOLOv8n Pose model to a different format like ONNX, CoreML, etc. Available YOLOv8-pose export formats are in the table below. You can predict or validate directly on exported models, i.e. `yolo predict model=yolov8n-pose.onnx`. Usage examples are shown for your model after export completes. -| Format | `format` Argument | Model | Metadata | Arguments | -|--------------------------------------------------------------------|-------------------|-------------------------------|----------|-----------------------------------------------------| -| [PyTorch](https://pytorch.org/) | - | `yolov8n-pose.pt` | βœ… | - | -| [TorchScript](https://pytorch.org/docs/stable/jit.html) | `torchscript` | `yolov8n-pose.torchscript` | βœ… | `imgsz`, `optimize` | -| [ONNX](https://onnx.ai/) | `onnx` | `yolov8n-pose.onnx` | βœ… | `imgsz`, `half`, `dynamic`, `simplify`, `opset` | +| Format | `format` Argument | Model | Metadata | Arguments | +|--------------------------------------------------------------------|-------------------|--------------------------------|----------|-----------------------------------------------------| +| [PyTorch](https://pytorch.org/) | - | `yolov8n-pose.pt` | βœ… | - | +| [TorchScript](https://pytorch.org/docs/stable/jit.html) | `torchscript` | `yolov8n-pose.torchscript` | βœ… | `imgsz`, `optimize` | +| [ONNX](https://onnx.ai/) | `onnx` | `yolov8n-pose.onnx` | βœ… | `imgsz`, `half`, `dynamic`, `simplify`, `opset` | | [OpenVINO](https://docs.openvino.ai/latest/index.html) | `openvino` | `yolov8n-pose_openvino_model/` | βœ… | `imgsz`, `half` | -| [TensorRT](https://developer.nvidia.com/tensorrt) | `engine` | `yolov8n-pose.engine` | βœ… | `imgsz`, `half`, `dynamic`, `simplify`, `workspace` | -| [CoreML](https://github.com/apple/coremltools) | `coreml` | `yolov8n-pose.mlmodel` | βœ… | `imgsz`, `half`, `int8`, `nms` | -| [TF SavedModel](https://www.tensorflow.org/guide/saved_model) | `saved_model` | `yolov8n-pose_saved_model/` | βœ… | `imgsz`, `keras` | -| [TF GraphDef](https://www.tensorflow.org/api_docs/python/tf/Graph) | `pb` | `yolov8n-pose.pb` | ❌ | `imgsz` | -| [TF Lite](https://www.tensorflow.org/lite) | `tflite` | `yolov8n-pose.tflite` | βœ… | `imgsz`, `half`, `int8` | -| [TF Edge TPU](https://coral.ai/docs/edgetpu/models-intro/) | `edgetpu` | `yolov8n-pose_edgetpu.tflite` | βœ… | `imgsz` | -| [TF.js](https://www.tensorflow.org/js) | `tfjs` | `yolov8n-pose_web_model/` | βœ… | `imgsz` | -| [PaddlePaddle](https://github.com/PaddlePaddle) | `paddle` | 
`yolov8n-pose_paddle_model/` | βœ… | `imgsz` | - -See full `export` details in the [Export](https://docs.ultralytics.com/modes/export/) page. +| [TensorRT](https://developer.nvidia.com/tensorrt) | `engine` | `yolov8n-pose.engine` | βœ… | `imgsz`, `half`, `dynamic`, `simplify`, `workspace` | +| [CoreML](https://github.com/apple/coremltools) | `coreml` | `yolov8n-pose.mlmodel` | βœ… | `imgsz`, `half`, `int8`, `nms` | +| [TF SavedModel](https://www.tensorflow.org/guide/saved_model) | `saved_model` | `yolov8n-pose_saved_model/` | βœ… | `imgsz`, `keras` | +| [TF GraphDef](https://www.tensorflow.org/api_docs/python/tf/Graph) | `pb` | `yolov8n-pose.pb` | ❌ | `imgsz` | +| [TF Lite](https://www.tensorflow.org/lite) | `tflite` | `yolov8n-pose.tflite` | βœ… | `imgsz`, `half`, `int8` | +| [TF Edge TPU](https://coral.ai/docs/edgetpu/models-intro/) | `edgetpu` | `yolov8n-pose_edgetpu.tflite` | βœ… | `imgsz` | +| [TF.js](https://www.tensorflow.org/js) | `tfjs` | `yolov8n-pose_web_model/` | βœ… | `imgsz` | +| [PaddlePaddle](https://github.com/PaddlePaddle) | `paddle` | `yolov8n-pose_paddle_model/` | βœ… | `imgsz` | + +See full `export` details in the [Export](https://docs.ultralytics.com/modes/export/) page. \ No newline at end of file diff --git a/docs/tasks/segment.md b/docs/tasks/segment.md index b5ae4f1..0b9d4d2 100644 --- a/docs/tasks/segment.md +++ b/docs/tasks/segment.md @@ -1,5 +1,6 @@ --- comments: true +description: Learn what Instance segmentation is. Get pretrained YOLOv8 segment models, and how to train and export them to segments masks. Check the preformance metrics! --- Instance segmentation goes a step further than object detection and involves identifying individual objects in an image @@ -73,6 +74,7 @@ arguments see the [Configuration](../usage/cfg.md) page. ``` ### Dataset format + YOLO segmentation dataset label format extends detection format with segment points. `cls x1 y1 x2 y2 p1 p2 ... pn` @@ -168,19 +170,19 @@ Export a YOLOv8n-seg model to a different format like ONNX, CoreML, etc. Available YOLOv8-seg export formats are in the table below. You can predict or validate directly on exported models, i.e. `yolo predict model=yolov8n-seg.onnx`. Usage examples are shown for your model after export completes. 
-| Format | `format` Argument | Model | Metadata | Arguments | -|--------------------------------------------------------------------|-------------------|------------------------------|----------|-----------------------------------------------------| -| [PyTorch](https://pytorch.org/) | - | `yolov8n-seg.pt` | βœ… | - | -| [TorchScript](https://pytorch.org/docs/stable/jit.html) | `torchscript` | `yolov8n-seg.torchscript` | βœ… | `imgsz`, `optimize` | -| [ONNX](https://onnx.ai/) | `onnx` | `yolov8n-seg.onnx` | βœ… | `imgsz`, `half`, `dynamic`, `simplify`, `opset` | +| Format | `format` Argument | Model | Metadata | Arguments | +|--------------------------------------------------------------------|-------------------|-------------------------------|----------|-----------------------------------------------------| +| [PyTorch](https://pytorch.org/) | - | `yolov8n-seg.pt` | βœ… | - | +| [TorchScript](https://pytorch.org/docs/stable/jit.html) | `torchscript` | `yolov8n-seg.torchscript` | βœ… | `imgsz`, `optimize` | +| [ONNX](https://onnx.ai/) | `onnx` | `yolov8n-seg.onnx` | βœ… | `imgsz`, `half`, `dynamic`, `simplify`, `opset` | | [OpenVINO](https://docs.openvino.ai/latest/index.html) | `openvino` | `yolov8n-seg_openvino_model/` | βœ… | `imgsz`, `half` | -| [TensorRT](https://developer.nvidia.com/tensorrt) | `engine` | `yolov8n-seg.engine` | βœ… | `imgsz`, `half`, `dynamic`, `simplify`, `workspace` | -| [CoreML](https://github.com/apple/coremltools) | `coreml` | `yolov8n-seg.mlmodel` | βœ… | `imgsz`, `half`, `int8`, `nms` | -| [TF SavedModel](https://www.tensorflow.org/guide/saved_model) | `saved_model` | `yolov8n-seg_saved_model/` | βœ… | `imgsz`, `keras` | -| [TF GraphDef](https://www.tensorflow.org/api_docs/python/tf/Graph) | `pb` | `yolov8n-seg.pb` | ❌ | `imgsz` | -| [TF Lite](https://www.tensorflow.org/lite) | `tflite` | `yolov8n-seg.tflite` | βœ… | `imgsz`, `half`, `int8` | -| [TF Edge TPU](https://coral.ai/docs/edgetpu/models-intro/) | `edgetpu` | `yolov8n-seg_edgetpu.tflite` | βœ… | `imgsz` | -| [TF.js](https://www.tensorflow.org/js) | `tfjs` | `yolov8n-seg_web_model/` | βœ… | `imgsz` | -| [PaddlePaddle](https://github.com/PaddlePaddle) | `paddle` | `yolov8n-seg_paddle_model/` | βœ… | `imgsz` | - -See full `export` details in the [Export](https://docs.ultralytics.com/modes/export/) page. +| [TensorRT](https://developer.nvidia.com/tensorrt) | `engine` | `yolov8n-seg.engine` | βœ… | `imgsz`, `half`, `dynamic`, `simplify`, `workspace` | +| [CoreML](https://github.com/apple/coremltools) | `coreml` | `yolov8n-seg.mlmodel` | βœ… | `imgsz`, `half`, `int8`, `nms` | +| [TF SavedModel](https://www.tensorflow.org/guide/saved_model) | `saved_model` | `yolov8n-seg_saved_model/` | βœ… | `imgsz`, `keras` | +| [TF GraphDef](https://www.tensorflow.org/api_docs/python/tf/Graph) | `pb` | `yolov8n-seg.pb` | ❌ | `imgsz` | +| [TF Lite](https://www.tensorflow.org/lite) | `tflite` | `yolov8n-seg.tflite` | βœ… | `imgsz`, `half`, `int8` | +| [TF Edge TPU](https://coral.ai/docs/edgetpu/models-intro/) | `edgetpu` | `yolov8n-seg_edgetpu.tflite` | βœ… | `imgsz` | +| [TF.js](https://www.tensorflow.org/js) | `tfjs` | `yolov8n-seg_web_model/` | βœ… | `imgsz` | +| [PaddlePaddle](https://github.com/PaddlePaddle) | `paddle` | `yolov8n-seg_paddle_model/` | βœ… | `imgsz` | + +See full `export` details in the [Export](https://docs.ultralytics.com/modes/export/) page. 
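The export tables above map directly onto a one-line call. A minimal sketch exporting the segmentation model to ONNX:

```python
from ultralytics import YOLO

model = YOLO('yolov8n-seg.pt')
model.export(format='onnx', imgsz=640)  # writes yolov8n-seg.onnx next to the weights
```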
\ No newline at end of file diff --git a/docs/usage/callbacks.md b/docs/usage/callbacks.md index 1a11fcc..031d644 100644 --- a/docs/usage/callbacks.md +++ b/docs/usage/callbacks.md @@ -1,5 +1,6 @@ --- comments: true +description: Learn how to leverage callbacks in Ultralytics YOLO framework to perform custom tasks in trainer, validator, predictor and exporter modes. --- ## Callbacks @@ -40,7 +41,6 @@ for (result, frame) in model.track/predict(): Here are all supported callbacks. See callbacks [source code](https://github.com/ultralytics/ultralytics/blob/main/ultralytics/yolo/utils/callbacks/base.py) for additional details. - ### Trainer Callbacks | Callback | Description | @@ -60,7 +60,6 @@ Here are all supported callbacks. See callbacks [source code](https://github.com | `on_params_update` | Triggered when model parameters are updated | | `teardown` | Triggered when the training process is being cleaned up | - ### Validator Callbacks | Callback | Description | @@ -70,7 +69,6 @@ Here are all supported callbacks. See callbacks [source code](https://github.com | `on_val_batch_end` | Triggered at the end of each validation batch | | `on_val_end` | Triggered when the validation ends | - ### Predictor Callbacks | Callback | Description | @@ -86,4 +84,4 @@ Here are all supported callbacks. See callbacks [source code](https://github.com | Callback | Description | |-------------------|------------------------------------------| | `on_export_start` | Triggered when the export process starts | -| `on_export_end` | Triggered when the export process ends | +| `on_export_end` | Triggered when the export process ends | \ No newline at end of file diff --git a/docs/usage/cfg.md b/docs/usage/cfg.md index 3084689..9113ecf 100644 --- a/docs/usage/cfg.md +++ b/docs/usage/cfg.md @@ -1,5 +1,6 @@ --- comments: true +description: 'Learn about YOLO settings and modes for different tasks like detection, segmentation etc. Train and predict with custom argparse commands.' --- YOLO settings and hyperparameters play a critical role in the model's performance, speed, and accuracy. These settings @@ -247,4 +248,4 @@ it easier to debug and optimize the training process. | `name` | `'exp'` | experiment name. `exp` gets automatically incremented if not specified, i.e, `exp`, `exp2` ... | | `exist_ok` | `False` | whether to overwrite existing experiment | | `plots` | `False` | save plots during train/val | -| `save` | `False` | save train checkpoints and predict results | +| `save` | `False` | save train checkpoints and predict results | \ No newline at end of file diff --git a/docs/usage/cli.md b/docs/usage/cli.md index ebfab8f..1b07b61 100644 --- a/docs/usage/cli.md +++ b/docs/usage/cli.md @@ -1,5 +1,6 @@ --- comments: true +description: Learn how to use YOLOv8 from the Command Line Interface (CLI) through simple, single-line commands with `yolo` without Python code. --- # Command Line Interface Usage @@ -222,4 +223,4 @@ like `imgsz=320` in this example: ```bash yolo copy-cfg yolo cfg=default_copy.yaml imgsz=320 - ``` + ``` \ No newline at end of file diff --git a/docs/usage/engine.md b/docs/usage/engine.md index 1bb823f..2a77f7e 100644 --- a/docs/usage/engine.md +++ b/docs/usage/engine.md @@ -1,5 +1,6 @@ --- comments: true +description: Learn how to train and customize your models fast with the Ultralytics YOLO 'DetectionTrainer' and 'CustomTrainer'. Read more here! 
--- Both the Ultralytics YOLO command-line and python interfaces are simply a high-level abstraction on the base engine @@ -83,5 +84,4 @@ To know more about Callback triggering events and entry point, checkout our [Cal ## Other engine components There are other components that can be customized similarly like `Validators` and `Predictors` -See Reference section for more information on these. - +See Reference section for more information on these. \ No newline at end of file diff --git a/docs/usage/hyperparameter_tuning.md b/docs/usage/hyperparameter_tuning.md index c69f574..2fc271c 100644 --- a/docs/usage/hyperparameter_tuning.md +++ b/docs/usage/hyperparameter_tuning.md @@ -1,5 +1,6 @@ --- comments: true +description: Discover how to integrate hyperparameter tuning with Ray Tune and Ultralytics YOLOv8. Speed up the tuning process and optimize your model's performance. --- # Hyperparameter Tuning with Ray Tune and YOLOv8 @@ -10,7 +11,7 @@ Hyperparameter tuning (or hyperparameter optimization) is the process of determi [Ultralytics](https://ultralytics.com) YOLOv8 integrates hyperparameter tuning with Ray Tune, allowing you to easily optimize your YOLOv8 model's hyperparameters. By using Ray Tune, you can leverage advanced search algorithms, parallelism, and early stopping to speed up the tuning process and achieve better model performance. -### Ray Tune +### Ray Tune
@@ -88,7 +89,6 @@ The following table lists the default search space parameters for hyperparameter | mixup | `tune.uniform(0.0, 1.0)` | Mixup augmentation probability | | copy_paste | `tune.uniform(0.0, 1.0)` | Copy-paste augmentation probability | - ## Custom Search Space Example In this example, we demonstrate how to use a custom search space for hyperparameter tuning with Ray Tune and YOLOv8. By providing a custom search space, you can focus the tuning process on specific hyperparameters of interest. diff --git a/docs/usage/python.md b/docs/usage/python.md index 25f2448..04b813b 100644 --- a/docs/usage/python.md +++ b/docs/usage/python.md @@ -1,5 +1,6 @@ --- comments: true +description: Integrate YOLOv8 in Python. Load, use pretrained models, train, and infer images. Export to ONNX. Track objects in videos. --- # Python Usage @@ -278,4 +279,4 @@ You can easily customize Trainers to support custom tasks or explore R&D ideas. Learn more about Customizing `Trainers`, `Validators` and `Predictors` to suit your project needs in the Customization Section. -[Customization tutorials](engine.md){ .md-button .md-button--primary} +[Customization tutorials](engine.md){ .md-button .md-button--primary} \ No newline at end of file diff --git a/docs/yolov5/environments/aws_quickstart_tutorial.md b/docs/yolov5/environments/aws_quickstart_tutorial.md index 72dc714..dbcfb3a 100644 --- a/docs/yolov5/environments/aws_quickstart_tutorial.md +++ b/docs/yolov5/environments/aws_quickstart_tutorial.md @@ -1,10 +1,11 @@ --- comments: true +description: Get started with YOLOv5 on AWS. Our comprehensive guide provides everything you need to know to run YOLOv5 on an Amazon Deep Learning instance. --- # YOLOv5 πŸš€ on AWS Deep Learning Instance: A Comprehensive Guide -This guide will help new users run YOLOv5 on an Amazon Web Services (AWS) Deep Learning instance. AWS offers a [Free Tier](https://aws.amazon.com/free/) and a [credit program](https://aws.amazon.com/activate/) for a quick and affordable start. +This guide will help new users run YOLOv5 on an Amazon Web Services (AWS) Deep Learning instance. AWS offers a [Free Tier](https://aws.amazon.com/free/) and a [credit program](https://aws.amazon.com/activate/) for a quick and affordable start. Other quickstart options for YOLOv5 include our [Colab Notebook](https://colab.research.google.com/github/ultralytics/yolov5/blob/master/tutorial.ipynb) Open In Colab Open In Kaggle, [GCP Deep Learning VM](https://docs.ultralytics.com/yolov5/environments/google_cloud_quickstart_tutorial), and our Docker image at [Docker Hub](https://hub.docker.com/r/ultralytics/yolov5) Docker Pulls. *Updated: 21 April 2023*. diff --git a/docs/yolov5/environments/docker_image_quickstart_tutorial.md b/docs/yolov5/environments/docker_image_quickstart_tutorial.md index 0e408f5..365139d 100644 --- a/docs/yolov5/environments/docker_image_quickstart_tutorial.md +++ b/docs/yolov5/environments/docker_image_quickstart_tutorial.md @@ -1,10 +1,11 @@ --- comments: true +description: Get started with YOLOv5 in a Docker container. Learn to set up and run YOLOv5 models and explore other quickstart options. πŸš€ --- # Get Started with YOLOv5 πŸš€ in Docker -This tutorial will guide you through the process of setting up and running YOLOv5 in a Docker container. +This tutorial will guide you through the process of setting up and running YOLOv5 in a Docker container. 
You can also explore other quickstart options for YOLOv5, such as our [Colab Notebook](https://colab.research.google.com/github/ultralytics/yolov5/blob/master/tutorial.ipynb) Open In Colab Open In Kaggle, [GCP Deep Learning VM](https://docs.ultralytics.com/yolov5/environments/google_cloud_quickstart_tutorial), and [Amazon AWS](https://docs.ultralytics.com/yolov5/environments/aws_quickstart_tutorial). *Updated: 21 April 2023*. diff --git a/docs/yolov5/environments/google_cloud_quickstart_tutorial.md b/docs/yolov5/environments/google_cloud_quickstart_tutorial.md index bec7526..47f53b1 100644 --- a/docs/yolov5/environments/google_cloud_quickstart_tutorial.md +++ b/docs/yolov5/environments/google_cloud_quickstart_tutorial.md @@ -1,10 +1,11 @@ --- comments: true +description: Set up YOLOv5 on a Google Cloud Platform (GCP) Deep Learning VM. Train, test, detect, and export YOLOv5 models. Tutorial updated April 2023. --- # Run YOLOv5 πŸš€ on Google Cloud Platform (GCP) Deep Learning Virtual Machine (VM) ⭐ -This tutorial will guide you through the process of setting up and running YOLOv5 on a GCP Deep Learning VM. New GCP users are eligible for a [$300 free credit offer](https://cloud.google.com/free/docs/gcp-free-tier#free-trial). +This tutorial will guide you through the process of setting up and running YOLOv5 on a GCP Deep Learning VM. New GCP users are eligible for a [$300 free credit offer](https://cloud.google.com/free/docs/gcp-free-tier#free-trial). You can also explore other quickstart options for YOLOv5, such as our [Colab Notebook](https://colab.research.google.com/github/ultralytics/yolov5/blob/master/tutorial.ipynb) Open In Colab Open In Kaggle, [Amazon AWS](https://docs.ultralytics.com/yolov5/environments/aws_quickstart_tutorial) and our Docker image at [Docker Hub](https://hub.docker.com/r/ultralytics/yolov5) Docker Pulls. *Updated: 21 April 2023*. @@ -44,4 +45,4 @@ python detect.py --weights yolov5s.pt --source path/to/images # run inference o python export.py --weights yolov5s.pt --include onnx coreml tflite # export models to other formats ``` -GCP terminal +GCP terminal \ No newline at end of file diff --git a/docs/yolov5/index.md b/docs/yolov5/index.md index 92ef9cd..e9db84d 100644 --- a/docs/yolov5/index.md +++ b/docs/yolov5/index.md @@ -1,5 +1,6 @@ --- comments: true +description: Discover the YOLOv5 object detection model designed to deliver fast and accurate real-time results. Let's dive into this documentation to harness its full potential! --- # Ultralytics YOLOv5 @@ -10,13 +11,13 @@ comments: true

- YOLOv5 CI - YOLOv5 Citation - Docker Pulls -
- Run on Gradient - Open In Colab - Open In Kaggle +YOLOv5 CI +YOLOv5 Citation +Docker Pulls +
+Run on Gradient +Open In Colab +Open In Kaggle

diff --git a/docs/yolov5/quickstart_tutorial.md b/docs/yolov5/quickstart_tutorial.md index 01bc539..055a4ab 100644 --- a/docs/yolov5/quickstart_tutorial.md +++ b/docs/yolov5/quickstart_tutorial.md @@ -1,5 +1,6 @@ --- comments: true +description: Learn how to quickly start using YOLOv5 including installation, inference, and training on this Ultralytics Docs page. --- # YOLOv5 Quickstart @@ -18,8 +19,6 @@ cd yolov5 pip install -r requirements.txt # install ``` - - ## Inference YOLOv5 [PyTorch Hub](https://docs.ultralytics.com/yolov5/tutorials/pytorch_hub_model_loading) inference. [Models](https://github.com/ultralytics/yolov5/tree/master/models) download automatically from the latest @@ -77,4 +76,4 @@ python train.py --data coco.yaml --epochs 300 --weights '' --cfg yolov5n.yaml - yolov5x 16 ``` - + \ No newline at end of file diff --git a/docs/yolov5/tutorials/architecture_description.md b/docs/yolov5/tutorials/architecture_description.md index 3781d0f..71ef2bb 100644 --- a/docs/yolov5/tutorials/architecture_description.md +++ b/docs/yolov5/tutorials/architecture_description.md @@ -1,10 +1,12 @@ --- comments: true +description: 'Ultralytics YOLOv5 Docs: Learn model structure, data augmentation & training strategies. Build targets and the losses of object detection.' --- ## 1. Model Structure YOLOv5 (v6.0/6.1) consists of: + - **Backbone**: `New CSP-Darknet53` - **Neck**: `SPPF`, `New CSP-PAN` - **Head**: `YOLOv3 Head` @@ -13,10 +15,9 @@ Model structure (`yolov5l.yaml`): ![yolov5](https://user-images.githubusercontent.com/31005897/172404576-c260dcf9-76bb-4bc8-b6a9-f2d987792583.png) - Some minor changes compared to previous versions: -1. Replace the `Focus` structure with `6x6 Conv2d`(more efficient, refer #4825) +1. Replace the `Focus` structure with `6x6 Conv2d`(more efficient, refer #4825) 2. Replace the `SPP` structure with `SPPF`(more than double the speed)
@@ -79,6 +80,7 @@ if __name__ == '__main__':
 ```

 result:
+
 ```
 True
 spp time: 0.5373051166534424
@@ -87,30 +89,26 @@ sppf time: 0.20780706405639648
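The `True` and the two timings shown in this hunk come from a comparison script along these lines (a sketch reconstructed from the printed output; the page's actual code may differ slightly):

```python
import time

import torch
import torch.nn as nn


class SPP(nn.Module):
    """Spatial Pyramid Pooling: three parallel max-pools at growing kernel sizes."""

    def __init__(self):
        super().__init__()
        self.maxpool1 = nn.MaxPool2d(5, 1, padding=2)
        self.maxpool2 = nn.MaxPool2d(9, 1, padding=4)
        self.maxpool3 = nn.MaxPool2d(13, 1, padding=6)

    def forward(self, x):
        return torch.cat([x, self.maxpool1(x), self.maxpool2(x), self.maxpool3(x)], 1)


class SPPF(nn.Module):
    """SPP-Fast: one 5x5 max-pool applied three times in series.

    Two chained 5x5 pools cover a 9x9 window and three cover 13x13, so the
    output matches SPP exactly while reusing intermediate results.
    """

    def __init__(self):
        super().__init__()
        self.maxpool = nn.MaxPool2d(5, 1, padding=2)

    def forward(self, x):
        o1 = self.maxpool(x)
        o2 = self.maxpool(o1)
        return torch.cat([x, o1, o2, self.maxpool(o2)], 1)


if __name__ == '__main__':
    x = torch.rand(8, 32, 16, 16)
    spp, sppf = SPP(), SPPF()
    print(torch.equal(spp(x), sppf(x)))  # True: identical outputs
    for m, name in ((spp, 'spp'), (sppf, 'sppf')):
        t = time.time()
        for _ in range(100):
            m(x)
        print(f'{name} time: {time.time() - t}')
```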
- - ## 2. Data Augmentation - Mosaic - + - Copy paste - + - Random affine(Rotation, Scale, Translation and Shear) - + - MixUp - + - Albumentations - Augment HSV(Hue, Saturation, Value) - + - Random horizontal flip - - - + ## 3. Training Strategies @@ -121,13 +119,11 @@ sppf time: 0.20780706405639648 - Mixed precision - Evolve hyper-parameters - - ## 4. Others ### 4.1 Compute Losses -The YOLOv5 loss consists of three parts: +The YOLOv5 loss consists of three parts: - Classes loss(BCE loss) - Objectness loss(BCE loss) @@ -136,12 +132,14 @@ The YOLOv5 loss consists of three parts: ![loss](https://latex.codecogs.com/svg.image?Loss=\lambda_1L_{cls}+\lambda_2L_{obj}+\lambda_3L_{loc}) ### 4.2 Balance Losses + The objectness losses of the three prediction layers(`P3`, `P4`, `P5`) are weighted differently. The balance weights are `[4.0, 1.0, 0.4]` respectively. ![obj_loss](https://latex.codecogs.com/svg.image?L_{obj}=4.0\cdot&space;L_{obj}^{small}+1.0\cdot&space;L_{obj}^{medium}+0.4\cdot&space;L_{obj}^{large}) ### 4.3 Eliminate Grid Sensitivity -In YOLOv2 and YOLOv3, the formula for calculating the predicted target information is: + +In YOLOv2 and YOLOv3, the formula for calculating the predicted target information is: ![b_x](https://latex.codecogs.com/svg.image?b_x=\sigma(t_x)+c_x) ![b_y](https://latex.codecogs.com/svg.image?b_y=\sigma(t_y)+c_y) @@ -152,12 +150,12 @@ In YOLOv2 and YOLOv3, the formula for calculating the predicted target informati -In YOLOv5, the formula is: +In YOLOv5, the formula is: ![bx](https://latex.codecogs.com/svg.image?b_x=(2\cdot\sigma(t_x)-0.5)+c_x) ![by](https://latex.codecogs.com/svg.image?b_y=(2\cdot\sigma(t_y)-0.5)+c_y) ![bw](https://latex.codecogs.com/svg.image?b_w=p_w\cdot(2\cdot\sigma(t_w))^2) -![bh](https://latex.codecogs.com/svg.image?b_h=p_h\cdot(2\cdot\sigma(t_h))^2) +![bh](https://latex.codecogs.com/svg.image?b_h=p_h\cdot(2\cdot\sigma(t_h))^2) Compare the center point offset before and after scaling. The center point offset range is adjusted from (0, 1) to (-0.5, 1.5). Therefore, offset can easily get 0 or 1. @@ -168,8 +166,8 @@ Compare the height and width scaling ratio(relative to anchor) before and after - ### 4.4 Build Targets + Match positive samples: - Calculate the aspect ratio of GT and Anchor Templates @@ -194,4 +192,4 @@ Match positive samples: - Because the center point offset range is adjusted from (0, 1) to (-0.5, 1.5). GT Box can be assigned to more anchors. - + \ No newline at end of file diff --git a/docs/yolov5/tutorials/clearml_logging_integration.md b/docs/yolov5/tutorials/clearml_logging_integration.md index 306e565..f0843cf 100644 --- a/docs/yolov5/tutorials/clearml_logging_integration.md +++ b/docs/yolov5/tutorials/clearml_logging_integration.md @@ -1,5 +1,6 @@ --- comments: true +description: Integrate ClearML with YOLOv5 to track experiments and manage data versions. Optimize hyperparameters and remotely monitor your runs. --- # ClearML Integration @@ -238,4 +239,4 @@ ClearML comes with autoscalers too! This tool will automatically spin up new rem Check out the autoscalers getting started video below. 
-[![Watch the video](https://img.youtube.com/vi/j4XVMAaUt3E/0.jpg)](https://youtu.be/j4XVMAaUt3E) +[![Watch the video](https://img.youtube.com/vi/j4XVMAaUt3E/0.jpg)](https://youtu.be/j4XVMAaUt3E) \ No newline at end of file diff --git a/docs/yolov5/tutorials/comet_logging_integration.md b/docs/yolov5/tutorials/comet_logging_integration.md index 5f7fd08..e1716c9 100644 --- a/docs/yolov5/tutorials/comet_logging_integration.md +++ b/docs/yolov5/tutorials/comet_logging_integration.md @@ -1,5 +1,6 @@ --- comments: true +description: Learn how to use YOLOv5 with Comet, a tool for logging and visualizing machine learning model metrics in real-time. Install, log and analyze seamlessly. --- @@ -218,7 +219,7 @@ If your training run is interrupted for any reason, e.g. disrupted internet conn The Run Path has the following format `comet:////`. -This will restore the run to its state before the interruption, which includes restoring the model from a checkpoint, restoring all hyperparameters and training arguments and downloading Comet dataset Artifacts if they were used in the original run. The resumed run will continue logging to the existing Experiment in the Comet UI +This will restore the run to its state before the interruption, which includes restoring the model from a checkpoint, restoring all hyperparameters and training arguments and downloading Comet dataset Artifacts if they were used in the original run. The resumed run will continue logging to the existing Experiment in the Comet UI ```shell python train.py \ @@ -259,4 +260,4 @@ comet optimizer -j utils/loggers/comet/hpo.py \ Comet provides a number of ways to visualize the results of your sweep. Take a look at a [project with a completed sweep here](https://www.comet.com/examples/comet-example-yolov5/view/PrlArHGuuhDTKC1UuBmTtOSXD/panels?utm_source=yolov5&utm_medium=partner&utm_campaign=partner_yolov5_2022&utm_content=github) -hyperparameter-yolo +hyperparameter-yolo \ No newline at end of file diff --git a/docs/yolov5/tutorials/hyperparameter_evolution.md b/docs/yolov5/tutorials/hyperparameter_evolution.md index bcbbefa..eebb554 100644 --- a/docs/yolov5/tutorials/hyperparameter_evolution.md +++ b/docs/yolov5/tutorials/hyperparameter_evolution.md @@ -1,12 +1,12 @@ --- comments: true +description: Learn to find optimum YOLOv5 hyperparameters via **evolution**. A guide to learn hyperparameter tuning with Genetic Algorithms. --- -πŸ“š This guide explains **hyperparameter evolution** for YOLOv5 πŸš€. Hyperparameter evolution is a method of [Hyperparameter Optimization](https://en.wikipedia.org/wiki/Hyperparameter_optimization) using a [Genetic Algorithm](https://en.wikipedia.org/wiki/Genetic_algorithm) (GA) for optimization. UPDATED 25 September 2022. +πŸ“š This guide explains **hyperparameter evolution** for YOLOv5 πŸš€. Hyperparameter evolution is a method of [Hyperparameter Optimization](https://en.wikipedia.org/wiki/Hyperparameter_optimization) using a [Genetic Algorithm](https://en.wikipedia.org/wiki/Genetic_algorithm) (GA) for optimization. UPDATED 25 September 2022. Hyperparameters in ML control various aspects of training, and finding optimal values for them can be a challenge. Traditional methods like grid searches can quickly become intractable due to 1) the high dimensional search space 2) unknown correlations among the dimensions, and 3) expensive nature of evaluating the fitness at each point, making GA a suitable candidate for hyperparameter searches. 
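As a preview of what the GA actually does each generation (Section 3 below mutates with an 80% probability and a 0.04 variance, i.e. sigma = 0.2), a simplified sketch of the mutation step — not the exact `train.py` implementation — might look like this:

```python
import numpy as np


def mutate(parent: np.ndarray, mp: float = 0.8, sigma: float = 0.2) -> np.ndarray:
    """Multiply ~80% of genes by a random factor near 1, clipped to [0.3, 3.0]."""
    rng = np.random.default_rng()
    v = np.ones_like(parent)
    while (v == 1.0).all():  # re-draw until at least one gene actually mutates
        mask = rng.random(parent.shape) < mp
        v = (1.0 + mask * rng.normal(0.0, sigma, parent.shape)).clip(0.3, 3.0)
    return parent * v


# e.g. offspring of the current best [lr0, momentum] hyperparameters
print(mutate(np.array([0.01, 0.937])))
```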
- ## Before You Start Clone repo and install [requirements.txt](https://github.com/ultralytics/yolov5/blob/master/requirements.txt) in a [**Python>=3.7.0**](https://www.python.org/) environment, including [**PyTorch>=1.7**](https://pytorch.org/get-started/locally/). [Models](https://github.com/ultralytics/yolov5/tree/master/models) and [datasets](https://github.com/ultralytics/yolov5/tree/master/data) download automatically from the latest YOLOv5 [release](https://github.com/ultralytics/yolov5/releases). @@ -17,7 +17,6 @@ cd yolov5 pip install -r requirements.txt # install ``` - ## 1. Initialize Hyperparameters YOLOv5 has about 30 hyperparameters used for various training settings. These are defined in `*.yaml` files in the `/data/hyps` directory. Better initial guesses will produce better final results, so it is important to initialize these values properly before evolving. If in doubt, simply use the default values, which are optimized for YOLOv5 COCO training from scratch. @@ -73,10 +72,13 @@ def fitness(x): ## 3. Evolve Evolution is performed about a base scenario which we seek to improve upon. The base scenario in this example is finetuning COCO128 for 10 epochs using pretrained YOLOv5s. The base scenario training command is: + ```bash python train.py --epochs 10 --data coco128.yaml --weights yolov5s.pt --cache ``` + To evolve hyperparameters **specific to this scenario**, starting from our initial values defined in **Section 1.**, and maximizing the fitness defined in **Section 2.**, append `--evolve`: + ```bash # Single-GPU python train.py --epochs 10 --data coco128.yaml --weights yolov5s.pt --cache --evolve @@ -100,6 +102,7 @@ The default evolution settings will run the base scenario 300 times, i.e. for 30 https://github.com/ultralytics/yolov5/blob/6a3ee7cf03efb17fbffde0e68b1a854e80fe3213/train.py#L608 The main genetic operators are **crossover** and **mutation**. In this work mutation is used, with an 80% probability and a 0.04 variance to create new offspring based on a combination of the best parents from all previous generations. Results are logged to `runs/evolve/exp/evolve.csv`, and the highest fitness offspring is saved every generation as `runs/evolve/hyp_evolved.yaml`: + ```yaml # YOLOv5 Hyperparameter Evolution Results # Best generation: 287 @@ -140,14 +143,12 @@ copy_paste: 0.0 # segment copy-paste (probability) We recommend a minimum of 300 generations of evolution for best results. Note that **evolution is generally expensive and time-consuming**, as the base scenario is trained hundreds of times, possibly requiring hundreds or thousands of GPU hours. - ## 4. Visualize `evolve.csv` is plotted as `evolve.png` by `utils.plots.plot_evolve()` after evolution finishes with one subplot per hyperparameter showing fitness (y-axis) vs hyperparameter values (x-axis). Yellow indicates higher concentrations. Vertical distributions indicate that a parameter has been disabled and does not mutate. This is user selectable in the `meta` dictionary in train.py, and is useful for fixing parameters and preventing them from evolving. 
![evolve](https://user-images.githubusercontent.com/26833433/89130469-f43e8e00-d4b9-11ea-9e28-f8ae3622516d.png) - ## Environments YOLOv5 may be run in any of the following up-to-date verified environments (with all dependencies including [CUDA](https://developer.nvidia.com/cuda)/[CUDNN](https://developer.nvidia.com/cudnn), [Python](https://www.python.org/) and [PyTorch](https://pytorch.org/) preinstalled): @@ -157,7 +158,6 @@ YOLOv5 may be run in any of the following up-to-date verified environments (with - **Amazon** Deep Learning AMI. See [AWS Quickstart Guide](https://docs.ultralytics.com/yolov5/environments/aws_quickstart_tutorial/) - **Docker Image**. See [Docker Quickstart Guide](https://docs.ultralytics.com/yolov5/environments/docker_image_quickstart_tutorial/) Docker Pulls - ## Status YOLOv5 CI diff --git a/docs/yolov5/tutorials/model_ensembling.md b/docs/yolov5/tutorials/model_ensembling.md index f0b7ef5..a76996d 100644 --- a/docs/yolov5/tutorials/model_ensembling.md +++ b/docs/yolov5/tutorials/model_ensembling.md @@ -1,14 +1,14 @@ --- comments: true +description: Learn how to ensemble YOLOv5 models for improved mAP and Recall! Clone the repo, install requirements, and start testing and inference. --- -πŸ“š This guide explains how to use YOLOv5 πŸš€ **model ensembling** during testing and inference for improved mAP and Recall. +πŸ“š This guide explains how to use YOLOv5 πŸš€ **model ensembling** during testing and inference for improved mAP and Recall. UPDATED 25 September 2022. From [https://en.wikipedia.org/wiki/Ensemble_learning](https://en.wikipedia.org/wiki/Ensemble_learning): > Ensemble modeling is a process where multiple diverse models are created to predict an outcome, either by using many different modeling algorithms or using different training data sets. The ensemble model then aggregates the prediction of each base model and results in once final prediction for the unseen data. The motivation for using ensemble models is to reduce the generalization error of the prediction. As long as the base models are diverse and independent, the prediction error of the model decreases when the ensemble approach is used. The approach seeks the wisdom of crowds in making a prediction. Even though the ensemble model has multiple base models within the model, it acts and performs as a single model. - ## Before You Start Clone repo and install [requirements.txt](https://github.com/ultralytics/yolov5/blob/master/requirements.txt) in a [**Python>=3.7.0**](https://www.python.org/) environment, including [**PyTorch>=1.7**](https://pytorch.org/get-started/locally/). [Models](https://github.com/ultralytics/yolov5/tree/master/models) and [datasets](https://github.com/ultralytics/yolov5/tree/master/data) download automatically from the latest YOLOv5 [release](https://github.com/ultralytics/yolov5/releases). @@ -22,11 +22,13 @@ pip install -r requirements.txt # install ## Test Normally Before ensembling we want to establish the baseline performance of a single model. This command tests YOLOv5x on COCO val2017 at image size 640 pixels. `yolov5x.pt` is the largest and most accurate model available. Other options are `yolov5s.pt`, `yolov5m.pt` and `yolov5l.pt`, or you own checkpoint from training a custom dataset `./weights/best.pt`. For details on all available models please see our README [table](https://github.com/ultralytics/yolov5#pretrained-checkpoints). 
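Mechanically, passing several files to `--weights` builds an ensemble whose raw predictions are merged before NMS. Below is a minimal sketch of the idea — YOLOv5's own `Ensemble` module differs in detail — assuming each member model maps a batch to an `(N, boxes, 5 + classes)` tensor:

```python
import torch
import torch.nn as nn


class Ensemble(nn.ModuleList):
    """Concatenate every member model's detections; NMS later de-duplicates them."""

    def forward(self, x):
        y = [model(x) for model in self]  # each: (N, num_boxes, 5 + num_classes)
        return torch.cat(y, dim=1)        # (N, total_boxes, 5 + num_classes)
```

First, though, establish the single-model baseline: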
+ ```bash python val.py --weights yolov5x.pt --data coco.yaml --img 640 --half ``` Output: + ```shell val: data=./data/coco.yaml, weights=['yolov5x.pt'], batch_size=32, imgsz=640, conf_thres=0.001, iou_thres=0.65, task=val, device=, single_cls=False, augment=False, verbose=False, save_txt=False, save_hybrid=False, save_conf=False, save_json=True, project=runs/val, name=exp, exist_ok=False, half=True YOLOv5 πŸš€ v5.0-267-g6a3ee7c torch 1.9.0+cu102 CUDA:0 (Tesla P100-PCIE-16GB, 16280.875MB) @@ -59,6 +61,7 @@ Evaluating pycocotools mAP... saving runs/val/exp/yolov5x_predictions.json... ## Ensemble Test Multiple pretrained models may be ensembled together at test and inference time by simply appending extra models to the `--weights` argument in any existing val.py or detect.py command. This example tests an ensemble of 2 models together: + - YOLOv5x - YOLOv5l6 @@ -67,6 +70,7 @@ python val.py --weights yolov5x.pt yolov5l6.pt --data coco.yaml --img 640 --half ``` Output: + ```shell val: data=./data/coco.yaml, weights=['yolov5x.pt', 'yolov5l6.pt'], batch_size=32, imgsz=640, conf_thres=0.001, iou_thres=0.6, task=val, device=, single_cls=False, augment=False, verbose=False, save_txt=False, save_hybrid=False, save_conf=False, save_json=True, project=runs/val, name=exp, exist_ok=False, half=True YOLOv5 πŸš€ v5.0-267-g6a3ee7c torch 1.9.0+cu102 CUDA:0 (Tesla P100-PCIE-16GB, 16280.875MB) @@ -101,11 +105,13 @@ Evaluating pycocotools mAP... saving runs/val/exp3/yolov5x_predictions.json... ## Ensemble Inference Append extra models to the `--weights` argument to run ensemble inference: + ```bash python detect.py --weights yolov5x.pt yolov5l6.pt --img 640 --source data/images ``` Output: + ```bash detect: weights=['yolov5x.pt', 'yolov5l6.pt'], source=data/images, imgsz=640, conf_thres=0.25, iou_thres=0.45, max_det=1000, device=, view_img=False, save_txt=False, save_conf=False, save_crop=False, nosave=False, classes=None, agnostic_nms=False, augment=False, update=False, project=runs/detect, name=exp, exist_ok=False, line_width=3, hide_labels=False, hide_conf=False, half=False YOLOv5 πŸš€ v5.0-267-g6a3ee7c torch 1.9.0+cu102 CUDA:0 (Tesla P100-PCIE-16GB, 16280.875MB) @@ -121,8 +127,8 @@ image 2/2 /content/yolov5/data/images/zidane.jpg: 384x640 3 persons, 2 ties, Don Results saved to runs/detect/exp2 Done. (0.223s) ``` - + ## Environments @@ -133,7 +139,6 @@ YOLOv5 may be run in any of the following up-to-date verified environments (with - **Amazon** Deep Learning AMI. See [AWS Quickstart Guide](https://docs.ultralytics.com/yolov5/environments/aws_quickstart_tutorial/) - **Docker Image**. See [Docker Quickstart Guide](https://docs.ultralytics.com/yolov5/environments/docker_image_quickstart_tutorial/) Docker Pulls - ## Status YOLOv5 CI diff --git a/docs/yolov5/tutorials/model_export.md b/docs/yolov5/tutorials/model_export.md index 1c562dd..09e7268 100644 --- a/docs/yolov5/tutorials/model_export.md +++ b/docs/yolov5/tutorials/model_export.md @@ -1,5 +1,6 @@ --- comments: true +description: Export YOLOv5 models to TFLite, ONNX, CoreML, and TensorRT formats. Achieve up to 5x GPU speedup using TensorRT. Benchmarks included. --- # TFLite, ONNX, CoreML, TensorRT Export @@ -41,10 +42,10 @@ YOLOv5 inference is officially supported in 11 formats: | [TensorFlow.js](https://www.tensorflow.org/js) | `tfjs` | `yolov5s_web_model/` | | [PaddlePaddle](https://github.com/PaddlePaddle) | `paddle` | `yolov5s_paddle_model/` | - ## Benchmarks Benchmarks below run on a Colab Pro with the YOLOv5 tutorial notebook Open In Colab. 
To reproduce: + ```bash python benchmarks.py --weights yolov5s.pt --imgsz 640 --device 0 ``` @@ -98,6 +99,7 @@ Benchmarks complete (241.20s) ## Export a Trained YOLOv5 Model This command exports a pretrained YOLOv5s model to TorchScript and ONNX formats. `yolov5s.pt` is the 'small' model, the second-smallest model available. Other options are `yolov5n.pt`, `yolov5m.pt`, `yolov5l.pt` and `yolov5x.pt`, along with their P6 counterparts i.e. `yolov5s6.pt` or you own custom training checkpoint i.e. `runs/exp/weights/best.pt`. For details on all available models please see our README [table](https://github.com/ultralytics/yolov5#pretrained-checkpoints). + ```bash python export.py --weights yolov5s.pt --include torchscript onnx ``` @@ -105,6 +107,7 @@ python export.py --weights yolov5s.pt --include torchscript onnx πŸ’‘ ProTip: Add `--half` to export models at FP16 half precision for smaller file sizes Output: + ```bash export: data=data/coco128.yaml, weights=['yolov5s.pt'], imgsz=[640, 640], batch_size=1, device=cpu, half=False, inplace=False, train=False, keras=False, optimize=False, int8=False, dynamic=False, simplify=False, opset=12, verbose=False, workspace=4, nms=False, agnostic_nms=False, topk_per_class=100, topk_all=100, iou_thres=0.45, conf_thres=0.25, include=['torchscript', 'onnx'] YOLOv5 πŸš€ v6.2-104-ge3e5122 Python-3.7.13 torch-1.12.1+cu113 CPU @@ -137,10 +140,10 @@ The 3 exported models will be saved alongside the original PyTorch model: [Netron Viewer](https://github.com/lutzroeder/netron) is recommended for visualizing exported models:

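Between Netron inspection and the repo scripts below, an exported TorchScript file can also be smoke-tested directly from Python (a hedged sketch; the file name assumes the export command above):

```python
import torch

model = torch.jit.load('yolov5s.torchscript')  # produced by export.py above
model.eval()
x = torch.zeros(1, 3, 640, 640)  # dummy BCHW input at the export imgsz
with torch.no_grad():
    y = model(x)
print(type(y))
```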
- ## Exported Model Usage Examples `detect.py` runs inference on exported models: + ```bash python detect.py --weights yolov5s.pt # PyTorch yolov5s.torchscript # TorchScript @@ -156,6 +159,7 @@ python detect.py --weights yolov5s.pt # PyTorch ``` `val.py` runs validation on exported models: + ```bash python val.py --weights yolov5s.pt # PyTorch yolov5s.torchscript # TorchScript @@ -171,6 +175,7 @@ python val.py --weights yolov5s.pt # PyTorch ``` Use PyTorch Hub with exported YOLOv5 models: + ``` python import torch @@ -200,6 +205,7 @@ results.print() # or .show(), .save(), .crop(), .pandas(), etc. ## OpenCV DNN inference OpenCV inference with ONNX models: + ```bash python export.py --weights yolov5s.pt --include onnx @@ -232,7 +238,6 @@ YOLOv5 may be run in any of the following up-to-date verified environments (with - **Amazon** Deep Learning AMI. See [AWS Quickstart Guide](https://docs.ultralytics.com/yolov5/environments/aws_quickstart_tutorial/) - **Docker Image**. See [Docker Quickstart Guide](https://docs.ultralytics.com/yolov5/environments/docker_image_quickstart_tutorial/) Docker Pulls - ## Status YOLOv5 CI diff --git a/docs/yolov5/tutorials/model_pruning_and_sparsity.md b/docs/yolov5/tutorials/model_pruning_and_sparsity.md index 1848ebe..0793f66 100644 --- a/docs/yolov5/tutorials/model_pruning_and_sparsity.md +++ b/docs/yolov5/tutorials/model_pruning_and_sparsity.md @@ -1,5 +1,6 @@ --- comments: true +description: Learn how to apply pruning to your YOLOv5 models. See the before and after performance with an explanation of sparsity and more. --- πŸ“š This guide explains how to apply **pruning** to YOLOv5 πŸš€ models. @@ -18,11 +19,13 @@ pip install -r requirements.txt # install ## Test Normally Before pruning we want to establish a baseline performance to compare to. This command tests YOLOv5x on COCO val2017 at image size 640 pixels. `yolov5x.pt` is the largest and most accurate model available. Other options are `yolov5s.pt`, `yolov5m.pt` and `yolov5l.pt`, or you own checkpoint from training a custom dataset `./weights/best.pt`. For details on all available models please see our README [table](https://github.com/ultralytics/yolov5#pretrained-checkpoints). 
+ ```bash python val.py --weights yolov5x.pt --data coco.yaml --img 640 --half ``` Output: + ```shell val: data=/content/yolov5/data/coco.yaml, weights=['yolov5x.pt'], batch_size=32, imgsz=640, conf_thres=0.001, iou_thres=0.65, task=val, device=, workers=8, single_cls=False, augment=False, verbose=False, save_txt=False, save_hybrid=False, save_conf=False, save_json=True, project=runs/val, name=exp, exist_ok=False, half=True, dnn=False YOLOv5 πŸš€ v6.0-224-g4c40933 torch 1.10.0+cu111 CUDA:0 (Tesla V100-SXM2-16GB, 16160MiB) @@ -58,6 +61,7 @@ We repeat the above test with a pruned model by using the `torch_utils.prune()` Screenshot 2022-02-02 at 22 54 18 30% pruned output: + ```bash val: data=/content/yolov5/data/coco.yaml, weights=['yolov5x.pt'], batch_size=32, imgsz=640, conf_thres=0.001, iou_thres=0.65, task=val, device=, workers=8, single_cls=False, augment=False, verbose=False, save_txt=False, save_hybrid=False, save_conf=False, save_json=True, project=runs/val, name=exp, exist_ok=False, half=True, dnn=False YOLOv5 πŸš€ v6.0-224-g4c40933 torch 1.10.0+cu111 CUDA:0 (Tesla V100-SXM2-16GB, 16160MiB) @@ -89,7 +93,6 @@ Results saved to runs/val/exp3 In the results we can observe that we have achieved a **sparsity of 30%** in our model after pruning, which means that 30% of the model's weight parameters in `nn.Conv2d` layers are equal to 0. **Inference time is essentially unchanged**, while the model's **AP and AR scores a slightly reduced**. - ## Environments YOLOv5 may be run in any of the following up-to-date verified environments (with all dependencies including [CUDA](https://developer.nvidia.com/cuda)/[CUDNN](https://developer.nvidia.com/cudnn), [Python](https://www.python.org/) and [PyTorch](https://pytorch.org/) preinstalled): @@ -99,7 +102,6 @@ YOLOv5 may be run in any of the following up-to-date verified environments (with - **Amazon** Deep Learning AMI. See [AWS Quickstart Guide](https://docs.ultralytics.com/yolov5/environments/aws_quickstart_tutorial/) - **Docker Image**. See [Docker Quickstart Guide](https://docs.ultralytics.com/yolov5/environments/docker_image_quickstart_tutorial/) Docker Pulls - ## Status YOLOv5 CI diff --git a/docs/yolov5/tutorials/multi_gpu_training.md b/docs/yolov5/tutorials/multi_gpu_training.md index 4f81d27..d002d05 100644 --- a/docs/yolov5/tutorials/multi_gpu_training.md +++ b/docs/yolov5/tutorials/multi_gpu_training.md @@ -1,5 +1,6 @@ --- comments: true +description: Learn how to train your dataset on single or multiple machines using YOLOv5 on multiple GPUs. Use simple commands with DDP mode for faster performance. --- πŸ“š This guide explains how to properly use **multiple** GPUs to train a dataset with YOLOv5 πŸš€ on single or multiple machine(s). @@ -21,11 +22,10 @@ pip install -r requirements.txt # install ## Training -Select a pretrained model to start training from. Here we select [YOLOv5s](https://github.com/ultralytics/yolov5/blob/master/models/yolov5s.yaml), the smallest and fastest model available. See our README [table](https://github.com/ultralytics/yolov5#pretrained-checkpoints) for a full comparison of all models. We will train this model with Multi-GPU on the [COCO](https://github.com/ultralytics/yolov5/blob/master/data/scripts/get_coco.sh) dataset. +Select a pretrained model to start training from. Here we select [YOLOv5s](https://github.com/ultralytics/yolov5/blob/master/models/yolov5s.yaml), the smallest and fastest model available. 
See our README [table](https://github.com/ultralytics/yolov5#pretrained-checkpoints) for a full comparison of all models. We will train this model with Multi-GPU on the [COCO](https://github.com/ultralytics/yolov5/blob/master/data/scripts/get_coco.sh) dataset.

YOLOv5 Models

- ### Single GPU ```bash @@ -35,6 +35,7 @@ python train.py --batch 64 --data coco.yaml --weights yolov5s.pt --device 0 ### Multi-GPU [DataParallel](https://pytorch.org/docs/stable/nn.html#torch.nn.DataParallel) Mode (⚠️ not recommended) You can increase the `device` to use Multiple GPUs in DataParallel mode. + ```bash python train.py --batch 64 --data coco.yaml --weights yolov5s.pt --device 0,1 ``` @@ -68,21 +69,22 @@ python -m torch.distributed.run --nproc_per_node 2 train.py --batch 64 --data co
Use SyncBatchNorm (click to expand)

-[SyncBatchNorm](https://pytorch.org/docs/master/generated/torch.nn.SyncBatchNorm.html) could increase accuracy for multiple gpu training, however, it will slow down training by a significant factor. It is **only** available for Multiple GPU DistributedDataParallel training.
+[SyncBatchNorm](https://pytorch.org/docs/master/generated/torch.nn.SyncBatchNorm.html) could increase accuracy for multiple GPU training; however, it will slow down training by a significant factor. It is **only** available for Multiple GPU DistributedDataParallel training.

 It is best used when the batch-size on **each** GPU is small (<= 8).

-To use SyncBatchNorm, simple pass `--sync-bn` to the command like below,
+To use SyncBatchNorm, simply pass `--sync-bn` to the command as shown below:

 ```bash
 python -m torch.distributed.run --nproc_per_node 2 train.py --batch 64 --data coco.yaml --cfg yolov5s.yaml --weights '' --sync-bn
 ```
+
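Under the hood, `--sync-bn` amounts to converting the model's BatchNorm layers before the DDP wrapper is applied — roughly the following sketch (YOLOv5's `train.py` differs in detail):

```python
import torch.nn as nn

model = nn.Sequential(nn.Conv2d(3, 16, 3), nn.BatchNorm2d(16), nn.SiLU())
# swap every BatchNorm*d for SyncBatchNorm; during DDP training the batch
# statistics are then synchronized across all processes in the group
model = nn.SyncBatchNorm.convert_sync_batchnorm(model)
print(model)
```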
Use Multiple machines (click to expand)

-This is **only** available for Multiple GPU DistributedDataParallel training.
+This is **only** available for Multiple GPU DistributedDataParallel training.

 Before we continue, make sure the files on all machines are the same: dataset, codebase, etc. Afterwards, make sure the machines can communicate with each other.

@@ -94,18 +96,19 @@ To use it, you can do as the following,
 # On master machine 0
 python -m torch.distributed.run --nproc_per_node G --nnodes N --node_rank 0 --master_addr "192.168.1.1" --master_port 1234 train.py --batch 64 --data coco.yaml --cfg yolov5s.yaml --weights ''
 ```
+
 ```bash
 # On machine R
 python -m torch.distributed.run --nproc_per_node G --nnodes N --node_rank R --master_addr "192.168.1.1" --master_port 1234 train.py --batch 64 --data coco.yaml --cfg yolov5s.yaml --weights ''
 ```

-where `G` is number of GPU per machine, `N` is the number of machines, and `R` is the machine number from `0...(N-1)`.
+
+where `G` is the number of GPUs per machine, `N` is the number of machines, and `R` is the machine number from `0...(N-1)`. Let's say I have two machines with two GPUs each; then `G = 2`, `N = 2`, and `R = 1` for the above.

 Training will not start until all `N` machines are connected. Output will only be shown on the master machine!
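To make the `G`/`N`/`R` bookkeeping concrete, each process launched by `torch.distributed.run` can derive its global rank as below (a sketch; `LOCAL_RANK` is the environment variable the launcher sets per process):

```python
import os

G, N, R = 2, 2, 1  # GPUs per machine, number of machines, this machine's index
local_rank = int(os.environ.get('LOCAL_RANK', 0))  # 0..G-1 within a machine
global_rank = R * G + local_rank                   # 0..G*N-1 across the cluster
world_size = G * N
print(f'process {global_rank}/{world_size} (local rank {local_rank})')
```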
- ### Notes - Windows support is untested, Linux is recommended. @@ -167,7 +170,6 @@ If you went through all the above, feel free to raise an Issue by giving as much - ## Environments YOLOv5 may be run in any of the following up-to-date verified environments (with all dependencies including [CUDA](https://developer.nvidia.com/cuda)/[CUDNN](https://developer.nvidia.com/cudnn), [Python](https://www.python.org/) and [PyTorch](https://pytorch.org/) preinstalled): @@ -177,14 +179,12 @@ YOLOv5 may be run in any of the following up-to-date verified environments (with - **Amazon** Deep Learning AMI. See [AWS Quickstart Guide](https://docs.ultralytics.com/yolov5/environments/aws_quickstart_tutorial/) - **Docker Image**. See [Docker Quickstart Guide](https://docs.ultralytics.com/yolov5/environments/docker_image_quickstart_tutorial/) Docker Pulls - ## Status YOLOv5 CI If this badge is green, all [YOLOv5 GitHub Actions](https://github.com/ultralytics/yolov5/actions) Continuous Integration (CI) tests are currently passing. CI tests verify correct operation of YOLOv5 [training](https://github.com/ultralytics/yolov5/blob/master/train.py), [validation](https://github.com/ultralytics/yolov5/blob/master/val.py), [inference](https://github.com/ultralytics/yolov5/blob/master/detect.py), [export](https://github.com/ultralytics/yolov5/blob/master/export.py) and [benchmarks](https://github.com/ultralytics/yolov5/blob/master/benchmarks.py) on macOS, Windows, and Ubuntu every 24 hours and on every commit. - ## Credits I would like to thank @MagicFrogSJTU, who did all the heavy lifting, and @glenn-jocher for guiding us along the way. \ No newline at end of file diff --git a/docs/yolov5/tutorials/neural_magic_pruning_quantization.md b/docs/yolov5/tutorials/neural_magic_pruning_quantization.md index b709652..532ced7 100644 --- a/docs/yolov5/tutorials/neural_magic_pruning_quantization.md +++ b/docs/yolov5/tutorials/neural_magic_pruning_quantization.md @@ -1,5 +1,6 @@ --- comments: true +description: Learn how to deploy YOLOv5 with DeepSparse to achieve exceptional CPU performance close to GPUs, using pruning, and quantization.
---