README and Docs updates (#166)

Co-authored-by: Glenn Jocher <glenn.jocher@ultralytics.com>
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
Ayush Chaurasia 2 years ago committed by GitHub
parent 5e290e0d28
commit cb4801888e

@ -1,76 +1,230 @@
[![Ultralytics CI](https://github.com/ultralytics/ultralytics/actions/workflows/ci.yaml/badge.svg)](https://github.com/ultralytics/ultralytics/actions/workflows/ci.yaml)
<div align="center">
<p>
<a align="center" href="https://ultralytics.com/yolov8" target="_blank">
<img width="850" src="https://raw.githubusercontent.com/ultralytics/assets/main/yolov8/banner-yolov8.png"></a>
</p>
[English](README.md) | [简体中文](README.zh-CN.md)
<br>

## Install
```bash
pip install ultralytics
```
<div>
<a href="https://github.com/ultralytics/ultralytics/actions/workflows/ci.yaml"><img src="https://github.com/ultralytics/ultralytics/actions/workflows/ci.yaml/badge.svg" alt="Ultralytics CI"></a>
<a href="https://zenodo.org/badge/latestdoi/264818686"><img src="https://zenodo.org/badge/264818686.svg" alt="YOLOv8 Citation"></a>
<a href="https://hub.docker.com/r/ultralytics/yolov5"><img src="https://img.shields.io/docker/pulls/ultralytics/yolov5?logo=docker" alt="Docker Pulls"></a>
<br>
<a href="https://bit.ly/yolov5-paperspace-notebook"><img src="https://assets.paperspace.io/img/gradient-badge.svg" alt="Run on Gradient"></a>
<a href="https://colab.research.google.com/github/ultralytics/ultralytics/blob/main/examples/tutorial.ipynb"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"></a>
<a href="https://www.kaggle.com/ultralytics/yolov5"><img src="https://kaggle.com/static/images/open-in-kaggle.svg" alt="Open In Kaggle"></a>
</div>
<br>
[Ultralytics YOLOv8](https://github.com/ultralytics/ultralytics) is the latest version of the YOLO object detection and image segmentation model developed by [Ultralytics](https://ultralytics.com). YOLOv8 is a cutting-edge, state-of-the-art (SOTA) model that builds upon the success of previous YOLO versions and introduces new features and improvements to further boost performance and flexibility.

### Development

For an editable development install, clone the repository and install it with pip:

```bash
git clone https://github.com/ultralytics/ultralytics
cd ultralytics
pip install -e .
```
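Either install can be sanity-checked by importing the package and printing its version (a minimal sketch; assumes the package exposes a standard `__version__` attribute):

```python
import ultralytics

# Confirms the package is importable in the current environment and shows which version is active
print(ultralytics.__version__)
```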
The YOLOv8 models are designed to be fast, accurate, and easy to use, making them an excellent choice for a wide range of object detection, image segmentation and image classification tasks.
Whether you are a seasoned machine learning practitioner or new to the field, we hope that the resources on this page will help you get the most out of YOLOv8.
## Usage
<div align="center">
<a href="https://github.com/ultralytics" style="text-decoration:none;">
<img src="https://github.com/ultralytics/assets/raw/main/social/logo-social-github.png" width="2%" alt="" /></a>
<img src="https://github.com/ultralytics/assets/raw/main/social/logo-transparent.png" width="2%" alt="" />
<a href="https://www.linkedin.com/company/ultralytics" style="text-decoration:none;">
<img src="https://github.com/ultralytics/assets/raw/main/social/logo-social-linkedin.png" width="2%" alt="" /></a>
<img src="https://github.com/ultralytics/assets/raw/main/social/logo-transparent.png" width="2%" alt="" />
<a href="https://twitter.com/ultralytics" style="text-decoration:none;">
<img src="https://github.com/ultralytics/assets/raw/main/social/logo-social-twitter.png" width="2%" alt="" /></a>
<img src="https://github.com/ultralytics/assets/raw/main/social/logo-transparent.png" width="2%" alt="" />
<a href="https://www.producthunt.com/@glenn_jocher" style="text-decoration:none;">
<img src="https://github.com/ultralytics/assets/raw/main/social/logo-social-producthunt.png" width="2%" alt="" /></a>
<img src="https://github.com/ultralytics/assets/raw/main/social/logo-transparent.png" width="2%" alt="" />
<a href="https://youtube.com/ultralytics" style="text-decoration:none;">
<img src="https://github.com/ultralytics/assets/raw/main/social/logo-social-youtube.png" width="2%" alt="" /></a>
<img src="https://github.com/ultralytics/assets/raw/main/social/logo-transparent.png" width="2%" alt="" />
<a href="https://www.facebook.com/ultralytics" style="text-decoration:none;">
<img src="https://github.com/ultralytics/assets/raw/main/social/logo-social-facebook.png" width="2%" alt="" /></a>
<img src="https://github.com/ultralytics/assets/raw/main/social/logo-transparent.png" width="2%" alt="" />
<a href="https://www.instagram.com/ultralytics/" style="text-decoration:none;">
<img src="https://github.com/ultralytics/assets/raw/main/social/logo-social-instagram.png" width="2%" alt="" /></a>
</div>
</div>
## <div align="center">Documentation</div>

See below for quickstart installation and usage examples, and see the [YOLOv8 Docs](https://docs.ultralytics.com) for full documentation on training, validation, prediction and deployment.

<details open>
<summary>Install</summary>

Pip install the ultralytics package including all [requirements.txt](https://github.com/ultralytics/ultralytics/blob/main/requirements.txt) in a
[**Python>=3.7.0**](https://www.python.org/) environment, including
[**PyTorch>=1.7**](https://pytorch.org/get-started/locally/).

```bash
pip install ultralytics
```

</details>

<details open>
<summary>Usage</summary>

YOLOv8 may be used directly in a Python environment:
```python
from ultralytics import YOLO
model = YOLO("yolov8n.yaml") # create a new model from scratch
model = YOLO(
"yolov8n.pt"
) # load a pretrained model (recommended for best training results)
results = model.train(data="coco128.yaml", epochs=100, imgsz=640)
results = model.val()
results = model.predict(source="bus.jpg")
success = model.export(format="onnx")
model = YOLO("yolov8n.pt") # load a pretrained YOLOv8n model
model.train(data="coco128.yaml") # train the model
model.val() # evaluate model performance on the validation set
model.predict(source="https://ultralytics.com/images/bus.jpg") # predict on an image
model.export(format="onnx") # export the model to ONNX format
```
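The objects returned by `predict` can also be inspected programmatically (a sketch; it assumes a results API with a `boxes` attribute holding coordinates, confidences and class ids, which may differ between versions):

```python
from ultralytics import YOLO

model = YOLO("yolov8n.pt")
results = model.predict(source="https://ultralytics.com/images/bus.jpg")

# One result per input image; each box row carries xyxy coordinates, a confidence and a class id
for r in results:
    print(r.boxes.xyxy, r.boxes.conf, r.boxes.cls)
```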
Or with CLI `yolo` commands:
```bash
yolo task=detect    mode=train      model=yolov8n.pt         args...
          classify       predict          yolov8n-cls.yaml   args...
          segment        val              yolov8n-seg.yaml   args...
                         export           yolov8n.pt         format=onnx  args...
```
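The same task-specific weights shown in the CLI example load through the Python API as well (a sketch; the `-seg` and `-cls` checkpoints are the ones listed in the tables below):

```python
from ultralytics import YOLO

det_model = YOLO("yolov8n.pt")      # object detection
seg_model = YOLO("yolov8n-seg.pt")  # instance segmentation
cls_model = YOLO("yolov8n-cls.pt")  # image classification
```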
</details>
## <div align="center">Checkpoints</div>
All YOLOv8 pretrained models are available here. Detection and Segmentation models are pretrained on the COCO dataset, while Classification models are pretrained on the ImageNet dataset.
[Models](https://github.com/ultralytics/ultralytics/tree/main/ultralytics/yolo/v8/models) download automatically from the latest
Ultralytics [release](https://github.com/ultralytics/ultralytics/releases) on first use.
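In practice, referencing a model by name is enough to trigger the download (a minimal sketch):

```python
from ultralytics import YOLO

# yolov8n.pt is fetched from the latest Ultralytics release on first use and cached locally
model = YOLO("yolov8n.pt")
```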
<details open><summary>Detection</summary>
| Model | size<br><sup>(pixels) | mAP<sup>val<br>50-95 | Speed<br><sup>CPU<br>(ms) | Speed<br><sup>T4 GPU<br>(ms) | params<br><sup>(M) | FLOPs<br><sup>(B) |
| ------------------------------------------------------------------------------------------------ | --------------------- | -------------------- | ------------------------- | ---------------------------- | ------------------ | ----------------- |
| [YOLOv5n](https://github.com/ultralytics/yolov5/releases/download/v6.2/yolov5n.pt) | 640 | 28.0 | - | - | **1.9** | **4.5** |
| [YOLOv6n](url) | 640 | 35.9 | - | - | 4.3 | 11.1 |
| **[YOLOv8n](url)** | 640 | **37.3** | - | - | 3.2 | 8.9 |
| | | | | | | |
| [YOLOv5s](https://github.com/ultralytics/yolov5/releases/download/v6.2/yolov5s.pt) | 640 | 37.4 | - | - | 7.2 | 16.5 |
| [YOLOv6s](url) | 640 | 43.5 | - | - | 17.2 | 44.2 |
| **[YOLOv8s](url)** | 640 | **44.9** | - | - | 11.2 | 28.8 |
| | | | | | | |
| [YOLOv5m](https://github.com/ultralytics/yolov5/releases/download/v6.2/yolov5m.pt) | 640 | 45.4 | - | - | 21.2 | 49.0 |
| [YOLOv6m](url) | 640 | 49.5 | - | - | 34.3 | 82.2 |
| **[YOLOv8m](url)** | 640 | **50.2** | - | - | 25.9 | 79.3 |
| | | | | | | |
| [YOLOv5l](https://github.com/ultralytics/yolov5/releases/download/v6.2/yolov5l.pt) | 640 | 49.0 | - | - | 46.5 | 109.1 |
| [YOLOv6l](url) | 640 | 52.5 | - | - | 58.5 | 144.0 |
| [YOLOv7](url) | 640 | 51.2 | - | - | 36.9 | 104.7 |
| **[YOLOv8l](url)** | 640 | **52.9** | - | - | 43.7 | 165.7 |
| | | | | | | |
| [YOLOv5x](https://github.com/ultralytics/yolov5/releases/download/v6.2/yolov5x.pt) | 640 | 50.7 | - | - | 86.7 | 205.7 |
| [YOLOv7-X](url) | 640 | 52.9 | - | - | 71.3 | 189.9 |
| **[YOLOv8x](url)** | 640 | **53.9** | - | - | 68.2 | 258.5 |
| | | | | | | |
| [YOLOv5x6](https://github.com/ultralytics/yolov5/releases/download/v6.2/yolov5x6.pt) | 1280 | 55.0 | - | - | 140.7 | 839.2 |
| [YOLOv7-E6E](url) | 1280 | 56.8 | - | - | 151.7 | 843.2 |
| **[YOLOv8x6](https://github.com/ultralytics/yolov5/releases/download/v6.2/yolov5x6.pt)**<br>+TTA | 1280 | -<br>- | -<br>- | -<br>- | 97.4 | 1047.2<br>- |

If you're looking to modify YOLO for R&D or to build on top of it, refer to the [Using Trainer](<>) guide in our docs.

| Model                                                                                      | size<br><sup>(pixels) | mAP<sup>val<br>50-95 | Speed<br><sup>CPU<br>(ms) | Speed<br><sup>T4 GPU<br>(ms) | params<br><sup>(M) | FLOPs<br><sup>(B) |
| ------------------------------------------------------------------------------------------ | --------------------- | -------------------- | ------------------------- | ---------------------------- | ------------------ | ----------------- |
| [YOLOv8n](https://github.com/ultralytics/ultralytics/releases/download/v8.0.0/yolov8n.pt) | 640 | 37.3 | - | - | 3.2 | 8.7 |
| [YOLOv8s](https://github.com/ultralytics/ultralytics/releases/download/v8.0.0/yolov8s.pt) | 640 | 44.9 | - | - | 11.2 | 28.6 |
| [YOLOv8m](https://github.com/ultralytics/ultralytics/releases/download/v8.0.0/yolov8m.pt) | 640 | 50.2 | - | - | 25.9 | 78.9 |
| [YOLOv8l](https://github.com/ultralytics/ultralytics/releases/download/v8.0.0/yolov8l.pt) | 640 | 52.9 | - | - | 43.7 | 165.2 |
| [YOLOv8x](https://github.com/ultralytics/ultralytics/releases/download/v8.0.0/yolov8x.pt) | 640 | 53.9 | - | - | 68.2 | 257.8 |
- **mAP<sup>val</sup>** values are for single-model single-scale on [COCO val2017](http://cocodataset.org) dataset.
<br>Reproduce by `yolo mode=val task=detect data=coco.yaml device=0`
- **Speed** averaged over COCO val images using an [Amazon EC2 P4d](https://aws.amazon.com/ec2/instance-types/p4/) instance.
<br>Reproduce by `yolo mode=val task=detect data=coco128.yaml batch=1 device=0/cpu`
</details>
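As an alternative to the CLI reproduce commands in the notes above, validation can also be run from Python (a sketch; `coco128.yaml` is used here as a small stand-in for the full COCO config):

```python
from ultralytics import YOLO

# Validate a pretrained detection checkpoint and print the returned metrics
model = YOLO("yolov8n.pt")
metrics = model.val(data="coco128.yaml")
print(metrics)
```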
<details><summary>Segmentation</summary>
| Model | size<br><sup>(pixels) | mAP<sup>box<br>50-95 | mAP<sup>mask<br>50-95 | Speed<br><sup>CPU<br>(ms) | Speed<br><sup>T4 GPU<br>(ms) | params<br><sup>(M) | FLOPs<br><sup>(B) |
| --------------------------------------------------------------------------------------------- | --------------------- | -------------------- | --------------------- | ------------------------- | ---------------------------- | ------------------ | ----------------- |
| [YOLOv8n](https://github.com/ultralytics/ultralytics/releases/download/v8.0.0/yolov8n-seg.pt) | 640 | 36.7 | 30.5 | - | - | 3.4 | 12.6 |
| [YOLOv8s](https://github.com/ultralytics/ultralytics/releases/download/v8.0.0/yolov8s-seg.pt) | 640 | 44.6 | 36.8 | - | - | 11.8 | 42.6 |
| [YOLOv8m](https://github.com/ultralytics/ultralytics/releases/download/v8.0.0/yolov8m-seg.pt) | 640 | 49.9 | 40.8 | - | - | 27.3 | 110.2 |
| [YOLOv8l](https://github.com/ultralytics/ultralytics/releases/download/v8.0.0/yolov8l-seg.pt) | 640 | 52.3 | 42.6 | - | - | 46.0 | 220.5 |
| [YOLOv8x](https://github.com/ultralytics/ultralytics/releases/download/v8.0.0/yolov8x-seg.pt) | 640 | 53.4 | 43.4 | - | - | 71.8 | 344.1 |
- **mAP<sup>val</sup>** values are for single-model single-scale on [COCO val2017](http://cocodataset.org) dataset.
<br>Reproduce by `yolo mode=val task=segment data=coco.yaml device=0`
- **Speed** averaged over COCO val images using an [Amazon EC2 P4d](https://aws.amazon.com/ec2/instance-types/p4/) instance.
<br>Reproduce by `yolo mode=val task=segment data=coco128.yaml batch=1 device=0/cpu`
</details>
<details><summary>Classification</summary>
| Model | size<br><sup>(pixels) | acc<br><sup>top1 | acc<br><sup>top5 | Speed<br><sup>CPU<br>(ms) | Speed<br><sup>T4 GPU<br>(ms) | params<br><sup>(M) | FLOPs<br><sup>(B) at 640 |
| --------------------------------------------------------------------------------------------- | --------------------- | ---------------- | ---------------- | ------------------------- | ---------------------------- | ------------------ | ------------------------ |
| [YOLOv8n](https://github.com/ultralytics/ultralytics/releases/download/v8.0.0/yolov8n-cls.pt) | 224 | 66.6 | 87.0 | - | - | 2.7 | 4.3 |
| [YOLOv8s](https://github.com/ultralytics/ultralytics/releases/download/v8.0.0/yolov8s-cls.pt) | 224 | 72.3 | 91.1 | - | - | 6.4 | 13.5 |
| [YOLOv8m](https://github.com/ultralytics/ultralytics/releases/download/v8.0.0/yolov8m-cls.pt) | 224 | 76.4 | 93.2 | - | - | 17.0 | 42.7 |
| [YOLOv8l](https://github.com/ultralytics/ultralytics/releases/download/v8.0.0/yolov8l-cls.pt) | 224 | 78.0 | 94.1 | - | - | 37.5 | 99.7 |
| [YOLOv8x](https://github.com/ultralytics/ultralytics/releases/download/v8.0.0/yolov8x-cls.pt) | 224 | 78.4 | 94.3 | - | - | 57.4 | 154.8 |
- **acc** values are model accuracies on the [ImageNet](https://www.image-net.org/) dataset validation set.
<br>Reproduce by `yolo mode=val task=classify data=imagenet device=0`
- **Speed** averaged over ImageNet val images using an [Amazon EC2 P4d](https://aws.amazon.com/ec2/instance-types/p4/) instance.
<br>Reproduce by `yolo mode=val task=classify data=imagenet batch=1 device=0/cpu`
</details>
## <div align="center">Integrations</div>
<br>
<a align="center" href="https://bit.ly/ultralytics_hub" target="_blank">
<img width="100%" src="https://github.com/ultralytics/assets/raw/main/im/integrations-loop.png"></a>
<br>
<br>
<div align="center">
<a href="https://roboflow.com/?ref=ultralytics">
<img src="https://github.com/ultralytics/yolov5/releases/download/v1.0/logo-roboflow.png" width="10%" /></a>
<img src="https://github.com/ultralytics/assets/raw/main/social/logo-transparent.png" width="15%" height="0" alt="" />
<a href="https://cutt.ly/yolov5-readme-clearml">
<img src="https://github.com/ultralytics/yolov5/releases/download/v1.0/logo-clearml.png" width="10%" /></a>
<img src="https://github.com/ultralytics/assets/raw/main/social/logo-transparent.png" width="15%" height="0" alt="" />
<a href="https://bit.ly/yolov5-readme-comet">
<img src="https://github.com/ultralytics/yolov5/releases/download/v1.0/logo-comet.png" width="10%" /></a>
<img src="https://github.com/ultralytics/assets/raw/main/social/logo-transparent.png" width="15%" height="0" alt="" />
<a href="https://bit.ly/yolov5-neuralmagic">
<img src="https://github.com/ultralytics/yolov5/releases/download/v1.0/logo-neuralmagic.png" width="10%" /></a>
</div>
| Roboflow | ClearML ⭐ NEW | Comet ⭐ NEW | Neural Magic ⭐ NEW |
| :--------------------------------------------------------------------------------------------------------------------------: | :---------------------------------------------------------------------------------------------------------------------------------: | :--------------------------------------------------------------------------------------------------------------------------------------------------------: | :----------------------------------------------------------------------------------------------------: |
| Label and export your custom datasets directly to YOLOv8 for training with [Roboflow](https://roboflow.com/?ref=ultralytics) | Automatically track, visualize and even remotely train YOLOv8 using [ClearML](https://cutt.ly/yolov5-readme-clearml) (open-source!) | Free forever, [Comet](https://bit.ly/yolov5-readme-comet2) lets you save YOLOv8 models, resume training, and interactively visualise and debug predictions | Run YOLOv8 inference up to 6x faster with [Neural Magic DeepSparse](https://bit.ly/yolov5-neuralmagic) |
## <div align="center">Ultralytics HUB</div>
[Ultralytics HUB](https://bit.ly/ultralytics_hub) is our ⭐ **NEW** no-code solution to visualize datasets, train YOLOv8 🚀 models, and deploy to the real world in a seamless experience. Get started for **Free** now! Also run YOLOv8 models on your iOS or Android device by downloading the [Ultralytics App](https://ultralytics.com/app_install)!
<a align="center" href="https://bit.ly/ultralytics_hub" target="_blank">
<img width="100%" src="https://github.com/ultralytics/assets/raw/main/im/ultralytics-hub.png"></a>
## <div align="center">Contribute</div>
We love your input! YOLOv5 and YOLOv8 would not be possible without help from our community. Please see our [Contributing Guide](CONTRIBUTING.md) to get started, and fill out our [Survey](https://ultralytics.com/survey?utm_source=github&utm_medium=social&utm_campaign=Survey) to send us feedback on your experience. Thank you 🙏 to all our contributors!
<!-- SVG image from https://opencollective.com/ultralytics/contributors.svg?width=990 -->
<a href="https://github.com/ultralytics/yolov5/graphs/contributors"><img src="https://github.com/ultralytics/yolov5/releases/download/v1.0/image-contributors-1280.png"/></a>
## <div align="center">License</div>
YOLOv8 is available under two different licenses:
- **GPL-3.0 License**: See [LICENSE](https://github.com/ultralytics/ultralytics/blob/main/LICENSE) file for details.
- **Enterprise License**: Provides greater flexibility for commercial product development without the open-source requirements of GPL-3.0. Typical use cases are embedding Ultralytics software and AI models in commercial products and applications. Request an Enterprise License at [Ultralytics Licensing](https://ultralytics.com/license).
## <div align="center">Contact</div>
For YOLOv8 bugs and feature requests please visit [GitHub Issues](https://github.com/ultralytics/ultralytics/issues). For professional support please [Contact Us](https://ultralytics.com/contact).
<br>
<div align="center">
<a href="https://github.com/ultralytics" style="text-decoration:none;">
<img src="https://github.com/ultralytics/assets/raw/main/social/logo-social-github.png" width="3%" alt="" /></a>
<img src="https://github.com/ultralytics/assets/raw/main/social/logo-transparent.png" width="3%" alt="" />
<a href="https://www.linkedin.com/company/ultralytics" style="text-decoration:none;">
<img src="https://github.com/ultralytics/assets/raw/main/social/logo-social-linkedin.png" width="3%" alt="" /></a>
<img src="https://github.com/ultralytics/assets/raw/main/social/logo-transparent.png" width="3%" alt="" />
<a href="https://twitter.com/ultralytics" style="text-decoration:none;">
<img src="https://github.com/ultralytics/assets/raw/main/social/logo-social-twitter.png" width="3%" alt="" /></a>
<img src="https://github.com/ultralytics/assets/raw/main/social/logo-transparent.png" width="3%" alt="" />
<a href="https://www.producthunt.com/@glenn_jocher" style="text-decoration:none;">
<img src="https://github.com/ultralytics/assets/raw/main/social/logo-social-producthunt.png" width="3%" alt="" /></a>
<img src="https://github.com/ultralytics/assets/raw/main/social/logo-transparent.png" width="3%" alt="" />
<a href="https://youtube.com/ultralytics" style="text-decoration:none;">
<img src="https://github.com/ultralytics/assets/raw/main/social/logo-social-youtube.png" width="3%" alt="" /></a>
<img src="https://github.com/ultralytics/assets/raw/main/social/logo-transparent.png" width="3%" alt="" />
<a href="https://www.facebook.com/ultralytics" style="text-decoration:none;">
<img src="https://github.com/ultralytics/assets/raw/main/social/logo-social-facebook.png" width="3%" alt="" /></a>
<img src="https://github.com/ultralytics/assets/raw/main/social/logo-transparent.png" width="3%" alt="" />
<a href="https://www.instagram.com/ultralytics/" style="text-decoration:none;">
<img src="https://github.com/ultralytics/assets/raw/main/social/logo-social-instagram.png" width="3%" alt="" /></a>
</div>

@ -101,7 +101,7 @@ given task.
| Key | Value | Description |
|----------------|----------------------|-------------------------------------------------|
| source | `ultralytics/assets` | Input source. Accepts image, folder, video, url |
| show | `False` | View the prediction images |
| save_txt | `False` | Save the results in a txt file |
| save_conf | `False` | Save the confidence scores |
| save_crop | `False` | Save cropped prediction boxes |
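As a sketch of how these keys map onto the Python API (the argument names mirror the table above; exact availability may vary by version):

```python
from ultralytics import YOLO

model = YOLO("yolov8n.pt")
# source accepts an image, folder, video or URL; show displays predictions,
# while save_txt, save_conf and save_crop control what is written alongside the results
model.predict(source="ultralytics/assets", show=True, save_txt=True, save_conf=True, save_crop=True)
```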
@ -136,7 +136,7 @@ validation dataset and to detect and prevent overfitting.
| dnn | `False` | Use OpenCV DNN for ONNX inference |
| plots | `False` | Save plots during train/val |
### Export
Export settings for YOLO models refer to the various configurations and options used to save or
export the model for use in other environments or platforms. These settings can affect the model's performance, size,
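For example, a single export call combines the chosen format with these settings (a minimal sketch; `imgsz` is assumed to be accepted alongside `format`, check the configuration reference for your version):

```python
from ultralytics import YOLO

model = YOLO("yolov8n.pt")
# Export the model to ONNX at a fixed 640-pixel image size
model.export(format="onnx", imgsz=640)
```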

@ -0,0 +1,80 @@
# Ultralytics HUB
<div align="center">
<a href="https://hub.ultralytics.com" target="_blank">
<img width="1024" src="https://github.com/ultralytics/assets/raw/main/im/ultralytics-hub.png"></a>
<br>
<a href="https://github.com/ultralytics/hub/actions/workflows/ci.yaml">
<img src="https://github.com/ultralytics/hub/actions/workflows/ci.yaml/badge.svg" alt="CI CPU"></a>
</div>
[Ultralytics HUB](https://hub.ultralytics.com) is a new no-code online tool developed
by [Ultralytics](https://ultralytics.com), the creators of the popular [YOLOv5](https://github.com/ultralytics/yolov5)
object detection and image segmentation models. With Ultralytics HUB, users can easily train and deploy YOLOv5 models
without any coding or technical expertise.
Ultralytics HUB is designed to be user-friendly and intuitive, with a drag-and-drop interface that allows users to
easily upload their data and select their model configurations. It also offers a range of pre-trained models and
templates to choose from, making it easy for users to get started with training their own models. Once a model is
trained, it can be easily deployed and used for real-time object detection and image segmentation tasks. Overall,
Ultralytics HUB is an essential tool for anyone looking to use YOLOv5 for their object detection and image segmentation
projects.
**[Get started now](https://hub.ultralytics.com)** and experience the power and simplicity of Ultralytics HUB for yourself. Sign up for a free account and
start building, training, and deploying YOLOv5 and YOLOv8 models today.
## 1. Upload a Dataset
Ultralytics HUB datasets are just like YOLOv5 🚀 datasets: they use the same structure and the same label formats to keep everything simple.
When you upload a dataset to Ultralytics HUB, make sure to **place your dataset YAML inside the dataset root directory** as in the example shown below, and then zip for upload to https://hub.ultralytics.com/. Your **dataset YAML, directory and zip** should all share the same name. For example, if your dataset is called 'coco6' as in our example [ultralytics/hub/coco6.zip](https://github.com/ultralytics/hub/blob/master/coco6.zip), then you should have a coco6.yaml inside your coco6/ directory, which should zip to create coco6.zip for upload:
```bash
zip -r coco6.zip coco6
```
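The same archive can be created from Python with the standard library (a minimal sketch; it assumes the coco6/ directory, with coco6.yaml inside it, sits in the current working directory):

```python
import shutil

# Produces coco6.zip with the coco6/ directory (and its coco6.yaml) at the archive root
shutil.make_archive("coco6", "zip", root_dir=".", base_dir="coco6")
```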
The example [coco6.zip](https://github.com/ultralytics/hub/blob/master/coco6.zip) dataset in this repository can be downloaded and unzipped to see exactly how to structure your custom dataset.
<p align="center"><img width="80%" src="https://user-images.githubusercontent.com/26833433/201424843-20fa081b-ad4b-4d6c-a095-e810775908d8.png" title="COCO6" /></p>
The dataset YAML is the same standard YOLOv5 YAML format. See the [YOLOv5 Train Custom Data tutorial](https://github.com/ultralytics/yolov5/wiki/Train-Custom-Data) for full details.
```yaml
# Train/val/test sets as 1) dir: path/to/imgs, 2) file: path/to/imgs.txt, or 3) list: [path/to/imgs1, path/to/imgs2, ..]
path: # dataset root dir (leave empty for HUB)
train: images/train # train images (relative to 'path') 8 images
val: images/val # val images (relative to 'path') 8 images
test: # test images (optional)
# Classes
names:
0: person
1: bicycle
2: car
3: motorcycle
...
```
After zipping your dataset, sign in to [Ultralytics HUB](https://bit.ly/ultralytics_hub) and click the Datasets tab. Click 'Upload Dataset' to upload, scan and visualize your new dataset before training new YOLOv5 models on it!
<img width="100%" alt="HUB Dataset Upload" src="https://user-images.githubusercontent.com/26833433/198611715-540c9856-49d7-4069-a2fd-7c9eb70e772e.png">
## 2. Train a Model
Connect to the Ultralytics HUB notebook and use your model API key to begin training! <a href="https://colab.research.google.com/github/ultralytics/hub/blob/master/hub.ipynb" target="_blank"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"></a>
## 3. Deploy to Real World
Export your model to 13 different formats, including TensorFlow, ONNX, OpenVINO, CoreML, Paddle and many others. Run models directly on your mobile device by downloading the [Ultralytics App](https://ultralytics.com/app_install)!
<a align="center" href="https://ultralytics.com/app_install" target="_blank">
<img width="100%" alt="Ultralytics mobile app" src="https://github.com/ultralytics/assets/raw/main/im/ultralytics-app.png"></a>
## ❓ Issues
If you are a new [Ultralytics HUB](https://bit.ly/ultralytics_hub) user and have questions or comments, you are in the right place! Please raise a [New Issue](https://github.com/ultralytics/hub/issues/new/choose) and let us know what we can do to make your life better 😃!

@ -35,7 +35,7 @@ This is the simplest way of simply using yolo models in a python environment. It
model = YOLO("model.pt")
model.predict(source="0") # accepts all formats - img/folder/vid.*(mp4/format). 0 for webcam
model.predict(source="folder", view_img=True) # Display preds. Accepts all yolo predict arguments
model.predict(source="folder", show=True) # Display preds. Accepts all yolo predict arguments
```

@ -1,5 +1,5 @@
site_name: Ultralytics Docs
repo_url: https://github.com/ultralytics/ultralytics
repo_name: Ultralytics
theme:
@ -82,6 +82,7 @@ nav:
- Python Interface: sdk.md
- Configuration: config.md
- Customization Guide: engine.md
- Ultralytics HUB: hub.md
- iOS and Android App: app.md
- Reference:
- Python Model interface: reference/model.md

@ -7,6 +7,7 @@ from ultralytics.yolo import v8
from ultralytics.yolo.data import build_classification_dataloader
from ultralytics.yolo.engine.trainer import BaseTrainer
from ultralytics.yolo.utils import DEFAULT_CONFIG
from ultralytics.yolo.utils.torch_utils import strip_optimizer
class ClassificationTrainer(BaseTrainer):
@ -117,7 +118,16 @@ class ClassificationTrainer(BaseTrainer):
        pass

    def final_eval(self):
        for f in self.last, self.best:
            if f.exists():
                strip_optimizer(f)  # strip optimizers
                # TODO: validate best.pt after training completes
                # if f is self.best:
                #     self.console.info(f'\nValidating {f}...')
                #     self.validator.args.save_json = True
                #     self.metrics = self.validator(model=f)
                #     self.metrics.pop('fitness', None)
                #     self.run_callbacks('on_fit_epoch_end')
@hydra.main(version_base=None, config_path=str(DEFAULT_CONFIG.parent), config_name=DEFAULT_CONFIG.name)
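For context, `strip_optimizer` rewrites a training checkpoint without its optimizer state so the final last.pt and best.pt files are smaller and ready for inference. A rough sketch of the idea (not the exact Ultralytics implementation):

```python
import torch

def strip_optimizer_sketch(path: str) -> None:
    # Load the checkpoint, drop training-only state, and overwrite the file with the slimmed version
    ckpt = torch.load(path, map_location="cpu")
    for k in ("optimizer", "ema", "updates"):
        ckpt.pop(k, None)
    torch.save(ckpt, path)
```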
