Update docs with YOLOv8 banner (#160)

Co-authored-by: Paula Derrenger <107626595+pderrenger@users.noreply.github.com>
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
Glenn Jocher authored on 2023-01-09 15:24:01 +01:00, committed by GitHub
parent fdf294e4e8
commit 96fbf9ce58

12 changed files with 252 additions and 41 deletions

docs/app.md (new file, 23 lines)

@@ -0,0 +1,23 @@
# Ultralytics HUB App for YOLOv8
<div align="center">
<a href="https://ultralytics.com/app_install" target="_blank">
<img width="1024" src="https://github.com/ultralytics/assets/raw/main/im/ultralytics-app.png"></a>
<br>
</div>
Welcome to the Ultralytics HUB app for demonstrating YOLOv5 and YOLOv8 models! In this app, available on the [Apple App
Store](https://apps.apple.com/xk/app/ultralytics/id1583935240) and the [Google Play Store](https://play.google.com/store/apps/details?id=com.ultralytics.ultralytics_app), you will be able to see the power and capabilities of YOLOv5, a state-of-the-art object
detection model developed by Ultralytics.
**To install, simply scan the QR code above**. The App currently features YOLOv5 models, with YOLOv8 models coming soon.
With YOLOv5, you can detect and classify objects in images and videos with high accuracy and speed. The model has been
trained on a large dataset and is able to detect a wide range of objects, including cars, pedestrians, and traffic
signs.
In this app, you will be able to try out YOLOv5 on your own images and videos, and see the model in action. You can also
learn more about how YOLOv5 works and how it can be used in real-world applications.
We hope you enjoy using YOLOv5 and seeing its capabilities firsthand. Thank you for choosing Ultralytics for your object
detection needs!

Binary image file changed (before: 6.2 KiB); contents not shown.


@@ -10,7 +10,7 @@ More details and source code can be found in [`BaseTrainer` Reference](../refere
## DetectionTrainer
Here's how you can use the YOLOv8 `DetectionTrainer` and customize it.
```python
-from Ultrlaytics.yolo.v8 import DetectionTrainer
+from ultralytics.yolo.v8 import DetectionTrainer
trainer = DetectionTrainer(overrides={...})
trainer.train()
@@ -20,7 +20,7 @@ trained_model = trainer.best # get best model
### Customizing the DetectionTrainer
Let's customize the trainer **to train a custom detection model** that is not supported directly. You can do this by simply overloading the existing `get_model` functionality:
```python
-from Ultrlaytics.yolo.v8 import DetectionTrainer
+from ultralytics.yolo.v8 import DetectionTrainer
class CustomTrainer(DetectionTrainer):
def get_model(self, cfg, weights):
@@ -36,7 +36,7 @@ You now realize that you need to customize the trainer further to:
Here's how you can do it:
```python
-from Ultrlaytics.yolo.v8 import DetectionTrainer
+from ultralytics.yolo.v8 import DetectionTrainer
class CustomTrainer(DetectionTrainer):
def get_model(self, cfg, weights):
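Stepping outside the diff for a moment, here is a minimal end-to-end sketch of the customization pattern the hunks above describe. The override values and the layer-freezing logic are illustrative assumptions, not part of this commit:

```python
from ultralytics.yolo.v8 import DetectionTrainer


class CustomTrainer(DetectionTrainer):
    def get_model(self, cfg, weights):
        # Start from the stock model, then customize it; returning any
        # compatible module swaps it into the training loop.
        model = super().get_model(cfg, weights)
        # Hypothetical tweak: freeze the earliest layers before training.
        for name, param in model.named_parameters():
            if name.startswith("model.0"):  # assumption about layer naming
                param.requires_grad = False
        return model


# Hypothetical overrides; the keys mirror the usual YOLO training arguments.
trainer = CustomTrainer(overrides={"data": "coco128.yaml", "epochs": 1})
trainer.train()
trained_model = trainer.best  # best checkpoint, as in the snippet above
```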


@@ -1,17 +1,18 @@
<div align="center">
<a href="https://ultralytics.com/yolov5" target="_blank">
<img width="1024" src="https://user-images.githubusercontent.com/26833433/210431393-39c997b8-92a7-4957-864f-1f312004eb54.png"></a>
<a href="https://github.com/ultralytics/ultralytics" target="_blank">
<img width="1024" src="https://raw.githubusercontent.com/ultralytics/assets/main/yolov8/banner-yolov8.png"></a>
<br>
<a href="https://bit.ly/yolov5-paperspace-notebook"><img src="https://assets.paperspace.io/img/gradient-badge.svg" alt="Run on Gradient"></a>
<a href="https://colab.research.google.com/github/glenn-jocher/glenn-jocher.github.io/blob/main/tutorial.ipynb"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"></a>
<a href="https://www.kaggle.com/ultralytics/yolov5"><img src="https://kaggle.com/static/images/open-in-kaggle.svg" alt="Open In Kaggle"></a>
<br>
<br>
</div>
# Welcome to Ultralytics YOLOv8
-Welcome to the Ultralytics YOLOv8 documentation landing page! Ultralytics YOLOv8 is the latest version of the YOLO (You
-Only Look Once) object detection and image segmentation model developed by Ultralytics. This page serves as the starting
+Welcome to the Ultralytics YOLOv8 documentation landing page! [Ultralytics YOLOv8](https://github.com/ultralytics/ultralytics) is the latest version of the YOLO (You
+Only Look Once) object detection and image segmentation model developed by [Ultralytics](https://ultralytics.com). This page serves as the starting
point for exploring the various resources available to help you get started with YOLOv8 and understand its features and
capabilities.
@@ -20,10 +21,9 @@ object detection and image segmentation tasks. It can be trained on large datase
variety of hardware platforms, from CPUs to GPUs.
Whether you are a seasoned machine learning practitioner or new to the field, we hope that the resources on this page
-will help you get the most out of YOLOv8. Please feel free to browse the documentation and reach out to us with any
-questions or feedback.
+will help you get the most out of YOLOv8. For any bugs and feature requests please visit [GitHub Issues](https://github.com/ultralytics/ultralytics/issues). For professional support please [Contact Us](https://ultralytics.com/contact).
-### A Brief History of YOLO
+## A Brief History of YOLO
YOLO (You Only Look Once) is a popular object detection and image segmentation model developed by Joseph Redmon and Ali
Farhadi at the University of Washington. The first version of YOLO was released in 2015 and quickly gained popularity
@@ -36,7 +36,7 @@ backbone network, adding a feature pyramid, and making use of focal loss.
In 2020, YOLOv4 was released which introduced a number of innovations such as the use of Mosaic data augmentation, a new
anchor-free detection head, and a new loss function.
-In 2021, Ultralytics released YOLOv5, which further improved the model's performance and added new features such as
+In 2021, Ultralytics released [YOLOv5](https://github.com/ultralytics/yolov5), which further improved the model's performance and added new features such as
support for panoptic segmentation and object tracking.
YOLO has been widely used in a variety of applications, including autonomous vehicles, security and surveillance, and
@@ -49,9 +49,9 @@ For more information about the history and development of YOLO, you can refer to
conference on computer vision and pattern recognition (pp. 779-788).
- Redmon, J., & Farhadi, A. (2016). YOLO9000: Better, faster, stronger. In Proceedings
-### Ultralytics YOLOv8
+## Ultralytics YOLOv8
-YOLOv8 is the latest version of the YOLO object detection and image segmentation model developed by
+[Ultralytics YOLOv8](https://github.com/ultralytics/ultralytics) is the latest version of the YOLO object detection and image segmentation model developed by
Ultralytics. YOLOv8 is a cutting-edge, state-of-the-art (SOTA) model that builds upon the success of previous YOLO
versions and introduces new features and improvements to further boost performance and flexibility.
@@ -66,4 +66,4 @@ detection head, and a new loss function. YOLOv8 is also highly efficient and can
platforms, from CPUs to GPUs.
Overall, YOLOv8 is a powerful and flexible tool for object detection and image segmentation that offers the best of both
-worlds: the latest SOTA technology and the ability to use and compare all previous YOLO versions.
\ No newline at end of file
+worlds: the latest SOTA technology and the ability to use and compare all previous YOLO versions.


@@ -1,9 +1,12 @@
## Installation
-!!! note "Latest Stable Release"
+Install YOLOv8 via the `ultralytics` pip package for the latest stable release or by cloning the [https://github.com/ultralytics/ultralytics](https://github.com/ultralytics/ultralytics) repository for the most up-to-date version.
+!!! note "pip install (recommended)"
```
pip install ultralytics
```
-??? tip "Development and Contributing"
+!!! note "git clone"
```
git clone https://github.com/ultralytics/ultralytics
cd ultralytics
@@ -13,38 +16,41 @@
## CLI
-The command line YOLO interface let's you simply train, validate or infer models on various tasks and versions.
-CLI requires no customization or code. You can simply run all tasks from the terminal
-!!! tip
+The command line YOLO interface lets you simply train, validate or infer models on various tasks and versions.
+CLI requires no customization or code. You can simply run all tasks from the terminal with the `yolo` command.
+!!! note
=== "Syntax"
```bash
-yolo task=detect mode=train model=s.yaml epochs=1 ...
-          ...        ...       ...
-     segment   infer   s-cls.pt
-     classify  val     s-seg.pt
+yolo task=detect mode=train   model=yolov8n.yaml      args...
+          classify    predict       yolov8n-cls.yaml  args...
+          segment     val           yolov8n-seg.yaml  args...
+          export                    yolov8n.pt        format=onnx args...
```
=== "Example training"
```bash
-yolo task=detect mode=train model=s.yaml
+yolo task=detect mode=train model=yolov8n.pt data=coco128.yaml device=0
```
TODO: add terminal screen/gif
=== "Example training DDP"
=== "Example Multi-GPU training"
```bash
-yolo task=detect mode=train model=s.yaml device=\'0,1,2,3\'
+yolo task=detect mode=train model=yolov8n.pt data=coco128.yaml device=\'0,1,2,3\'
```
[CLI Guide](cli.md){ .md-button .md-button--primary}
## Python API
-Ultralytics YOLO comes with pythonic Model and Trainer interface.
-!!! tip
+The Python API allows users to easily use YOLOv8 in their Python projects. It provides functions for loading and running the model, as well as for processing the model's output. The interface is designed to be easy to use, so that users can quickly implement object detection in their projects.
+Overall, the Python interface is a useful tool for anyone looking to incorporate object detection, segmentation or classification into their Python projects using YOLOv8.
+!!! note
```python
-import ultralytics
from ultralytics import YOLO
-model = YOLO("yolov8n-seg.yaml") # automatically detects task type
-model = YOLO("yolov8n.pt") # load checkpoint
-model.train(data="coco128-seg.yaml", epochs=1, lr0=0.01, ...)
-model.train(data="coco128-seg.yaml", epochs=1, lr0=0.01, device="0,1,2,3") # DDP mode
+model = YOLO('yolov8n.yaml') # build a new model from scratch
+model = YOLO('yolov8n.pt') # load a pretrained model (recommended for best training results)
+results = model.train(data='coco128.yaml') # train the model
+results = model.val() # evaluate model performance on the validation set
+results = model.predict(source='bus.jpg') # predict on an image
+success = model.export(format='onnx') # export the model to ONNX format
```
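Beyond the diff, since the new text above mentions processing the model's output, here is a minimal usage sketch of that step. The `boxes`, `xyxy`, `conf` and `cls` attribute names are assumptions about the results object and may differ between releases:

```python
from ultralytics import YOLO

model = YOLO('yolov8n.pt')  # load a pretrained model
results = model.predict(source='bus.jpg')  # run inference on an image

# Iterate over per-image results and print each detection.
for result in results:
    for box in result.boxes:  # assumption: detections exposed as `boxes`
        print(box.xyxy, box.conf, box.cls)  # box corners, confidence, class id
```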
[API Guide](sdk.md){ .md-button .md-button--primary}