Update docs with YOLOv8 banner (#160)

Co-authored-by: Paula Derrenger <107626595+pderrenger@users.noreply.github.com>
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
Glenn Jocher 2 years ago committed by GitHub
parent fdf294e4e8
commit 96fbf9ce58

@ -0,0 +1,85 @@
name: 🐛 Bug Report
# title: " "
description: Problems with YOLOv8
labels: [bug, triage]
body:
  - type: markdown
    attributes:
      value: |
        Thank you for submitting a YOLOv8 🐛 Bug Report!

  - type: checkboxes
    attributes:
      label: Search before asking
      description: >
        Please search the [issues](https://github.com/ultralytics/ultralytics/issues) to see if a similar bug report already exists.
      options:
        - label: >
            I have searched the YOLOv8 [issues](https://github.com/ultralytics/ultralytics/issues) and found no similar bug report.
          required: true

  - type: dropdown
    attributes:
      label: YOLOv8 Component
      description: |
        Please select the part of YOLOv8 where you found the bug.
      multiple: true
      options:
        - "Training"
        - "Validation"
        - "Detection"
        - "Export"
        - "PyTorch Hub"
        - "Multi-GPU"
        - "Evolution"
        - "Integrations"
        - "Other"
    validations:
      required: false

  - type: textarea
    attributes:
      label: Bug
      description: Provide console output with error messages and/or screenshots of the bug.
      placeholder: |
        💡 ProTip! Include as much information as possible (screenshots, logs, tracebacks etc.) to receive the most helpful response.
    validations:
      required: true

  - type: textarea
    attributes:
      label: Environment
      description: Please specify the software and hardware you used to produce the bug.
      placeholder: |
        - YOLO: YOLOv8 🚀 v6.0-67-g60e42e1 torch 1.9.0+cu111 CUDA:0 (A100-SXM4-40GB, 40536MiB)
        - OS: Ubuntu 20.04
        - Python: 3.9.0
    validations:
      required: false

  - type: textarea
    attributes:
      label: Minimal Reproducible Example
      description: >
        When asking a question, people will be better able to provide help if you provide code that they can easily understand and use to **reproduce** the problem.
        This is referred to by community members as creating a [minimal reproducible example](https://stackoverflow.com/help/minimal-reproducible-example).
      placeholder: |
        ```
        # Code to reproduce your issue here
        ```
    validations:
      required: false

  - type: textarea
    attributes:
      label: Additional
      description: Anything else you would like to share?

  - type: checkboxes
    attributes:
      label: Are you willing to submit a PR?
      description: >
        (Optional) We encourage you to submit a [Pull Request](https://github.com/ultralytics/ultralytics/pulls) (PR) to help improve YOLOv8 for everyone, especially if you have a good understanding of how to implement a fix or feature.
        See the YOLOv8 [Contributing Guide](https://github.com/ultralytics/ultralytics/blob/master/CONTRIBUTING.md) to get started.
      options:
        - label: Yes I'd like to help by submitting a PR!

@ -0,0 +1,11 @@
blank_issues_enabled: true
contact_links:
  - name: 📄 Docs
    url: https://docs.ultralytics.com/
    about: Full Ultralytics YOLOv8 Documentation
  - name: 💬 Forum
    url: https://community.ultralytics.com/
    about: Ask on Ultralytics Community Forum
  - name: Stack Overflow
    url: https://stackoverflow.com/search?q=YOLOv8
    about: Ask on Stack Overflow with 'YOLOv8' tag

@ -0,0 +1,50 @@
name: 🚀 Feature Request
description: Suggest a YOLOv8 idea
# title: " "
labels: [enhancement]
body:
  - type: markdown
    attributes:
      value: |
        Thank you for submitting a YOLOv8 🚀 Feature Request!

  - type: checkboxes
    attributes:
      label: Search before asking
      description: >
        Please search the [issues](https://github.com/ultralytics/ultralytics/issues) to see if a similar feature request already exists.
      options:
        - label: >
            I have searched the YOLOv8 [issues](https://github.com/ultralytics/ultralytics/issues) and found no similar feature requests.
          required: true

  - type: textarea
    attributes:
      label: Description
      description: A short description of your feature.
      placeholder: |
        What new feature would you like to see in YOLOv8?
    validations:
      required: true

  - type: textarea
    attributes:
      label: Use case
      description: |
        Describe the use case of your feature request. It will help us understand and prioritize the feature request.
      placeholder: |
        How would this feature be used, and who would use it?

  - type: textarea
    attributes:
      label: Additional
      description: Anything else you would like to share?

  - type: checkboxes
    attributes:
      label: Are you willing to submit a PR?
      description: >
        (Optional) We encourage you to submit a [Pull Request](https://github.com/ultralytics/ultralytics/pulls) (PR) to help improve YOLOv8 for everyone, especially if you have a good understanding of how to implement a fix or feature.
        See the YOLOv8 [Contributing Guide](https://github.com/ultralytics/ultralytics/blob/master/CONTRIBUTING.md) to get started.
      options:
        - label: Yes I'd like to help by submitting a PR!

@ -0,0 +1,33 @@
name: ❓ Question
description: Ask a YOLOv8 question
# title: " "
labels: [question]
body:
  - type: markdown
    attributes:
      value: |
        Thank you for asking a YOLOv8 ❓ Question!

  - type: checkboxes
    attributes:
      label: Search before asking
      description: >
        Please search the [issues](https://github.com/ultralytics/ultralytics/issues) and [discussions](https://github.com/ultralytics/ultralytics/discussions) to see if a similar question already exists.
      options:
        - label: >
            I have searched the YOLOv8 [issues](https://github.com/ultralytics/ultralytics/issues) and [discussions](https://github.com/ultralytics/ultralytics/discussions) and found no similar questions.
          required: true

  - type: textarea
    attributes:
      label: Question
      description: What is your question?
      placeholder: |
        💡 ProTip! Include as much information as possible (screenshots, logs, tracebacks etc.) to receive the most helpful response.
    validations:
      required: true

  - type: textarea
    attributes:
      label: Additional
      description: Anything else you would like to share?

@ -62,7 +62,7 @@ To allow your work to be integrated as seamlessly as possible, we advise you to:
Not all functions or classes require docstrings, but when they do, we follow the [Google-style docstring format](https://google.github.io/styleguide/pyguide.html#38-comments-and-docstrings). Here is an example:
```python
'''
"""
What the function does - performs nms on given detection predictions
Args:
@ -74,7 +74,7 @@ Not all functions or classes require docstrings but when they do, we follow [goo
Raises:
Exception Class: When and why this exception can be raised by the function.
'''
"""
```
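For reference, a complete Google-style docstring on a small, self-contained function might look like the sketch below. The function shown is a generic illustration, not the repository's own implementation:
```python
def clip_boxes(boxes, shape):
    """
    Clips bounding box coordinates to the boundaries of an image.

    Args:
        boxes (list[list[float]]): Boxes in (x1, y1, x2, y2) format.
        shape (tuple[int, int]): Image shape as (height, width).

    Returns:
        list[list[float]]: Boxes with coordinates clipped to the image boundaries.

    Raises:
        ValueError: If `shape` does not contain exactly two values.
    """
    if len(shape) != 2:
        raise ValueError("shape must be (height, width)")
    h, w = shape
    return [[min(max(x1, 0), w), min(max(y1, 0), h), min(max(x2, 0), w), min(max(y2, 0), h)]
            for x1, y1, x2, y2 in boxes]
```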
## Submitting a Bug Report 🐛

@ -0,0 +1,23 @@
# Ultralytics HUB App for YOLOv8
<div align="center">
<a href="https://ultralytics.com/app_install" target="_blank">
<img width="1024" src="https://github.com/ultralytics/assets/raw/main/im/ultralytics-app.png"></a>
<br>
</div>
Welcome to the Ultralytics HUB app for demonstrating YOLOv5 and YOLOv8 models! In this app, available on the [Apple App
Store](https://apps.apple.com/xk/app/ultralytics/id1583935240) and the [Google Play Store](https://play.google.com/store/apps/details?id=com.ultralytics.ultralytics_app), you will be able to see the power and capabilities of YOLOv5, a state-of-the-art object
detection model developed by Ultralytics.
**To install, simply scan the QR code above**. The app currently features YOLOv5 models, with YOLOv8 models coming soon.
With YOLOv5, you can detect and classify objects in images and videos with high accuracy and speed. The model has been
trained on a large dataset and is able to detect a wide range of objects, including cars, pedestrians, and traffic
signs.
In this app, you will be able to try out YOLOv5 on your own images and videos, and see the model in action. You can also
learn more about how YOLOv5 works and how it can be used in real-world applications.
We hope you enjoy using YOLOv5 and seeing its capabilities firsthand. Thank you for choosing Ultralytics for your object
detection needs!

Binary image file not shown (before: 6.2 KiB).

@ -10,7 +10,7 @@ More details and source code can be found in [`BaseTrainer` Reference](../refere
## DetectionTrainer
Here's how you can use the YOLOv8 `DetectionTrainer` and customize it.
```python
from Ultrlaytics.yolo.v8 import DetectionTrainer
from ultralytics.yolo.v8 import DetectionTrainer
trainer = DetectionTrainer(overrides={...})
trainer.train()
@ -20,7 +20,7 @@ trained_model = trainer.best # get best model
### Customizing the DetectionTrainer
Let's customize the trainer **to train a custom detection model** that is not supported directly. You can do this by simply overriding the existing `get_model` functionality:
```python
from Ultrlaytics.yolo.v8 import DetectionTrainer
from ultralytics.yolo.v8 import DetectionTrainer
class CustomTrainer(DetectionTrainer):
    def get_model(self, cfg, weights):
@ -36,7 +36,7 @@ You now realize that you need to customize the trainer further to:
Here's how you can do it:
```python
from Ultrlaytics.yolo.v8 import DetectionTrainer
from ultralytics.yolo.v8 import DetectionTrainer
class CustomTrainer(DetectionTrainer):
    def get_model(self, cfg, weights):
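The hunk above is truncated at the diff boundary. As a rough, self-contained sketch of what a complete override could look like, building only on the `DetectionTrainer(overrides={...})` pattern shown earlier; the placeholder network, the override keys, and the handling of the `weights` argument below are illustrative assumptions, not the library's actual API:
```python
import torch.nn as nn

from ultralytics.yolo.v8 import DetectionTrainer


class TinyDetector(nn.Module):
    """Stand-in for a user-defined detection network (illustrative only)."""

    def __init__(self, cfg=None):
        super().__init__()
        self.backbone = nn.Conv2d(3, 16, 3, padding=1)  # placeholder backbone layer

    def forward(self, x):
        return self.backbone(x)


class CustomTrainer(DetectionTrainer):
    def get_model(self, cfg, weights):
        # Return the nn.Module the trainer should optimize.
        model = TinyDetector(cfg)
        if weights:
            model.load_state_dict(weights)  # assumes `weights` is a compatible state_dict
        return model


trainer = CustomTrainer(overrides={"data": "coco128.yaml", "epochs": 1})
trainer.train()
```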

@ -1,17 +1,18 @@
<div align="center">
<a href="https://ultralytics.com/yolov5" target="_blank">
<img width="1024" src="https://user-images.githubusercontent.com/26833433/210431393-39c997b8-92a7-4957-864f-1f312004eb54.png"></a>
<a href="https://github.com/ultralytics/ultralytics" target="_blank">
<img width="1024" src="https://raw.githubusercontent.com/ultralytics/assets/main/yolov8/banner-yolov8.png"></a>
<br>
<a href="https://bit.ly/yolov5-paperspace-notebook"><img src="https://assets.paperspace.io/img/gradient-badge.svg" alt="Run on Gradient"></a>
<a href="https://colab.research.google.com/github/glenn-jocher/glenn-jocher.github.io/blob/main/tutorial.ipynb"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"></a>
<a href="https://www.kaggle.com/ultralytics/yolov5"><img src="https://kaggle.com/static/images/open-in-kaggle.svg" alt="Open In Kaggle"></a>
<br>
<br>
</div>
# Welcome to Ultralytics YOLOv8
Welcome to the Ultralytics YOLOv8 documentation landing page! Ultralytics YOLOv8 is the latest version of the YOLO (You
Only Look Once) object detection and image segmentation model developed by Ultralytics. This page serves as the starting
Welcome to the Ultralytics YOLOv8 documentation landing page! [Ultralytics YOLOv8](https://github.com/ultralytics/ultralytics) is the latest version of the YOLO (You
Only Look Once) object detection and image segmentation model developed by [Ultralytics](https://ultralytics.com). This page serves as the starting
point for exploring the various resources available to help you get started with YOLOv8 and understand its features and
capabilities.
@ -20,10 +21,9 @@ object detection and image segmentation tasks. It can be trained on large datase
variety of hardware platforms, from CPUs to GPUs.
Whether you are a seasoned machine learning practitioner or new to the field, we hope that the resources on this page
will help you get the most out of YOLOv8. Please feel free to browse the documentation and reach out to us with any
questions or feedback.
will help you get the most out of YOLOv8. For any bugs and feature requests please visit [GitHub Issues](https://github.com/ultralytics/ultralytics/issues). For professional support please [Contact Us](https://ultralytics.com/contact).
### A Brief History of YOLO
## A Brief History of YOLO
YOLO (You Only Look Once) is a popular object detection and image segmentation model developed by Joseph Redmon and Ali
Farhadi at the University of Washington. The first version of YOLO was released in 2015 and quickly gained popularity
@ -36,7 +36,7 @@ backbone network, adding a feature pyramid, and making use of focal loss.
In 2020, YOLOv4 was released which introduced a number of innovations such as the use of Mosaic data augmentation, a new
anchor-free detection head, and a new loss function.
In 2021, Ultralytics released YOLOv5, which further improved the model's performance and added new features such as
In 2021, Ultralytics released [YOLOv5](https://github.com/ultralytics/yolov5), which further improved the model's performance and added new features such as
support for panoptic segmentation and object tracking.
YOLO has been widely used in a variety of applications, including autonomous vehicles, security and surveillance, and
@ -49,9 +49,9 @@ For more information about the history and development of YOLO, you can refer to
conference on computer vision and pattern recognition (pp. 779-788).
- Redmon, J., & Farhadi, A. (2016). YOLO9000: Better, faster, stronger. In Proceedings
### Ultralytics YOLOv8
## Ultralytics YOLOv8
YOLOv8 is the latest version of the YOLO object detection and image segmentation model developed by
[Ultralytics YOLOv8](https://github.com/ultralytics/ultralytics) is the latest version of the YOLO object detection and image segmentation model developed by
Ultralytics. YOLOv8 is a cutting-edge, state-of-the-art (SOTA) model that builds upon the success of previous YOLO
versions and introduces new features and improvements to further boost performance and flexibility.

@ -1,9 +1,12 @@
## Installation
!!! note "Latest Stable Release"
Install YOLOv8 via the `ultralytics` pip package for the latest stable release or by cloning the [https://github.com/ultralytics/ultralytics](https://github.com/ultralytics/ultralytics) repository for the most up-to-date version.
!!! note "pip install (recommended)"
```
pip install ultralytics
```
??? tip "Development and Contributing"
!!! note "git clone"
```
git clone https://github.com/ultralytics/ultralytics
cd ultralytics
@ -13,38 +16,41 @@
## CLI
The command line YOLO interface let's you simply train, validate or infer models on various tasks and versions.
CLI requires no customization or code. You can simply run all tasks from the terminal
!!! tip
The command line YOLO interface lets you simply train, validate or infer models on various tasks and versions.
CLI requires no customization or code. You can simply run all tasks from the terminal with the `yolo` command.
!!! note
=== "Syntax"
```bash
yolo task=detect mode=train model=s.yaml epochs=1 ...
... ... ...
segment infer s-cls.pt
classify val s-seg.pt
yolo task=detect mode=train model=yolov8n.yaml args...
classify predict yolov8n-cls.yaml args...
segment val yolov8n-seg.yaml args...
export yolov8n.pt format=onnx args...
```
=== "Example training"
```bash
yolo task=detect mode=train model=s.yaml
yolo task=detect mode=train model=yolov8n.pt data=coco128.yaml device=0
```
TODO: add terminal screen/gif
=== "Example training DDP"
=== "Example Multi-GPU training"
```bash
yolo task=detect mode=train model=s.yaml device=\'0,1,2,3\'
yolo task=detect mode=train model=yolov8n.pt data=coco128.yaml device=\'0,1,2,3\'
```
[CLI Guide](cli.md){ .md-button .md-button--primary}
## Python API
Ultralytics YOLO comes with pythonic Model and Trainer interface.
!!! tip
The Python API allows users to use YOLOv8 directly in their Python projects. It provides functions for loading and running the model, as well as for processing the model's output, and is designed to be easy to use so that object detection can be implemented quickly.
Overall, the Python interface is a useful tool for anyone looking to incorporate object detection, segmentation or classification into their Python projects using YOLOv8.
!!! note
```python
import ultralytics
from ultralytics import YOLO
model = YOLO("yolov8n-seg.yaml") # automatically detects task type
model = YOLO("yolov8n.pt") # load checkpoint
model.train(data="coco128-seg.yaml", epochs=1, lr0=0.01, ...)
model.train(data="coco128-seg.yaml", epochs=1, lr0=0.01, device="0,1,2,3") # DDP mode
model = YOLO('yolov8n.yaml') # build a new model from scratch
model = YOLO('yolov8n.pt') # load a pretrained model (recommended for best training results)
results = model.train(data='coco128.yaml') # train the model
results = model.val() # evaluate model performance on the validation set
results = model.predict(source='bus.jpg') # predict on an image
success = model.export(format='onnx') # export the model to ONNX format
```
[API Guide](sdk.md){ .md-button .md-button--primary}
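As a rough illustration of processing the model's output, the sketch below iterates over prediction results. The `boxes` attribute and its `xyxy`/`conf`/`cls` fields reflect the Results API of later `ultralytics` releases and are an assumption here, since the exact output structure is not shown in this diff:
```python
from ultralytics import YOLO

model = YOLO('yolov8n.pt')  # load a pretrained model, as shown above
results = model.predict(source='bus.jpg')  # run inference on an image

# NOTE: the attributes below follow the Results API of later ultralytics
# releases and are an assumption, not confirmed by this commit.
for result in results:
    for box in result.boxes:
        print(box.xyxy, box.conf, box.cls)  # coordinates, confidence, class index
```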

@ -4,7 +4,7 @@ repo_name: Ultralytics
theme:
name: "material"
logo: assets/logo.png
logo: https://github.com/ultralytics/assets/raw/main/logo/Ultralytics-logomark-white.png
icon:
repo: fontawesome/brands/github
admonition:
@ -82,6 +82,7 @@ nav:
- Python Interface: sdk.md
- Configuration: config.md
- Customization Guide: engine.md
- iOS and Android App: app.md
- Reference:
- Python Model interface: reference/model.md
- Engine:

@ -72,7 +72,7 @@ def split_key(key=''):
return api_key, model_id
def smart_request(*args, retry=3, timeout=30, thread=True, code=-1, method="post", **kwargs):
def smart_request(*args, retry=3, timeout=30, thread=True, code=-1, method="post", verbose=True, **kwargs):
"""
Makes an HTTP request using the 'requests' library, with exponential backoff retries up to a specified timeout.
@ -83,6 +83,7 @@ def smart_request(*args, retry=3, timeout=30, thread=True, code=-1, method="post
thread (bool, optional): Whether to execute the request in a separate daemon thread. Default is True.
code (int, optional): An identifier for the request, used for logging purposes. Default is -1.
method (str, optional): The HTTP method to use for the request. Choices are 'post' and 'get'. Default is 'post'.
verbose (bool, optional): A flag to determine whether to print out to console or not. Default is True.
**kwargs: Keyword arguments to be passed to the requests function specified in method.
Returns:
@ -111,7 +112,8 @@ def smart_request(*args, retry=3, timeout=30, thread=True, code=-1, method="post
h = r.headers # response headers
m = f"Rate limit reached ({h['X-RateLimit-Remaining']}/{h['X-RateLimit-Limit']}). " \
f"Please retry after {h['Retry-After']}s."
LOGGER.warning(f"{PREFIX}{m} {HELP_MSG} ({r.status_code} #{code})")
if verbose:
LOGGER.warning(f"{PREFIX}{m} {HELP_MSG} ({r.status_code} #{code})")
if r.status_code not in retry_codes:
return r
time.sleep(2 ** i) # exponential backoff
@ -139,4 +141,4 @@ def sync_analytics(cfg, all_keys=False, enabled=False):
cfg['uuid'] = SETTINGS['uuid'] # add the device UUID to the configuration data
# Send a request to the HUB API to sync the analytics data
smart_request(f'{HUB_API_ROOT}/v1/usage/anonymous', data=cfg, headers=None, code=3, retry=0)
smart_request(f'{HUB_API_ROOT}/v1/usage/anonymous', data=cfg, headers=None, code=3, retry=0, verbose=False)
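For context, the retry behaviour documented in `smart_request` follows the standard exponential-backoff pattern: wait 2^i seconds between attempts and stop on success, on a non-retryable status code, or at the timeout. A generic, standalone sketch of that pattern, not the library's actual implementation, with an illustrative set of retryable codes:
```python
import time

import requests


def request_with_backoff(url, retry=3, timeout=30, verbose=True, **kwargs):
    """Generic exponential-backoff sketch: wait 1s, 2s, 4s, ... between retries."""
    retry_codes = (408, 500)  # illustrative retryable status codes
    r = None
    t0 = time.time()
    for i in range(retry + 1):
        if time.time() - t0 > timeout:
            break
        r = requests.post(url, **kwargs)
        if r.status_code < 300 or r.status_code not in retry_codes:
            return r  # success, or a failure that retrying will not fix
        if verbose:
            print(f"Request failed ({r.status_code}), retrying in {2 ** i}s...")
        time.sleep(2 ** i)  # exponential backoff
    return r
```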
