Release 8.0.5 PR (#279)

Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
Co-authored-by: Glenn Jocher <glenn.jocher@ultralytics.com>
Co-authored-by: Izam Mohammed <106471909+izam-mohammed@users.noreply.github.com>
Co-authored-by: Yue WANG 王跃 <92371174+yuewangg@users.noreply.github.com>
Co-authored-by: Thibaut Lucas <thibautlucas13@gmail.com>
Authored by Laughing on 2023-01-13 00:09:26 +08:00, committed by GitHub.
parent 9552827157
commit c42e44a021
28 changed files with 940 additions and 311 deletions


@@ -0,0 +1,133 @@
Image classification is the simplest of the three tasks and involves classifying an entire image into one of a set of
predefined classes.
<img width="1024" src="https://user-images.githubusercontent.com/26833433/212094133-6bb8c21c-3d47-41df-a512-81c5931054ae.png">
The output of an image classifier is a single class label and a confidence score. Image
classification is useful when you need to know only what class an image belongs to and don't need to know where objects
of that class are located or what their exact shape is.
!!! tip "Tip"
YOLOv8 _classification_ models use the `-cls` suffix, e.g. `yolov8n-cls.pt`, and are pretrained on ImageNet.
[Models](https://github.com/ultralytics/ultralytics/tree/main/ultralytics/models/v8/cls){.md-button .md-button--primary}
## Train
Train YOLOv8n-cls on the MNIST160 dataset for 100 epochs at image size 64. For a full list of available arguments
see the [Configuration](../config.md) page.
!!! example ""
=== "Python"
```python
from ultralytics import YOLO
# Load a model
model = YOLO("yolov8n-cls.yaml") # build a new model from scratch
model = YOLO("yolov8n-cls.pt") # load a pretrained model (recommended for training)
# Train the model
results = model.train(data="mnist160", epochs=100, imgsz=64)
```
=== "CLI"
```bash
yolo task=classify mode=train data=mnist160 model=yolov8n-cls.pt epochs=100 imgsz=64
```
## Val
Validate trained YOLOv8n-cls model accuracy on the MNIST160 dataset. No arguments need to be passed as the `model` retains
its training `data` and arguments as model attributes.
!!! example ""
=== "Python"
```python
from ultralytics import YOLO
# Load a model
model = YOLO("yolov8n-cls.pt") # load an official model
model = YOLO("path/to/best.pt") # load a custom model
# Validate the model
results = model.val() # no arguments needed, dataset and settings remembered
```
=== "CLI"
```bash
yolo task=classify mode=val model=yolov8n-cls.pt # val official model
yolo task=classify mode=val model=path/to/best.pt # val custom model
```
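The returned metrics object reports classification accuracy. A minimal sketch of reading it is below, assuming the current `ultralytics` metrics attributes (`top1`, `top5`); older releases may differ:
```python
from ultralytics import YOLO

# Load a model and validate it on the remembered dataset
model = YOLO("yolov8n-cls.pt")
metrics = model.val()

# Attribute names below assume the current metrics API
print(metrics.top1)  # top-1 accuracy
print(metrics.top5)  # top-5 accuracy
```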
## Predict
Use a trained YOLOv8n-cls model to run predictions on images.
!!! example ""
=== "Python"
```python
from ultralytics import YOLO
# Load a model
model = YOLO("yolov8n-cls.pt") # load an official model
model = YOLO("path/to/best.pt") # load a custom model
# Predict with the model
results = model("https://ultralytics.com/images/bus.jpg") # predict on an image
```
=== "CLI"
```bash
yolo task=classify mode=predict model=yolov8n-cls.pt source="https://ultralytics.com/images/bus.jpg" # predict with official model
yolo task=classify mode=predict model=path/to/best.pt source="https://ultralytics.com/images/bus.jpg" # predict with custom model
```
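Each returned result carries the per-class probabilities. The sketch below reads the top class, assuming the current `Results`/`probs` attributes (`top1`, `top1conf`, `names`); these names may differ in older releases:
```python
from ultralytics import YOLO

model = YOLO("yolov8n-cls.pt")
results = model("https://ultralytics.com/images/bus.jpg")

# Attribute names below assume the current Results API
r = results[0]
top1 = int(r.probs.top1)  # index of the highest-probability class
print(r.names[top1], float(r.probs.top1conf))  # class name and its confidence
```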
## Export
Export a YOLOv8n-cls model to a different format like ONNX, CoreML, etc.
!!! example ""
=== "Python"
```python
from ultralytics import YOLO
# Load a model
model = YOLO("yolov8n-cls.pt") # load an official model
model = YOLO("path/to/best.pt") # load a custom trained
# Export the model
model.export(format="onnx")
```
=== "CLI"
```bash
yolo mode=export model=yolov8n-cls.pt format=onnx # export official model
yolo mode=export model=path/to/best.pt format=onnx # export custom trained model
```
Available YOLOv8-cls export formats include:
| Format | `format=` | Model |
|----------------------------------------------------------------------------|---------------|-------------------------------|
| [PyTorch](https://pytorch.org/) | - | `yolov8n-cls.pt` |
| [TorchScript](https://pytorch.org/docs/stable/jit.html) | `torchscript` | `yolov8n-cls.torchscript` |
| [ONNX](https://onnx.ai/) | `onnx` | `yolov8n-cls.onnx` |
| [OpenVINO](https://docs.openvino.ai/latest/index.html) | `openvino` | `yolov8n-cls_openvino_model/` |
| [TensorRT](https://developer.nvidia.com/tensorrt) | `engine` | `yolov8n-cls.engine` |
| [CoreML](https://github.com/apple/coremltools) | `coreml` | `yolov8n-cls.mlmodel` |
| [TensorFlow SavedModel](https://www.tensorflow.org/guide/saved_model) | `saved_model` | `yolov8n-cls_saved_model/` |
| [TensorFlow GraphDef](https://www.tensorflow.org/api_docs/python/tf/Graph) | `pb` | `yolov8n-cls.pb` |
| [TensorFlow Lite](https://www.tensorflow.org/lite) | `tflite` | `yolov8n-cls.tflite` |
| [TensorFlow Edge TPU](https://coral.ai/docs/edgetpu/models-intro/) | `edgetpu` | `yolov8n-cls_edgetpu.tflite` |
| [TensorFlow.js](https://www.tensorflow.org/js) | `tfjs` | `yolov8n-cls_web_model/` |
| [PaddlePaddle](https://github.com/PaddlePaddle) | `paddle` | `yolov8n-cls_paddle_model/` |
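An exported ONNX model can be run outside the `ultralytics` package. The sketch below uses `onnxruntime` with a random input; it assumes the default 224x224 export size for `-cls` models and an `images` input name, so check the actual input shape and name of your export:
```python
import numpy as np
import onnxruntime as ort

session = ort.InferenceSession("yolov8n-cls.onnx")
inp = session.get_inputs()[0]
print(inp.name, inp.shape)  # e.g. images, [1, 3, 224, 224]

# Random NCHW float32 tensor in [0, 1] as a stand-in for a preprocessed image
x = np.random.rand(1, 3, 224, 224).astype(np.float32)
(scores,) = session.run(None, {inp.name: x})
print(scores.shape)          # e.g. (1, 1000) ImageNet class scores
print(int(scores.argmax()))  # index of the top class
```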


@@ -0,0 +1,132 @@
Object detection is a task that involves identifying the location and class of objects in an image or video stream.
<img width="1024" src="https://user-images.githubusercontent.com/26833433/212094133-6bb8c21c-3d47-41df-a512-81c5931054ae.png">
The output of an object detector is a set of bounding boxes that enclose the objects in the image, along with class labels
and confidence scores for each box. Object detection is a good choice when you need to identify objects of interest in a
scene, but don't need to know the exact shape of each object.
!!! tip "Tip"
YOLOv8 _detection_ models have no suffix and are the default YOLOv8 models, e.g. `yolov8n.pt`. They are pretrained on COCO.
[Models](https://github.com/ultralytics/ultralytics/tree/main/ultralytics/models/v8){ .md-button .md-button--primary}
## Train
Train YOLOv8n on the COCO128 dataset for 100 epochs at image size 640. For a full list of available arguments see
the [Configuration](../config.md) page.
!!! example ""
=== "Python"
```python
from ultralytics import YOLO
# Load a model
model = YOLO("yolov8n.yaml") # build a new model from scratch
model = YOLO("yolov8n.pt") # load a pretrained model (recommended for training)
# Train the model
results = model.train(data="coco128.yaml", epochs=100, imgsz=640)
```
=== "CLI"
```bash
yolo task=detect mode=train data=coco128.yaml model=yolov8n.pt epochs=100 imgsz=640
```
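Any other argument from the [Configuration](../config.md) page can be passed in the same way. The sketch below adds `batch` and `device` purely as an illustration of the pattern; the values shown are not recommendations:
```python
from ultralytics import YOLO

model = YOLO("yolov8n.pt")
# batch size and device are ordinary configuration arguments, just like epochs and imgsz
results = model.train(data="coco128.yaml", epochs=100, imgsz=640, batch=16, device=0)
```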
## Val
Validate trained YOLOv8n model accuracy on the COCO128 dataset. No arguments need to be passed as the `model` retains its
training `data` and arguments as model attributes.
!!! example ""
=== "Python"
```python
from ultralytics import YOLO
# Load a model
model = YOLO("yolov8n.pt") # load an official model
model = YOLO("path/to/best.pt") # load a custom model
# Validate the model
results = model.val() # no arguments needed, dataset and settings remembered
```
=== "CLI"
```bash
yolo task=detect mode=val model=yolov8n.pt # val official model
yolo task=detect mode=val model=path/to/best.pt # val custom model
```
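The returned metrics object exposes the detection mAP values. A minimal sketch is below, assuming the current `metrics.box` attributes (`map`, `map50`); older releases may use different names:
```python
from ultralytics import YOLO

model = YOLO("yolov8n.pt")
metrics = model.val()  # validate on the remembered dataset and settings

# Attribute names below assume the current metrics API
print(metrics.box.map)    # mAP50-95
print(metrics.box.map50)  # mAP50
```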
## Predict
Use a trained YOLOv8n model to run predictions on images.
!!! example ""
=== "Python"
```python
from ultralytics import YOLO
# Load a model
model = YOLO("yolov8n.pt") # load an official model
model = YOLO("path/to/best.pt") # load a custom model
# Predict with the model
results = model("https://ultralytics.com/images/bus.jpg") # predict on an image
```
=== "CLI"
```bash
yolo task=detect mode=predict model=yolov8n.pt source="https://ultralytics.com/images/bus.jpg" # predict with official model
yolo task=detect mode=predict model=path/to/best.pt source="https://ultralytics.com/images/bus.jpg" # predict with custom model
```
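Each returned result holds the predicted boxes. The sketch below iterates them, assuming the current `Results`/`boxes` attributes (`cls`, `conf`, `xyxy`) with pixel xyxy coordinates; verify against your installed version:
```python
from ultralytics import YOLO

model = YOLO("yolov8n.pt")
results = model("https://ultralytics.com/images/bus.jpg")

# Attribute names below assume the current Results API
r = results[0]
for box in r.boxes:
    cls_id = int(box.cls)  # class index
    conf = float(box.conf)  # confidence score
    x1, y1, x2, y2 = box.xyxy[0].tolist()  # corner coordinates in pixels
    print(r.names[cls_id], conf, (x1, y1, x2, y2))
```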
## Export
Export a YOLOv8n model to a different format like ONNX, CoreML, etc.
!!! example ""
=== "Python"
```python
from ultralytics import YOLO
# Load a model
model = YOLO("yolov8n.pt") # load an official model
model = YOLO("path/to/best.pt") # load a custom trained
# Export the model
model.export(format="onnx")
```
=== "CLI"
```bash
yolo mode=export model=yolov8n.pt format=onnx # export official model
yolo mode=export model=path/to/best.pt format=onnx # export custom trained model
```
Available YOLOv8 export formats include:
| Format | `format=` | Model |
|----------------------------------------------------------------------------|--------------------|---------------------------|
| [PyTorch](https://pytorch.org/) | - | `yolov8n.pt` |
| [TorchScript](https://pytorch.org/docs/stable/jit.html) | `torchscript` | `yolov8n.torchscript` |
| [ONNX](https://onnx.ai/) | `onnx` | `yolov8n.onnx` |
| [OpenVINO](https://docs.openvino.ai/latest/index.html) | `openvino` | `yolov8n_openvino_model/` |
| [TensorRT](https://developer.nvidia.com/tensorrt) | `engine` | `yolov8n.engine` |
| [CoreML](https://github.com/apple/coremltools) | `coreml` | `yolov8n.mlmodel` |
| [TensorFlow SavedModel](https://www.tensorflow.org/guide/saved_model) | `saved_model` | `yolov8n_saved_model/` |
| [TensorFlow GraphDef](https://www.tensorflow.org/api_docs/python/tf/Graph) | `pb` | `yolov8n.pb` |
| [TensorFlow Lite](https://www.tensorflow.org/lite) | `tflite` | `yolov8n.tflite` |
| [TensorFlow Edge TPU](https://coral.ai/docs/edgetpu/models-intro/) | `edgetpu` | `yolov8n_edgetpu.tflite` |
| [TensorFlow.js](https://www.tensorflow.org/js) | `tfjs` | `yolov8n_web_model/` |
| [PaddlePaddle](https://github.com/PaddlePaddle) | `paddle` | `yolov8n_paddle_model/` |


@@ -0,0 +1,135 @@
Instance segmentation goes a step further than object detection and involves identifying individual objects in an image
and segmenting them from the rest of the image.
<img width="1024" src="https://user-images.githubusercontent.com/26833433/212094133-6bb8c21c-3d47-41df-a512-81c5931054ae.png">
The output of an instance segmentation model is a set of masks or
contours that outline each object in the image, along with class labels and confidence scores for each object. Instance
segmentation is useful when you need to know not only where objects are in an image, but also what their exact shape is.
!!! tip "Tip"
YOLOv8 _segmentation_ models use the `-seg` suffix, e.g. `yolov8n-seg.pt`, and are pretrained on COCO.
[Models](https://github.com/ultralytics/ultralytics/tree/main/ultralytics/models/v8/seg){.md-button .md-button--primary}
## Train
Train YOLOv8n-seg on the COCO128-seg dataset for 100 epochs at image size 640. For a full list of available
arguments see the [Configuration](../config.md) page.
!!! example ""
=== "Python"
```python
from ultralytics import YOLO
# Load a model
model = YOLO("yolov8n-seg.yaml") # build a new model from scratch
model = YOLO("yolov8n-seg.pt") # load a pretrained model (recommended for training)
# Train the model
results = model.train(data="coco128-seg.yaml", epochs=100, imgsz=640)
```
=== "CLI"
```bash
yolo task=segment mode=train data=coco128-seg.yaml model=yolov8n-seg.pt epochs=100 imgsz=640
```
## Val
Validate trained YOLOv8n-seg model accuracy on the COCO128-seg dataset. No arguments need to be passed as the `model`
retains its training `data` and arguments as model attributes.
!!! example ""
=== "Python"
```python
from ultralytics import YOLO
# Load a model
model = YOLO("yolov8n-seg.pt") # load an official model
model = YOLO("path/to/best.pt") # load a custom model
# Validate the model
results = model.val() # no arguments needed, dataset and settings remembered
```
=== "CLI"
```bash
yolo task=segment mode=val model=yolov8n-seg.pt # val official model
yolo task=segment mode=val model=path/to/best.pt # val custom model
```
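For segmentation models, validation reports both box and mask mAP. A minimal sketch of reading them is below, assuming the current `metrics.box`/`metrics.seg` attributes:
```python
from ultralytics import YOLO

model = YOLO("yolov8n-seg.pt")
metrics = model.val()

# Attribute names below assume the current metrics API
print(metrics.box.map)  # box mAP50-95
print(metrics.seg.map)  # mask mAP50-95
```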
## Predict
Use a trained YOLOv8n-seg model to run predictions on images.
!!! example ""
=== "Python"
```python
from ultralytics import YOLO
# Load a model
model = YOLO("yolov8n-seg.pt") # load an official model
model = YOLO("path/to/best.pt") # load a custom model
# Predict with the model
results = model("https://ultralytics.com/images/bus.jpg") # predict on an image
```
=== "CLI"
```bash
yolo task=segment mode=predict model=yolov8n-seg.pt source="https://ultralytics.com/images/bus.jpg" # predict with official model
yolo task=segment mode=predict model=path/to/best.pt source="https://ultralytics.com/images/bus.jpg" # predict with custom model
```
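Each returned result carries the predicted masks alongside the boxes. The sketch below accesses them, assuming the current `Results`/`masks` attributes (`data` as a tensor of per-object binary masks, `xy` as polygon points):
```python
from ultralytics import YOLO

model = YOLO("yolov8n-seg.pt")
results = model("https://ultralytics.com/images/bus.jpg")

# Attribute names below assume the current Results API
r = results[0]
if r.masks is not None:
    print(r.masks.data.shape)  # (num_objects, H, W) binary masks
    print(r.masks.xy[0][:3])   # first few polygon points of the first object
```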
## Export
Export a YOLOv8n-seg model to a different format like ONNX, CoreML, etc.
!!! example ""
=== "Python"
```python
from ultralytics import YOLO
# Load a model
model = YOLO("yolov8n-seg.pt") # load an official model
model = YOLO("path/to/best.pt") # load a custom trained
# Export the model
model.export(format="onnx")
```
=== "CLI"
```bash
yolo mode=export model=yolov8n-seg.pt format=onnx # export official model
yolo mode=export model=path/to/best.pt format=onnx # export custom trained model
```
Available YOLOv8-seg export formats include:
| Format | `format=` | Model |
|----------------------------------------------------------------------------|---------------|-------------------------------|
| [PyTorch](https://pytorch.org/) | - | `yolov8n-seg.pt` |
| [TorchScript](https://pytorch.org/docs/stable/jit.html) | `torchscript` | `yolov8n-seg.torchscript` |
| [ONNX](https://onnx.ai/) | `onnx` | `yolov8n-seg.onnx` |
| [OpenVINO](https://docs.openvino.ai/latest/index.html) | `openvino` | `yolov8n-seg_openvino_model/` |
| [TensorRT](https://developer.nvidia.com/tensorrt) | `engine` | `yolov8n-seg.engine` |
| [CoreML](https://github.com/apple/coremltools) | `coreml` | `yolov8n-seg.mlmodel` |
| [TensorFlow SavedModel](https://www.tensorflow.org/guide/saved_model) | `saved_model` | `yolov8n-seg_saved_model/` |
| [TensorFlow GraphDef](https://www.tensorflow.org/api_docs/python/tf/Graph) | `pb` | `yolov8n-seg.pb` |
| [TensorFlow Lite](https://www.tensorflow.org/lite) | `tflite` | `yolov8n-seg.tflite` |
| [TensorFlow Edge TPU](https://coral.ai/docs/edgetpu/models-intro/) | `edgetpu` | `yolov8n-seg_edgetpu.tflite` |
| [TensorFlow.js](https://www.tensorflow.org/js) | `tfjs` | `yolov8n-seg_web_model/` |
| [PaddlePaddle](https://github.com/PaddlePaddle) | `paddle` | `yolov8n-seg_paddle_model/` |