ultralytics 8.0.106 (#2736)

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: vyskocj <whiskey1939@seznam.cz>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: triple Mu <gpu@163.com>
Co-authored-by: Ayush Chaurasia <ayush.chaurarsia@gmail.com>
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
Co-authored-by: Laughing <61612323+Laughing-q@users.noreply.github.com>
Glenn Jocher
2023-05-22 13:31:19 +02:00
committed by GitHub
parent 23fc50641c
commit 4db686a315
41 changed files with 1159 additions and 151 deletions

View File

@ -76,21 +76,7 @@ see the [Configuration](../usage/cfg.md) page.
### Dataset format
The YOLO classification dataset format is the same as the torchvision format: each class of images has its own folder, and you simply pass the path of the dataset folder, i.e. `yolo classify train data="path/to/dataset"`
```
dataset/
├── train/
├──── class1/
├──── class2/
├──── class3/
├──── ...
├── val/
├──── class1/
├──── class2/
├──── class3/
├──── ...
```
The YOLO classification dataset format is described in detail in the [Dataset Guide](../datasets/classify/index.md).
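As a rough Python-API counterpart to the CLI call above, here is a minimal sketch (the dataset path is a placeholder, and `imgsz=224` is simply a typical classification image size):
```python
from ultralytics import YOLO

# Load a pretrained classification model and train it on a
# torchvision-style folder dataset (placeholder path).
model = YOLO("yolov8n-cls.pt")
model.train(data="path/to/dataset", epochs=100, imgsz=224)
```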
## Val
@ -190,4 +176,4 @@ i.e. `yolo predict model=yolov8n-cls.onnx`. Usage examples are shown for your mo
| [TF.js](https://www.tensorflow.org/js) | `tfjs` | `yolov8n-cls_web_model/` | ✅ | `imgsz` |
| [PaddlePaddle](https://github.com/PaddlePaddle) | `paddle` | `yolov8n-cls_paddle_model/` | ✅ | `imgsz` |
See full `export` details in the [Export](https://docs.ultralytics.com/modes/export/) page.
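For reference, the export call behind the formats in the table above can be sketched in Python as follows (ONNX is used here as an example format string):
```python
from ultralytics import YOLO

# Export a trained classification model; other format strings from the
# table above (e.g. "tfjs", "paddle") are passed the same way.
model = YOLO("yolov8n-cls.pt")
model.export(format="onnx")
```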

View File

@ -67,7 +67,7 @@ Train YOLOv8n on the COCO128 dataset for 100 epochs at image size 640. For a ful
### Dataset format
The YOLO detection dataset format is described in detail in the [Dataset Guide](../yolov5/tutorials/train_custom_data.md). To convert your existing dataset from other formats (such as COCO or VOC) to YOLO format, please use the [json2yolo tool](https://github.com/ultralytics/JSON2YOLO) by Ultralytics.
The YOLO detection dataset format is described in detail in the [Dataset Guide](../datasets/detect/index.md). To convert your existing dataset from other formats (such as COCO) to YOLO format, please use the [json2yolo tool](https://github.com/ultralytics/JSON2YOLO) by Ultralytics.
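A minimal Python sketch of the training run referenced above (YOLOv8n on the COCO128 dataset for 100 epochs at image size 640), mirroring the CLI usage:
```python
from ultralytics import YOLO

# Train a pretrained YOLOv8n detection model on the COCO128 dataset.
model = YOLO("yolov8n.pt")
model.train(data="coco128.yaml", epochs=100, imgsz=640)
```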
## Val
@ -167,4 +167,4 @@ Available YOLOv8 export formats are in the table below. You can predict or valid
| [TF.js](https://www.tensorflow.org/js) | `tfjs` | `yolov8n_web_model/` | ✅ | `imgsz` |
| [PaddlePaddle](https://github.com/PaddlePaddle) | `paddle` | `yolov8n_paddle_model/` | ✅ | `imgsz` |
See full `export` details in the [Export](https://docs.ultralytics.com/modes/export/) page.

View File

@ -8,7 +8,7 @@ to as keypoints. The keypoints can represent various parts of the object such as
features. The locations of the keypoints are usually represented as a set of 2D `[x, y]` or 3D `[x, y, visible]`
coordinates.
<img width="1024" src="https://user-images.githubusercontent.com/26833433/212094133-6bb8c21c-3d47-41df-a512-81c5931054ae.png">
<img width="1024" src="https://user-images.githubusercontent.com/26833433/239691398-d62692dc-713e-4207-9908-2f6710050e5c.jpg">
The output of a pose estimation model is a set of points that represent the keypoints on an object in the image, usually
along with the confidence scores for each point. Pose estimation is a good choice when you need to identify specific
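To make that output concrete, a hedged Python sketch of inspecting pose predictions (the image path is a placeholder, and the exact shape of the `keypoints` attribute can vary between releases):
```python
from ultralytics import YOLO

# Run a pretrained pose model on an image and inspect the predicted keypoints.
model = YOLO("yolov8n-pose.pt")
results = model("path/to/image.jpg")  # placeholder image path

# Each result holds the detected keypoints, i.e. per-point coordinates
# (and, depending on the release, per-point confidence scores).
print(results[0].keypoints)
```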
@ -76,6 +76,10 @@ Train a YOLOv8-pose model on the COCO128-pose dataset.
yolo pose train data=coco8-pose.yaml model=yolov8n-pose.yaml pretrained=yolov8n-pose.pt epochs=100 imgsz=640
```
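A rough Python equivalent of the CLI command above, building the model from YAML, loading the pretrained weights, then training (a sketch, not the only way to do it):
```python
from ultralytics import YOLO

# Build YOLOv8n-pose from YAML, transfer pretrained weights, and train.
model = YOLO("yolov8n-pose.yaml").load("yolov8n-pose.pt")
model.train(data="coco8-pose.yaml", epochs=100, imgsz=640)
```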
### Dataset format
The YOLO pose dataset format is described in detail in the [Dataset Guide](../datasets/pose/index.md). To convert your existing dataset from other formats (such as COCO) to YOLO format, please use the [json2yolo tool](https://github.com/ultralytics/JSON2YOLO) by Ultralytics.
## Val
Validate trained YOLOv8n-pose model accuracy on the COCO128-pose dataset. No arguments need to be passed as the `model`
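In the Python API this typically reduces to a single call, since the loaded checkpoint remembers its training data and settings (a minimal sketch):
```python
from ultralytics import YOLO

# Validate a trained pose model; no dataset argument is needed because the
# checkpoint retains its training data and settings.
model = YOLO("yolov8n-pose.pt")
metrics = model.val()
```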
@ -177,4 +181,4 @@ i.e. `yolo predict model=yolov8n-pose.onnx`. Usage examples are shown for your m
| [TF.js](https://www.tensorflow.org/js) | `tfjs` | `yolov8n-pose_web_model/` | ✅ | `imgsz` |
| [PaddlePaddle](https://github.com/PaddlePaddle) | `paddle` | `yolov8n-pose_paddle_model/` | ✅ | `imgsz` |
See full `export` details in the [Export](https://docs.ultralytics.com/modes/export/) page.

View File

@ -75,11 +75,7 @@ arguments see the [Configuration](../usage/cfg.md) page.
### Dataset format
The YOLO segmentation dataset label format extends the detection format with segment points.
`cls x1 y1 x2 y2 p1 p2 ... pn`
To convert your existing dataset from other formats (such as COCO or VOC) to YOLO format, please use the [json2yolo tool](https://github.com/ultralytics/JSON2YOLO) by Ultralytics.
The YOLO segmentation dataset format is described in detail in the [Dataset Guide](../datasets/segment/index.md). To convert your existing dataset from other formats (such as COCO) to YOLO format, please use the [json2yolo tool](https://github.com/ultralytics/JSON2YOLO) by Ultralytics.
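Once a model is trained on such a dataset, its predictions can be inspected with a short Python sketch like the following (the image path is a placeholder, and the exact mask accessors may differ between releases):
```python
from ultralytics import YOLO

# Run a pretrained segmentation model and inspect its predictions.
model = YOLO("yolov8n-seg.pt")
results = model("path/to/image.jpg")  # placeholder image path

# Each result carries instance masks alongside the detection boxes.
print(results[0].masks)  # segmentation masks for the first image
print(results[0].boxes)  # matching bounding boxes and classes
```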
## Val
@ -185,4 +181,4 @@ i.e. `yolo predict model=yolov8n-seg.onnx`. Usage examples are shown for your mo
| [TF.js](https://www.tensorflow.org/js) | `tfjs` | `yolov8n-seg_web_model/` | ✅ | `imgsz` |
| [PaddlePaddle](https://github.com/PaddlePaddle) | `paddle` | `yolov8n-seg_paddle_model/` | ✅ | `imgsz` |
See full `export` details in the [Export](https://docs.ultralytics.com/modes/export/) page.