👋 Hello @${{ github.actor }}, thank you for submitting a YOLOv8 🚀 PR! To allow your work to be integrated as seamlessly as possible, we advise you to:
- ✅ Verify your PR is **up-to-date** with the `ultralytics/ultralytics` `main` branch. If your PR is behind, you can update your code by clicking the 'Update branch' button or by running `git pull` and `git merge main` locally.
- ✅ Verify all YOLOv8 Continuous Integration (CI) **checks are passing**.
- ✅ Update YOLOv8 [Docs](https://docs.ultralytics.com) for any new or updated features.
- ✅ Reduce changes to the absolute **minimum** required for your bug fix or feature addition. _"It is not daily increase but daily decrease, hack away the unessential. The closer to the source, the less wastage there is."_ — Bruce Lee
See our [Contributing Guide](https://github.com/ultralytics/ultralytics/blob/main/CONTRIBUTING.md) for details and let us know if you have any questions!
## Install
Pip install the `ultralytics` package including all [requirements](https://github.com/ultralytics/ultralytics/blob/main/requirements.txt) in a [**Python>=3.7**](https://www.python.org/) environment with [**PyTorch>=1.7**](https://pytorch.org/get-started/locally/).
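As a quick sanity check after `pip install ultralytics`, the package exposes a `checks()` helper (also used in the official notebooks); a minimal sketch:

```python
import ultralytics

# Print environment details (Ultralytics version, Python, torch, hardware)
ultralytics.checks()
```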
Models download automatically from the latest Ultralytics [release](https://github.com/ultralytics/assets/releases). See
YOLOv8 [Python Docs](https://docs.ultralytics.com/usage/python) for more examples.
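For orientation, a minimal end-to-end sketch of the Python API, assuming network access for the pretrained weights (the image URL is illustrative):

```python
from ultralytics import YOLO

# Load a pretrained model; weights download from the latest release on first use
model = YOLO("yolov8n.pt")

# Run inference on an image
results = model("https://ultralytics.com/images/bus.jpg")

# Export the model to ONNX format
success = model.export(format="onnx")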
#### Model Architectures
⭐ **NEW** YOLOv5u anchor-free models are now available.
All supported model architectures can be found in the [Models](./ultralytics/models/) section.
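The YOLOv5u models load through the same `YOLO` interface as YOLOv8; a minimal sketch, assuming `yolov5nu.pt` is the released weight name for the nano variant:

```python
from ultralytics import YOLO

# Load an anchor-free YOLOv5u model ("yolov5nu.pt" is an assumed asset name)
model = YOLO("yolov5nu.pt")
results = model.predict("path/to/image.jpg")
```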
#### Known Issues / TODOs
We are still working on several parts of YOLOv8! We aim to have these completed soon to bring the YOLOv8 feature set up
to par with YOLOv5, including export and inference for all the same formats. We are also writing a YOLOv8 paper, which we
will submit to [arxiv.org](https://arxiv.org) once complete.
- [x] TensorFlow exports
- [x] DDP resume
- [ ] [arxiv.org](https://arxiv.org) paper
</details>
## <div align="center">Models</div>
All YOLOv8 pretrained models are available here. Detect, Segment and Pose models are pretrained on the [COCO](https://github.com/ultralytics/ultralytics/blob/main/ultralytics/datasets/coco.yaml) dataset, while Classify models are pretrained on the [ImageNet](https://github.com/ultralytics/ultralytics/blob/main/ultralytics/datasets/ImageNet.yaml) dataset.
[Models](https://github.com/ultralytics/ultralytics/tree/main/ultralytics/models) download automatically from the latest
Ultralytics [release](https://github.com/ultralytics/assets/releases) on first use.
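In practice the download is transparent: the first construction of a model fetches the weights, and later runs reuse the local copy. A minimal sketch:

```python
from ultralytics import YOLO

# First use: downloads yolov8n.pt from the latest Ultralytics release
# Subsequent uses: loads the cached local copy
model = YOLO("yolov8n.pt")
```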
<br>Reproduce by `yolo val detect data=coco.yaml device=0`
- **Speed** averaged over COCO val images using an [Amazon EC2 P4d](https://aws.amazon.com/ec2/instance-types/p4/) instance.
<br>Reproduce by `yolo val detect data=coco128.yaml batch=1 device=0|cpu`
</details>
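The same validation can be driven from Python via `model.val()`; a minimal sketch equivalent to the CLI command above (the same pattern applies to the Segment and Classify tables that follow):

```python
from ultralytics import YOLO

# Validate a pretrained detection model on COCO128
model = YOLO("yolov8n.pt")
metrics = model.val(data="coco128.yaml", batch=1, device="cpu")  # or device=0 for GPU
print(metrics.box.map)  # mAP50-95
```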
<br>Reproduce by `yolo val segment data=coco.yaml device=0`
- **Speed** averaged over COCO val images using an [Amazon EC2 P4d](https://aws.amazon.com/ec2/instance-types/p4/) instance.
<br>Reproduce by `yolo val segment data=coco128-seg.yaml batch=1 device=0|cpu`
</details>
<br>Reproduce by `yolo val classify data=path/to/ImageNet device=0`
- **Speed** averaged over ImageNet val images using an [Amazon EC2 P4d](https://aws.amazon.com/ec2/instance-types/p4/) instance.
<br>Reproduce by `yolo val classify data=path/to/ImageNet batch=1 device=0|cpu`
!!! tip "Tip"
    YOLOv8 Classify models use the `-cls` suffix, i.e. `yolov8n-cls.pt` and are pretrained on [ImageNet](https://github.com/ultralytics/ultralytics/blob/main/ultralytics/datasets/ImageNet.yaml).
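A minimal Python sketch using such a model (the image path is a placeholder):

```python
from ultralytics import YOLO

# Load an ImageNet-pretrained classification model
model = YOLO("yolov8n-cls.pt")

# Run inference; class probabilities live on the `probs` attribute
results = model("path/to/image.jpg")
print(results[0].probs)
```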
!!! tip "Tip"
    YOLOv8 Detect models are the default YOLOv8 models, i.e. `yolov8n.pt` and are pretrained on [COCO](https://github.com/ultralytics/ultralytics/blob/main/ultralytics/datasets/coco.yaml).
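A minimal Python sketch using such a model (the image path is a placeholder):

```python
from ultralytics import YOLO

# Load a COCO-pretrained detection model (no suffix = Detect task)
model = YOLO("yolov8n.pt")

# Run inference; bounding boxes live on the `boxes` attribute
results = model("path/to/image.jpg")
print(results[0].boxes)
```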
!!! tip "Tip"
    YOLOv8 Segment models use the `-seg` suffix, i.e. `yolov8n-seg.pt` and are pretrained on [COCO](https://github.com/ultralytics/ultralytics/blob/main/ultralytics/datasets/coco.yaml).
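A minimal Python sketch using such a model (the image path is a placeholder):

```python
from ultralytics import YOLO

# Load a COCO-pretrained segmentation model
model = YOLO("yolov8n-seg.pt")

# Run inference; instance masks live on the `masks` attribute
results = model("path/to/image.jpg")
print(results[0].masks)
```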
"Transferred 355/355 items from pretrained weights\n",
"Transferred 355/355 items from pretrained weights\n",
"2023-03-26 14:57:47.224672: I tensorflow/core/platform/cpu_feature_guard.cc:193] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN) to use the following CPU instructions in performance-critical operations: AVX2 FMA\n",
"To enable them in other operations, rebuild TensorFlow with the appropriate compiler flags.\n",
"2023-03-26 14:57:48.209047: W tensorflow/compiler/xla/stream_executor/platform/default/dso_loader.cc:64] Could not load dynamic library 'libnvinfer.so.7'; dlerror: libnvinfer.so.7: cannot open shared object file: No such file or directory; LD_LIBRARY_PATH: /usr/local/lib/python3.9/dist-packages/cv2/../../lib64:/usr/local/lib/python3.9/dist-packages/cv2/../../lib64:/usr/lib64-nvidia\n",
"2023-03-26 14:57:48.209179: W tensorflow/compiler/xla/stream_executor/platform/default/dso_loader.cc:64] Could not load dynamic library 'libnvinfer_plugin.so.7'; dlerror: libnvinfer_plugin.so.7: cannot open shared object file: No such file or directory; LD_LIBRARY_PATH: /usr/local/lib/python3.9/dist-packages/cv2/../../lib64:/usr/local/lib/python3.9/dist-packages/cv2/../../lib64:/usr/lib64-nvidia\n",
"2023-03-26 14:57:48.209199: W tensorflow/compiler/tf2tensorrt/utils/py_utils.cc:38] TF-TRT Warning: Cannot dlopen some TensorRT libraries. If you would like to use Nvidia GPU with TensorRT, please make sure the missing libraries mentioned above are installed properly.\n",
"\u001b[34m\u001b[1mTensorBoard: \u001b[0mStart with 'tensorboard --logdir runs/detect/train', view at http://localhost:6006/\n",
"\u001b[34m\u001b[1mAMP: \u001b[0mrunning Automatic Mixed Precision (AMP) checks with YOLOv8n...\n",