`ultralytics 8.0.75` fixes and updates (#1967)

Co-authored-by: Laughing-q <1185102784@qq.com>
Co-authored-by: Jonathan Rayner <jonathan.j.rayner@gmail.com>
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>

@@ -95,7 +95,7 @@ success = model.export(format="onnx") # 将模型导出为 ONNX 格式
</details>
-## <div align="center">Models</div>
+## <div align="center">模型</div>
所有的 YOLOv8 预训练模型都可以在此找到。检测、分割和姿态模型在 [COCO](https://github.com/ultralytics/ultralytics/blob/main/ultralytics/datasets/coco.yaml) 数据集上进行预训练,而分类模型在 [ImageNet](https://github.com/ultralytics/ultralytics/blob/main/ultralytics/datasets/ImageNet.yaml) 数据集上进行预训练。
@@ -105,18 +105,18 @@ success = model.export(format="onnx") # 将模型导出为 ONNX 格式
查看 [检测文档](https://docs.ultralytics.com/tasks/detect/) 以获取使用这些模型的示例。
-| Model | size<br><sup>(pixels) | mAP<sup>val<br>50-95 | Speed<br><sup>CPU ONNX<br>(ms) | Speed<br><sup>A100 TensorRT<br>(ms) | params<br><sup>(M) | FLOPs<br><sup>(B) |
+| 模型 | 尺寸<br><sup>(像素) | mAP<sup>val<br>50-95 | 速度<br><sup>CPU ONNX<br>(ms) | 速度<br><sup>A100 TensorRT<br>(ms) | 参数<br><sup>(M) | FLOPs<br><sup>(B) |
-| ------------------------------------------------------------------------------------ | --------------------- | -------------------- | ------------------------------ | ----------------------------------- | ------------------ | ----------------- |
+| ------------------------------------------------------------------------------------ | --------------- | -------------------- | --------------------------- | -------------------------------- | -------------- | ----------------- |
| [YOLOv8n](https://github.com/ultralytics/assets/releases/download/v0.0.0/yolov8n.pt) | 640 | 37.3 | 80.4 | 0.99 | 3.2 | 8.7 |
| [YOLOv8s](https://github.com/ultralytics/assets/releases/download/v0.0.0/yolov8s.pt) | 640 | 44.9 | 128.4 | 1.20 | 11.2 | 28.6 |
| [YOLOv8m](https://github.com/ultralytics/assets/releases/download/v0.0.0/yolov8m.pt) | 640 | 50.2 | 234.7 | 1.83 | 25.9 | 78.9 |
| [YOLOv8l](https://github.com/ultralytics/assets/releases/download/v0.0.0/yolov8l.pt) | 640 | 52.9 | 375.2 | 2.39 | 43.7 | 165.2 |
| [YOLOv8x](https://github.com/ultralytics/assets/releases/download/v0.0.0/yolov8x.pt) | 640 | 53.9 | 479.1 | 3.53 | 68.2 | 257.8 |
-- **mAP<sup>val</sup>** values are for single-model single-scale on [COCO val2017](http://cocodataset.org) dataset.
+- **mAP<sup>val</sup>** 值是基于单模型单尺度在 [COCO val2017](http://cocodataset.org) 数据集上的结果。
-<br>Reproduce by `yolo val detect data=coco.yaml device=0`
+<br>通过 `yolo val detect data=coco.yaml device=0` 复现
-- **Speed** averaged over COCO val images using an [Amazon EC2 P4d](https://aws.amazon.com/ec2/instance-types/p4/) instance.
+- **速度** 是使用 [Amazon EC2 P4d](https://aws.amazon.com/ec2/instance-types/p4/) 实例对 COCO val 图像进行平均计算的。
-<br>Reproduce by `yolo val detect data=coco128.yaml batch=1 device=0|cpu`
+<br>通过 `yolo val detect data=coco128.yaml batch=1 device=0|cpu` 复现
</details>
@@ -124,18 +124,18 @@ success = model.export(format="onnx") # 将模型导出为 ONNX 格式
查看 [分割文档](https://docs.ultralytics.com/tasks/segment/) 以获取使用这些模型的示例。
-| Model | size<br><sup>(pixels) | mAP<sup>box<br>50-95 | mAP<sup>mask<br>50-95 | Speed<br><sup>CPU ONNX<br>(ms) | Speed<br><sup>A100 TensorRT<br>(ms) | params<br><sup>(M) | FLOPs<br><sup>(B) |
+| 模型 | 尺寸<br><sup>(像素) | mAP<sup>box<br>50-95 | mAP<sup>mask<br>50-95 | 速度<br><sup>CPU ONNX<br>(ms) | 速度<br><sup>A100 TensorRT<br>(ms) | 参数<br><sup>(M) | FLOPs<br><sup>(B) |
-| -------------------------------------------------------------------------------------------- | --------------------- | -------------------- | --------------------- | ------------------------------ | ----------------------------------- | ------------------ | ----------------- |
+| -------------------------------------------------------------------------------------------- | --------------- | -------------------- | --------------------- | --------------------------- | -------------------------------- | -------------- | ----------------- |
| [YOLOv8n-seg](https://github.com/ultralytics/assets/releases/download/v0.0.0/yolov8n-seg.pt) | 640 | 36.7 | 30.5 | 96.1 | 1.21 | 3.4 | 12.6 |
| [YOLOv8s-seg](https://github.com/ultralytics/assets/releases/download/v0.0.0/yolov8s-seg.pt) | 640 | 44.6 | 36.8 | 155.7 | 1.47 | 11.8 | 42.6 |
| [YOLOv8m-seg](https://github.com/ultralytics/assets/releases/download/v0.0.0/yolov8m-seg.pt) | 640 | 49.9 | 40.8 | 317.0 | 2.18 | 27.3 | 110.2 |
| [YOLOv8l-seg](https://github.com/ultralytics/assets/releases/download/v0.0.0/yolov8l-seg.pt) | 640 | 52.3 | 42.6 | 572.4 | 2.79 | 46.0 | 220.5 |
| [YOLOv8x-seg](https://github.com/ultralytics/assets/releases/download/v0.0.0/yolov8x-seg.pt) | 640 | 53.4 | 43.4 | 712.1 | 4.02 | 71.8 | 344.1 |
-- **mAP<sup>val</sup>** values are for single-model single-scale on [COCO val2017](http://cocodataset.org) dataset.
+- **mAP<sup>val</sup>** 值是基于单模型单尺度在 [COCO val2017](http://cocodataset.org) 数据集上的结果。
-<br>Reproduce by `yolo val segment data=coco.yaml device=0`
+<br>通过 `yolo val segment data=coco.yaml device=0` 复现
-- **Speed** averaged over COCO val images using an [Amazon EC2 P4d](https://aws.amazon.com/ec2/instance-types/p4/) instance.
+- **速度** 是使用 [Amazon EC2 P4d](https://aws.amazon.com/ec2/instance-types/p4/) 实例对 COCO val 图像进行平均计算的。
-<br>Reproduce by `yolo val segment data=coco128-seg.yaml batch=1 device=0|cpu`
+<br>通过 `yolo val segment data=coco128-seg.yaml batch=1 device=0|cpu` 复现
</details>
@@ -143,18 +143,18 @@ success = model.export(format="onnx") # 将模型导出为 ONNX 格式
查看 [分类文档](https://docs.ultralytics.com/tasks/classify/) 以获取使用这些模型的示例。
-| Model | size<br><sup>(pixels) | acc<br><sup>top1 | acc<br><sup>top5 | Speed<br><sup>CPU ONNX<br>(ms) | Speed<br><sup>A100 TensorRT<br>(ms) | params<br><sup>(M) | FLOPs<br><sup>(B) at 640 |
+| 模型 | 尺寸<br><sup>(像素) | acc<br><sup>top1 | acc<br><sup>top5 | 速度<br><sup>CPU ONNX<br>(ms) | 速度<br><sup>A100 TensorRT<br>(ms) | 参数<br><sup>(M) | FLOPs<br><sup>(B) at 640 |
-| -------------------------------------------------------------------------------------------- | --------------------- | ---------------- | ---------------- | ------------------------------ | ----------------------------------- | ------------------ | ------------------------ |
+| -------------------------------------------------------------------------------------------- | --------------- | ---------------- | ---------------- | --------------------------- | -------------------------------- | -------------- | ------------------------ |
| [YOLOv8n-cls](https://github.com/ultralytics/assets/releases/download/v0.0.0/yolov8n-cls.pt) | 224 | 66.6 | 87.0 | 12.9 | 0.31 | 2.7 | 4.3 |
| [YOLOv8s-cls](https://github.com/ultralytics/assets/releases/download/v0.0.0/yolov8s-cls.pt) | 224 | 72.3 | 91.1 | 23.4 | 0.35 | 6.4 | 13.5 |
| [YOLOv8m-cls](https://github.com/ultralytics/assets/releases/download/v0.0.0/yolov8m-cls.pt) | 224 | 76.4 | 93.2 | 85.4 | 0.62 | 17.0 | 42.7 |
| [YOLOv8l-cls](https://github.com/ultralytics/assets/releases/download/v0.0.0/yolov8l-cls.pt) | 224 | 78.0 | 94.1 | 163.0 | 0.87 | 37.5 | 99.7 |
| [YOLOv8x-cls](https://github.com/ultralytics/assets/releases/download/v0.0.0/yolov8x-cls.pt) | 224 | 78.4 | 94.3 | 232.0 | 1.01 | 57.4 | 154.8 |
-- **acc** values are model accuracies on the [ImageNet](https://www.image-net.org/) dataset validation set.
+- **acc** 值是模型在 [ImageNet](https://www.image-net.org/) 数据集验证集上的准确率。
-<br>Reproduce by `yolo val classify data=path/to/ImageNet device=0`
+<br>通过 `yolo val classify data=path/to/ImageNet device=0` 复现
-- **Speed** averaged over ImageNet val images using an [Amazon EC2 P4d](https://aws.amazon.com/ec2/instance-types/p4/) instance.
+- **速度** 是使用 [Amazon EC2 P4d](https://aws.amazon.com/ec2/instance-types/p4/) 实例对 ImageNet val 图像进行平均计算的。
-<br>Reproduce by `yolo val classify data=path/to/ImageNet batch=1 device=0|cpu`
+<br>通过 `yolo val classify data=path/to/ImageNet batch=1 device=0|cpu` 复现
</details>
@@ -162,8 +162,8 @@ success = model.export(format="onnx") # 将模型导出为 ONNX 格式
查看 [姿态文档](https://docs.ultralytics.com/tasks/) 以获取使用这些模型的示例。
-| Model | size<br><sup>(pixels) | mAP<sup>pose<br>50-95 | mAP<sup>pose<br>50 | Speed<br><sup>CPU ONNX<br>(ms) | Speed<br><sup>A100 TensorRT<br>(ms) | params<br><sup>(M) | FLOPs<br><sup>(B) |
+| 模型 | 尺寸<br><sup>(像素) | mAP<sup>pose<br>50-95 | mAP<sup>pose<br>50 | 速度<br><sup>CPU ONNX<br>(ms) | 速度<br><sup>A100 TensorRT<br>(ms) | 参数<br><sup>(M) | FLOPs<br><sup>(B) |
-| ---------------------------------------------------------------------------------------------------- | --------------------- | --------------------- | ------------------ | ------------------------------ | ----------------------------------- | ------------------ | ----------------- |
+| ---------------------------------------------------------------------------------------------------- | --------------- | --------------------- | ------------------ | --------------------------- | -------------------------------- | -------------- | ----------------- |
| [YOLOv8n-pose](https://github.com/ultralytics/assets/releases/download/v0.0.0/yolov8n-pose.pt) | 640 | 49.7 | 79.7 | 131.8 | 1.18 | 3.3 | 9.2 |
| [YOLOv8s-pose](https://github.com/ultralytics/assets/releases/download/v0.0.0/yolov8s-pose.pt) | 640 | 59.2 | 85.8 | 233.2 | 1.42 | 11.6 | 30.2 |
| [YOLOv8m-pose](https://github.com/ultralytics/assets/releases/download/v0.0.0/yolov8m-pose.pt) | 640 | 63.6 | 88.8 | 456.3 | 2.00 | 26.4 | 81.0 |
@@ -171,15 +171,14 @@ success = model.export(format="onnx") # 将模型导出为 ONNX 格式
| [YOLOv8x-pose](https://github.com/ultralytics/assets/releases/download/v0.0.0/yolov8x-pose.pt) | 640 | 68.9 | 90.4 | 1607.1 | 3.73 | 69.4 | 263.2 |
| [YOLOv8x-pose-p6](https://github.com/ultralytics/assets/releases/download/v0.0.0/yolov8x-pose-p6.pt) | 1280 | 71.5 | 91.3 | 4088.7 | 10.04 | 99.1 | 1066.4 |
-- **mAP<sup>val</sup>** values are for single-model single-scale on [COCO Keypoints val2017](http://cocodataset.org)
-dataset.
-<br>Reproduce by `yolo val pose data=coco-pose.yaml device=0`
-- **Speed** averaged over COCO val images using an [Amazon EC2 P4d](https://aws.amazon.com/ec2/instance-types/p4/) instance.
-<br>Reproduce by `yolo val pose data=coco8-pose.yaml batch=1 device=0|cpu`
+- **mAP<sup>val</sup>** 值是基于单模型单尺度在 [COCO Keypoints val2017](http://cocodataset.org) 数据集上的结果。
+<br>通过 `yolo val pose data=coco-pose.yaml device=0` 复现
+- **速度** 是使用 [Amazon EC2 P4d](https://aws.amazon.com/ec2/instance-types/p4/) 实例对 COCO val 图像进行平均计算的。
+<br>通过 `yolo val pose data=coco8-pose.yaml batch=1 device=0|cpu` 复现
</details>
-## <div align="center">Integrations</div>
+## <div align="center">集成</div>
<br>
<a href="https://bit.ly/ultralytics_hub" target="_blank">
@@ -212,7 +211,7 @@ success = model.export(format="onnx") # 将模型导出为 ONNX 格式
<a href="https://bit.ly/ultralytics_hub" target="_blank">
<img width="100%" src="https://github.com/ultralytics/assets/raw/main/im/ultralytics-hub.png"></a>
-## <div align="center">Contribute</div>
+## <div align="center">贡献</div>
我们喜欢您的参与没有社区的帮助YOLOv5 和 YOLOv8 将无法实现。请参阅我们的[贡献指南](CONTRIBUTING.md)以开始使用,并填写我们的[调查问卷](https://ultralytics.com/survey?utm_source=github&utm_medium=social&utm_campaign=Survey)向我们提供您的使用体验反馈。感谢所有贡献者的支持!🙏
@@ -221,14 +220,14 @@ success = model.export(format="onnx") # 将模型导出为 ONNX 格式
<a href="https://github.com/ultralytics/yolov5/graphs/contributors">
<img width="100%" src="https://github.com/ultralytics/assets/raw/main/im/image-contributors.png"></a>
-## <div align="center">License</div>
+## <div align="center">许可证</div>
YOLOv8 提供两种不同的许可证:
- **GPL-3.0 许可证**:详细信息请参阅 [LICENSE](https://github.com/ultralytics/ultralytics/blob/main/LICENSE) 文件。
- **企业许可证**:为商业产品开发提供更大的灵活性,无需遵循 GPL-3.0 的开源要求。典型的用例是将 Ultralytics 软件和 AI 模型嵌入商业产品和应用中。在 [Ultralytics 授权](https://ultralytics.com/license) 处申请企业许可证。
-## <div align="center">Contact</div>
+## <div align="center">联系方式</div>
如需报告 YOLOv8 的错误或提出功能需求,请访问 [GitHub Issues](https://github.com/ultralytics/ultralytics/issues) 或 [Ultralytics 社区论坛](https://community.ultralytics.com/)。

@@ -1,6 +1,6 @@
# Ultralytics YOLO 🚀, GPL-3.0 license
-__version__ = '8.0.74'
+__version__ = '8.0.75'
from ultralytics.hub import start
from ultralytics.yolo.engine.model import YOLO

@@ -56,7 +56,7 @@ download: |
cls = int(row[5]) - 1
box = convert_box(img_size, tuple(map(int, row[:4])))
lines.append(f"{cls} {' '.join(f'{x:.6f}' for x in box)}\n")
-with open(str(f).replace(os.sep + 'annotations' + os.sep, os.sep + 'labels' + os.sep), 'w') as fl:
+with open(str(f).replace(f'{os.sep}annotations{os.sep}', f'{os.sep}labels{os.sep}'), 'w') as fl:
fl.writelines(lines) # write label.txt
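The rewritten line above swaps the `annotations` path segment for `labels` using f-strings instead of string concatenation. A minimal standalone sketch of the same OS-agnostic pattern (the path is illustrative, not taken from the dataset script):

```python
import os

# Illustrative source path; the real script derives it from the DOTA annotation files.
src = os.path.join('datasets', 'DOTA', 'annotations', 'train', 'P0001.txt')
dst = src.replace(f'{os.sep}annotations{os.sep}', f'{os.sep}labels{os.sep}')
print(dst)  # datasets/DOTA/labels/train/P0001.txt on POSIX systems
```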

@@ -21,7 +21,7 @@ class BaseTensor(SimpleClass):
"""
Attributes:
-tensor (torch.Tensor): A tensor.
+data (torch.Tensor): Base tensor.
orig_shape (tuple): Original image size, in the format (height, width).
Methods:
@@ -31,20 +31,14 @@ class BaseTensor(SimpleClass):
to(): Returns a copy of the tensor with the specified device and dtype.
"""
-def __init__(self, tensor, orig_shape) -> None:
-super().__init__()
-assert isinstance(tensor, torch.Tensor)
-self.tensor = tensor
+def __init__(self, data, orig_shape) -> None:
+self.data = data
self.orig_shape = orig_shape
@property
def shape(self):
return self.data.shape
-@property
-def data(self):
-return self.tensor
def cpu(self):
return self.__class__(self.data.cpu(), self.orig_shape)
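This hunk collapses the old `tensor` attribute plus `data` property into a single `data` attribute that all result subclasses inherit. A minimal sketch of the resulting behavior, abridged from the class in the diff (not the full implementation):

```python
import torch

class BaseTensor:  # abridged sketch; the real class also provides numpy(), cuda(), to(), etc.
    def __init__(self, data, orig_shape) -> None:
        self.data = data  # the raw tensor now lives directly on .data
        self.orig_shape = orig_shape

    @property
    def shape(self):
        return self.data.shape

    def cpu(self):
        # construct a new instance of the same (sub)class with the tensor moved to CPU
        return self.__class__(self.data.cpu(), self.orig_shape)

t = BaseTensor(torch.zeros(3, 4), orig_shape=(480, 640))
print(t.shape, t.cpu().data.device)  # torch.Size([3, 4]) cpu
```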
@@ -164,7 +158,6 @@ class Results(SimpleClass):
font_size=None,
font='Arial.ttf',
pil=False,
-example='abc',
img=None,
img_gpu=None,
kpt_line=True,
@@ -183,7 +176,6 @@ class Results(SimpleClass):
font_size (float, optional): The font size of the text. If None, it is scaled to the image size.
font (str): The font to use for the text.
pil (bool): Whether to return the image as a PIL Image.
-example (str): An example string to display. Useful for indicating the expected format of the output.
img (numpy.ndarray): Plot to another image. if not, plot to original image.
img_gpu (torch.Tensor): Normalized image in gpu with shape (1, 3, 640, 640), for faster mask plotting.
kpt_line (bool): Whether to draw lines connecting keypoints.
@@ -201,12 +193,16 @@ class Results(SimpleClass):
conf = kwargs['show_conf']
assert type(conf) == bool, '`show_conf` should be of boolean type, i.e, show_conf=True/False'
-annotator = Annotator(deepcopy(self.orig_img if img is None else img), line_width, font_size, font, pil,
-example)
+names = self.names
+annotator = Annotator(deepcopy(self.orig_img if img is None else img),
+line_width,
+font_size,
+font,
+pil,
+example=names)
pred_boxes, show_boxes = self.boxes, boxes
pred_masks, show_masks = self.masks, masks
pred_probs, show_probs = self.probs, probs
-names = self.names
keypoints = self.keypoints
if pred_masks and show_masks:
if img_gpu is None:
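With the `example='abc'` argument gone, `plot()` now seeds the `Annotator` with the model's class-name dict itself, so callers no longer pass a format hint. A hedged usage sketch (the weights file and test image are assumptions, not part of this diff):

```python
from ultralytics import YOLO

model = YOLO('yolov8n.pt')  # assumed local weights
results = model('bus.jpg')  # assumed test image; returns a list of Results
annotated = results[0].plot(line_width=2)  # example=names is now supplied internally
```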
@@ -236,13 +232,13 @@ class Results(SimpleClass):
def verbose(self):
"""
-Return log string for each tasks.
+Return log string for each task.
"""
log_string = ''
probs = self.probs
boxes = self.boxes
if len(self) == 0:
-return log_string if probs is not None else log_string + '(no detections), '
+return log_string if probs is not None else f'{log_string}(no detections), '
if probs is not None:
n5 = min(len(self.names), 5)
top5i = probs.argsort(0, descending=True)[:n5].tolist() # top 5 indices
@@ -346,26 +342,26 @@ class Boxes(BaseTensor):
boxes = boxes[None, :]
n = boxes.shape[-1]
assert n in (6, 7), f'expected `n` in [6, 7], but got {n}' # xyxy, (track_id), conf, cls
+super().__init__(boxes, orig_shape)
self.is_track = n == 7
-self.boxes = boxes
self.orig_shape = torch.as_tensor(orig_shape, device=boxes.device) if isinstance(boxes, torch.Tensor) \
else np.asarray(orig_shape)
@property
def xyxy(self):
-return self.boxes[:, :4]
+return self.data[:, :4]
@property
def conf(self):
-return self.boxes[:, -2]
+return self.data[:, -2]
@property
def cls(self):
-return self.boxes[:, -1]
+return self.data[:, -1]
@property
def id(self):
-return self.boxes[:, -3] if self.is_track else None
+return self.data[:, -3] if self.is_track else None
@property
@lru_cache(maxsize=2) # maxsize 1 should suffice
@@ -386,8 +382,9 @@ class Boxes(BaseTensor):
LOGGER.info('results.pandas() method not yet implemented')
@property
-def data(self):
-return self.boxes
+def boxes(self):
+LOGGER.warning("WARNING ⚠️ 'Boxes.boxes' is deprecated. Use 'Boxes.data' instead.")
+return self.data
class Masks(BaseTensor):
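The `Boxes.boxes` property above is a deprecation shim: the old attribute name still resolves, but it warns and forwards to the new `data` storage. The same pattern sketched generically for any renamed attribute (class and names are illustrative, not from the library):

```python
import warnings

class Detections:  # illustrative stand-in, not the Ultralytics class
    def __init__(self, data):
        self.data = data  # canonical storage under the new name

    @property
    def boxes(self):
        # keep the old name alive for one release while steering callers to .data
        warnings.warn("'boxes' is deprecated, use 'data' instead", DeprecationWarning, stacklevel=2)
        return self.data

d = Detections([1, 2, 3])
assert d.boxes is d.data  # old name still works, with a DeprecationWarning
```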
@@ -416,8 +413,7 @@ class Masks(BaseTensor):
def __init__(self, masks, orig_shape) -> None:
if masks.ndim == 2:
masks = masks[None, :]
-self.masks = masks # N, h, w
-self.orig_shape = orig_shape
+super().__init__(masks, orig_shape)
@property
@lru_cache(maxsize=1)
@@ -432,17 +428,18 @@ class Masks(BaseTensor):
def xyn(self):
# Segments (normalized)
return [
-ops.scale_coords(self.masks.shape[1:], x, self.orig_shape, normalize=True)
-for x in ops.masks2segments(self.masks)]
+ops.scale_coords(self.data.shape[1:], x, self.orig_shape, normalize=True)
+for x in ops.masks2segments(self.data)]
@property
@lru_cache(maxsize=1)
def xy(self):
# Segments (pixels)
return [
-ops.scale_coords(self.masks.shape[1:], x, self.orig_shape, normalize=False)
-for x in ops.masks2segments(self.masks)]
+ops.scale_coords(self.data.shape[1:], x, self.orig_shape, normalize=False)
+for x in ops.masks2segments(self.data)]
@property
-def data(self):
-return self.masks
+def masks(self):
+LOGGER.warning("WARNING ⚠️ 'Masks.masks' is deprecated. Use 'Masks.data' instead.")
+return self.data

@@ -17,6 +17,7 @@ from types import SimpleNamespace
from typing import Union
import cv2
+import matplotlib.pyplot as plt
import numpy as np
import torch
import yaml
@@ -116,7 +117,7 @@ class SimpleClass:
attr = []
for a in dir(self):
v = getattr(self, a)
-if not callable(v) and not a.startswith('__'):
+if not callable(v) and not a.startswith('_'):
if isinstance(v, SimpleClass):
# Display only the module and class name for subclasses
s = f'{a}: {v.__module__}.{v.__class__.__name__} object'
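Widening the filter from `'__'` (dunder) to `'_'` hides single-underscore, conventionally private attributes from `SimpleClass` string representations as well. A small sketch of the filtering rule on a toy object (not the library class):

```python
class Toy:
    def __init__(self):
        self.visible = 1
        self._hidden = 2  # excluded by the new startswith('_') filter

toy = Toy()
attrs = [a for a in dir(toy) if not callable(getattr(toy, a)) and not a.startswith('_')]
print(attrs)  # ['visible']; '_hidden' is no longer listed
```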
@@ -164,6 +165,39 @@ class IterableSimpleNamespace(SimpleNamespace):
return getattr(self, key, default)
+def plt_settings(rcparams={'font.size': 11}, backend='Agg'):
+"""
+Decorator to temporarily set rc parameters and the backend for a plotting function.
+Usage:
+decorator: @plt_settings({"font.size": 12})
+context manager: with plt_settings({"font.size": 12}):
+Args:
+rcparams (dict): Dictionary of rc parameters to set.
+backend (str, optional): Name of the backend to use. Defaults to 'Agg'.
+Returns:
+callable: Decorated function with temporarily set rc parameters and backend.
+"""
+def decorator(func):
+def wrapper(*args, **kwargs):
+original_backend = plt.get_backend()
+plt.switch_backend(backend)
+with plt.rc_context(rcparams):
+result = func(*args, **kwargs)
+plt.switch_backend(original_backend)
+return result
+return wrapper
+return decorator
def set_logging(name=LOGGING_NAME, verbose=True):
# sets up logging for the given name
rank = int(os.getenv('RANK', -1)) # rank in world for Multi-GPU trainings
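`plt_settings` is a decorator factory: it switches the Matplotlib backend, runs the wrapped function inside `plt.rc_context` with the given rcParams, then restores the original backend. A hedged usage sketch, assuming it is imported from `ultralytics.yolo.utils` where this hunk adds it (the plotting body is illustrative):

```python
import matplotlib.pyplot as plt

from ultralytics.yolo.utils import plt_settings

@plt_settings(rcparams={'font.size': 12}, backend='Agg')
def save_histogram(values, path='hist.png'):
    # runs with font.size=12 on the Agg backend; the caller's backend is restored afterwards
    fig, ax = plt.subplots()
    ax.hist(values)
    fig.savefig(path)
    plt.close(fig)

save_histogram([1, 1, 2, 3, 3, 3])
```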

@@ -128,7 +128,8 @@ def check_latest_pypi_version(package_name='ultralytics'):
Returns:
str: The latest version of the package.
"""
-response = requests.get(f'https://pypi.org/pypi/{package_name}/json')
+requests.packages.urllib3.disable_warnings() # Disable the InsecureRequestWarning
+response = requests.get(f'https://pypi.org/pypi/{package_name}/json', verify=False)
if response.status_code == 200:
return response.json()['info']['version']
return None
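The request now hits the PyPI JSON endpoint with `verify=False` and pre-emptively silences urllib3's `InsecureRequestWarning`. A hedged sketch of consuming the helper for an update hint (the import path follows this release's layout, and the comparison logic is an assumption, not the library's exact check):

```python
from pkg_resources import parse_version  # setuptools' PEP 440-aware version parser

from ultralytics import __version__
from ultralytics.yolo.utils.checks import check_latest_pypi_version  # assumed module path

latest = check_latest_pypi_version('ultralytics')  # returns None if the request failed
if latest and parse_version(latest) > parse_version(__version__):
    print(f'Update available: {__version__} -> {latest}')
```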

@@ -11,7 +11,7 @@ import numpy as np
import torch
import torch.nn as nn
-from ultralytics.yolo.utils import LOGGER, SimpleClass, TryExcept
+from ultralytics.yolo.utils import LOGGER, SimpleClass, TryExcept, plt_settings
OKS_SIGMA = np.array([.26, .25, .25, .35, .35, .79, .79, .72, .72, .62, .62, 1.07, 1.07, .87, .87, .89, .89]) / 10.0
@@ -234,6 +234,7 @@ class ConfusionMatrix:
return tp[:-1], fp[:-1] # remove background class
@TryExcept('WARNING ⚠️ ConfusionMatrix plot failure')
+@plt_settings()
def plot(self, normalize=True, save_dir='', names=()):
import seaborn as sn
@@ -277,6 +278,7 @@ def smooth(y, f=0.05):
return np.convolve(yp, np.ones(nf) / nf, mode='valid') # y-smoothed
+@plt_settings()
def plot_pr_curve(px, py, ap, save_dir=Path('pr_curve.png'), names=()):
# Precision-recall curve
fig, ax = plt.subplots(1, 1, figsize=(9, 6), tight_layout=True)
@@ -299,6 +301,7 @@ def plot_pr_curve(px, py, ap, save_dir=Path('pr_curve.png'), names=()):
plt.close(fig)
+@plt_settings()
def plot_mc_curve(px, py, save_dir=Path('mc_curve.png'), names=(), xlabel='Confidence', ylabel='Metric'):
# Metric-confidence curve
fig, ax = plt.subplots(1, 1, figsize=(9, 6), tight_layout=True)

@@ -5,22 +5,18 @@ import math
from pathlib import Path
import cv2
-import matplotlib
import matplotlib.pyplot as plt
import numpy as np
import torch
from PIL import Image, ImageDraw, ImageFont
from PIL import __version__ as pil_version
-from ultralytics.yolo.utils import LOGGER, TryExcept, threaded
+from ultralytics.yolo.utils import LOGGER, TryExcept, plt_settings, threaded
from .checks import check_font, check_version, is_ascii
from .files import increment_path
from .ops import clip_boxes, scale_image, xywh2xyxy, xyxy2xywh
-matplotlib.rc('font', **{'size': 11})
-matplotlib.use('Agg') # for writing to files only
class Colors:
# Ultralytics color palette https://ultralytics.com/
@@ -212,6 +208,7 @@ class Annotator:
@TryExcept() # known issue https://github.com/ultralytics/yolov5/issues/5395
+@plt_settings()
def plot_labels(boxes, cls, names=(), save_dir=Path('')):
import pandas as pd
import seaborn as sn
@@ -228,7 +225,6 @@ def plot_labels(boxes, cls, names=(), save_dir=Path('')):
plt.close()
# matplotlib labels
-matplotlib.use('svg') # faster
ax = plt.subplots(2, 2, figsize=(8, 8), tight_layout=True)[1].ravel()
y = ax[0].hist(cls, bins=np.linspace(0, nc, nc + 1) - 0.5, rwidth=0.8)
with contextlib.suppress(Exception): # color histogram bars by class
@@ -244,9 +240,9 @@ def plot_labels(boxes, cls, names=(), save_dir=Path('')):
# rectangles
boxes[:, 0:2] = 0.5 # center
-boxes = xywh2xyxy(boxes) * 2000
+boxes = xywh2xyxy(boxes) * 1000
-img = Image.fromarray(np.ones((2000, 2000, 3), dtype=np.uint8) * 255)
+img = Image.fromarray(np.ones((1000, 1000, 3), dtype=np.uint8) * 255)
-for cls, box in zip(cls[:1000], boxes[:1000]):
+for cls, box in zip(cls[:500], boxes[:500]):
ImageDraw.Draw(img).rectangle(box, width=1, outline=colors(cls)) # plot
ax[1].imshow(img)
ax[1].axis('off')
@@ -256,7 +252,6 @@ def plot_labels(boxes, cls, names=(), save_dir=Path('')):
ax[a].spines[s].set_visible(False)
plt.savefig(save_dir / 'labels.jpg', dpi=200)
-matplotlib.use('Agg')
plt.close()
@@ -400,6 +395,7 @@ def plot_images(images,
annotator.im.save(fname) # save
+@plt_settings()
def plot_results(file='path/to/results.csv', dir='', segment=False, pose=False):
# Plot training results.csv. Usage: from utils.plots import *; plot_results('path/to/results.csv')
import pandas as pd

@@ -79,7 +79,7 @@ class SegLoss(Loss):
# targets
try:
batch_idx = batch['batch_idx'].view(-1, 1)
-targets = torch.cat((batch_idx, batch['cls'].view(-1, 1), batch['bboxes'].to(dtype)), 1)
+targets = torch.cat((batch_idx, batch['cls'].view(-1, 1), batch['bboxes']), 1)
targets = self.preprocess(targets.to(self.device), batch_size, scale_tensor=imgsz[[1, 0, 1, 0]])
gt_labels, gt_bboxes = targets.split((1, 4), 2) # cls, xyxy
mask_gt = gt_bboxes.sum(2, keepdim=True).gt_(0)
