diff --git a/README.md b/README.md
index b6b2ce0..8b82f33 100644
--- a/README.md
+++ b/README.md
@@ -18,11 +18,7 @@
-[Ultralytics YOLOv8](https://github.com/ultralytics/ultralytics), developed by [Ultralytics](https://ultralytics.com),
-is a cutting-edge, state-of-the-art (SOTA) model that builds upon the success of previous YOLO versions and introduces
-new features and improvements to further boost performance and flexibility. YOLOv8 is designed to be fast, accurate, and
-easy to use, making it an excellent choice for a wide range of object detection, image segmentation and image
-classification tasks.
+[Ultralytics YOLOv8](https://github.com/ultralytics/ultralytics), developed by [Ultralytics](https://ultralytics.com), is a cutting-edge, state-of-the-art (SOTA) model that builds upon the success of previous YOLO versions and introduces new features and improvements to further boost performance and flexibility. YOLOv8 is designed to be fast, accurate, and easy to use, making it an excellent choice for a wide range of object detection, image segmentation and image classification tasks.
To request an Enterprise License please complete the form at [Ultralytics Licensing](https://ultralytics.com/license).
@@ -51,16 +47,12 @@ To request an Enterprise License please complete the form at [Ultralytics Licens
## Documentation
diff --git a/README.zh-CN.md b/README.zh-CN.md
index 81253be..a0dd8b7 100644
--- a/README.zh-CN.md
+++ b/README.zh-CN.md
@@ -1,170 +1,254 @@
-# YOLOv8 Pose Models
+
+[English](README.md) | [简体中文](README.zh-CN.md)
+
+[Ultralytics YOLOv8](https://github.com/ultralytics/ultralytics), developed by [Ultralytics](https://ultralytics.com), is a cutting-edge, state-of-the-art (SOTA) model that builds upon the success of previous YOLO versions and introduces new features and improvements to further boost performance and flexibility. YOLOv8 is designed to be fast, accurate, and easy to use, making it an excellent choice for a wide range of object detection, image segmentation and image classification tasks.
+
+To request an Enterprise License, please complete the form at [Ultralytics Licensing](https://ultralytics.com/license).
+
+## Documentation
+
+See below for quickstart installation and usage examples, and see the [YOLOv8 Docs](https://docs.ultralytics.com) for full documentation on training, validation, prediction and deployment.
+
+### Install
+
+Pip install the ultralytics package including all [requirements](https://github.com/ultralytics/ultralytics/blob/main/requirements.txt) in a [**Python>=3.7**](https://www.python.org/) environment with [**PyTorch>=1.7**](https://pytorch.org/get-started/locally/).
-Pose estimation is a task that involves identifying the location of specific points in an image, usually referred
-to as keypoints. The keypoints can represent various parts of the object such as joints, landmarks, or other distinctive
-features. The locations of the keypoints are usually represented as a set of 2D `[x, y]` or 3D `[x, y, visible]`
-coordinates.
-
-The output of a pose estimation model is a set of points that represent the keypoints on an object in the image, usually
-along with the confidence scores for each point. Pose estimation is a good choice when you need to identify specific
-parts of an object in a scene, and their location in relation to each other.
-
-**Pro Tip:** YOLOv8 _pose_ models use the `-pose` suffix, i.e. `yolov8n-pose.pt`. These models are trained on the [COCO keypoints](https://github.com/ultralytics/ultralytics/blob/main/ultralytics/datasets/coco-pose.yaml) dataset and are suitable for a variety of pose estimation tasks.
+```bash
+pip install ultralytics
+```
-## [Models](https://github.com/ultralytics/ultralytics/tree/main/ultralytics/models/v8)
+
-YOLOv8 pretrained Pose models are shown here. Detect, Segment and Pose models are pretrained on
-the [COCO](https://github.com/ultralytics/ultralytics/blob/main/ultralytics/datasets/coco.yaml) dataset, while Classify
-models are pretrained on
-the [ImageNet](https://github.com/ultralytics/ultralytics/blob/main/ultralytics/datasets/ImageNet.yaml) dataset.
+
+### Usage
-[Models](https://github.com/ultralytics/ultralytics/tree/main/ultralytics/models) download automatically from the latest
-Ultralytics [release](https://github.com/ultralytics/assets/releases) on first use.
+#### CLI
-| Model | size<br>(pixels) | mAP<sup>pose</sup><br>50-95 | mAP<sup>pose</sup><br>50 | Speed<br>CPU ONNX<br>(ms) | Speed<br>A100 TensorRT<br>(ms) | params<br>(M) | FLOPs<br>(B) |
-| ---------------------------------------------------------------------------------------------------- | --------------------- | --------------------- | ------------------ | ------------------------------ | ----------------------------------- | ------------------ | ----------------- |
-| [YOLOv8n-pose](https://github.com/ultralytics/assets/releases/download/v0.0.0/yolov8n-pose.pt) | 640 | 49.7 | 79.7 | 131.8 | 1.18 | 3.3 | 9.2 |
-| [YOLOv8s-pose](https://github.com/ultralytics/assets/releases/download/v0.0.0/yolov8s-pose.pt) | 640 | 59.2 | 85.8 | 233.2 | 1.42 | 11.6 | 30.2 |
-| [YOLOv8m-pose](https://github.com/ultralytics/assets/releases/download/v0.0.0/yolov8m-pose.pt) | 640 | 63.6 | 88.8 | 456.3 | 2.00 | 26.4 | 81.0 |
-| [YOLOv8l-pose](https://github.com/ultralytics/assets/releases/download/v0.0.0/yolov8l-pose.pt) | 640 | 67.0 | 89.9 | 784.5 | 2.59 | 44.4 | 168.6 |
-| [YOLOv8x-pose](https://github.com/ultralytics/assets/releases/download/v0.0.0/yolov8x-pose.pt) | 640 | 68.9 | 90.4 | 1607.1 | 3.73 | 69.4 | 263.2 |
-| [YOLOv8x-pose-p6](https://github.com/ultralytics/assets/releases/download/v0.0.0/yolov8x-pose-p6.pt) | 1280 | 71.5 | 91.3 | 4088.7 | 10.04 | 99.1 | 1066.4 |
+YOLOv8 may be used directly in the Command Line Interface (CLI) with a `yolo` command:
-- **mAPval** values are for single-model single-scale on [COCO Keypoints val2017](http://cocodataset.org)
- dataset. Reproduce by `yolo val pose data=coco-pose.yaml device=0`
-- **Speed** averaged over COCO val images using an [Amazon EC2 P4d](https://aws.amazon.com/ec2/instance-types/p4/)
- instance. Reproduce by `yolo val pose data=coco8-pose.yaml batch=1 device=0|cpu`
+```bash
+yolo predict model=yolov8n.pt source='https://ultralytics.com/images/bus.jpg'
+```
-## Train
+`yolo` can be used for a variety of tasks and modes and accepts additional arguments, e.g. `imgsz=640`. See the YOLOv8 [CLI Docs](https://docs.ultralytics.com/usage/cli) for examples.
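+
+As a minimal sketch of combining a task, a mode and extra arguments in one command (the dataset and argument values below are illustrative, following the training examples elsewhere in this README):
+
+```bash
+# Train a nano detection model for 10 epochs at 640-pixel input size
+yolo detect train data=coco128.yaml model=yolov8n.pt epochs=10 imgsz=640
+```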
-Train a YOLOv8-pose model on the COCO128-pose dataset.
+#### Python
-### Python
+YOLOv8 may also be used directly in a Python environment, and accepts the same [arguments](https://docs.ultralytics.com/usage/cfg/) as in the CLI example above:
```python
from ultralytics import YOLO
-# Load a model
-model = YOLO("yolov8n-pose.yaml") # build a new model from YAML
-model = YOLO("yolov8n-pose.pt") # load a pretrained model (recommended for training)
-model = YOLO("yolov8n-pose.yaml").load(
- "yolov8n-pose.pt"
-) # build from YAML and transfer weights
-
-# Train the model
-model.train(data="coco8-pose.yaml", epochs=100, imgsz=640)
-```
-
-### CLI
-
-```bash
-# Build a new model from YAML and start training from scratch
-yolo pose train data=coco8-pose.yaml model=yolov8n-pose.yaml epochs=100 imgsz=640
-
-# Start training from a pretrained *.pt model
-yolo pose train data=coco8-pose.yaml model=yolov8n-pose.pt epochs=100 imgsz=640
+# Load a model
+model = YOLO("yolov8n.yaml") # build a new model from scratch
+model = YOLO("yolov8n.pt") # load a pretrained model (recommended for training)
-# Build a new model from YAML, transfer pretrained weights to it and start training
-yolo pose train data=coco8-pose.yaml model=yolov8n-pose.yaml pretrained=yolov8n-pose.pt epochs=100 imgsz=640
+# Use the model
+model.train(data="coco128.yaml", epochs=3) # train the model
+metrics = model.val() # evaluate model performance on the validation set
+results = model("https://ultralytics.com/images/bus.jpg") # predict on an image
+success = model.export(format="onnx") # export the model to ONNX format
```
-## Val
-
-Validate trained YOLOv8n-pose model accuracy on the COCO128-pose dataset. No arguments need to be passed as the `model`
-retains its training `data` and arguments as model attributes.
-
-### Python
+[Models](https://github.com/ultralytics/ultralytics/tree/main/ultralytics/models) download automatically from the latest Ultralytics [release](https://github.com/ultralytics/assets/releases). See the YOLOv8 [Python Docs](https://docs.ultralytics.com/usage/python) for more examples.
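+
+As a brief sketch of inspecting the predictions returned above (the `boxes` attribute names reflect our reading of the Results API and may vary between versions):
+
+```python
+from ultralytics import YOLO
+
+# Load a pretrained model; weights download automatically on first use
+model = YOLO("yolov8n.pt")
+
+# Predict on an image and inspect each detection
+results = model("https://ultralytics.com/images/bus.jpg")
+for r in results:
+    print(r.boxes.xyxy)  # bounding boxes in (x1, y1, x2, y2) format
+    print(r.boxes.conf)  # confidence score per box
+    print(r.boxes.cls)   # class index per box
+```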
-```python
-from ultralytics import YOLO
+
-# Load a model
-model = YOLO("yolov8n-pose.pt") # load an official model
-model = YOLO("path/to/best.pt") # load a custom model
+## Models
-# Validate the model
-metrics = model.val() # no arguments needed, dataset and settings remembered
-metrics.box.map # map50-95
-metrics.box.map50 # map50
-metrics.box.map75 # map75
-metrics.box.maps # a list containing map50-95 for each category
-```
+All YOLOv8 pretrained models are available here. Detect, Segment and Pose models are pretrained on the [COCO](https://github.com/ultralytics/ultralytics/blob/main/ultralytics/datasets/coco.yaml) dataset, while Classify models are pretrained on the [ImageNet](https://github.com/ultralytics/ultralytics/blob/main/ultralytics/datasets/ImageNet.yaml) dataset.
-### CLI
+[Models](https://github.com/ultralytics/ultralytics/tree/main/ultralytics/models) download automatically from the latest Ultralytics [release](https://github.com/ultralytics/assets/releases) on first use.
-```bash
-yolo pose val model=yolov8n-pose.pt # val official model
-yolo pose val model=path/to/best.pt # val custom model
-```
+
+### Detection
-## Predict
+See [Detection Docs](https://docs.ultralytics.com/tasks/detect/) for usage examples with these models.
-Use a trained YOLOv8n-pose model to run predictions on images.
+| Model | size<br>(pixels) | mAP<sup>val</sup><br>50-95 | Speed<br>CPU ONNX<br>(ms) | Speed<br>A100 TensorRT<br>(ms) | params<br>(M) | FLOPs<br>(B) |
+| ------------------------------------------------------------------------------------ | --------------------- | -------------------- | ------------------------------ | ----------------------------------- | ------------------ | ----------------- |
+| [YOLOv8n](https://github.com/ultralytics/assets/releases/download/v0.0.0/yolov8n.pt) | 640 | 37.3 | 80.4 | 0.99 | 3.2 | 8.7 |
+| [YOLOv8s](https://github.com/ultralytics/assets/releases/download/v0.0.0/yolov8s.pt) | 640 | 44.9 | 128.4 | 1.20 | 11.2 | 28.6 |
+| [YOLOv8m](https://github.com/ultralytics/assets/releases/download/v0.0.0/yolov8m.pt) | 640 | 50.2 | 234.7 | 1.83 | 25.9 | 78.9 |
+| [YOLOv8l](https://github.com/ultralytics/assets/releases/download/v0.0.0/yolov8l.pt) | 640 | 52.9 | 375.2 | 2.39 | 43.7 | 165.2 |
+| [YOLOv8x](https://github.com/ultralytics/assets/releases/download/v0.0.0/yolov8x.pt) | 640 | 53.9 | 479.1 | 3.53 | 68.2 | 257.8 |
-### Python
+- **mAPval** values are for single-model single-scale on [COCO val2017](http://cocodataset.org) dataset.
+  Reproduce by `yolo val detect data=coco.yaml device=0`
+- **Speed** averaged over COCO val images using an [Amazon EC2 P4d](https://aws.amazon.com/ec2/instance-types/p4/) instance.
+  Reproduce by `yolo val detect data=coco128.yaml batch=1 device=0|cpu`
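+
+A Python equivalent of the reproduction commands above is sketched below (the dataset YAML can usually be omitted, since a trained model remembers its validation settings; `coco128.yaml` here is illustrative):
+
+```python
+from ultralytics import YOLO
+
+# Load a pretrained detection model
+model = YOLO("yolov8n.pt")
+
+# Validate and read the detection metrics
+metrics = model.val(data="coco128.yaml")
+print(metrics.box.map)    # mAP50-95
+print(metrics.box.map50)  # mAP50
+```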
-```python
-from ultralytics import YOLO
+
-# Load a model
-model = YOLO("yolov8n-pose.pt") # load an official model
-model = YOLO("path/to/best.pt") # load a custom model
+
+### Segmentation
-# Predict with the model
-results = model("https://ultralytics.com/images/bus.jpg") # predict on an image
-```
+See [Segmentation Docs](https://docs.ultralytics.com/tasks/segment/) for usage examples with these models.
-### CLI
+| Model | size<br>(pixels) | mAP<sup>box</sup><br>50-95 | mAP<sup>mask</sup><br>50-95 | Speed<br>CPU ONNX<br>(ms) | Speed<br>A100 TensorRT<br>(ms) | params<br>(M) | FLOPs<br>(B) |
+| -------------------------------------------------------------------------------------------- | --------------------- | -------------------- | --------------------- | ------------------------------ | ----------------------------------- | ------------------ | ----------------- |
+| [YOLOv8n-seg](https://github.com/ultralytics/assets/releases/download/v0.0.0/yolov8n-seg.pt) | 640 | 36.7 | 30.5 | 96.1 | 1.21 | 3.4 | 12.6 |
+| [YOLOv8s-seg](https://github.com/ultralytics/assets/releases/download/v0.0.0/yolov8s-seg.pt) | 640 | 44.6 | 36.8 | 155.7 | 1.47 | 11.8 | 42.6 |
+| [YOLOv8m-seg](https://github.com/ultralytics/assets/releases/download/v0.0.0/yolov8m-seg.pt) | 640 | 49.9 | 40.8 | 317.0 | 2.18 | 27.3 | 110.2 |
+| [YOLOv8l-seg](https://github.com/ultralytics/assets/releases/download/v0.0.0/yolov8l-seg.pt) | 640 | 52.3 | 42.6 | 572.4 | 2.79 | 46.0 | 220.5 |
+| [YOLOv8x-seg](https://github.com/ultralytics/assets/releases/download/v0.0.0/yolov8x-seg.pt) | 640 | 53.4 | 43.4 | 712.1 | 4.02 | 71.8 | 344.1 |
-```bash
-yolo pose predict model=yolov8n-pose.pt source='https://ultralytics.com/images/bus.jpg' # predict with official model
-yolo pose predict model=path/to/best.pt source='https://ultralytics.com/images/bus.jpg' # predict with custom model
-```
+- **mAPval** values are for single-model single-scale on [COCO val2017](http://cocodataset.org) dataset.
+  Reproduce by `yolo val segment data=coco.yaml device=0`
+- **Speed** averaged over COCO val images using an [Amazon EC2 P4d](https://aws.amazon.com/ec2/instance-types/p4/) instance.
+  Reproduce by `yolo val segment data=coco128-seg.yaml batch=1 device=0|cpu`
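+
+A minimal Python sketch of running one of these segmentation models follows (the `masks` attribute reflects our understanding of the Results API; it may be `None` when nothing is detected):
+
+```python
+from ultralytics import YOLO
+
+# Load a pretrained segmentation model
+model = YOLO("yolov8n-seg.pt")
+
+# Predict on an image and inspect instance masks alongside boxes
+results = model("https://ultralytics.com/images/bus.jpg")
+for r in results:
+    print(r.boxes.cls)  # class index per detected instance
+    if r.masks is not None:
+        print(r.masks.data.shape)  # one binary mask per instance (N x H x W)
+```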
-See full `predict` mode details in the [Predict](https://docs.ultralytics.com/modes/predict/) page.
+
-## Export
+
+### Classification
-Export a YOLOv8n Pose model to a different format like ONNX, CoreML, etc.
+See [Classification Docs](https://docs.ultralytics.com/tasks/classify/) for usage examples with these models.
-### Python
+| Model | size<br>(pixels) | acc<br>top1 | acc<br>top5 | Speed<br>CPU ONNX<br>(ms) | Speed<br>A100 TensorRT<br>(ms) | params<br>(M) | FLOPs<br>(B) at 640 |
+| -------------------------------------------------------------------------------------------- | --------------------- | ---------------- | ---------------- | ------------------------------ | ----------------------------------- | ------------------ | ------------------------ |
+| [YOLOv8n-cls](https://github.com/ultralytics/assets/releases/download/v0.0.0/yolov8n-cls.pt) | 224 | 66.6 | 87.0 | 12.9 | 0.31 | 2.7 | 4.3 |
+| [YOLOv8s-cls](https://github.com/ultralytics/assets/releases/download/v0.0.0/yolov8s-cls.pt) | 224 | 72.3 | 91.1 | 23.4 | 0.35 | 6.4 | 13.5 |
+| [YOLOv8m-cls](https://github.com/ultralytics/assets/releases/download/v0.0.0/yolov8m-cls.pt) | 224 | 76.4 | 93.2 | 85.4 | 0.62 | 17.0 | 42.7 |
+| [YOLOv8l-cls](https://github.com/ultralytics/assets/releases/download/v0.0.0/yolov8l-cls.pt) | 224 | 78.0 | 94.1 | 163.0 | 0.87 | 37.5 | 99.7 |
+| [YOLOv8x-cls](https://github.com/ultralytics/assets/releases/download/v0.0.0/yolov8x-cls.pt) | 224 | 78.4 | 94.3 | 232.0 | 1.01 | 57.4 | 154.8 |
-```python
-from ultralytics import YOLO
+- **acc** values are model accuracies on the [ImageNet](https://www.image-net.org/) dataset validation set.
+  Reproduce by `yolo val classify data=path/to/ImageNet device=0`
+- **Speed** averaged over ImageNet val images using an [Amazon EC2 P4d](https://aws.amazon.com/ec2/instance-types/p4/) instance.
+  Reproduce by `yolo val classify data=path/to/ImageNet batch=1 device=0|cpu`
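+
+A short Python sketch for these classification models follows (the `probs` attribute names are an assumption based on the current Results API and may differ in older releases):
+
+```python
+from ultralytics import YOLO
+
+# Load a pretrained classification model
+model = YOLO("yolov8n-cls.pt")
+
+# Classify an image and read the top prediction
+results = model("https://ultralytics.com/images/bus.jpg")
+probs = results[0].probs           # class probabilities
+print(probs.top1, probs.top1conf)  # best class index and its confidence
+```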
-# Load a model
-model = YOLO("yolov8n-pose.pt") # load an official model
-model = YOLO("path/to/best.pt") # load a custom trained
+
-# Export the model
-model.export(format="onnx")
-```
+
+### Pose
-### CLI
+See [Pose Docs](https://docs.ultralytics.com/tasks/) for usage examples with these models.
-```bash
-yolo export model=yolov8n-pose.pt format=onnx # export official model
-yolo export model=path/to/best.pt format=onnx # export custom trained model
-```
+| Model | size<br>(pixels) | mAP<sup>pose</sup><br>50-95 | mAP<sup>pose</sup><br>50 | Speed<br>CPU ONNX<br>(ms) | Speed<br>A100 TensorRT<br>(ms) | params<br>(M) | FLOPs<br>(B) |
+| ---------------------------------------------------------------------------------------------------- | --------------------- | --------------------- | ------------------ | ------------------------------ | ----------------------------------- | ------------------ | ----------------- |
+| [YOLOv8n-pose](https://github.com/ultralytics/assets/releases/download/v0.0.0/yolov8n-pose.pt) | 640 | 49.7 | 79.7 | 131.8 | 1.18 | 3.3 | 9.2 |
+| [YOLOv8s-pose](https://github.com/ultralytics/assets/releases/download/v0.0.0/yolov8s-pose.pt) | 640 | 59.2 | 85.8 | 233.2 | 1.42 | 11.6 | 30.2 |
+| [YOLOv8m-pose](https://github.com/ultralytics/assets/releases/download/v0.0.0/yolov8m-pose.pt) | 640 | 63.6 | 88.8 | 456.3 | 2.00 | 26.4 | 81.0 |
+| [YOLOv8l-pose](https://github.com/ultralytics/assets/releases/download/v0.0.0/yolov8l-pose.pt) | 640 | 67.0 | 89.9 | 784.5 | 2.59 | 44.4 | 168.6 |
+| [YOLOv8x-pose](https://github.com/ultralytics/assets/releases/download/v0.0.0/yolov8x-pose.pt) | 640 | 68.9 | 90.4 | 1607.1 | 3.73 | 69.4 | 263.2 |
+| [YOLOv8x-pose-p6](https://github.com/ultralytics/assets/releases/download/v0.0.0/yolov8x-pose-p6.pt) | 1280 | 71.5 | 91.3 | 4088.7 | 10.04 | 99.1 | 1066.4 |
-Available YOLOv8-pose export formats are in the table below. You can predict or validate directly on exported models,
-i.e. `yolo predict model=yolov8n-pose.onnx`. Usage examples are shown for your model after export completes.
-
-| Format | `format` Argument | Model | Metadata |
-| ------------------------------------------------------------------ | ----------------- | ------------------------------ | -------- |
-| [PyTorch](https://pytorch.org/) | - | `yolov8n-pose.pt` | ✅ |
-| [TorchScript](https://pytorch.org/docs/stable/jit.html) | `torchscript` | `yolov8n-pose.torchscript` | ✅ |
-| [ONNX](https://onnx.ai/) | `onnx` | `yolov8n-pose.onnx` | ✅ |
-| [OpenVINO](https://docs.openvino.ai/latest/index.html) | `openvino` | `yolov8n-pose_openvino_model/` | ✅ |
-| [TensorRT](https://developer.nvidia.com/tensorrt) | `engine` | `yolov8n-pose.engine` | ✅ |
-| [CoreML](https://github.com/apple/coremltools) | `coreml` | `yolov8n-pose.mlmodel` | ✅ |
-| [TF SavedModel](https://www.tensorflow.org/guide/saved_model) | `saved_model` | `yolov8n-pose_saved_model/` | ✅ |
-| [TF GraphDef](https://www.tensorflow.org/api_docs/python/tf/Graph) | `pb` | `yolov8n-pose.pb` | ❌ |
-| [TF Lite](https://www.tensorflow.org/lite) | `tflite` | `yolov8n-pose.tflite` | ✅ |
-| [TF Edge TPU](https://coral.ai/docs/edgetpu/models-intro/) | `edgetpu` | `yolov8n-pose_edgetpu.tflite` | ✅ |
-| [TF.js](https://www.tensorflow.org/js) | `tfjs` | `yolov8n-pose_web_model/` | ✅ |
-| [PaddlePaddle](https://github.com/PaddlePaddle) | `paddle` | `yolov8n-pose_paddle_model/` | ✅ |
-
-See full `export` details in the [Export](https://docs.ultralytics.com/modes/export/) page.
+- **mAPval** values are for single-model single-scale on [COCO Keypoints val2017](http://cocodataset.org) dataset.
+  Reproduce by `yolo val pose data=coco-pose.yaml device=0`
+- **Speed** averaged over COCO val images using an [Amazon EC2 P4d](https://aws.amazon.com/ec2/instance-types/p4/) instance.
+  Reproduce by `yolo val pose data=coco8-pose.yaml batch=1 device=0|cpu`
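+
+A minimal Python sketch for these pose models is shown below (the `keypoints` attribute names reflect our understanding of the Results API; keypoints are returned per detected person):
+
+```python
+from ultralytics import YOLO
+
+# Load a pretrained pose model
+model = YOLO("yolov8n-pose.pt")
+
+# Predict on an image and read keypoints for each detected person
+results = model("https://ultralytics.com/images/bus.jpg")
+for r in results:
+    if r.keypoints is not None:
+        print(r.keypoints.xy)    # (x, y) coordinates per keypoint
+        print(r.keypoints.conf)  # per-keypoint confidence, where available
+```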
+
+## Integrations
+
+| Roboflow | ClearML ⭐ NEW | Comet ⭐ NEW | Neural Magic ⭐ NEW |
+| :--------------------------------------------------------------------------------: | :----------------------------------------------------------------------------: | :----------------------------------------------------------------------------------: | :-----------------------------------------------------------------------------------: |
+| Label and export your custom datasets directly to YOLOv8 for training with [Roboflow](https://roboflow.com/?ref=ultralytics) | Automatically track, visualize and even remotely train YOLOv8 using [ClearML](https://cutt.ly/yolov5-readme-clearml) (open-source!) | Free forever, [Comet](https://bit.ly/yolov8-readme-comet) lets you save YOLOv8 models, resume training, and interactively visualize and debug predictions | Run YOLOv8 inference up to 6x faster with [Neural Magic DeepSparse](https://bit.ly/yolov5-neuralmagic) |
+
+## Ultralytics HUB
+
+Experience seamless AI with [Ultralytics HUB](https://bit.ly/ultralytics_hub) ⭐, the all-in-one solution for data visualization, and for training and deploying YOLOv5 and the upcoming YOLOv8 🚀 models, all without any coding. Transform images into actionable insights and bring your AI visions to life with ease using our cutting-edge platform and user-friendly [Ultralytics App](https://ultralytics.com/app_install). Start your journey for **Free** now!
+
+## Contribute
+
+We love your input! YOLOv5 and YOLOv8 would not be possible without help from our community. Please see our [Contributing Guide](CONTRIBUTING.md) to get started, and fill out our [Survey](https://ultralytics.com/survey?utm_source=github&utm_medium=social&utm_campaign=Survey) to send us feedback on your experience. Thank you 🙏 to all our contributors!
+
+## License
+
+YOLOv8 is available under two different licenses:
+
+- **GPL-3.0 License**: See the [LICENSE](https://github.com/ultralytics/ultralytics/blob/main/LICENSE) file for details.
+- **Enterprise License**: Provides greater flexibility for commercial product development without the open-source requirements of GPL-3.0. Typical use cases are embedding Ultralytics software and AI models in commercial products and applications. Request an Enterprise License at [Ultralytics Licensing](https://ultralytics.com/license).
+
+## Contact
+
+For YOLOv8 bug reports and feature requests please visit [GitHub Issues](https://github.com/ultralytics/ultralytics/issues) or the [Ultralytics Community Forum](https://community.ultralytics.com/).
+