`ultralytics 8.0.93` HUB docs and JSON2YOLO converter (#2431)
Co-authored-by: Ayush Chaurasia <ayush.chaurarsia@gmail.com> Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com> Co-authored-by: 李际朝 <tubkninght@gmail.com> Co-authored-by: Danny Kim <imbird0312@gmail.com>
parent
0ebd3f2959
commit
ddb354ce5e
@ -1,116 +0,0 @@
---
comments: true
---

# Ultralytics HUB

<a href="https://bit.ly/ultralytics_hub" target="_blank">
<img width="100%" src="https://github.com/ultralytics/assets/raw/main/im/ultralytics-hub.png"></a>
<br>
<div align="center">
  <a href="https://github.com/ultralytics" style="text-decoration:none;">
    <img src="https://github.com/ultralytics/assets/raw/main/social/logo-social-github.png" width="2%" alt="" /></a>
  <img src="https://github.com/ultralytics/assets/raw/main/social/logo-transparent.png" width="2%" alt="" />
  <a href="https://www.linkedin.com/company/ultralytics" style="text-decoration:none;">
    <img src="https://github.com/ultralytics/assets/raw/main/social/logo-social-linkedin.png" width="2%" alt="" /></a>
  <img src="https://github.com/ultralytics/assets/raw/main/social/logo-transparent.png" width="2%" alt="" />
  <a href="https://twitter.com/ultralytics" style="text-decoration:none;">
    <img src="https://github.com/ultralytics/assets/raw/main/social/logo-social-twitter.png" width="2%" alt="" /></a>
  <img src="https://github.com/ultralytics/assets/raw/main/social/logo-transparent.png" width="2%" alt="" />
  <a href="https://youtube.com/ultralytics" style="text-decoration:none;">
    <img src="https://github.com/ultralytics/assets/raw/main/social/logo-social-youtube.png" width="2%" alt="" /></a>
  <img src="https://github.com/ultralytics/assets/raw/main/social/logo-transparent.png" width="2%" alt="" />
  <a href="https://www.tiktok.com/@ultralytics" style="text-decoration:none;">
    <img src="https://github.com/ultralytics/assets/raw/main/social/logo-social-tiktok.png" width="2%" alt="" /></a>
  <img src="https://github.com/ultralytics/assets/raw/main/social/logo-transparent.png" width="2%" alt="" />
  <a href="https://www.instagram.com/ultralytics/" style="text-decoration:none;">
    <img src="https://github.com/ultralytics/assets/raw/main/social/logo-social-instagram.png" width="2%" alt="" /></a>
  <br>
  <br>
  <a href="https://github.com/ultralytics/hub/actions/workflows/ci.yaml">
    <img src="https://github.com/ultralytics/hub/actions/workflows/ci.yaml/badge.svg" alt="CI CPU"></a>
  <a href="https://colab.research.google.com/github/ultralytics/hub/blob/master/hub.ipynb">
    <img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"></a>
</div>

[Ultralytics HUB](https://hub.ultralytics.com) is a new no-code online tool developed by [Ultralytics](https://ultralytics.com), the creators of the popular [YOLOv5](https://github.com/ultralytics/yolov5) object detection and image segmentation models. With Ultralytics HUB, users can easily train and deploy YOLO models without any coding or technical expertise.

Ultralytics HUB is designed to be user-friendly and intuitive, with a drag-and-drop interface that allows users to easily upload their data and select their model configurations. It also offers a range of pre-trained models and templates to choose from, making it easy for users to get started with training their own models. Once a model is trained, it can be easily deployed and used for real-time object detection and image segmentation tasks. Overall, Ultralytics HUB is an essential tool for anyone looking to use YOLO for their object detection and image segmentation projects.

**[Get started now](https://hub.ultralytics.com)** and experience the power and simplicity of Ultralytics HUB for yourself. Sign up for a free account and start building, training, and deploying YOLOv5 and YOLOv8 models today.

## 1. Upload a Dataset

Ultralytics HUB datasets are just like YOLOv5 🚀 datasets: they use the same structure and the same label formats to keep everything simple.

When you upload a dataset to Ultralytics HUB, make sure to **place your dataset YAML inside the dataset root directory**, as in the example shown below, and then zip it for upload to https://hub.ultralytics.com/. Your **dataset YAML, directory and zip** should all share the same name. For example, if your dataset is called 'coco6' as in our example [ultralytics/hub/coco6.zip](https://github.com/ultralytics/hub/blob/master/coco6.zip), then you should have a coco6.yaml inside your coco6/ directory, which should zip to create coco6.zip for upload:

```bash
zip -r coco6.zip coco6
```

The example [coco6.zip](https://github.com/ultralytics/hub/blob/master/coco6.zip) dataset in this repository can be downloaded and unzipped to see exactly how to structure your custom dataset.

<p align="center">
  <img width="80%" src="https://user-images.githubusercontent.com/26833433/201424843-20fa081b-ad4b-4d6c-a095-e810775908d8.png" title="COCO6" />
</p>

The dataset YAML is the same standard YOLOv5 YAML format. See the [YOLOv5 Train Custom Data tutorial](https://docs.ultralytics.com/yolov5/tutorials/train_custom_data) for full details.

```yaml
# Train/val/test sets as 1) dir: path/to/imgs, 2) file: path/to/imgs.txt, or 3) list: [path/to/imgs1, path/to/imgs2, ..]
path: # dataset root dir (leave empty for HUB)
train: images/train # train images (relative to 'path') 8 images
val: images/val # val images (relative to 'path') 8 images
test: # test images (optional)

# Classes
names:
  0: person
  1: bicycle
  2: car
  3: motorcycle
  ...
```

After zipping your dataset, sign in to [Ultralytics HUB](https://bit.ly/ultralytics_hub) and click the Datasets tab. Click 'Upload Dataset' to upload, scan and visualize your new dataset before training new YOLOv5 models on it!

<img width="100%" alt="HUB Dataset Upload" src="https://user-images.githubusercontent.com/26833433/216763338-9a8812c8-a4e5-4362-8102-40dad7818396.png">

## 2. Train a Model

Connect to the Ultralytics HUB notebook and use your model API key to begin training!

<a href="https://colab.research.google.com/github/ultralytics/hub/blob/master/hub.ipynb" target="_blank">
<img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"></a>

## 3. Deploy to Real World

Export your model to 13 different formats, including TensorFlow, ONNX, OpenVINO, CoreML, Paddle and many others. Run models directly on your [iOS](https://apps.apple.com/xk/app/ultralytics/id1583935240) or [Android](https://play.google.com/store/apps/details?id=com.ultralytics.ultralytics_app) mobile device by downloading the [Ultralytics App](https://ultralytics.com/app_install)!

## ❓ Issues

If you are a new [Ultralytics HUB](https://bit.ly/ultralytics_hub) user and have questions or comments, you are in the right place! Please raise a [New Issue](https://github.com/ultralytics/hub/issues/new/choose) and let us know what we can do to make your life better 😃!
@ -0,0 +1,51 @@
---
comments: true
---

# HUB Datasets

## Upload a Dataset

Ultralytics HUB datasets are just like YOLOv5 and YOLOv8 🚀 datasets: they use the same structure and the same label formats to keep everything simple.

When you upload a dataset to Ultralytics HUB, make sure to **place your dataset YAML inside the dataset root directory**, as in the example shown below, and then zip it for upload to https://hub.ultralytics.com/. Your **dataset YAML, directory and zip** should all share the same name. For example, if your dataset is called 'coco6' as in our example [ultralytics/hub/coco6.zip](https://github.com/ultralytics/hub/blob/master/coco6.zip), then you should have a coco6.yaml inside your coco6/ directory, which should zip to create coco6.zip for upload:

```bash
zip -r coco6.zip coco6
```

The example [coco6.zip](https://github.com/ultralytics/hub/blob/master/coco6.zip) dataset in this repository can be downloaded and unzipped to see exactly how to structure your custom dataset.

<p align="center">
  <img width="80%" src="https://user-images.githubusercontent.com/26833433/201424843-20fa081b-ad4b-4d6c-a095-e810775908d8.png" title="COCO6" />
</p>

The dataset YAML is the same standard YOLOv5 and YOLOv8 YAML format. See the [YOLOv5 and YOLOv8 Train Custom Data tutorial](https://docs.ultralytics.com/yolov5/tutorials/train_custom_data/) for full details.

```yaml
# Train/val/test sets as 1) dir: path/to/imgs, 2) file: path/to/imgs.txt, or 3) list: [path/to/imgs1, path/to/imgs2, ..]
path: # dataset root dir (leave empty for HUB)
train: images/train # train images (relative to 'path') 8 images
val: images/val # val images (relative to 'path') 8 images
test: # test images (optional)

# Classes
names:
  0: person
  1: bicycle
  2: car
  3: motorcycle
  ...
```

After zipping your dataset, sign in to [Ultralytics HUB](https://bit.ly/ultralytics_hub) and click the Datasets tab. Click 'Upload Dataset' to upload, scan and visualize your new dataset before training new YOLOv5 or YOLOv8 models on it!

<img width="100%" alt="HUB Dataset Upload" src="https://user-images.githubusercontent.com/26833433/216763338-9a8812c8-a4e5-4362-8102-40dad7818396.png">
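The naming convention above (YAML, directory and zip all sharing one name) is easy to get wrong by hand, so the zip step can also be scripted. A minimal sketch using only the Python standard library; `zip_dataset` is a hypothetical helper name and the 'coco6' layout is just the example from this page:

```python
import shutil
from pathlib import Path


def zip_dataset(dataset_dir: str) -> Path:
    """Zip a HUB dataset directory after checking the naming convention.

    The dataset YAML must live inside the root directory and share its name,
    e.g. coco6/coco6.yaml zips to coco6.zip.
    """
    root = Path(dataset_dir).resolve()
    yaml_file = root / f'{root.name}.yaml'
    if not yaml_file.is_file():
        raise FileNotFoundError(f'Expected {yaml_file} inside the dataset root')
    # make_archive appends the .zip suffix itself; the archive is written next to the dataset dir
    archive = shutil.make_archive(str(root.parent / root.name), 'zip',
                                  root_dir=root.parent, base_dir=root.name)
    return Path(archive)
```

Running `zip_dataset('coco6')` would produce a `coco6.zip` alongside the `coco6/` directory, ready for upload.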
@ -0,0 +1,41 @@
---
comments: true
---

# Ultralytics HUB

<a href="https://bit.ly/ultralytics_hub" target="_blank">
<img width="100%" src="https://github.com/ultralytics/assets/raw/main/im/ultralytics-hub.png"></a>
<br>
<br>
<div align="center">
  <a href="https://github.com/ultralytics/hub/actions/workflows/ci.yaml">
    <img src="https://github.com/ultralytics/hub/actions/workflows/ci.yaml/badge.svg" alt="CI CPU"></a>
  <a href="https://colab.research.google.com/github/ultralytics/hub/blob/master/hub.ipynb">
    <img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"></a>
</div>
<br>

👋 Hello from the [Ultralytics](https://ultralytics.com/) Team! We've been working hard these last few months to launch [Ultralytics HUB](https://bit.ly/ultralytics_hub), a new web tool for training and deploying all your YOLOv5 and YOLOv8 🚀 models from one spot!

## Introduction

HUB is designed to be user-friendly and intuitive, with a drag-and-drop interface that allows users to easily upload their data and train new models quickly. It offers a range of pre-trained models and templates to choose from, making it easy for users to get started with training their own models. Once a model is trained, it can be easily deployed and used for real-time object detection, instance segmentation and classification tasks.

We hope that the resources here will help you get the most out of HUB. Please browse the HUB <a href="https://docs.ultralytics.com/hub">Docs</a> for details, raise an issue on <a href="https://github.com/ultralytics/hub/issues/new/choose">GitHub</a> for support, and join our <a href="https://discord.gg/n6cFeSPZdD">Discord</a> community for questions and discussions!

- [**Quickstart**](./quickstart.md). Start training and deploying YOLO models with HUB in seconds.
- [**Datasets: Preparing and Uploading**](./datasets.md). Learn how to prepare and upload your datasets to HUB in YOLO format.
- [**Projects: Creating and Managing**](./projects.md). Group your models into projects for improved organization.
- [**Models: Training and Exporting**](./models.md). Train YOLOv5 and YOLOv8 models on your custom datasets and export them to various formats for deployment.
- [**Integrations: Options**](./integrations.md). Explore different integration options for your trained models, such as TensorFlow, ONNX, OpenVINO, CoreML, and PaddlePaddle.
- [**Ultralytics HUB App**](./app/index.md). Learn about the Ultralytics App for iOS and Android, which allows you to run models directly on your mobile device.
    - [**iOS**](./app/ios.md)
    - [**Android**](./app/android.md)
- [**Inference API**](./inference_api.md). Understand how to use the Inference API for running your trained models in the cloud to generate predictions.
@ -0,0 +1,20 @@
---
comments: true
---

# HUB Models

## Train a Model

Connect to the Ultralytics HUB notebook and use your model API key to begin training!

<a href="https://colab.research.google.com/github/ultralytics/hub/blob/master/hub.ipynb" target="_blank">
<img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"></a>

## Deploy to Real World

Export your model to 13 different formats, including TensorFlow, ONNX, OpenVINO, CoreML, Paddle and many others. Run models directly on your [iOS](https://apps.apple.com/xk/app/ultralytics/id1583935240) or [Android](https://play.google.com/store/apps/details?id=com.ultralytics.ultralytics_app) mobile device by downloading the [Ultralytics App](https://ultralytics.com/app_install)!
@ -1,429 +0,0 @@
---
comments: true
---

# YOLO Inference API (UNDER CONSTRUCTION)

The YOLO Inference API allows you to access the YOLOv8 object detection capabilities via a RESTful API. This enables you to run object detection on images without the need to install and set up the YOLOv8 environment locally.

## API URL

The API URL is the address used to access the YOLO Inference API. In this case, the base URL is:

```
https://api.ultralytics.com/inference/v1
```

To access the API with a specific model and your API key, you can include them as query parameters in the API URL. The `model` parameter refers to the `MODEL_ID` you want to use for inference, and the `key` parameter corresponds to your `API_KEY`.

The complete API URL with the model and API key parameters would be:

```
https://api.ultralytics.com/inference/v1?model=MODEL_ID&key=API_KEY
```

Replace `MODEL_ID` with the ID of the model you want to use and `API_KEY` with your actual API key from [https://hub.ultralytics.com/settings?tab=api+keys](https://hub.ultralytics.com/settings?tab=api+keys).
## Example Usage in Python

To access the YOLO Inference API with the specified model and API key using Python, you can use the following code:

```python
import requests

api_key = "API_KEY"
model_id = "MODEL_ID"
url = f"https://api.ultralytics.com/inference/v1?model={model_id}&key={api_key}"
image_path = "image.jpg"

with open(image_path, "rb") as image_file:
    files = {"image": image_file}
    response = requests.post(url, files=files)

print(response.text)
```

In this example, replace `API_KEY` with your actual API key, `MODEL_ID` with the desired model ID, and `image.jpg` with the path to the image you want to analyze.

## Example Usage with CLI

You can use the YOLO Inference API from the command-line interface (CLI) with the `curl` command. Replace `API_KEY` with your actual API key, `MODEL_ID` with the desired model ID, and `image.jpg` with the path to the image you want to analyze:

```commandline
curl -X POST -F image=@image.jpg "https://api.ultralytics.com/inference/v1?model=MODEL_ID&key=API_KEY"
```
## Passing Arguments

This command sends a POST request to the YOLO Inference API with the specified `model` and `key` parameters in the URL, along with the image file specified by `@image.jpg`.

Here's an example of passing the `model`, `key`, and `normalize` arguments via the API URL using the `requests` library in Python:

```python
import requests

api_key = "API_KEY"
model_id = "MODEL_ID"
url = "https://api.ultralytics.com/inference/v1"

# Define your query parameters
params = {
    "key": api_key,
    "model": model_id,
    "normalize": "True"
}

image_path = "image.jpg"

with open(image_path, "rb") as image_file:
    files = {"image": image_file}
    response = requests.post(url, files=files, params=params)

print(response.text)
```

In this example, the `params` dictionary contains the query parameters `key`, `model`, and `normalize`. The `normalize` parameter tells the API to return all values in normalized image coordinates from 0 to 1; it is set to `"True"` as a string since query parameters should be passed as strings. These query parameters are then passed to the `requests.post()` function, which sends them along with the file in the POST request. Make sure to consult the API documentation for the list of available arguments and their expected values.

## Return JSON format

The YOLO Inference API returns a JSON list with the detection results. The format of the JSON list is the same as the one produced locally by the `results[0].tojson()` command.

The JSON list contains information about the detected objects, their coordinates, classes, and confidence scores.
### Detect Model Format

YOLO detection models, such as `yolov8n.pt`, can return JSON responses from local inference, CLI API inference, and Python API inference. All of these methods produce the same JSON response format.

!!! example "Detect Model JSON Response"

    === "Local"
        ```python
        from ultralytics import YOLO

        # Load model
        model = YOLO('yolov8n.pt')

        # Run inference
        results = model('image.jpg')

        # Print image.jpg results in JSON format
        print(results[0].tojson())
        ```

    === "CLI API"
        ```commandline
        curl -X POST -F image=@image.jpg "https://api.ultralytics.com/inference/v1?model=MODEL_ID&key=API_KEY"
        ```

    === "Python API"
        ```python
        import requests

        api_key = "API_KEY"
        model_id = "MODEL_ID"
        url = "https://api.ultralytics.com/inference/v1"

        # Define your query parameters
        params = {
            "key": api_key,
            "model": model_id,
        }

        image_path = "image.jpg"

        with open(image_path, "rb") as image_file:
            files = {"image": image_file}
            response = requests.post(url, files=files, params=params)

        print(response.text)
        ```

    === "JSON Response"
        ```json
        [
          {
            "name": "person",
            "class": 0,
            "confidence": 0.8359682559967041,
            "box": {
              "x1": 0.08974208831787109,
              "y1": 0.27418340047200523,
              "x2": 0.8706787109375,
              "y2": 0.9887352837456598
            }
          },
          {
            "name": "person",
            "class": 0,
            "confidence": 0.8189555406570435,
            "box": {
              "x1": 0.5847355842590332,
              "y1": 0.05813225640190972,
              "x2": 0.8930277824401855,
              "y2": 0.9903111775716146
            }
          },
          {
            "name": "tie",
            "class": 27,
            "confidence": 0.2909725308418274,
            "box": {
              "x1": 0.3433395862579346,
              "y1": 0.6070465511745877,
              "x2": 0.40964522361755373,
              "y2": 0.9849439832899306
            }
          }
        ]
        ```
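Once the response arrives, working with it is plain JSON handling. A minimal sketch that parses a detect-model response and keeps only confident detections; the abridged response string and the 0.5 threshold are illustrative choices, not API requirements:

```python
import json


def filter_detections(response_text: str, min_conf: float = 0.5) -> list:
    """Parse a detect-model JSON response and keep detections above a confidence threshold."""
    detections = json.loads(response_text)
    return [d for d in detections if d['confidence'] >= min_conf]


# Abridged example response in the format shown above
response_text = '''[
  {"name": "person", "class": 0, "confidence": 0.84,
   "box": {"x1": 0.09, "y1": 0.27, "x2": 0.87, "y2": 0.99}},
  {"name": "tie", "class": 27, "confidence": 0.29,
   "box": {"x1": 0.34, "y1": 0.61, "x2": 0.41, "y2": 0.98}}
]'''

for d in filter_detections(response_text):
    print(d['name'], d['confidence'])  # the low-confidence tie is dropped
```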
### Segment Model Format

YOLO segmentation models, such as `yolov8n-seg.pt`, can return JSON responses from local inference, CLI API inference, and Python API inference. All of these methods produce the same JSON response format.

!!! example "Segment Model JSON Response"

    === "Local"
        ```python
        from ultralytics import YOLO

        # Load model
        model = YOLO('yolov8n-seg.pt')

        # Run inference
        results = model('image.jpg')

        # Print image.jpg results in JSON format
        print(results[0].tojson())
        ```

    === "CLI API"
        ```commandline
        curl -X POST -F image=@image.jpg "https://api.ultralytics.com/inference/v1?model=MODEL_ID&key=API_KEY"
        ```

    === "Python API"
        ```python
        import requests

        api_key = "API_KEY"
        model_id = "MODEL_ID"
        url = "https://api.ultralytics.com/inference/v1"

        # Define your query parameters
        params = {
            "key": api_key,
            "model": model_id,
        }

        image_path = "image.jpg"

        with open(image_path, "rb") as image_file:
            files = {"image": image_file}
            response = requests.post(url, files=files, params=params)

        print(response.text)
        ```

    === "JSON Response"
        Note that `segments` `x` and `y` lengths may vary from one object to another: larger or more complex objects may have more segment points.
        ```json
        [
          {
            "name": "person",
            "class": 0,
            "confidence": 0.856913149356842,
            "box": {
              "x1": 0.1064866065979004,
              "y1": 0.2798851860894097,
              "x2": 0.8738358497619629,
              "y2": 0.9894873725043403
            },
            "segments": {
              "x": [
                0.421875,
                0.4203124940395355,
                0.41718751192092896
                ...
              ],
              "y": [
                0.2888889014720917,
                0.2916666567325592,
                0.2916666567325592
                ...
              ]
            }
          },
          {
            "name": "person",
            "class": 0,
            "confidence": 0.8512625694274902,
            "box": {
              "x1": 0.5757311820983887,
              "y1": 0.053943040635850696,
              "x2": 0.8960096359252929,
              "y2": 0.985154045952691
            },
            "segments": {
              "x": [
                0.7515624761581421,
                0.75,
                0.7437499761581421
                ...
              ],
              "y": [
                0.0555555559694767,
                0.05833333358168602,
                0.05833333358168602
                ...
              ]
            }
          },
          {
            "name": "tie",
            "class": 27,
            "confidence": 0.6485961675643921,
            "box": {
              "x1": 0.33911995887756347,
              "y1": 0.6057066175672743,
              "x2": 0.4081430912017822,
              "y2": 0.9916408962673611
            },
            "segments": {
              "x": [
                0.37187498807907104,
                0.37031251192092896,
                0.3687500059604645
                ...
              ],
              "y": [
                0.6111111044883728,
                0.6138888597488403,
                0.6138888597488403
                ...
              ]
            }
          }
        ]
        ```
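Because segment coordinates are normalized when `normalize` is requested, you need the original image size to map them back to pixels before drawing or measuring. A minimal sketch; the helper name, image size and points are made-up illustrative values:

```python
def to_pixels(xs, ys, width, height):
    """Scale normalized polygon coordinates back to integer pixel points."""
    return [(round(x * width), round(y * height)) for x, y in zip(xs, ys)]


# Example: three normalized segment points mapped onto a 640x480 image
points = to_pixels([0.25, 0.5, 0.75], [0.1, 0.9, 0.1], width=640, height=480)
print(points)  # [(160, 48), (320, 432), (480, 48)]
```

The same scaling applies to the `box` fields (`x1`, `y1` by width/height, and likewise `x2`, `y2`).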
### Pose Model Format

YOLO pose models, such as `yolov8n-pose.pt`, can return JSON responses from local inference, CLI API inference, and Python API inference. All of these methods produce the same JSON response format.

!!! example "Pose Model JSON Response"

    === "Local"
        ```python
        from ultralytics import YOLO

        # Load model
        model = YOLO('yolov8n-pose.pt')

        # Run inference
        results = model('image.jpg')

        # Print image.jpg results in JSON format
        print(results[0].tojson())
        ```

    === "CLI API"
        ```commandline
        curl -X POST -F image=@image.jpg "https://api.ultralytics.com/inference/v1?model=MODEL_ID&key=API_KEY"
        ```

    === "Python API"
        ```python
        import requests

        api_key = "API_KEY"
        model_id = "MODEL_ID"
        url = "https://api.ultralytics.com/inference/v1"

        # Define your query parameters
        params = {
            "key": api_key,
            "model": model_id,
        }

        image_path = "image.jpg"

        with open(image_path, "rb") as image_file:
            files = {"image": image_file}
            response = requests.post(url, files=files, params=params)

        print(response.text)
        ```

    === "JSON Response"
        Note that COCO-keypoints pretrained models return 17 human keypoints. The `visible` part of the keypoints indicates whether a keypoint is visible or obscured. Obscured keypoints may be outside the image or may not be visible, e.g. a person's eyes facing away from the camera.
        ```json
        [
          {
            "name": "person",
            "class": 0,
            "confidence": 0.8439509868621826,
            "box": {
              "x1": 0.1125,
              "y1": 0.28194444444444444,
              "x2": 0.7953125,
              "y2": 0.9902777777777778
            },
            "keypoints": {
              "x": [
                0.5058594942092896,
                0.5103894472122192,
                0.4920862317085266
                ...
              ],
              "y": [
                0.48964157700538635,
                0.4643048942089081,
                0.4465252459049225
                ...
              ],
              "visible": [
                0.8726999163627625,
                0.653947651386261,
                0.9130823612213135
                ...
              ]
            }
          },
          {
            "name": "person",
            "class": 0,
            "confidence": 0.7474289536476135,
            "box": {
              "x1": 0.58125,
              "y1": 0.0625,
              "x2": 0.8859375,
              "y2": 0.9888888888888889
            },
            "keypoints": {
              "x": [
                0.778544008731842,
                0.7976160049438477,
                0.7530890107154846
                ...
              ],
              "y": [
                0.27595141530036926,
                0.2378823608160019,
                0.23644638061523438
                ...
              ],
              "visible": [
                0.8900790810585022,
                0.789978563785553,
                0.8974530100822449
                ...
              ]
            }
          }
        ]
        ```
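The `visible` scores can be used to drop occluded keypoints before downstream processing. A minimal sketch; the helper name, the sample values, and the 0.5 cutoff are illustrative assumptions, not an API-defined threshold:

```python
def visible_keypoints(kpts: dict, thresh: float = 0.5) -> list:
    """Return (x, y) pairs whose visibility score clears the threshold."""
    return [(x, y) for x, y, v in zip(kpts['x'], kpts['y'], kpts['visible'])
            if v >= thresh]


# Abridged keypoints dict in the format shown above
kpts = {'x': [0.51, 0.51, 0.49],
        'y': [0.49, 0.46, 0.45],
        'visible': [0.87, 0.25, 0.91]}

print(visible_keypoints(kpts))  # [(0.51, 0.49), (0.49, 0.45)]
```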
@ -0,0 +1,29 @@
# coco91_to_coco80_class
---
:::ultralytics.yolo.data.converter.coco91_to_coco80_class
<br><br>

# convert_coco
---
:::ultralytics.yolo.data.converter.convert_coco
<br><br>

# rle2polygon
---
:::ultralytics.yolo.data.converter.rle2polygon
<br><br>

# min_index
---
:::ultralytics.yolo.data.converter.min_index
<br><br>

# merge_multi_segment
---
:::ultralytics.yolo.data.converter.merge_multi_segment
<br><br>

# delete_dsstore
---
:::ultralytics.yolo.data.converter.delete_dsstore
<br><br>
@ -0,0 +1,230 @@
|
||||
import json
|
||||
from collections import defaultdict
|
||||
from pathlib import Path
|
||||
|
||||
import cv2
|
||||
import numpy as np
|
||||
from tqdm import tqdm
|
||||
|
||||
from ultralytics.yolo.utils.checks import check_requirements
|
||||
from ultralytics.yolo.utils.files import make_dirs
|
||||
|
||||
|
||||
def coco91_to_coco80_class():
|
||||
"""Converts 91-index COCO class IDs to 80-index COCO class IDs.
|
||||
|
||||
Returns:
|
||||
(list): A list of 91 class IDs where the index represents the 80-index class ID and the value is the
|
||||
corresponding 91-index class ID.
|
||||
|
||||
"""
|
||||
return [
|
||||
0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, None, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, None, 24, 25, None,
|
||||
None, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, None, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50,
|
||||
51, 52, 53, 54, 55, 56, 57, 58, 59, None, 60, None, None, 61, None, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72,
|
||||
None, 73, 74, 75, 76, 77, 78, 79, None]
|
||||
|
||||
|
||||
def convert_coco(labels_dir='../coco/annotations/', use_segments=False, use_keypoints=False, cls91to80=True):
|
||||
"""Converts COCO dataset annotations to a format suitable for training YOLOv5 models.
|
||||
|
||||
Args:
|
||||
labels_dir (str, optional): Path to directory containing COCO dataset annotation files.
|
||||
use_segments (bool, optional): Whether to include segmentation masks in the output.
|
||||
use_keypoints (bool, optional): Whether to include keypoint annotations in the output.
|
||||
cls91to80 (bool, optional): Whether to map 91 COCO class IDs to the corresponding 80 COCO class IDs.
|
||||
|
||||
Raises:
|
||||
FileNotFoundError: If the labels_dir path does not exist.
|
||||
|
||||
Example Usage:
|
||||
convert_coco(labels_dir='../coco/annotations/', use_segments=True, use_keypoints=True, cls91to80=True)
|
||||
|
||||
Output:
|
||||
Generates output files in the specified output directory.
|
||||
"""
|
||||
|
||||
save_dir = make_dirs('yolo_labels') # output directory
|
||||
coco80 = coco91_to_coco80_class()
|
||||
|
||||
# Import json
|
||||
for json_file in sorted(Path(labels_dir).resolve().glob('*.json')):
|
||||
fn = Path(save_dir) / 'labels' / json_file.stem.replace('instances_', '') # folder name
|
||||
fn.mkdir(parents=True, exist_ok=True)
|
||||
with open(json_file) as f:
|
||||
data = json.load(f)
|
||||
|
||||
# Create image dict
|
||||
images = {'%g' % x['id']: x for x in data['images']}
|
||||
# Create image-annotations dict
|
||||
        imgToAnns = defaultdict(list)
        for ann in data['annotations']:
            imgToAnns[ann['image_id']].append(ann)

        # Write labels file
        for img_id, anns in tqdm(imgToAnns.items(), desc=f'Annotations {json_file}'):
            img = images['%g' % img_id]
            h, w, f = img['height'], img['width'], img['file_name']

            bboxes = []
            segments = []
            keypoints = []
            for ann in anns:
                if ann['iscrowd']:
                    continue
                # The COCO box format is [top left x, top left y, width, height]
                box = np.array(ann['bbox'], dtype=np.float64)
                box[:2] += box[2:] / 2  # xy top-left corner to center
                box[[0, 2]] /= w  # normalize x
                box[[1, 3]] /= h  # normalize y
                if box[2] <= 0 or box[3] <= 0:  # if w <= 0 or h <= 0
                    continue

                cls = coco80[ann['category_id'] - 1] if cls91to80 else ann['category_id'] - 1  # class
                box = [cls] + box.tolist()
                if box not in bboxes:
                    bboxes.append(box)
                if use_segments and ann.get('segmentation') is not None:
                    if len(ann['segmentation']) == 0:
                        segments.append([])
                        continue
                    if isinstance(ann['segmentation'], dict):
                        ann['segmentation'] = rle2polygon(ann['segmentation'])
                    if len(ann['segmentation']) > 1:
                        s = merge_multi_segment(ann['segmentation'])
                        s = (np.concatenate(s, axis=0) / np.array([w, h])).reshape(-1).tolist()
                    else:
                        s = [j for i in ann['segmentation'] for j in i]  # all segments concatenated
                        s = (np.array(s).reshape(-1, 2) / np.array([w, h])).reshape(-1).tolist()
                    s = [cls] + s
                    if s not in segments:
                        segments.append(s)
                if use_keypoints and ann.get('keypoints') is not None:
                    k = (np.array(ann['keypoints']).reshape(-1, 3) / np.array([w, h, 1])).reshape(-1).tolist()
                    k = box + k
                    keypoints.append(k)

            # Write
            with open((fn / f).with_suffix('.txt'), 'a') as file:
                for i in range(len(bboxes)):
                    if use_keypoints:
                        line = *(keypoints[i]),  # cls, box, keypoints
                    else:
                        line = *(segments[i]
                                 if use_segments and len(segments[i]) > 0 else bboxes[i]),  # cls, box or segments
                    file.write(('%g ' * len(line)).rstrip() % line + '\n')

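The box conversion in the loop above can be sanity-checked in isolation; the box and image size below are made-up illustration values:

```python
import numpy as np

# COCO stores boxes as [top-left x, top-left y, width, height] in pixels;
# YOLO labels use [center x, center y, width, height] normalized to image size.
box = np.array([10, 20, 30, 40], dtype=np.float64)  # hypothetical COCO box
w, h = 100, 200  # hypothetical image width and height
box[:2] += box[2:] / 2  # top-left corner to center: (25, 40)
box[[0, 2]] /= w  # normalize x and width
box[[1, 3]] /= h  # normalize y and height
```

The result is `[0.25, 0.2, 0.3, 0.2]`, with every value in `[0, 1]` as YOLO expects.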
def rle2polygon(segmentation):
    """
    Convert Run-Length Encoding (RLE) mask to polygon coordinates.

    Args:
        segmentation (dict, list): RLE mask representation of the object segmentation.

    Returns:
        (list): A list of lists representing the polygon coordinates for each contour.

    Note:
        Requires the 'pycocotools' package to be installed.
    """
    check_requirements('pycocotools')
    from pycocotools import mask

    m = mask.decode(segmentation)
    m[m > 0] = 255
    contours, _ = cv2.findContours(m, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_TC89_KCOS)
    polygons = []
    for contour in contours:
        epsilon = 0.001 * cv2.arcLength(contour, True)
        contour_approx = cv2.approxPolyDP(contour, epsilon, True)
        polygon = contour_approx.flatten().tolist()
        polygons.append(polygon)
    return polygons

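`mask.decode` above handles both compressed and uncompressed RLE; for intuition, here is a minimal NumPy-only sketch of decoding the *uncompressed* form, assuming COCO's column-major run order (`decode_uncompressed_rle` is a hypothetical helper, not part of this module):

```python
import numpy as np

def decode_uncompressed_rle(rle):
    """Decode an uncompressed COCO RLE dict {'size': [h, w], 'counts': [...]}
    to a binary mask. Counts alternate background/foreground runs, traversing
    the image in column-major (Fortran) order."""
    h, w = rle['size']
    flat = np.zeros(h * w, dtype=np.uint8)
    pos, val = 0, 0
    for count in rle['counts']:
        flat[pos:pos + count] = val
        pos += count
        val = 1 - val  # runs alternate between background (0) and foreground (1)
    return flat.reshape((h, w), order='F')

# 3x3 mask: 4 background pixels, then 2 foreground, then 3 background (column-major)
mask = decode_uncompressed_rle({'size': [3, 3], 'counts': [4, 2, 3]})
```

The two foreground pixels land at rows 1 and 2 of column 1, since runs are counted down columns first.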
def min_index(arr1, arr2):
    """
    Find a pair of indexes with the shortest distance between two arrays of 2D points.

    Args:
        arr1 (np.array): A NumPy array of shape (N, 2) representing N 2D points.
        arr2 (np.array): A NumPy array of shape (M, 2) representing M 2D points.

    Returns:
        (tuple): A tuple containing the indexes of the points with the shortest distance in arr1 and arr2 respectively.
    """
    dis = ((arr1[:, None, :] - arr2[None, :, :]) ** 2).sum(-1)
    return np.unravel_index(np.argmin(dis, axis=None), dis.shape)

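A quick check of `min_index` on two tiny point sets (the function is restated so the snippet runs standalone; the coordinates are arbitrary):

```python
import numpy as np

def min_index(arr1, arr2):
    # pairwise squared distances via broadcasting, then argmin over the full (N, M) matrix
    dis = ((arr1[:, None, :] - arr2[None, :, :]) ** 2).sum(-1)
    return np.unravel_index(np.argmin(dis, axis=None), dis.shape)

a = np.array([[0., 0.], [5., 5.]])
b = np.array([[4., 4.], [10., 10.]])
i, j = min_index(a, b)  # a[1] = (5, 5) and b[0] = (4, 4) are the closest pair
```

Broadcasting builds the full N x M distance matrix, so memory grows as N*M; that is fine for the short polygon contours this converter handles.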
def merge_multi_segment(segments):
    """
    Merge multiple segments into one list by connecting the coordinates with the minimum distance between each segment.
    This function connects these coordinates with a thin line to merge all segments into one.

    Args:
        segments (List[List]): Original segmentations in COCO's JSON file.
            Each element is a list of coordinates, like [segmentation1, segmentation2,...].

    Returns:
        s (List[np.ndarray]): A list of connected segments represented as NumPy arrays.
    """
    s = []
    segments = [np.array(i).reshape(-1, 2) for i in segments]
    idx_list = [[] for _ in range(len(segments))]

    # record the indexes with min distance between each segment
    for i in range(1, len(segments)):
        idx1, idx2 = min_index(segments[i - 1], segments[i])
        idx_list[i - 1].append(idx1)
        idx_list[i].append(idx2)

    # use two rounds to connect all the segments
    for k in range(2):
        # forward connection
        if k == 0:
            for i, idx in enumerate(idx_list):
                # middle segments have two indexes
                # reverse the index of middle segments
                if len(idx) == 2 and idx[0] > idx[1]:
                    idx = idx[::-1]
                    segments[i] = segments[i][::-1, :]

                segments[i] = np.roll(segments[i], -idx[0], axis=0)
                segments[i] = np.concatenate([segments[i], segments[i][:1]])
                # deal with the first segment and the last one
                if i in [0, len(idx_list) - 1]:
                    s.append(segments[i])
                else:
                    idx = [0, idx[1] - idx[0]]
                    s.append(segments[i][idx[0]:idx[1] + 1])

        else:
            for i in range(len(idx_list) - 1, -1, -1):
                if i not in [0, len(idx_list) - 1]:
                    idx = idx_list[i]
                    nidx = abs(idx[1] - idx[0])
                    s.append(segments[i][nidx:])
    return s

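The behavior on a concrete input, here two separated unit squares (both functions are restated from above so the snippet is self-contained; the input coordinates are arbitrary):

```python
import numpy as np

def min_index(arr1, arr2):
    # closest pair of points between two (N, 2) and (M, 2) arrays
    dis = ((arr1[:, None, :] - arr2[None, :, :]) ** 2).sum(-1)
    return np.unravel_index(np.argmin(dis, axis=None), dis.shape)

def merge_multi_segment(segments):
    # restated from above: connect segments at their mutually closest points
    s = []
    segments = [np.array(i).reshape(-1, 2) for i in segments]
    idx_list = [[] for _ in range(len(segments))]
    for i in range(1, len(segments)):
        idx1, idx2 = min_index(segments[i - 1], segments[i])
        idx_list[i - 1].append(idx1)
        idx_list[i].append(idx2)
    for k in range(2):
        if k == 0:  # forward pass
            for i, idx in enumerate(idx_list):
                if len(idx) == 2 and idx[0] > idx[1]:
                    idx = idx[::-1]
                    segments[i] = segments[i][::-1, :]
                segments[i] = np.roll(segments[i], -idx[0], axis=0)
                segments[i] = np.concatenate([segments[i], segments[i][:1]])
                if i in [0, len(idx_list) - 1]:
                    s.append(segments[i])
                else:
                    idx = [0, idx[1] - idx[0]]
                    s.append(segments[i][idx[0]:idx[1] + 1])
        else:  # backward pass picks up the remainder of middle segments
            for i in range(len(idx_list) - 1, -1, -1):
                if i not in [0, len(idx_list) - 1]:
                    idx = idx_list[i]
                    nidx = abs(idx[1] - idx[0])
                    s.append(segments[i][nidx:])
    return s

# two unit squares, one with x in [0, 1] and one with x in [2, 3]
merged = merge_multi_segment([[0, 0, 1, 0, 1, 1, 0, 1], [2, 0, 3, 0, 3, 1, 2, 1]])
```

Each square is rotated to start at its closest point to the neighboring segment and closed by repeating its first vertex, so each piece comes back as a (5, 2) array.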
def delete_dsstore(path='../datasets'):
    """Delete Apple .DS_Store files in the specified directory and its subdirectories."""
    from pathlib import Path

    files = list(Path(path).rglob('.DS_Store'))  # rglob is case-sensitive; Finder writes '.DS_Store'
    print(files)
    for f in files:
        f.unlink()

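A self-contained check of the same pattern against a throwaway directory tree (the filenames here are invented for the test; note that `rglob` is case-sensitive on most filesystems, so the canonical `.DS_Store` capitalization is used):

```python
import tempfile
from pathlib import Path

def delete_dsstore(path):
    # same pattern as above: recursively find and remove Finder metadata files
    for f in Path(path).rglob('.DS_Store'):
        f.unlink()

with tempfile.TemporaryDirectory() as tmp:
    (Path(tmp) / 'sub').mkdir()
    (Path(tmp) / 'sub' / '.DS_Store').touch()
    (Path(tmp) / 'keep.txt').touch()
    delete_dsstore(tmp)
    remaining = sorted(p.name for p in Path(tmp).rglob('*') if p.is_file())
```

Only the regular file survives; the metadata file in the subdirectory is removed.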
if __name__ == '__main__':
    source = 'COCO'

    if source == 'COCO':
        convert_coco(
            '../datasets/coco/annotations',  # directory with *.json
            use_segments=False,
            use_keypoints=True,
            cls91to80=False)