ultralytics 8.0.94
HUBDatasetStats() Segment and Pose support (#2450)
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
Co-authored-by: JF Chen <k-2feng@hotmail.com>
Co-authored-by: Ayush Chaurasia <ayush.chaurarsia@gmail.com>
Co-authored-by: Laughing-q <1185102784@qq.com>

@@ -2,6 +2,63 @@
comments: true
---

# 🚧 Page Under Construction ⚒
# Ultralytics Android App: Real-time Object Detection with YOLO Models

This page is currently under construction! 👷 Please check back later for updates. 😃🔜

The Ultralytics Android App is a powerful tool that allows you to run YOLO models directly on your Android device for real-time object detection. This app utilizes TensorFlow Lite for model optimization and various hardware delegates for acceleration, enabling fast and efficient object detection.

## Quantization and Acceleration

To achieve real-time performance on your Android device, YOLO models are quantized to either FP16 or INT8 precision. Quantization is a process that reduces the numerical precision of the model's weights and biases, thus reducing the model's size and the amount of computation required. This results in faster inference times without significantly affecting the model's accuracy.

### FP16 Quantization

FP16 (or half-precision) quantization converts the model's 32-bit floating-point numbers to 16-bit floating-point numbers. This reduces the model's size by half and speeds up the inference process, while maintaining a good balance between accuracy and performance.

### INT8 Quantization

INT8 (or 8-bit integer) quantization further reduces the model's size and computation requirements by converting its 32-bit floating-point numbers to 8-bit integers. This quantization method can result in a significant speedup, but it may lead to a slight reduction in mean average precision (mAP) due to the lower numerical precision.
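
To make this concrete, here is a minimal sketch of producing FP16 and INT8 TFLite models with the `ultralytics` Python package. The `half` and `int8` export flags exist in the ultralytics export API, but their exact behavior (for example, calibration-data requirements for INT8) can vary between package versions, so treat this as a sketch rather than the app's exact build pipeline:

```python
from ultralytics import YOLO

model = YOLO("yolov8n.pt")  # any trained YOLOv8 checkpoint

# FP16 (half-precision) TFLite export
model.export(format="tflite", half=True)

# INT8 TFLite export; some versions require a representative
# dataset for calibration
model.export(format="tflite", int8=True)
```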

!!! tip "mAP Reduction in INT8 Models"

    The reduced numerical precision in INT8 models can lead to some loss of information during the quantization process, which may result in a slight decrease in mAP. However, this trade-off is often acceptable considering the substantial performance gains offered by INT8 quantization.

## Delegates and Performance Variability
Different delegates are available on Android devices to accelerate model inference. These delegates include CPU, [GPU](https://www.tensorflow.org/lite/android/delegates/gpu), [Hexagon](https://www.tensorflow.org/lite/android/delegates/hexagon) and [NNAPI](https://www.tensorflow.org/lite/android/delegates/nnapi). The performance of these delegates varies depending on the device's hardware vendor, product line, and specific chipsets used in the device.
1. **CPU**: The default option, with reasonable performance on most devices.
2. **GPU**: Utilizes the device's GPU for faster inference. It can provide a significant performance boost on devices with powerful GPUs.
3. **Hexagon**: Leverages Qualcomm's Hexagon DSP for faster and more efficient processing. This option is available on devices with Qualcomm Snapdragon processors.
4. **NNAPI**: The Android Neural Networks API (NNAPI) serves as an abstraction layer for running ML models on Android devices. NNAPI can utilize various hardware accelerators, such as CPU, GPU, and dedicated AI chips (e.g., Google's Edge TPU or the Pixel Neural Core).
Here's a table showing the primary vendors, their product lines, popular devices, and supported delegates:
| Vendor | Product Lines | Popular Devices | Delegates Supported |
|-----------------------------------------|---------------------------------------------------------------------------------------------------------------|--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|--------------------------|
| [Qualcomm](https://www.qualcomm.com/) | [Snapdragon (e.g., 800 series)](https://www.qualcomm.com/snapdragon) | [Samsung Galaxy S21](https://www.samsung.com/global/galaxy/galaxy-s21-5g/), [OnePlus 9](https://www.oneplus.com/9), [Google Pixel 6](https://store.google.com/product/pixel_6) | CPU, GPU, Hexagon, NNAPI |
| [Samsung](https://www.samsung.com/) | [Exynos (e.g., Exynos 2100)](https://www.samsung.com/semiconductor/minisite/exynos/) | [Samsung Galaxy S21 (Global version)](https://www.samsung.com/global/galaxy/galaxy-s21-5g/) | CPU, GPU, NNAPI |
| [MediaTek](https://www.mediatek.com/) | [Dimensity (e.g., Dimensity 1200)](https://www.mediatek.com/products/smartphones) | [Realme GT](https://www.realme.com/global/realme-gt), [Xiaomi Redmi Note](https://www.mi.com/en/phone/redmi/note-list) | CPU, GPU, NNAPI |
| [HiSilicon](https://www.hisilicon.com/) | [Kirin (e.g., Kirin 990)](https://www.hisilicon.com/en/products/Kirin) | [Huawei P40 Pro](https://consumer.huawei.com/en/phones/p40-pro/), [Huawei Mate 30 Pro](https://consumer.huawei.com/en/phones/mate30-pro/) | CPU, GPU, NNAPI |
| [NVIDIA](https://www.nvidia.com/) | [Tegra (e.g., Tegra X1)](https://www.nvidia.com/en-us/autonomous-machines/embedded-systems-dev-kits-modules/) | [NVIDIA Shield TV](https://www.nvidia.com/en-us/shield/shield-tv/), [Nintendo Switch](https://www.nintendo.com/switch/) | CPU, GPU, NNAPI |
Please note that the list of devices mentioned is not exhaustive and may vary depending on the specific chipsets and device models. Always test your models on your target devices to ensure compatibility and optimal performance.
Keep in mind that the choice of delegate can affect performance and model compatibility. For example, some models may not work with certain delegates, or a delegate may not be available on a specific device. As such, it's essential to test your model and the chosen delegate on your target devices for the best results.
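
To illustrate the delegate mechanics, below is a minimal Python sketch using the TensorFlow Lite interpreter. On-device Android apps use the Java/Kotlin or C++ TFLite APIs instead, and the delegate shared-library name here is illustrative and platform-dependent, but the pattern of trying a delegate and falling back to the CPU is the same:

```python
import numpy as np
import tensorflow as tf

MODEL = "yolov8n_float16.tflite"  # hypothetical exported model filename

# Try a hardware delegate first. The library name below is illustrative
# and platform-dependent; on failure, fall back to the default CPU path.
try:
    delegate = tf.lite.experimental.load_delegate("libtensorflowlite_gpu_delegate.so")
    interpreter = tf.lite.Interpreter(model_path=MODEL, experimental_delegates=[delegate])
except (ValueError, OSError):
    interpreter = tf.lite.Interpreter(model_path=MODEL)

interpreter.allocate_tensors()
inp = interpreter.get_input_details()[0]

# Run one inference on a dummy image shaped to the model's input.
dummy = np.zeros(inp["shape"], dtype=inp["dtype"])
interpreter.set_tensor(inp["index"], dummy)
interpreter.invoke()
out = interpreter.get_tensor(interpreter.get_output_details()[0]["index"])
print(out.shape)
```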
## Getting Started with the Ultralytics Android App
To get started with the Ultralytics Android App, follow these steps:

1. Download the Ultralytics App from the [Google Play Store](https://play.google.com/store/apps/details?id=com.ultralytics.ultralytics_app).

2. Launch the app on your Android device and sign in with your Ultralytics account. If you don't have an account yet, create one [here](https://hub.ultralytics.com/).

3. Once signed in, you will see a list of your trained YOLO models. Select a model to use for object detection.

4. Grant the app permission to access your device's camera.

5. Point your device's camera at objects you want to detect. The app will display bounding boxes and class labels in real time as it detects objects.

6. Explore the app's settings to adjust the detection threshold, enable or disable specific object classes, and more.

With the Ultralytics Android App, you now have the power of real-time object detection using YOLO models right at your fingertips. Enjoy exploring the app's features and optimizing its settings to suit your specific use cases.

@@ -2,6 +2,54 @@
comments: true
---

# 🚧 Page Under Construction ⚒
# Ultralytics iOS App: Real-time Object Detection with YOLO Models

This page is currently under construction! 👷 Please check back later for updates. 😃🔜

The Ultralytics iOS App is a powerful tool that allows you to run YOLO models directly on your iPhone or iPad for real-time object detection. This app utilizes the Apple Neural Engine and Core ML for model optimization and acceleration, enabling fast and efficient object detection.

## Quantization and Acceleration

To achieve real-time performance on your iOS device, YOLO models are quantized to either FP16 or INT8 precision. Quantization is a process that reduces the numerical precision of the model's weights and biases, thus reducing the model's size and the amount of computation required. This results in faster inference times without significantly affecting the model's accuracy.

### FP16 Quantization

FP16 (or half-precision) quantization converts the model's 32-bit floating-point numbers to 16-bit floating-point numbers. This reduces the model's size by half and speeds up the inference process, while maintaining a good balance between accuracy and performance.

### INT8 Quantization

INT8 (or 8-bit integer) quantization further reduces the model's size and computation requirements by converting its 32-bit floating-point numbers to 8-bit integers. This quantization method can result in a significant speedup, but it may lead to a slight reduction in accuracy.

## Apple Neural Engine

The Apple Neural Engine (ANE) is a dedicated hardware component integrated into Apple's A-series and M-series chips. It's designed to accelerate machine learning tasks, particularly for neural networks, allowing for faster and more efficient execution of your YOLO models.
By combining quantized YOLO models with the Apple Neural Engine, the Ultralytics iOS App achieves real-time object detection on your iOS device without compromising on accuracy or performance.
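
As an illustration, a YOLOv8 checkpoint could be exported to Core ML with FP16 weights using the `ultralytics` Python package; this is a sketch of the export API, and flag support may vary by package version:

```python
from ultralytics import YOLO

# Export to Core ML; at runtime Core ML schedules work across the CPU,
# GPU, and Apple Neural Engine. half=True requests FP16 weights.
model = YOLO("yolov8n.pt")
model.export(format="coreml", half=True)
```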
| Release Year | iPhone Name | Chipset Name | Node Size | ANE TOPs |
|--------------|------------------------------------------------------|-------------------------------------------------------|-----------|----------|
| 2017 | [iPhone X](https://en.wikipedia.org/wiki/IPhone_X) | [A11 Bionic](https://en.wikipedia.org/wiki/Apple_A11) | 10 nm | 0.6 |
| 2018 | [iPhone XS](https://en.wikipedia.org/wiki/IPhone_XS) | [A12 Bionic](https://en.wikipedia.org/wiki/Apple_A12) | 7 nm | 5 |
| 2019 | [iPhone 11](https://en.wikipedia.org/wiki/IPhone_11) | [A13 Bionic](https://en.wikipedia.org/wiki/Apple_A13) | 7 nm | 6 |
| 2020 | [iPhone 12](https://en.wikipedia.org/wiki/IPhone_12) | [A14 Bionic](https://en.wikipedia.org/wiki/Apple_A14) | 5 nm | 11 |
| 2021 | [iPhone 13](https://en.wikipedia.org/wiki/IPhone_13) | [A15 Bionic](https://en.wikipedia.org/wiki/Apple_A15) | 5 nm | 15.8 |
| 2022 | [iPhone 14](https://en.wikipedia.org/wiki/IPhone_14) | [A16 Bionic](https://en.wikipedia.org/wiki/Apple_A16) | 4 nm | 17.0 |
Please note that this list only includes iPhone models from 2017 onwards, and the ANE TOPs values are approximate.
## Getting Started with the Ultralytics iOS App
To get started with the Ultralytics iOS App, follow these steps:

1. Download the Ultralytics App from the [App Store](https://apps.apple.com/xk/app/ultralytics/id1583935240).

2. Launch the app on your iOS device and sign in with your Ultralytics account. If you don't have an account yet, create one [here](https://hub.ultralytics.com/).

3. Once signed in, you will see a list of your trained YOLO models. Select a model to use for object detection.

4. Grant the app permission to access your device's camera.

5. Point your device's camera at objects you want to detect. The app will display bounding boxes and class labels in real time as it detects objects.

6. Explore the app's settings to adjust the detection threshold, enable or disable specific object classes, and more.

With the Ultralytics iOS App, you can now leverage the power of YOLO models for real-time object detection on your iPhone or iPad, powered by the Apple Neural Engine and optimized with FP16 or INT8 quantization.

@@ -4,26 +4,24 @@ comments: true

# HUB Datasets

## 1. Upload a Dataset

Ultralytics HUB datasets are just like YOLOv5 and YOLOv8 🚀 datasets: they use the same structure and the same label formats to keep everything simple.

When you upload a dataset to Ultralytics HUB, make sure to **place your dataset YAML inside the dataset root directory**, as in the example shown below, and then zip it for upload to [https://hub.ultralytics.com](https://hub.ultralytics.com/). Your **dataset YAML, directory and zip** should all share the same name. For example, if your dataset is called 'coco8', as in our example [ultralytics/hub/example_datasets/coco8.zip](https://github.com/ultralytics/hub/blob/master/example_datasets/coco8.zip), then you should have a `coco8.yaml` inside your `coco8/` directory, which should zip to create `coco8.zip` for upload:

```bash
zip -r coco8.zip coco8
```

The [example_datasets/coco8.zip](https://github.com/ultralytics/hub/blob/master/example_datasets/coco8.zip) dataset in this repository can be downloaded and unzipped to see exactly how to structure your custom dataset.

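Before uploading, you can also sanity-check a dataset zip locally with `HUBDatasetStats`, which as of this release supports Detect, Segment, and Pose datasets. A minimal sketch, assuming the import path used around this release (the path has moved between versions):

```python
from ultralytics.yolo.data.utils import HUBDatasetStats  # import path may differ in newer versions

# Verify a detection dataset zip; use task="segment" or task="pose" as appropriate.
stats = HUBDatasetStats("path/to/coco8.zip", task="detect")
stats.get_json(save=False)   # returns a stats dictionary if the dataset is valid
stats.process_images()       # creates compressed images for faster uploads
```
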
<p align="center">
  <img width="80%" src="https://user-images.githubusercontent.com/26833433/201424843-20fa081b-ad4b-4d6c-a095-e810775908d8.png" title="COCO8" />
</p>

The dataset YAML is the same standard YOLOv5 and YOLOv8 YAML format. See

@@ -36,6 +36,6 @@ We hope that the resources here will help you get the most out of HUB. Please br

- [**Models: Training and Exporting**](./models.md). Train YOLOv5 and YOLOv8 models on your custom datasets and export them to various formats for deployment.
- [**Integrations: Options**](./integrations.md). Explore different integration options for your trained models, such as TensorFlow, ONNX, OpenVINO, CoreML, and PaddlePaddle.
- [**Ultralytics HUB App**](./app/index.md). Learn about the Ultralytics App for iOS and Android, which allows you to run models directly on your mobile device.
    - [**iOS**](./app/ios.md). Learn about YOLO CoreML models accelerated on Apple's Neural Engine on iPhones and iPads.
    - [**Android**](./app/android.md). Explore TFLite acceleration on mobile devices.
- [**Inference API**](./inference_api.md). Understand how to use the Inference API for running your trained models in the cloud to generate predictions.

@@ -50,7 +50,7 @@ In this example, replace `API_KEY` with your actual API key, `MODEL_ID` with the

You can use the YOLO Inference API with the command-line interface (CLI) by utilizing the `curl` command. Replace `API_KEY` with your actual API key, `MODEL_ID` with the desired model ID, and `image.jpg` with the path to the image you want to analyze:

```bash
curl -X POST "https://api.ultralytics.com/v1/predict/MODEL_ID" \
-H "x-api-key: API_KEY" \
-F "image=@/path/to/image.jpg" \

@@ -89,13 +89,13 @@ In this example, the `data` dictionary contains the query arguments `size`, `con

This will send the query parameters along with the file in the POST request. See the table below for a full list of available inference arguments.
| Inference Argument | Default | Type | Notes |
|--------------------|---------|---------|------------------------------------------------|
| `size` | `640` | `int` | valid range is `32` - `1280` pixels |
| `confidence` | `0.25` | `float` | valid range is `0.01` - `1.0` |
| `iou` | `0.45` | `float` | valid range is `0.0` - `0.95` |
| `url`              | `''`    | `str`   | optional image URL if no image file is passed  |
| `normalize` | `False` | `bool` | |
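
As an illustration of passing these arguments, here is a hedged Python sketch using the `requests` library against the endpoint shown earlier; `MODEL_ID`, `API_KEY`, and the image path are placeholders:

```python
import requests

# Placeholders: substitute your real model ID and API key.
url = "https://api.ultralytics.com/v1/predict/MODEL_ID"
headers = {"x-api-key": "API_KEY"}
data = {"size": 640, "confidence": 0.25, "iou": 0.45}

with open("path/to/image.jpg", "rb") as f:
    response = requests.post(url, headers=headers, data=data, files={"image": f})

print(response.json())
```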
## Return JSON format

@@ -124,7 +124,7 @@ YOLO detection models, such as `yolov8n.pt`, can return JSON responses from loca
    ```

=== "CLI API"

    ```bash
    curl -X POST "https://api.ultralytics.com/v1/predict/MODEL_ID" \
        -H "x-api-key: API_KEY" \
        -F "image=@/path/to/image.jpg" \

@@ -218,7 +218,7 @@ YOLO segmentation models, such as `yolov8n-seg.pt`, can return JSON responses fr
    ```

=== "CLI API"

    ```bash
    curl -X POST "https://api.ultralytics.com/v1/predict/MODEL_ID" \
        -H "x-api-key: API_KEY" \
        -F "image=@/path/to/image.jpg" \

@@ -356,7 +356,7 @@ YOLO pose models, such as `yolov8n-pose.pt`, can return JSON responses from loca
    ```

=== "CLI API"

    ```bash
    curl -X POST "https://api.ultralytics.com/v1/predict/MODEL_ID" \
        -H "x-api-key: API_KEY" \
        -F "image=@/path/to/image.jpg" \

@@ -0,0 +1,7 @@
---
comments: true
---

# 🚧 Page Under Construction ⚒

This page is currently under construction! 👷 Please check back later for updates. 😃🔜