ultralytics 8.0.97
confusion matrix, windows, docs updates (#2511)
Co-authored-by: Yonghye Kwon <developer.0hye@gmail.com>
Co-authored-by: Dowon <ks2515@naver.com>
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
Co-authored-by: Laughing <61612323+Laughing-q@users.noreply.github.com>
@ -1,5 +1,6 @@
---
comments: true
description: Benchmark mode compares speed and accuracy of various YOLOv8 export formats like ONNX or OpenVINO. Optimize formats for speed or accuracy.
---
<img width="1024" src="https://github.com/ultralytics/assets/raw/main/yolov8/banner-integrations.png">
@ -1,5 +1,6 @@
---
comments: true
description: 'Export mode: Create a deployment-ready YOLOv8 model by converting it to various formats. Export to ONNX or OpenVINO for up to 3x CPU speedup.'
---
<img width="1024" src="https://github.com/ultralytics/assets/raw/main/yolov8/banner-integrations.png">
@ -82,4 +83,4 @@ i.e. `format='onnx'` or `format='engine'`.
| [TF Lite](https://www.tensorflow.org/lite) | `tflite` | `yolov8n.tflite` | ✅ | `imgsz`, `half`, `int8` |
| [TF Edge TPU](https://coral.ai/docs/edgetpu/models-intro/) | `edgetpu` | `yolov8n_edgetpu.tflite` | ✅ | `imgsz` |
| [TF.js](https://www.tensorflow.org/js) | `tfjs` | `yolov8n_web_model/` | ✅ | `imgsz` |
| [PaddlePaddle](https://github.com/PaddlePaddle) | `paddle` | `yolov8n_paddle_model/` | ✅ | `imgsz` |
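As a quick illustration, the `format` value from this table is passed straight to the export call, along these lines (a minimal sketch assuming a local `yolov8n.pt`):

```python
from ultralytics import YOLO

# Load a pretrained model and export it; any `format` value from the table works the same way
model = YOLO('yolov8n.pt')
model.export(format='onnx')  # e.g. 'openvino', 'engine', 'tflite', ...
```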
@ -1,5 +1,6 @@
---
comments: true
description: Use Ultralytics YOLOv8 Modes (Train, Val, Predict, Export, Track, Benchmark) to train, validate, predict, track, export or benchmark.
---
# Ultralytics YOLOv8 Modes
@ -63,4 +64,4 @@ or `accuracy_top5` metrics (for classification), and the inference time in milli
formats like ONNX, OpenVINO, TensorRT and others. This information can help users choose the optimal export format for
their specific use case based on their requirements for speed and accuracy.
[Benchmark Examples](benchmark.md){ .md-button .md-button--primary}
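As a rough illustration of the modes listed above, a single model object can move between them (the dataset YAML, image URL and export format below are placeholders):

```python
from ultralytics import YOLO

model = YOLO('yolov8n.pt')                                  # load a pretrained model
model.train(data='coco128.yaml', epochs=3)                  # Train mode
metrics = model.val()                                       # Val mode: evaluate on the validation set
results = model('https://ultralytics.com/images/bus.jpg')   # Predict mode
model.export(format='onnx')                                 # Export mode
```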
@ -1,5 +1,6 @@
---
comments: true
description: Get started with YOLOv8 Predict mode and input sources. Accepts various input sources such as images, videos, and directories.
---
<img width="1024" src="https://github.com/ultralytics/assets/raw/main/yolov8/banner-integrations.png">
@ -58,10 +59,11 @@ whether each source can be used in streaming mode with `stream=True` ✅ and an
| YouTube ✅ | `'https://youtu.be/Zgi9g1ksQHc'` | `str` | |
| stream ✅ | `'rtsp://example.com/media.mp4'` | `str` | RTSP, RTMP, HTTP |
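For long videos and live streams such as the RTSP source above, `stream=True` returns a memory-efficient generator of results, roughly like this (a minimal sketch; the weights file is illustrative and the URL is the placeholder from the table):

```python
from ultralytics import YOLO

model = YOLO('yolov8n.pt')

# stream=True yields one Results object per frame instead of holding them all in memory
for result in model.predict('rtsp://example.com/media.mp4', stream=True):
    boxes = result.boxes  # detections for this frame
```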
## Arguments
`model.predict()` accepts multiple arguments that control the prediction operation; they can be passed directly at call time:
!!! example

    ```python
    model.predict(source, save=True, imgsz=320, conf=0.5)
    ```
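In a fuller sketch, the same call might look like this (the `yolov8n.pt` weights file and `bus.jpg` image path are illustrative):

```python
from ultralytics import YOLO

model = YOLO('yolov8n.pt')
results = model.predict('bus.jpg', save=True, imgsz=320, conf=0.5)

for result in results:
    print(result.boxes.xyxy)  # predicted boxes in (x1, y1, x2, y2) format
```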
@ -220,6 +222,7 @@ masks, classification logits, etc.) found in the results object
# Plot the results image (a BGR numpy array with the predictions drawn)
res_plotted = res[0].plot()
cv2.imshow("result", res_plotted)
```
| Argument | Description |
|-------------------------------|----------------------------------------------------------------------------------------|
| `conf (bool)` | Whether to plot the detection confidence score. |
@ -234,7 +237,6 @@ masks, classification logits, etc.) found in the results object
| `masks (bool)` | Whether to plot the masks. |
| `probs (bool)` | Whether to plot classification probability. |
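As a quick sketch, these keywords are passed directly to `plot()` (the weights file and image path below are illustrative):

```python
import cv2
from ultralytics import YOLO

model = YOLO('yolov8n.pt')
res = model('bus.jpg')

# Draw the detections without confidence scores; the other keywords in the table work the same way
res_plotted = res[0].plot(conf=False)
cv2.imshow("result", res_plotted)
cv2.waitKey(0)  # keep the window open until a key is pressed
```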
## Streaming Source `for`-loop
Here's a Python script using OpenCV (cv2) and YOLOv8 to run inference on video frames. This script assumes you have already installed the necessary packages (opencv-python and ultralytics).
@ -277,4 +279,4 @@ Here's a Python script using OpenCV (cv2) and YOLOv8 to run inference on video f
# Release the video capture object and close the display window
cap.release()
cv2.destroyAllWindows()
```
@ -1,5 +1,6 @@
---
comments: true
description: Validate and improve YOLOv8n model accuracy on COCO128 and other datasets using hyperparameter & configuration tuning, in Val mode.
---
<img width="1024" src="https://github.com/ultralytics/assets/raw/main/yolov8/banner-integrations.png">
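For example, validation can be run roughly as follows (a minimal sketch; the `yolov8n.pt` weights file is illustrative, and COCO128 is downloaded automatically if missing):

```python
from ultralytics import YOLO

# Validate a pretrained model on COCO128 and read the headline metrics
model = YOLO('yolov8n.pt')
metrics = model.val(data='coco128.yaml')
print(metrics.box.map)    # mAP50-95
print(metrics.box.map50)  # mAP50
```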
@ -87,4 +88,4 @@ i.e. `format='onnx'` or `format='engine'`.
| [TF Lite](https://www.tensorflow.org/lite) | `tflite` | `yolov8n.tflite` | ✅ | `imgsz`, `half`, `int8` |
| [TF Edge TPU](https://coral.ai/docs/edgetpu/models-intro/) | `edgetpu` | `yolov8n_edgetpu.tflite` | ✅ | `imgsz` |
| [TF.js](https://www.tensorflow.org/js) | `tfjs` | `yolov8n_web_model/` | ✅ | `imgsz` |
| [PaddlePaddle](https://github.com/PaddlePaddle) | `paddle` | `yolov8n_paddle_model/` | ✅ | `imgsz` |