ultralytics 8.0.97
confusion matrix, windows, docs updates (#2511)
Co-authored-by: Yonghye Kwon <developer.0hye@gmail.com>
Co-authored-by: Dowon <ks2515@naver.com>
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
Co-authored-by: Laughing <61612323+Laughing-q@users.noreply.github.com>
@ -1,5 +1,6 @@
---
comments: true
description: Learn about the supported models and architectures, such as YOLOv3, YOLOv5, and YOLOv8, and how to contribute your own model to Ultralytics.
---

# Models

@ -1,5 +1,6 @@
---
comments: true
description: Learn about the Vision Transformer (ViT) and segment anything with SAM models. Train and use pre-trained models with the Python API.
---

# Vision Transformers

@ -9,11 +10,11 @@ ViT models are currently supported in the Python environment:

```python
from ultralytics.vit import SAM

# from ultralytics.vit import MODEL_TYPE

model = SAM("sam_b.pt")
model.info()  # display model information
model.predict(...)  # predict
```

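For context, a minimal inference sketch building on the snippet above (the `source` path here is an illustrative assumption, not part of this commit):

```python
from ultralytics.vit import SAM

# Load the pre-trained SAM base checkpoint referenced above.
model = SAM("sam_b.pt")

# Run segmentation on a local image; the path is a placeholder.
results = model.predict(source="path/to/image.jpg")
```

Only inference is exercised here, since validation and training are marked as unsupported in the table below.
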
# Segment Anything

@ -33,4 +34,4 @@ model.predict(...) # train the model

|------------|--------------------|
| Inference | :heavy_check_mark: |
| Validation | :x: |
| Training | :x: |

@ -1,5 +1,6 @@
---
comments: true
description: Detect objects faster and more accurately using Ultralytics YOLOv5u. Find pre-trained models for each task, including Inference, Validation and Training.
---

# YOLOv5u

@ -38,4 +39,4 @@ Anchor-free YOLOv5 models with an improved accuracy-speed tradeoff.

| [YOLOv5s6u](https://github.com/ultralytics/assets/releases/download/v0.0.0/yolov5s6u.pt) | 1280 | 48.6 | - | - | 15.3 | 24.6 |
| [YOLOv5m6u](https://github.com/ultralytics/assets/releases/download/v0.0.0/yolov5m6u.pt) | 1280 | 53.6 | - | - | 41.2 | 65.7 |
| [YOLOv5l6u](https://github.com/ultralytics/assets/releases/download/v0.0.0/yolov5l6u.pt) | 1280 | 55.7 | - | - | 86.1 | 137.4 |
| [YOLOv5x6u](https://github.com/ultralytics/assets/releases/download/v0.0.0/yolov5x6u.pt) | 1280 | 56.8 | - | - | 155.4 | 250.7 |
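
As a rough sketch of how one of these checkpoints might be loaded with the standard Ultralytics Python API (the image path and the `imgsz` choice are illustrative assumptions):

```python
from ultralytics import YOLO

# Load the anchor-free YOLOv5s6u weights from the table above.
model = YOLO("yolov5s6u.pt")

# Run detection at the 1280 input size listed for the *6u models.
results = model.predict("path/to/image.jpg", imgsz=1280)
```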

@ -1,5 +1,6 @@
---
comments: true
description: Learn about YOLOv8's pre-trained weights supporting detection, instance segmentation, pose, and classification tasks. Get performance details.
---

# YOLOv8

@ -64,4 +65,4 @@ comments: true

| [YOLOv8m-pose](https://github.com/ultralytics/assets/releases/download/v0.0.0/yolov8m-pose.pt) | 640 | 65.0 | 88.8 | 456.3 | 2.00 | 26.4 | 81.0 |
| [YOLOv8l-pose](https://github.com/ultralytics/assets/releases/download/v0.0.0/yolov8l-pose.pt) | 640 | 67.6 | 90.0 | 784.5 | 2.59 | 44.4 | 168.6 |
| [YOLOv8x-pose](https://github.com/ultralytics/assets/releases/download/v0.0.0/yolov8x-pose.pt) | 640 | 69.2 | 90.2 | 1607.1 | 3.73 | 69.4 | 263.2 |
| [YOLOv8x-pose-p6](https://github.com/ultralytics/assets/releases/download/v0.0.0/yolov8x-pose-p6.pt) | 1280 | 71.6 | 91.2 | 4088.7 | 10.04 | 99.1 | 1066.4 |
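
For orientation, a minimal pose-inference sketch using the standard Ultralytics Python API (the image path is a placeholder assumption):

```python
from ultralytics import YOLO

# Load the YOLOv8x-pose weights from the table above.
model = YOLO("yolov8x-pose.pt")

# Run pose estimation on an example image (placeholder path).
results = model.predict("path/to/image.jpg")

# Each Results object carries the detected keypoints.
for r in results:
    print(r.keypoints)
```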