ultralytics 8.0.42
DDP fix and Docs updates (#1065)
Co-authored-by: Laughing <61612323+Laughing-q@users.noreply.github.com>
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
Co-authored-by: Ayush Chaurasia <ayush.chaurarsia@gmail.com>
Co-authored-by: Noobtoss <96134731+Noobtoss@users.noreply.github.com>
Co-authored-by: Laughing-q <1185102784@qq.com>
@@ -32,10 +32,11 @@ predictor's call method.
Results object consists of these component objects:

- - `Results.boxes` : `Boxes` object with properties and methods for manipulating bboxes
- - `Results.masks` : `Masks` object used to index masks or to get segment coordinates.
- - `Results.probs` : `torch.Tensor` containing the class probabilities/logits.
- - `Results.orig_shape` : `tuple` containing the original image size as (height, width).
+ - `Results.boxes`: `Boxes` object with properties and methods for manipulating bboxes
+ - `Results.masks`: `Masks` object used to index masks or to get segment coordinates.
+ - `Results.probs`: `torch.Tensor` containing the class probabilities/logits.
+ - `Results.orig_img`: Original image loaded in memory.
+ - `Results.path`: `Path` containing the path to input image

Each result is composed of `torch.Tensor` objects by default, so you can easily use the following functionality:
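
As a quick illustration of the components listed above, here is a minimal sketch; it assumes an official detection model and an image path, and the `.xyxy` attribute plus the standard `torch.Tensor` methods used on it are assumptions based on the descriptions above:

```python
from ultralytics import YOLO

model = YOLO("yolov8n.pt")            # load an official detection model
results = model("path/to/image.jpg")  # run prediction on a single image

res = results[0]
print(res.orig_shape)                 # original image size as (height, width)
boxes = res.boxes                     # Boxes object with the predicted bboxes
xyxy = boxes.xyxy.cpu().numpy()       # assumed attribute; standard tensor methods like .cpu()/.numpy() apply
```
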
@@ -94,18 +95,18 @@ results[0].probs # cls prob, (num_class, )
Class reference documentation for `Results` module and its components can be found [here](reference/results.md)

- ## Visualizing results
+ ## Plotting results

- You can use `visualize()` function of `Result` object to get a visualization. It plots all components(boxes, masks,
+ You can use the `plot()` function of the `Result` object to plot results on an image object. It plots all components (boxes, masks,
classification logits, etc.) found in the results object

```python
- res = model(img)
- res_plotted = res[0].visualize()
- cv2.imshow("result", res_plotted)
+ res = model(img)
+ res_plotted = res[0].plot()
+ cv2.imshow("result", res_plotted)
```

- !!! example "`visualize()` arguments"
+ !!! example "`plot()` arguments"

`show_conf (bool)`: Show confidence
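
For instance, a minimal sketch of passing `show_conf` (as in the snippet above, `model`, `img` and `cv2` are assumed to be available; other `plot()` arguments are omitted here):

```python
res = model(img)
res_plotted = res[0].plot(show_conf=False)  # draw the results without confidence values
cv2.imshow("result", res_plotted)
```
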
docs/tasks/tracking.md (new file, 86 lines)
@@ -0,0 +1,86 @@
Object tracking is a task that involves identifying the location and class of objects, then assigning a unique ID to that detection in video streams.

The output of the tracker is the same as detection, with an added object ID.

## Available Trackers

The following tracking algorithms have been implemented and can be enabled by passing `tracker=tracker_type.yaml`

* [BoT-SORT](https://github.com/NirAharon/BoT-SORT) - `botsort.yaml`
* [ByteTrack](https://github.com/ifzhang/ByteTrack) - `bytetrack.yaml`

The default tracker is BoT-SORT.

## Tracking

Use a trained YOLOv8n/YOLOv8n-seg model to run the tracker on video streams.

!!! example ""

    === "Python"

        ```python
        from ultralytics import YOLO

        # Load a model
        model = YOLO("yolov8n.pt") # load an official detection model
        model = YOLO("yolov8n-seg.pt") # load an official segmentation model
        model = YOLO("path/to/best.pt") # load a custom model

        # Track with the model
        results = model.track(source="https://youtu.be/Zgi9g1ksQHc", show=True)
        results = model.track(source="https://youtu.be/Zgi9g1ksQHc", show=True, tracker="bytetrack.yaml")
        ```
    === "CLI"

        ```bash
        yolo track model=yolov8n.pt source="https://youtu.be/Zgi9g1ksQHc" # official detection model
        yolo track model=yolov8n-seg.pt source=... # official segmentation model
        yolo track model=path/to/best.pt source=... # custom model
        yolo track model=path/to/best.pt tracker="bytetrack.yaml" # bytetrack tracker
        ```

As shown in the usage above, tracking works with both detection and segmentation models; the only thing you need to do is load the corresponding (detection or segmentation) model.

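The object ID mentioned above can be read back from each frame's results. Below is a minimal sketch; the `boxes.id` attribute name is an assumption here and may be `None` on frames where nothing is tracked:

```python
from ultralytics import YOLO

model = YOLO("yolov8n.pt")
results = model.track(source="https://youtu.be/Zgi9g1ksQHc", show=True)

for r in results:            # one Results object per frame
    ids = r.boxes.id         # per-object track IDs (assumed attribute; may be None)
    if ids is not None:
        print(ids.tolist())  # e.g. [1, 2, 5]
```
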
## Configuration

### Tracking

Tracking shares its configuration with predict, e.g. `conf`, `iou` and `show`. For more configuration options, please refer to the [predict page](https://docs.ultralytics.com/cfg/#prediction).

!!! example ""

=== "Python"
|
||||
|
||||
```python
|
||||
from ultralytics import YOLO
|
||||
|
||||
model = YOLO("yolov8n.pt")
|
||||
results = model.track(source="https://youtu.be/Zgi9g1ksQHc", conf=0.3, iou=0.5, show=True)
|
||||
```
|
||||
=== "CLI"
|
||||
|
||||
```bash
|
||||
yolo track model=yolov8n.pt source="https://youtu.be/Zgi9g1ksQHc" conf=0.3, iou=0.5 show
|
||||
|
||||
```
### Tracker

We also support using a modified tracker config file: just copy a config file, e.g. `custom_tracker.yaml`, from [ultralytics/tracker/cfg](https://github.com/ultralytics/ultralytics/tree/main/ultralytics/tracker/cfg) and modify any configuration (except the `tracker_type`) you need to.

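For instance, here is a minimal sketch of making such a copy before editing it by hand; the source path is an assumption based on the repository layout linked above, so adjust it to your clone or installation:

```python
import shutil

# Copy a bundled tracker config (source path assumed), then edit custom_tracker.yaml
# by hand, keeping the `tracker_type` entry unchanged.
shutil.copy("ultralytics/tracker/cfg/bytetrack.yaml", "custom_tracker.yaml")
```
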
!!! example ""

    === "Python"

        ```python
        from ultralytics import YOLO

        model = YOLO("yolov8n.pt")
        results = model.track(source="https://youtu.be/Zgi9g1ksQHc", tracker='custom_tracker.yaml')
        ```
    === "CLI"

        ```bash
        yolo track model=yolov8n.pt source="https://youtu.be/Zgi9g1ksQHc" tracker='custom_tracker.yaml'
        ```

Please refer to the [ultralytics/tracker/cfg](https://github.com/ultralytics/ultralytics/tree/main/ultralytics/tracker/cfg) page.