Operations are cached, meaning they're only calculated once per object, and those results are reused on subsequent calls:
```python
results = model(inputs)
masks = results[0].masks  # Masks object
masks.xy    # x, y segments (pixels), List[segment] * N
masks.xyn   # x, y segments (normalized), List[segment] * N
masks.data  # raw masks tensor, (N, H, W) or masks.masks
```
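The per-instance segment lists can be iterated directly, for example to inspect each polygon. Below is a minimal sketch, assuming `masks` from the snippet above and that each entry of `masks.xy` is an array of `(x, y)` pixel coordinates for one instance:

```python
import numpy as np

# Hypothetical illustration: one polygon per detected instance
for polygon in masks.xy:
    pts = np.asarray(polygon, dtype=np.int32)  # shape (K, 2): K boundary points
    print(pts.shape, pts.min(axis=0), pts.max(axis=0))  # rough extent of the mask
```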
You can use the `plot()` function of the `Results` object to plot results on an image. It plots all components (boxes, masks, classification logits, etc.) found in the results object:
- `show_conf (bool)`: Show confidence scores
- `line_width (Float)`: The line width of boxes. Automatically scaled to img size if not provided
- `font_size (Float)`: The font size of the text. Automatically scaled to img size if not provided
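As a hedged illustration of these arguments (a minimal sketch; the weights file and image path are placeholders, and the argument names follow the list above):

```python
import cv2
from ultralytics import YOLO

model = YOLO('yolov8n.pt')
results = model('bus.jpg')

# Draw boxes/masks on the image with confidences shown and explicit styling
annotated = results[0].plot(show_conf=True, line_width=2, font_size=12)
cv2.imwrite('annotated.jpg', annotated)  # plot() returns an annotated image array
```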
## Streaming Source `for`-loop
Here's a Python script using OpenCV (cv2) and YOLOv8 to run inference on video frames. This script assumes you have already installed the necessary packages (opencv-python and ultralytics).
!!! example "Streaming for-loop"
    ```python
    import cv2

    from ultralytics import YOLO

    # Load the YOLOv8 model
    model = YOLO('yolov8n.pt')

    # Open the video file
    video_path = "path/to/your/video/file.mp4"
    cap = cv2.VideoCapture(video_path)

    # Loop through the video frames
    while cap.isOpened():
        # Read a frame from the video
        success, frame = cap.read()

        if success:
            # Run YOLOv8 inference on the frame
            results = model(frame)

            # Visualize the results on the frame
            annotated_frame = results[0].plot()

            # Display the annotated frame
            cv2.imshow("YOLOv8 Inference", annotated_frame)

            # Break the loop if 'q' is pressed
            if cv2.waitKey(1) & 0xFF == ord("q"):
                break
        else:
            # Break the loop if the end of the video is reached
            break

    # Release the video capture object and close the display window
    cap.release()
    cv2.destroyAllWindows()
    ```
Here are all supported callbacks. See callbacks [source code](https://github.com/ultralytics/ultralytics/blob/main/ultralytics/yolo/utils/callbacks/base.py) for additional details.
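As a hedged sketch of how a callback can be attached (assuming the `add_callback` method of the `YOLO` model and the `on_train_start` event; adjust to the callbacks actually listed in the source linked above):

```python
from ultralytics import YOLO

def log_train_start(trainer):
    # Hypothetical callback body: runs once when training begins
    print("Training started")

model = YOLO('yolov8n.pt')
model.add_callback("on_train_start", log_train_start)
model.train(data='coco128.yaml', epochs=1)
```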
## Train

The training settings for YOLO models encompass various hyperparameters and configurations used during the training process. These settings influence the model's performance, speed, and accuracy. Key training settings include batch size, learning rate, momentum, and weight decay. Additionally, the choice of optimizer, loss function, and training dataset composition can impact the training process. Careful tuning and experimentation with these settings are crucial for optimizing performance.
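As a hedged example of passing such settings (a minimal sketch; the dataset, weights, and values shown are placeholders, and the argument names assume the Ultralytics `train()` interface):

```python
from ultralytics import YOLO

model = YOLO('yolov8n.pt')
model.train(
    data='coco128.yaml',  # training dataset definition
    epochs=100,
    batch=16,             # batch size
    lr0=0.01,             # initial learning rate
    momentum=0.937,       # SGD momentum
    weight_decay=0.0005,  # optimizer weight decay
    optimizer='SGD',      # choice of optimizer
)
```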
## Predict

The prediction settings for YOLO models encompass a range of hyperparameters and configurations that influence the model's performance, speed, and accuracy during inference on new data. Key settings include the confidence threshold, Non-Maximum Suppression (NMS) threshold, and the number of classes considered. Additional factors affecting the prediction process are the size and format of the input data, the presence of supplementary features such as masks or multiple labels per box, and the particular task the model is employed for. Careful tuning and experimentation with these settings are essential to achieve optimal performance for a specific task.
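A hedged illustration of these settings (a minimal sketch; the image path is a placeholder and the argument names assume the Ultralytics predict interface):

```python
from ultralytics import YOLO

model = YOLO('yolov8n.pt')
results = model.predict(
    'image.jpg',
    conf=0.25,       # confidence threshold
    iou=0.7,         # NMS IoU threshold
    classes=[0, 2],  # restrict predictions to these class indices
    imgsz=640,       # inference image size
)
```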
## Val

The val (validation) settings for YOLO models involve various hyperparameters and configurations used to evaluate the model's performance on a validation dataset. These settings influence the model's performance, speed, and accuracy. Common YOLO validation settings include batch size, validation frequency during training, and the metrics used to evaluate performance. Other factors affecting the validation process include the validation dataset's size and composition, as well as the specific task the model is employed for. Careful tuning and experimentation with these settings are crucial to ensure good performance on the validation dataset and to detect and prevent overfitting.
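A hedged sketch of running validation with explicit settings (the dataset and values are placeholders; argument names assume the Ultralytics `val()` interface):

```python
from ultralytics import YOLO

model = YOLO('yolov8n.pt')
metrics = model.val(
    data='coco128.yaml',  # validation dataset definition
    batch=32,             # validation batch size
    conf=0.001,           # confidence threshold used for evaluation
    iou=0.6,              # NMS IoU threshold used for evaluation
)
print(metrics.box.map)    # e.g. box mAP50-95
```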
## Export

Export settings for YOLO models encompass configurations and options related to saving or exporting the model for use in different environments or platforms. These settings can impact the model's performance, size, and compatibility with various systems. Key export settings include the exported model file format (e.g. ONNX, TensorFlow SavedModel), the target device (e.g. CPU, GPU), and the presence of additional features such as masks or multiple labels per box. The export process may also be affected by the model's specific task and the requirements or constraints of the destination environment or platform. It is crucial to thoughtfully configure these settings to ensure the exported model is optimized for the intended use case and functions effectively in the target environment.
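A hedged example of exporting with a few of these options (a minimal sketch; argument names assume the Ultralytics `export()` interface):

```python
from ultralytics import YOLO

model = YOLO('yolov8n.pt')
model.export(
    format='onnx',  # exported model file format
    imgsz=640,      # image size baked into the exported graph
    half=False,     # FP16 quantization off
)
```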
YOLOv5 models must be trained on labelled data in order to learn classes of objects in that data. There are two options for creating your dataset before you start training:
<details markdown>
<summary>Use Roboflow to manage your dataset in YOLO format</summary>
### 1.1 Collect Images
### 1.2 Create Labels
After using an annotation tool to label your images, export your labels to **YOLO format**, with one `*.txt` file per image (if no objects in image, no `*.txt` file is required). The `*.txt` file specifications are:
- One row per object
- Each row is `class x_center y_center width height` format, as illustrated in the sketch below.
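For illustration, a hedged sketch of converting a pixel-space box to a YOLO-format label row (in YOLO format, box coordinates are normalized to 0-1 by image width and height; the helper name is hypothetical):

```python
def to_yolo_row(cls_id, x1, y1, x2, y2, img_w, img_h):
    """Convert a pixel-space box (x1, y1, x2, y2) to a YOLO-format label row."""
    x_center = ((x1 + x2) / 2) / img_w
    y_center = ((y1 + y2) / 2) / img_h
    width = (x2 - x1) / img_w
    height = (y2 - y1) / img_h
    return f"{cls_id} {x_center:.6f} {y_center:.6f} {width:.6f} {height:.6f}"

# e.g. a class-0 object in a 640x480 image
print(to_yolo_row(0, 100, 150, 300, 400, 640, 480))
```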