`ultralytics 8.0.158` add benchmarks to coverage (#4432)

Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
Co-authored-by: Yonghye Kwon <developer.0hye@gmail.com>
Glenn Jocher committed by GitHub
parent 495806565d
commit 87ce15d383

@@ -106,11 +106,7 @@ jobs:
        shell: bash  # for Windows compatibility
        run: |
          python -m pip install --upgrade pip wheel
-          if [ "${{ matrix.os }}" == "macos-latest" ]; then
-            pip install -e ".[export]" --extra-index-url https://download.pytorch.org/whl/cpu
-          else
-            pip install -e ".[export]" --extra-index-url https://download.pytorch.org/whl/cpu
-          fi
+          pip install -e ".[export]" coverage --extra-index-url https://download.pytorch.org/whl/cpu
          yolo export format=tflite imgsz=32 || true
      - name: Check environment
        run: |
@@ -125,16 +121,25 @@ jobs:
          pip list
      - name: Benchmark DetectionModel
        shell: bash
-        run: yolo benchmark model='path with spaces/${{ matrix.model }}.pt' imgsz=160 verbose=0.26
+        run: coverage run -a --source=ultralytics -m ultralytics.cfg.__init__ benchmark model='path with spaces/${{ matrix.model }}.pt' imgsz=160 verbose=0.26
      - name: Benchmark SegmentationModel
        shell: bash
-        run: yolo benchmark model='path with spaces/${{ matrix.model }}-seg.pt' imgsz=160 verbose=0.30
+        run: coverage run -a --source=ultralytics -m ultralytics.cfg.__init__ benchmark model='path with spaces/${{ matrix.model }}-seg.pt' imgsz=160 verbose=0.30
      - name: Benchmark ClassificationModel
        shell: bash
-        run: yolo benchmark model='path with spaces/${{ matrix.model }}-cls.pt' imgsz=160 verbose=0.36
+        run: coverage run -a --source=ultralytics -m ultralytics.cfg.__init__ benchmark model='path with spaces/${{ matrix.model }}-cls.pt' imgsz=160 verbose=0.36
      - name: Benchmark PoseModel
        shell: bash
-        run: yolo benchmark model='path with spaces/${{ matrix.model }}-pose.pt' imgsz=160 verbose=0.17
+        run: coverage run -a --source=ultralytics -m ultralytics.cfg.__init__ benchmark model='path with spaces/${{ matrix.model }}-pose.pt' imgsz=160 verbose=0.17
+      - name: Merge Coverage Reports
+        run: |
+          coverage xml -o coverage-benchmarks.xml
+      - name: Upload Coverage Reports to CodeCov
+        uses: codecov/codecov-action@v3
+        with:
+          flags: Benchmarks
+        env:
+          CODECOV_TOKEN: ${{ secrets.CODECOV_TOKEN }}
      - name: Benchmark Summary
        run: |
          cat benchmarks.log
@@ -183,9 +188,11 @@ jobs:
      - name: Pytest tests
        shell: bash  # for Windows compatibility
        run: pytest --cov=ultralytics/ --cov-report xml tests/
-      - name: Upload coverage reports to Codecov
+      - name: Upload Coverage Reports to CodeCov
        if: github.repository == 'ultralytics/ultralytics' && matrix.os == 'ubuntu-latest' && matrix.python-version == '3.11'
        uses: codecov/codecov-action@v3
+        with:
+          flags: Tests
        env:
          CODECOV_TOKEN: ${{ secrets.CODECOV_TOKEN }}

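The benchmark steps above trade the `yolo` console script for `coverage run -a` against the same CLI module, so each benchmark run appends to a single `.coverage` data file that `coverage xml` then merges into the report uploaded under the `Benchmarks` flag. A rough local equivalent through coverage.py's Python API; the `benchmark` helper in `ultralytics.utils.benchmarks` and the local `yolov8n.pt` checkpoint are assumptions here, not part of this diff:

```python
# Hedged sketch: accumulate coverage over a benchmark run, then write the
# same XML artifact the workflow uploads. Assumes `pip install coverage`
# and a local 'yolov8n.pt' checkpoint.
import coverage

cov = coverage.Coverage(source=['ultralytics'])
cov.start()  # start before importing ultralytics so import-time lines are measured

from ultralytics.utils.benchmarks import benchmark  # assumed helper behind the CLI

benchmark(model='yolov8n.pt', imgsz=160, verbose=0.26)  # 0.26 = minimum mAP floor used in CI

cov.stop()
cov.save()
cov.xml_report(outfile='coverage-benchmarks.xml')  # matches the merged report name above
```
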
.gitignore

@@ -118,6 +118,9 @@ venv.bak/
 .spyderproject
 .spyproject

+# VSCode project settings
+.vscode/
+
 # Rope project settings
 .ropeproject

@ -1,6 +1,6 @@
--- ---
description: Explore the exporter functionality of Ultralytics. Learn about exporting formats, iOSDetectModel, and try exporting with examples. description: Explore the exporter functionality of Ultralytics. Learn about exporting formats, IOSDetectModel, and try exporting with examples.
keywords: Ultralytics, Exporter, iOSDetectModel, Export Formats, Try export keywords: Ultralytics, Exporter, IOSDetectModel, Export Formats, Try export
--- ---
# Reference for `ultralytics/engine/exporter.py` # Reference for `ultralytics/engine/exporter.py`
@ -14,7 +14,7 @@ keywords: Ultralytics, Exporter, iOSDetectModel, Export Formats, Try export
<br><br> <br><br>
--- ---
## ::: ultralytics.engine.exporter.iOSDetectModel ## ::: ultralytics.engine.exporter.IOSDetectModel
<br><br> <br><br>
--- ---
@ -28,7 +28,3 @@ keywords: Ultralytics, Exporter, iOSDetectModel, Export Formats, Try export
--- ---
## ::: ultralytics.engine.exporter.try_export ## ::: ultralytics.engine.exporter.try_export
<br><br> <br><br>
---
## ::: ultralytics.engine.exporter.export
<br><br>

@@ -12,7 +12,3 @@ keywords: Ultralytics, RTDETRTrainer, model training, Ultralytics models, PyTorc
 ---
 ## ::: ultralytics.models.rtdetr.train.RTDETRTrainer
 <br><br>
-
----
-## ::: ultralytics.models.rtdetr.train.train
-<br><br>

@@ -12,7 +12,3 @@ keywords: Ultralytics, classification predictor, predict, YOLO, AI models, model
 ---
 ## ::: ultralytics.models.yolo.classify.predict.ClassificationPredictor
 <br><br>
-
----
-## ::: ultralytics.models.yolo.classify.predict.predict
-<br><br>

@@ -12,7 +12,3 @@ keywords: Ultralytics, YOLO, Classification Trainer, deep learning, training pro
 ---
 ## ::: ultralytics.models.yolo.classify.train.ClassificationTrainer
 <br><br>
-
----
-## ::: ultralytics.models.yolo.classify.train.train
-<br><br>

@@ -12,7 +12,3 @@ keywords: Ultralytics, YOLO, ClassificationValidator, model validation, model fi
 ---
 ## ::: ultralytics.models.yolo.classify.val.ClassificationValidator
 <br><br>
-
----
-## ::: ultralytics.models.yolo.classify.val.val
-<br><br>

@@ -12,7 +12,3 @@ keywords: Ultralytics, YOLO, DetectionPredictor, detect, predict, object detecti
 ---
 ## ::: ultralytics.models.yolo.detect.predict.DetectionPredictor
 <br><br>
-
----
-## ::: ultralytics.models.yolo.detect.predict.predict
-<br><br>

@@ -12,7 +12,3 @@ keywords: Ultralytics YOLO, YOLO, Detection Trainer, Model Training, Machine Lea
 ---
 ## ::: ultralytics.models.yolo.detect.train.DetectionTrainer
 <br><br>
-
----
-## ::: ultralytics.models.yolo.detect.train.train
-<br><br>

@@ -12,7 +12,3 @@ keywords: Ultralytics, YOLO, Detection Validator, model valuation, precision, re
 ---
 ## ::: ultralytics.models.yolo.detect.val.DetectionValidator
 <br><br>
-
----
-## ::: ultralytics.models.yolo.detect.val.val
-<br><br>

@@ -12,7 +12,3 @@ keywords: Ultralytics, YOLO, PosePredictor, machine learning, AI, predictive mod
 ---
 ## ::: ultralytics.models.yolo.pose.predict.PosePredictor
 <br><br>
-
----
-## ::: ultralytics.models.yolo.pose.predict.predict
-<br><br>

@@ -12,7 +12,3 @@ keywords: Ultralytics, YOLO, PoseTrainer, pose training, AI modeling, custom dat
 ---
 ## ::: ultralytics.models.yolo.pose.train.PoseTrainer
 <br><br>
-
----
-## ::: ultralytics.models.yolo.pose.train.train
-<br><br>

@@ -12,7 +12,3 @@ keywords: PoseValidator, Ultralytics, YOLO, Object detection, Pose validation
 ---
 ## ::: ultralytics.models.yolo.pose.val.PoseValidator
 <br><br>
-
----
-## ::: ultralytics.models.yolo.pose.val.val
-<br><br>

@@ -12,7 +12,3 @@ keywords: YOLO, Ultralytics, object detection, segmentation predictor
 ---
 ## ::: ultralytics.models.yolo.segment.predict.SegmentationPredictor
 <br><br>
-
----
-## ::: ultralytics.models.yolo.segment.predict.predict
-<br><br>

@@ -14,7 +14,7 @@ the required functions or operations as long as the correct formats are followed
 custom model and dataloader by just overriding these functions:

 * `get_model(cfg, weights)` - The function that builds the model to be trained
-* `get_dataloder()` - The function that builds the dataloader
+* `get_dataloader()` - The function that builds the dataloader

 More details and source code can be found in [`BaseTrainer` Reference](../reference/engine/trainer.md)

 ## DetectionTrainer

@@ -401,6 +401,7 @@ plugins:
       handlers:
         python:
           options:
+            docstring_style: google
             show_root_heading: true
             show_source: true
   - ultralytics:

@@ -1,5 +1,5 @@
 # Ultralytics requirements
-# Usage: pip install -r requirements.txt
+# Example: pip install -r requirements.txt

 # Base ----------------------------------------
 matplotlib>=3.2.2

@@ -40,6 +40,14 @@ def test_train(task, model, data):
 @pytest.mark.parametrize('task,model,data', TASK_ARGS)
 def test_val(task, model, data):
+    # Download annotations to run pycocotools eval
+    # from ultralytics.utils import SETTINGS, Path
+    # from ultralytics.utils.downloads import download
+    # url = 'https://github.com/ultralytics/assets/releases/download/v0.0.0/'
+    # download(f'{url}instances_val2017.json', dir=Path(SETTINGS['datasets_dir']) / 'coco8/annotations')
+    # download(f'{url}person_keypoints_val2017.json', dir=Path(SETTINGS['datasets_dir']) / 'coco8-pose/annotations')
+
+    # Validate
     run(f'yolo val {task} model={WEIGHTS_DIR / model}.pt data={data} imgsz=32 save_txt save_json')

@@ -132,13 +132,13 @@ def test_val():
 def test_train_scratch():
     model = YOLO(CFG)
-    model.train(data='coco8.yaml', epochs=1, imgsz=32, cache='disk', batch=-1)  # test disk caching with AutoBatch
+    model.train(data='coco8.yaml', epochs=2, imgsz=32, cache='disk', batch=-1, close_mosaic=1)
     model(SOURCE)


 def test_train_pretrained():
     model = YOLO(WEIGHTS_DIR / 'yolov8n-seg.pt')
-    model.train(data='coco8-seg.yaml', epochs=1, imgsz=32, cache='ram', copy_paste=0.5, mixup=0.5)  # test RAM caching
+    model.train(data='coco8-seg.yaml', epochs=1, imgsz=32, cache='ram', copy_paste=0.5, mixup=0.5)
     model(SOURCE)

@@ -283,6 +283,12 @@ def test_data_converter():
     coco80_to_coco91_class()


+def test_data_annotator():
+    from ultralytics.data.annotator import auto_annotate
+
+    auto_annotate(ASSETS, det_model='yolov8n.pt', sam_model='mobile_sam.pt', output_dir=TMP / 'auto_annotate_labels')
+
+
 def test_events():
     # Test event sending
     from ultralytics.hub.utils import Events

@@ -304,12 +310,15 @@ def test_utils_init():
 def test_utils_checks():
-    from ultralytics.utils.checks import check_requirements, check_yolov5u_filename, git_describe
+    from ultralytics.utils.checks import (check_imgsz, check_requirements, check_yolov5u_filename, git_describe,
+                                          print_args)

     check_yolov5u_filename('yolov5n.pt')
     # check_imshow(warn=True)
     git_describe(ROOT)
     check_requirements()  # check requirements.txt
+    check_imgsz([600, 600], max_dim=1)
+    print_args()


 def test_utils_benchmarks():

@@ -1,6 +1,6 @@
 # Ultralytics YOLO 🚀, AGPL-3.0 license

-__version__ = '8.0.157'
+__version__ = '8.0.158'

 from ultralytics.hub import start
 from ultralytics.models import RTDETR, SAM, YOLO

@@ -8,6 +8,7 @@ from ultralytics import SAM, YOLO
 def auto_annotate(data, det_model='yolov8x.pt', sam_model='sam_b.pt', device='', output_dir=None):
     """
     Automatically annotates images using a YOLO object detection model and a SAM segmentation model.
+
     Args:
         data (str): Path to a folder containing images to be annotated.
         det_model (str, optional): Pre-trained YOLO detection model. Defaults to 'yolov8x.pt'.
@@ -15,12 +16,20 @@ def auto_annotate(data, det_model='yolov8x.pt', sam_model='sam_b.pt', device='',
         device (str, optional): Device to run the models on. Defaults to an empty string (CPU or GPU, if available).
         output_dir (str | None | optional): Directory to save the annotated results.
             Defaults to a 'labels' folder in the same directory as 'data'.
+
+    Example:
+        ```python
+        from ultralytics.data.annotator import auto_annotate
+
+        auto_annotate(data='ultralytics/assets', det_model='yolov8n.pt', sam_model='mobile_sam.pt')
+        ```
     """
     det_model = YOLO(det_model)
     sam_model = SAM(sam_model)

+    data = Path(data)
     if not output_dir:
-        output_dir = Path(str(data)).parent / 'labels'
+        output_dir = data.parent / f'{data.stem}_auto_annotate_labels'
     Path(output_dir).mkdir(exist_ok=True, parents=True)

     det_results = det_model(data, stream=True, device=device)

@@ -402,7 +402,7 @@ class RandomPerspective:
             keypoints (ndarray): keypoints, [N, 17, 3].
             M (ndarray): affine matrix.

-        Return:
+        Returns:
             new_keypoints (ndarray): keypoints after affine, [N, 17, 3].
         """
         n, nkpt = keypoints.shape[:2]

@@ -484,7 +484,7 @@ class Exporter:
             classifier_config = ct.ClassifierConfig(list(self.model.names.values())) if self.args.nms else None
             model = self.model
         elif self.model.task == 'detect':
-            model = iOSDetectModel(self.model, self.im) if self.args.nms else self.model
+            model = IOSDetectModel(self.model, self.im) if self.args.nms else self.model
         else:
             if self.args.nms:
                 LOGGER.warning(f"{prefix} WARNING ⚠️ 'nms=True' is only available for Detect models like 'yolov8n.pt'.")
@@ -846,12 +846,11 @@ class Exporter:
         out0, out1 = iter(spec.description.output)
         if MACOS:
             from PIL import Image
-            img = Image.new('RGB', (w, h))  # img(192 width, 320 height)
-            # img = torch.zeros((*opt.img_size, 3)).numpy()  # img size(320,192,3) iDetection
+            img = Image.new('RGB', (w, h))  # w=192, h=320
             out = model.predict({'image': img})
-            out0_shape = out[out0.name].shape
-            out1_shape = out[out1.name].shape
-        else:  # linux and windows can not run model.predict(), get sizes from pytorch output y
+            out0_shape = out[out0.name].shape  # (3780, 80)
+            out1_shape = out[out1.name].shape  # (3780, 4)
+        else:  # linux and windows can not run model.predict(), get sizes from PyTorch model output y
             out0_shape = self.output_shape[2], self.output_shape[1] - 4  # (3780, 80)
             out1_shape = self.output_shape[2], 4  # (3780, 4)
@@ -963,11 +962,11 @@ class Exporter:
             callback(self)


-class iOSDetectModel(torch.nn.Module):
-    """Wrap an Ultralytics YOLO model for iOS export."""
+class IOSDetectModel(torch.nn.Module):
+    """Wrap an Ultralytics YOLO model for Apple iOS CoreML export."""

     def __init__(self, model, im):
-        """Initialize the iOSDetectModel class with a YOLO model and example image."""
+        """Initialize the IOSDetectModel class with a YOLO model and example image."""
         super().__init__()
         b, c, h, w = im.shape  # batch, channel, height, width
         self.model = model
@@ -981,21 +980,3 @@ class iOSDetectModel(torch.nn.Module):
         """Normalize predictions of object detection model with input size-dependent factors."""
         xywh, cls = self.model(x)[0].transpose(0, 1).split((4, self.nc), 1)
         return cls, xywh * self.normalize  # confidence (3780, 80), coordinates (3780, 4)
-
-
-def export(cfg=DEFAULT_CFG):
-    """Export a YOLOv8 model to a specific format."""
-    cfg.model = cfg.model or 'yolov8n.yaml'
-    cfg.format = cfg.format or 'torchscript'
-
-    from ultralytics import YOLO
-    model = YOLO(cfg.model)
-    model.export(**vars(cfg))
-
-
-if __name__ == '__main__':
-    """
-    CLI:
-    yolo mode=export model=yolov8n.yaml format=onnx
-    """
-    export()

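For orientation on the rename above: `IOSDetectModel.forward` returns class confidences plus boxes rescaled into the unit square that CoreML NMS consumes. A toy sketch of that normalization; the exact layout of `self.normalize` is inferred from the shape comments, not shown in this hunk:

```python
# Hedged sketch of the normalization step in IOSDetectModel.forward.
import torch

nc, w, h = 80, 192, 320                       # classes and example input size from the comments
normalize = 1.0 / torch.tensor([w, h, w, h])  # assumed layout of self.normalize
preds = torch.rand(4 + nc, 3780)              # stand-in for self.model(x)[0]
xywh, cls = preds.transpose(0, 1).split((4, nc), 1)
conf, boxes = cls, xywh * normalize           # (3780, 80) confidences, (3780, 4) boxes in 0-1
```
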
@@ -138,12 +138,14 @@ class BasePredictor:
         return self.model(im, augment=self.args.augment, visualize=visualize)

     def pre_transform(self, im):
-        """Pre-transform input image before inference.
+        """
+        Pre-transform input image before inference.

         Args:
             im (List(np.ndarray)): (N, 3, h, w) for tensor, [(h, w, 3) x N] for list.

-        Return: A list of transformed imgs.
+        Returns:
+            (list): A list of transformed images.
         """
         same_shapes = all(x.shape == im[0].shape for x in im)
         auto = same_shapes and self.model.pt

@@ -26,7 +26,7 @@ class FastSAMPrompt:
             import clip  # for linear_assignment
         except ImportError:
             from ultralytics.utils.checks import check_requirements
-            check_requirements('git+https://github.com/openai/CLIP.git')  # required before installing lap from source
+            check_requirements('git+https://github.com/openai/CLIP.git')
             import clip
         self.clip = clip
@@ -91,8 +91,6 @@ class FastSAMPrompt:
             y1 = min(y1, y_t)
             x2 = max(x2, x_t + w_t)
             y2 = max(y2, y_t + h_t)
-        h = y2 - y1
-        w = x2 - x1
         return [x1, y1, x2, y2]

     def plot(self,
@@ -104,9 +102,11 @@ class FastSAMPrompt:
              mask_random_color=True,
              better_quality=True,
              retina=False,
-             withContours=True):
+             with_contours=True):
         if isinstance(annotations[0], dict):
             annotations = [annotation['segmentation'] for annotation in annotations]
+        if isinstance(annotations, torch.Tensor):
+            annotations = annotations.cpu().numpy()
         result_name = os.path.basename(self.img_path)
         image = self.ori_img
         image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)
@@ -123,41 +123,22 @@ class FastSAMPrompt:
         plt.imshow(image)

         if better_quality:
-            if isinstance(annotations[0], torch.Tensor):
-                annotations = np.array(annotations.cpu())
             for i, mask in enumerate(annotations):
                 mask = cv2.morphologyEx(mask.astype(np.uint8), cv2.MORPH_CLOSE, np.ones((3, 3), np.uint8))
                 annotations[i] = cv2.morphologyEx(mask.astype(np.uint8), cv2.MORPH_OPEN, np.ones((8, 8), np.uint8))
-        if self.device == 'cpu':
-            annotations = np.array(annotations)
-            self.fast_show_mask(
-                annotations,
-                plt.gca(),
-                random_color=mask_random_color,
-                bbox=bbox,
-                points=points,
-                pointlabel=point_label,
-                retinamask=retina,
-                target_height=original_h,
-                target_width=original_w,
-            )
-        else:
-            if isinstance(annotations[0], np.ndarray):
-                annotations = torch.from_numpy(annotations)
-            self.fast_show_mask_gpu(
-                annotations,
-                plt.gca(),
-                random_color=mask_random_color,
-                bbox=bbox,
-                points=points,
-                pointlabel=point_label,
-                retinamask=retina,
-                target_height=original_h,
-                target_width=original_w,
-            )
-        if isinstance(annotations, torch.Tensor):
-            annotations = annotations.cpu().numpy()
-        if withContours:
+        self.fast_show_mask(
+            annotations,
+            plt.gca(),
+            random_color=mask_random_color,
+            bbox=bbox,
+            points=points,
+            pointlabel=point_label,
+            retinamask=retina,
+            target_height=original_h,
+            target_width=original_w,
+        )
+
+        if with_contours:
             contour_all = []
             temp = np.zeros((original_h, original_w, 1))
             for i, mask in enumerate(annotations):
@@ -184,8 +165,8 @@ class FastSAMPrompt:
         LOGGER.info(f'Saved to {save_path.absolute()}')

     # CPU post process
-    @staticmethod
     def fast_show_mask(
+        self,
         annotation,
         ax,
         random_color=False,
@@ -196,32 +177,29 @@ class FastSAMPrompt:
         target_height=960,
         target_width=960,
     ):
-        msak_sum = annotation.shape[0]
-        height = annotation.shape[1]
-        weight = annotation.shape[2]
-        # Sort annotations by area
+        n, h, w = annotation.shape  # batch, height, width
+
         areas = np.sum(annotation, axis=(1, 2))
-        sorted_indices = np.argsort(areas)
-        annotation = annotation[sorted_indices]
+        annotation = annotation[np.argsort(areas)]
         index = (annotation != 0).argmax(axis=0)
         if random_color:
-            color = np.random.random((msak_sum, 1, 1, 3))
+            color = np.random.random((n, 1, 1, 3))
         else:
-            color = np.ones((msak_sum, 1, 1, 3)) * np.array([30 / 255, 144 / 255, 1.0])
-        transparency = np.ones((msak_sum, 1, 1, 1)) * 0.6
+            color = np.ones((n, 1, 1, 3)) * np.array([30 / 255, 144 / 255, 1.0])
+        transparency = np.ones((n, 1, 1, 1)) * 0.6
         visual = np.concatenate([color, transparency], axis=-1)
         mask_image = np.expand_dims(annotation, -1) * visual
-        show = np.zeros((height, weight, 4))
-        h_indices, w_indices = np.meshgrid(np.arange(height), np.arange(weight), indexing='ij')
+        show = np.zeros((h, w, 4))
+        h_indices, w_indices = np.meshgrid(np.arange(h), np.arange(w), indexing='ij')
         indices = (index[h_indices, w_indices], h_indices, w_indices, slice(None))
-        # Update the values of show with vectorized indexing
         show[h_indices, w_indices, :] = mask_image[indices]
+
         if bbox is not None:
             x1, y1, x2, y2 = bbox
             ax.add_patch(plt.Rectangle((x1, y1), x2 - x1, y2 - y1, fill=False, edgecolor='b', linewidth=1))
-        # draw point
+        # Draw point
         if points is not None:
             plt.scatter(
                 [point[0] for i, point in enumerate(points) if pointlabel[i] == 1],
@@ -240,63 +218,6 @@ class FastSAMPrompt:
             show = cv2.resize(show, (target_width, target_height), interpolation=cv2.INTER_NEAREST)
         ax.imshow(show)

-    def fast_show_mask_gpu(
-        self,
-        annotation,
-        ax,
-        random_color=False,
-        bbox=None,
-        points=None,
-        pointlabel=None,
-        retinamask=True,
-        target_height=960,
-        target_width=960,
-    ):
-        msak_sum = annotation.shape[0]
-        height = annotation.shape[1]
-        weight = annotation.shape[2]
-        areas = torch.sum(annotation, dim=(1, 2))
-        sorted_indices = torch.argsort(areas, descending=False)
-        annotation = annotation[sorted_indices]
-        # Find the index of the first non-zero value at each position
-        index = (annotation != 0).to(torch.long).argmax(dim=0)
-        if random_color:
-            color = torch.rand((msak_sum, 1, 1, 3)).to(annotation.device)
-        else:
-            color = torch.ones((msak_sum, 1, 1, 3)).to(annotation.device) * torch.tensor([30 / 255, 144 / 255, 1.0]).to(
-                annotation.device)
-        transparency = torch.ones((msak_sum, 1, 1, 1)).to(annotation.device) * 0.6
-        visual = torch.cat([color, transparency], dim=-1)
-        mask_image = torch.unsqueeze(annotation, -1) * visual
-        # Gather by index (which mask to take at each position), flattening mask_image into one batch
-        show = torch.zeros((height, weight, 4)).to(annotation.device)
-        h_indices, w_indices = torch.meshgrid(torch.arange(height), torch.arange(weight), indexing='ij')
-        indices = (index[h_indices, w_indices], h_indices, w_indices, slice(None))
-        # Update the values of show with vectorized indexing
-        show[h_indices, w_indices, :] = mask_image[indices]
-        show_cpu = show.cpu().numpy()
-        if bbox is not None:
-            x1, y1, x2, y2 = bbox
-            ax.add_patch(plt.Rectangle((x1, y1), x2 - x1, y2 - y1, fill=False, edgecolor='b', linewidth=1))
-        # draw point
-        if points is not None:
-            plt.scatter(
-                [point[0] for i, point in enumerate(points) if pointlabel[i] == 1],
-                [point[1] for i, point in enumerate(points) if pointlabel[i] == 1],
-                s=20,
-                c='y',
-            )
-            plt.scatter(
-                [point[0] for i, point in enumerate(points) if pointlabel[i] == 0],
-                [point[1] for i, point in enumerate(points) if pointlabel[i] == 0],
-                s=20,
-                c='m',
-            )
-        if not retinamask:
-            show_cpu = cv2.resize(show_cpu, (target_width, target_height), interpolation=cv2.INTER_NEAREST)
-        ax.imshow(show_cpu)
-
-    # clip
     @torch.no_grad()
     def retrieve(self, model, preprocess, elements, search_text: str, device) -> int:
         preprocessed_images = [preprocess(image).to(device) for image in elements]

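The deleted `fast_show_mask_gpu` duplicated the NumPy path above; once `plot` converts tensors with `.cpu().numpy()` up front, one routine suffices. The area-sort plus argmax compositing that the surviving routine keeps is easiest to see on a toy input (a sketch, not library code):

```python
import numpy as np

annotation = np.array([[[1, 1, 1], [1, 1, 1]],   # (n=2, h=2, w=3): one large mask...
                       [[0, 1, 0], [0, 0, 0]]])  # ...and one small mask
areas = annotation.sum(axis=(1, 2))
annotation = annotation[np.argsort(areas)]       # smallest-area masks first
index = (annotation != 0).argmax(axis=0)         # per pixel: first mask that covers it
print(index)  # [[1 0 1], [1 1 1]] -> the small mask wins where the two overlap
```
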
@@ -5,7 +5,6 @@ import torch
 from ultralytics.engine.predictor import BasePredictor
 from ultralytics.engine.results import Results
 from ultralytics.utils import ops
-from ultralytics.utils.ops import xyxy2xywh


 class NASPredictor(BasePredictor):
@@ -14,7 +13,7 @@ class NASPredictor(BasePredictor):
         """Postprocess predictions and returns a list of Results objects."""

         # Cat boxes and class scores
-        boxes = xyxy2xywh(preds_in[0][0])
+        boxes = ops.xyxy2xywh(preds_in[0][0])
         preds = torch.cat((boxes, preds_in[0][1]), -1).permute(0, 2, 1)
         preds = ops.non_max_suppression(preds,

@@ -4,7 +4,6 @@ import torch
 from ultralytics.models.yolo.detect import DetectionValidator
 from ultralytics.utils import ops
-from ultralytics.utils.ops import xyxy2xywh

 __all__ = ['NASValidator']

@@ -13,7 +12,7 @@ class NASValidator(DetectionValidator):
     def postprocess(self, preds_in):
         """Apply Non-maximum suppression to prediction outputs."""
-        boxes = xyxy2xywh(preds_in[0][0])
+        boxes = ops.xyxy2xywh(preds_in[0][0])
         preds = torch.cat((boxes, preds_in[0][1]), -1).permute(0, 2, 1)
         return ops.non_max_suppression(preds,
                                        self.args.conf,

@@ -9,6 +9,19 @@ from ultralytics.utils import ops

 class RTDETRPredictor(BasePredictor):
+    """
+    A class extending the BasePredictor class for prediction based on an RT-DETR detection model.
+
+    Example:
+        ```python
+        from ultralytics.utils import ASSETS
+        from ultralytics.models.rtdetr import RTDETRPredictor
+
+        args = dict(model='rtdetr-l.pt', source=ASSETS)
+        predictor = RTDETRPredictor(overrides=args)
+        predictor.predict_cli()
+        ```
+    """

     def postprocess(self, preds, img, orig_imgs):
         """Postprocess predictions and returns a list of Results objects."""
@@ -38,7 +51,9 @@ class RTDETRPredictor(BasePredictor):
         Args:
             im (List(np.ndarray)): (N, 3, h, w) for tensor, [(h, w, 3) x N] for list.

-        Return: A list of transformed imgs.
+        Notes: The size must be square(640) and scaleFilled.
+
+        Returns:
+            (list): A list of transformed imgs.
         """
-        # The size must be square(640) and scaleFilled.
         return [LetterBox(self.imgsz, auto=False, scaleFill=True)(image=x) for x in im]

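The relocated note above means RT-DETR inference does not letterbox-pad: inputs are stretched to a square. A quick check of that behaviour with the same `LetterBox` transform (the image contents here are a stand-in):

```python
import numpy as np
from ultralytics.data.augment import LetterBox

im = np.zeros((480, 640, 3), dtype=np.uint8)                # arbitrary (h, w, 3) frame
out = LetterBox((640, 640), auto=False, scaleFill=True)(image=im)
print(out.shape)  # (640, 640, 3): stretched to square, no gray padding
```
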
@@ -6,12 +6,28 @@ import torch

 from ultralytics.models.yolo.detect import DetectionTrainer
 from ultralytics.nn.tasks import RTDETRDetectionModel
-from ultralytics.utils import DEFAULT_CFG, RANK, colorstr
+from ultralytics.utils import RANK, colorstr

 from .val import RTDETRDataset, RTDETRValidator


 class RTDETRTrainer(DetectionTrainer):
+    """
+    A class extending the DetectionTrainer class for training based on an RT-DETR detection model.
+
+    Notes:
+        - F.grid_sample used in rt-detr does not support the `deterministic=True` argument.
+        - AMP training can lead to NaN outputs and may produce errors during bipartite graph matching.
+
+    Example:
+        ```python
+        from ultralytics.models.rtdetr.train import RTDETRTrainer
+
+        args = dict(model='rtdetr-l.yaml', data='coco8.yaml', imgsz=640, epochs=3)
+        trainer = RTDETRTrainer(overrides=args)
+        trainer.train()
+        ```
+    """

     def get_model(self, cfg=None, weights=None, verbose=True):
         """Return a YOLO detection model."""
@@ -54,27 +70,3 @@ class RTDETRTrainer(DetectionTrainer):
             gt_bbox.append(batch['bboxes'][batch_idx == i].to(batch_idx.device))
             gt_class.append(batch['cls'][batch_idx == i].to(device=batch_idx.device, dtype=torch.long))
         return batch
-
-
-def train(cfg=DEFAULT_CFG, use_python=False):
-    """Train and optimize RTDETR model given training data and device."""
-    model = 'rtdetr-l.yaml'
-    data = cfg.data or 'coco8.yaml'  # or yolo.ClassificationDataset("mnist")
-    device = cfg.device if cfg.device is not None else ''
-
-    # NOTE: F.grid_sample which is in rt-detr does not support deterministic=True
-    # NOTE: amp training causes nan outputs and end with error while doing bipartite graph matching
-    args = dict(model=model,
-                data=data,
-                device=device,
-                imgsz=640,
-                exist_ok=True,
-                batch=4,
-                deterministic=False,
-                amp=False)
-    trainer = RTDETRTrainer(overrides=args)
-    trainer.train()
-
-
-if __name__ == '__main__':
-    train()

@@ -67,6 +67,18 @@ class RTDETRDataset(YOLODataset):

 class RTDETRValidator(DetectionValidator):
+    """
+    A class extending the DetectionValidator class for validation based on an RT-DETR detection model.
+
+    Example:
+        ```python
+        from ultralytics.models.rtdetr import RTDETRValidator
+
+        args = dict(model='rtdetr-l.pt', data='coco8.yaml')
+        validator = RTDETRValidator(args=args)
+        validator(model=args['model'])
+        ```
+    """

     def build_dataset(self, img_path, mode='val', batch=None):
         """Build YOLO Dataset

@@ -55,12 +55,14 @@ class Predictor(BasePredictor):
         return img

     def pre_transform(self, im):
-        """Pre-transform input image before inference.
+        """
+        Pre-transform input image before inference.

         Args:
             im (List(np.ndarray)): (N, 3, h, w) for tensor, [(h, w, 3) x N] for list.

-        Return: A list of transformed imgs.
+        Returns:
+            (list): A list of transformed images.
         """
         assert len(im) == 1, 'SAM model has not supported batch inference yet!'
         return [LetterBox(self.args.imgsz, auto=False, center=False)(image=x) for x in im]

@ -1,7 +1,7 @@
# Ultralytics YOLO 🚀, AGPL-3.0 license # Ultralytics YOLO 🚀, AGPL-3.0 license
from ultralytics.models.yolo.classify.predict import ClassificationPredictor, predict from ultralytics.models.yolo.classify.predict import ClassificationPredictor
from ultralytics.models.yolo.classify.train import ClassificationTrainer, train from ultralytics.models.yolo.classify.train import ClassificationTrainer
from ultralytics.models.yolo.classify.val import ClassificationValidator, val from ultralytics.models.yolo.classify.val import ClassificationValidator
__all__ = 'ClassificationPredictor', 'predict', 'ClassificationTrainer', 'train', 'ClassificationValidator', 'val' __all__ = 'ClassificationPredictor', 'ClassificationTrainer', 'ClassificationValidator'

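With the functional `predict`/`train`/`val` exports dropped here (and in the detect, pose and segment packages below), the model API covers the same ground. A minimal sketch of the equivalent calls, assuming local `yolov8n-cls.pt` weights and the small bundled datasets:

```python
from ultralytics import YOLO
from ultralytics.utils import ASSETS

model = YOLO('yolov8n-cls.pt')
model.train(data='imagenet10', epochs=1, imgsz=32)  # replaces classify.train.train()
model.val(data='imagenet10', imgsz=32)              # replaces classify.val.val()
model(ASSETS / 'bus.jpg')                           # replaces classify.predict.predict()
```
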
@@ -4,10 +4,26 @@ import torch

 from ultralytics.engine.predictor import BasePredictor
 from ultralytics.engine.results import Results
-from ultralytics.utils import ASSETS, DEFAULT_CFG
+from ultralytics.utils import DEFAULT_CFG


 class ClassificationPredictor(BasePredictor):
+    """
+    A class extending the BasePredictor class for prediction based on a classification model.
+
+    Notes:
+        - Torchvision classification models can also be passed to the 'model' argument, i.e. model='resnet18'.
+
+    Example:
+        ```python
+        from ultralytics.utils import ASSETS
+        from ultralytics.models.yolo.classify import ClassificationPredictor
+
+        args = dict(model='yolov8n-cls.pt', source=ASSETS)
+        predictor = ClassificationPredictor(overrides=args)
+        predictor.predict_cli()
+        ```
+    """

     def __init__(self, cfg=DEFAULT_CFG, overrides=None, _callbacks=None):
         super().__init__(cfg, overrides, _callbacks)
@@ -30,21 +46,3 @@ class ClassificationPredictor(BasePredictor):
             results.append(Results(orig_img=orig_img, path=img_path, names=self.model.names, probs=pred))
         return results
-
-
-def predict(cfg=DEFAULT_CFG, use_python=False):
-    """Run YOLO model predictions on input images/videos."""
-    model = cfg.model or 'yolov8n-cls.pt'  # or "resnet18"
-    source = cfg.source or ASSETS
-    args = dict(model=model, source=source)
-    if use_python:
-        from ultralytics import YOLO
-        YOLO(model)(**args)
-    else:
-        predictor = ClassificationPredictor(overrides=args)
-        predictor.predict_cli()
-
-
-if __name__ == '__main__':
-    predict()

@@ -13,6 +13,21 @@ from ultralytics.utils.torch_utils import is_parallel, strip_optimizer, torch_di

 class ClassificationTrainer(BaseTrainer):
+    """
+    A class extending the BaseTrainer class for training based on a classification model.
+
+    Notes:
+        - Torchvision classification models can also be passed to the 'model' argument, i.e. model='resnet18'.
+
+    Example:
+        ```python
+        from ultralytics.models.yolo.classify import ClassificationTrainer
+
+        args = dict(model='yolov8n-cls.pt', data='imagenet10', epochs=3)
+        trainer = ClassificationTrainer(overrides=args)
+        trainer.train()
+        ```
+    """

     def __init__(self, cfg=DEFAULT_CFG, overrides=None, _callbacks=None):
         """Initialize a ClassificationTrainer object with optional configuration overrides and callbacks."""
@@ -137,22 +152,3 @@ class ClassificationTrainer(BaseTrainer):
                     cls=batch['cls'].view(-1),  # warning: use .view(), not .squeeze() for Classify models
                     fname=self.save_dir / f'train_batch{ni}.jpg',
                     on_plot=self.on_plot)
-
-
-def train(cfg=DEFAULT_CFG, use_python=False):
-    """Train a YOLO classification model."""
-    model = cfg.model or 'yolov8n-cls.pt'  # or "resnet18"
-    data = cfg.data or 'mnist160'  # or yolo.ClassificationDataset("mnist")
-    device = cfg.device if cfg.device is not None else ''
-    args = dict(model=model, data=data, device=device)
-    if use_python:
-        from ultralytics import YOLO
-        YOLO(model).train(**args)
-    else:
-        trainer = ClassificationTrainer(overrides=args)
-        trainer.train()
-
-
-if __name__ == '__main__':
-    train()

@@ -4,12 +4,27 @@ import torch

 from ultralytics.data import ClassificationDataset, build_dataloader
 from ultralytics.engine.validator import BaseValidator
-from ultralytics.utils import DEFAULT_CFG, LOGGER
+from ultralytics.utils import LOGGER
 from ultralytics.utils.metrics import ClassifyMetrics, ConfusionMatrix
 from ultralytics.utils.plotting import plot_images


 class ClassificationValidator(BaseValidator):
+    """
+    A class extending the BaseValidator class for validation based on a classification model.
+
+    Notes:
+        - Torchvision classification models can also be passed to the 'model' argument, i.e. model='resnet18'.
+
+    Example:
+        ```python
+        from ultralytics.models.yolo.classify import ClassificationValidator
+
+        args = dict(model='yolov8n-cls.pt', data='imagenet10')
+        validator = ClassificationValidator(args=args)
+        validator(model=args['model'])
+        ```
+    """

     def __init__(self, dataloader=None, save_dir=None, pbar=None, args=None, _callbacks=None):
         """Initializes ClassificationValidator instance with args, dataloader, save_dir, and progress bar."""
@@ -92,21 +107,3 @@ class ClassificationValidator(BaseValidator):
                     fname=self.save_dir / f'val_batch{ni}_pred.jpg',
                     names=self.names,
                     on_plot=self.on_plot)  # pred
-
-
-def val(cfg=DEFAULT_CFG, use_python=False):
-    """Validate YOLO model using custom data."""
-    model = cfg.model or 'yolov8n-cls.pt'  # or "resnet18"
-    data = cfg.data or 'mnist160'
-    args = dict(model=model, data=data)
-    if use_python:
-        from ultralytics import YOLO
-        YOLO(model).val(**args)
-    else:
-        validator = ClassificationValidator(args=args)
-        validator(model=args['model'])
-
-
-if __name__ == '__main__':
-    val()

@ -1,7 +1,7 @@
# Ultralytics YOLO 🚀, AGPL-3.0 license # Ultralytics YOLO 🚀, AGPL-3.0 license
from .predict import DetectionPredictor, predict from .predict import DetectionPredictor
from .train import DetectionTrainer, train from .train import DetectionTrainer
from .val import DetectionValidator, val from .val import DetectionValidator
__all__ = 'DetectionPredictor', 'predict', 'DetectionTrainer', 'train', 'DetectionValidator', 'val' __all__ = 'DetectionPredictor', 'DetectionTrainer', 'DetectionValidator'

@@ -4,10 +4,23 @@ import torch

 from ultralytics.engine.predictor import BasePredictor
 from ultralytics.engine.results import Results
-from ultralytics.utils import ASSETS, DEFAULT_CFG, ops
+from ultralytics.utils import ops


 class DetectionPredictor(BasePredictor):
+    """
+    A class extending the BasePredictor class for prediction based on a detection model.
+
+    Example:
+        ```python
+        from ultralytics.utils import ASSETS
+        from ultralytics.models.yolo.detect import DetectionPredictor
+
+        args = dict(model='yolov8n.pt', source=ASSETS)
+        predictor = DetectionPredictor(overrides=args)
+        predictor.predict_cli()
+        ```
+    """

     def postprocess(self, preds, img, orig_imgs):
         """Post-processes predictions and returns a list of Results objects."""
@@ -27,21 +40,3 @@ class DetectionPredictor(BasePredictor):
             img_path = path[i] if isinstance(path, list) else path
             results.append(Results(orig_img=orig_img, path=img_path, names=self.model.names, boxes=pred))
         return results
-
-
-def predict(cfg=DEFAULT_CFG, use_python=False):
-    """Runs YOLO model inference on input image(s)."""
-    model = cfg.model or 'yolov8n.pt'
-    source = cfg.source or ASSETS
-    args = dict(model=model, source=source)
-    if use_python:
-        from ultralytics import YOLO
-        YOLO(model)(**args)
-    else:
-        predictor = DetectionPredictor(overrides=args)
-        predictor.predict_cli()
-
-
-if __name__ == '__main__':
-    predict()

@@ -8,12 +8,24 @@ from ultralytics.data import build_dataloader, build_yolo_dataset
 from ultralytics.engine.trainer import BaseTrainer
 from ultralytics.models import yolo
 from ultralytics.nn.tasks import DetectionModel
-from ultralytics.utils import DEFAULT_CFG, LOGGER, RANK
+from ultralytics.utils import LOGGER, RANK
 from ultralytics.utils.plotting import plot_images, plot_labels, plot_results
 from ultralytics.utils.torch_utils import de_parallel, torch_distributed_zero_first


 class DetectionTrainer(BaseTrainer):
+    """
+    A class extending the BaseTrainer class for training based on a detection model.
+
+    Example:
+        ```python
+        from ultralytics.models.yolo.detect import DetectionTrainer
+
+        args = dict(model='yolov8n.pt', data='coco8.yaml', epochs=3)
+        trainer = DetectionTrainer(overrides=args)
+        trainer.train()
+        ```
+    """

     def build_dataset(self, img_path, mode='train', batch=None):
         """
@@ -102,22 +114,3 @@ class DetectionTrainer(BaseTrainer):
         boxes = np.concatenate([lb['bboxes'] for lb in self.train_loader.dataset.labels], 0)
         cls = np.concatenate([lb['cls'] for lb in self.train_loader.dataset.labels], 0)
         plot_labels(boxes, cls.squeeze(), names=self.data['names'], save_dir=self.save_dir, on_plot=self.on_plot)
-
-
-def train(cfg=DEFAULT_CFG, use_python=False):
-    """Train and optimize YOLO model given training data and device."""
-    model = cfg.model or 'yolov8n.pt'
-    data = cfg.data or 'coco8.yaml'  # or yolo.ClassificationDataset("mnist")
-    device = cfg.device if cfg.device is not None else ''
-    args = dict(model=model, data=data, device=device)
-    if use_python:
-        from ultralytics import YOLO
-        YOLO(model).train(**args)
-    else:
-        trainer = DetectionTrainer(overrides=args)
-        trainer.train()
-
-
-if __name__ == '__main__':
-    train()

@@ -8,7 +8,7 @@ import torch

 from ultralytics.data import build_dataloader, build_yolo_dataset, converter
 from ultralytics.engine.validator import BaseValidator
-from ultralytics.utils import DEFAULT_CFG, LOGGER, ops
+from ultralytics.utils import LOGGER, ops
 from ultralytics.utils.checks import check_requirements
 from ultralytics.utils.metrics import ConfusionMatrix, DetMetrics, box_iou
 from ultralytics.utils.plotting import output_to_target, plot_images
@@ -16,6 +16,18 @@ from ultralytics.utils.torch_utils import de_parallel

 class DetectionValidator(BaseValidator):
+    """
+    A class extending the BaseValidator class for validation based on a detection model.
+
+    Example:
+        ```python
+        from ultralytics.models.yolo.detect import DetectionValidator
+
+        args = dict(model='yolov8n.pt', data='coco8.yaml')
+        validator = DetectionValidator(args=args)
+        validator(model=args['model'])
+        ```
+    """

     def __init__(self, dataloader=None, save_dir=None, pbar=None, args=None, _callbacks=None):
         """Initialize detection model with necessary variables and settings."""
@@ -254,21 +266,3 @@ class DetectionValidator(BaseValidator):
         except Exception as e:
             LOGGER.warning(f'pycocotools unable to run: {e}')
         return stats
-
-
-def val(cfg=DEFAULT_CFG, use_python=False):
-    """Validate trained YOLO model on validation dataset."""
-    model = cfg.model or 'yolov8n.pt'
-    data = cfg.data or 'coco8.yaml'
-    args = dict(model=model, data=data)
-    if use_python:
-        from ultralytics import YOLO
-        YOLO(model).val(**args)
-    else:
-        validator = DetectionValidator(args=args)
-        validator(model=args['model'])
-
-
-if __name__ == '__main__':
-    val()

@ -1,7 +1,7 @@
# Ultralytics YOLO 🚀, AGPL-3.0 license # Ultralytics YOLO 🚀, AGPL-3.0 license
from .predict import PosePredictor, predict from .predict import PosePredictor
from .train import PoseTrainer, train from .train import PoseTrainer
from .val import PoseValidator, val from .val import PoseValidator
__all__ = 'PoseTrainer', 'train', 'PoseValidator', 'val', 'PosePredictor', 'predict' __all__ = 'PoseTrainer', 'PoseValidator', 'PosePredictor'

@@ -2,10 +2,23 @@

 from ultralytics.engine.results import Results
 from ultralytics.models.yolo.detect.predict import DetectionPredictor
-from ultralytics.utils import ASSETS, DEFAULT_CFG, LOGGER, ops
+from ultralytics.utils import DEFAULT_CFG, LOGGER, ops


 class PosePredictor(DetectionPredictor):
+    """
+    A class extending the DetectionPredictor class for prediction based on a pose model.
+
+    Example:
+        ```python
+        from ultralytics.utils import ASSETS
+        from ultralytics.models.yolo.pose import PosePredictor
+
+        args = dict(model='yolov8n-pose.pt', source=ASSETS)
+        predictor = PosePredictor(overrides=args)
+        predictor.predict_cli()
+        ```
+    """

     def __init__(self, cfg=DEFAULT_CFG, overrides=None, _callbacks=None):
         super().__init__(cfg, overrides, _callbacks)
@@ -40,21 +53,3 @@ class PosePredictor(DetectionPredictor):
                         boxes=pred[:, :6],
                         keypoints=pred_kpts))
         return results
-
-
-def predict(cfg=DEFAULT_CFG, use_python=False):
-    """Runs YOLO to predict objects in an image or video."""
-    model = cfg.model or 'yolov8n-pose.pt'
-    source = cfg.source or ASSETS
-    args = dict(model=model, source=source)
-    if use_python:
-        from ultralytics import YOLO
-        YOLO(model)(**args)
-    else:
-        predictor = PosePredictor(overrides=args)
-        predictor.predict_cli()
-
-
-if __name__ == '__main__':
-    predict()

@@ -9,6 +9,18 @@ from ultralytics.utils.plotting import plot_images, plot_results

 class PoseTrainer(yolo.detect.DetectionTrainer):
+    """
+    A class extending the DetectionTrainer class for training based on a pose model.
+
+    Example:
+        ```python
+        from ultralytics.models.yolo.pose import PoseTrainer
+
+        args = dict(model='yolov8n-pose.pt', data='coco8-pose.yaml', epochs=3)
+        trainer = PoseTrainer(overrides=args)
+        trainer.train()
+        ```
+    """

     def __init__(self, cfg=DEFAULT_CFG, overrides=None, _callbacks=None):
         """Initialize a PoseTrainer object with specified configurations and overrides."""
@@ -59,22 +71,3 @@ class PoseTrainer(yolo.detect.DetectionTrainer):
     def plot_metrics(self):
         """Plots training/val metrics."""
         plot_results(file=self.csv, pose=True, on_plot=self.on_plot)  # save results.png
-
-
-def train(cfg=DEFAULT_CFG, use_python=False):
-    """Train the YOLO model on the given data and device."""
-    model = cfg.model or 'yolov8n-pose.yaml'
-    data = cfg.data or 'coco8-pose.yaml'
-    device = cfg.device if cfg.device is not None else ''
-    args = dict(model=model, data=data, device=device)
-    if use_python:
-        from ultralytics import YOLO
-        YOLO(model).train(**args)
-    else:
-        trainer = PoseTrainer(overrides=args)
-        trainer.train()
-
-
-if __name__ == '__main__':
-    train()

@@ -6,13 +6,25 @@ import numpy as np
 import torch

 from ultralytics.models.yolo.detect import DetectionValidator
-from ultralytics.utils import DEFAULT_CFG, LOGGER, ops
+from ultralytics.utils import LOGGER, ops
 from ultralytics.utils.checks import check_requirements
 from ultralytics.utils.metrics import OKS_SIGMA, PoseMetrics, box_iou, kpt_iou
 from ultralytics.utils.plotting import output_to_target, plot_images


 class PoseValidator(DetectionValidator):
+    """
+    A class extending the DetectionValidator class for validation based on a pose model.
+
+    Example:
+        ```python
+        from ultralytics.models.yolo.pose import PoseValidator
+
+        args = dict(model='yolov8n-pose.pt', data='coco8-pose.yaml')
+        validator = PoseValidator(args=args)
+        validator(model=args['model'])
+        ```
+    """

     def __init__(self, dataloader=None, save_dir=None, pbar=None, args=None, _callbacks=None):
         """Initialize a 'PoseValidator' object with custom parameters and assigned attributes."""
@@ -201,21 +213,3 @@ class PoseValidator(DetectionValidator):
         except Exception as e:
             LOGGER.warning(f'pycocotools unable to run: {e}')
         return stats
-
-
-def val(cfg=DEFAULT_CFG, use_python=False):
-    """Performs validation on YOLO model using given data."""
-    model = cfg.model or 'yolov8n-pose.pt'
-    data = cfg.data or 'coco8-pose.yaml'
-    args = dict(model=model, data=data)
-    if use_python:
-        from ultralytics import YOLO
-        YOLO(model).val(**args)
-    else:
-        validator = PoseValidator(args=args)
-        validator(model=args['model'])
-
-
-if __name__ == '__main__':
-    val()

@ -1,7 +1,7 @@
# Ultralytics YOLO 🚀, AGPL-3.0 license # Ultralytics YOLO 🚀, AGPL-3.0 license
from .predict import SegmentationPredictor, predict from .predict import SegmentationPredictor
from .train import SegmentationTrainer, train from .train import SegmentationTrainer
from .val import SegmentationValidator, val from .val import SegmentationValidator
__all__ = 'SegmentationPredictor', 'predict', 'SegmentationTrainer', 'train', 'SegmentationValidator', 'val' __all__ = 'SegmentationPredictor', 'SegmentationTrainer', 'SegmentationValidator'

@@ -4,10 +4,23 @@ import torch

 from ultralytics.engine.results import Results
 from ultralytics.models.yolo.detect.predict import DetectionPredictor
-from ultralytics.utils import ASSETS, DEFAULT_CFG, ops
+from ultralytics.utils import DEFAULT_CFG, ops


 class SegmentationPredictor(DetectionPredictor):
+    """
+    A class extending the DetectionPredictor class for prediction based on a segmentation model.
+
+    Example:
+        ```python
+        from ultralytics.utils import ASSETS
+        from ultralytics.models.yolo.segment import SegmentationPredictor
+
+        args = dict(model='yolov8n-seg.pt', source=ASSETS)
+        predictor = SegmentationPredictor(overrides=args)
+        predictor.predict_cli()
+        ```
+    """

     def __init__(self, cfg=DEFAULT_CFG, overrides=None, _callbacks=None):
         super().__init__(cfg, overrides, _callbacks)
@@ -42,21 +55,3 @@ class SegmentationPredictor(DetectionPredictor):
             results.append(
                 Results(orig_img=orig_img, path=img_path, names=self.model.names, boxes=pred[:, :6], masks=masks))
         return results
-
-
-def predict(cfg=DEFAULT_CFG, use_python=False):
-    """Runs YOLO object detection on an image or video source."""
-    model = cfg.model or 'yolov8n-seg.pt'
-    source = cfg.source or ASSETS
-    args = dict(model=model, source=source)
-    if use_python:
-        from ultralytics import YOLO
-        YOLO(model)(**args)
-    else:
-        predictor = SegmentationPredictor(overrides=args)
-        predictor.predict_cli()
-
-
-if __name__ == '__main__':
-    predict()

@@ -9,6 +9,18 @@ from ultralytics.utils.plotting import plot_images, plot_results

 class SegmentationTrainer(yolo.detect.DetectionTrainer):
+    """
+    A class extending the DetectionTrainer class for training based on a segmentation model.
+
+    Example:
+        ```python
+        from ultralytics.models.yolo.segment import SegmentationTrainer
+
+        args = dict(model='yolov8n-seg.pt', data='coco8-seg.yaml', epochs=3)
+        trainer = SegmentationTrainer(overrides=args)
+        trainer.train()
+        ```
+    """

     def __init__(self, cfg=DEFAULT_CFG, overrides=None, _callbacks=None):
         """Initialize a SegmentationTrainer object with given arguments."""
@@ -46,19 +58,11 @@ class SegmentationTrainer(yolo.detect.DetectionTrainer):
         plot_results(file=self.csv, segment=True, on_plot=self.on_plot)  # save results.png


-def train(cfg=DEFAULT_CFG, use_python=False):
+def train(cfg=DEFAULT_CFG):
     """Train a YOLO segmentation model based on passed arguments."""
-    model = cfg.model or 'yolov8n-seg.pt'
-    data = cfg.data or 'coco8-seg.yaml'
-    device = cfg.device if cfg.device is not None else ''
-
-    args = dict(model=model, data=data, device=device)
-    if use_python:
-        from ultralytics import YOLO
-        YOLO(model).train(**args)
-    else:
-        trainer = SegmentationTrainer(overrides=args)
-        trainer.train()
+    args = dict(model=cfg.model or 'yolov8n-seg.pt', data=cfg.data or 'coco8-seg.yaml')
+    trainer = SegmentationTrainer(overrides=args)
+    trainer.train()


 if __name__ == '__main__':

@@ -15,6 +15,18 @@ from ultralytics.utils.plotting import output_to_target, plot_images

 class SegmentationValidator(DetectionValidator):
+    """
+    A class extending the DetectionValidator class for validation based on a segmentation model.
+
+    Example:
+        ```python
+        from ultralytics.models.yolo.segment import SegmentationValidator
+
+        args = dict(model='yolov8n-seg.pt', data='coco8-seg.yaml')
+        validator = SegmentationValidator(args=args)
+        validator(model=args['model'])
+        ```
+    """

     def __init__(self, dataloader=None, save_dir=None, pbar=None, args=None, _callbacks=None):
         """Initialize SegmentationValidator and set task to 'segment', metrics to SegmentMetrics."""
@@ -233,18 +245,11 @@ class SegmentationValidator(DetectionValidator):
         return stats


-def val(cfg=DEFAULT_CFG, use_python=False):
+def val(cfg=DEFAULT_CFG):
     """Validate trained YOLO model on validation data."""
-    model = cfg.model or 'yolov8n-seg.pt'
-    data = cfg.data or 'coco8-seg.yaml'
-
-    args = dict(model=model, data=data)
-    if use_python:
-        from ultralytics import YOLO
-        YOLO(model).val(**args)
-    else:
-        validator = SegmentationValidator(args=args)
-        validator(model=args['model'])
+    args = dict(model=cfg.model or 'yolov8n-seg.pt', data=cfg.data or 'coco8-seg.yaml')
+    validator = SegmentationValidator(args=args)
+    validator(model=args['model'])


 if __name__ == '__main__':

@@ -414,13 +414,10 @@ class AutoBackend(nn.Module):
                     scale, zero_point = output['quantization']
                     x = (x.astype(np.float32) - zero_point) * scale  # re-scale
                 if x.ndim > 2:  # if task is not classification
-                    # Denormalize xywh with input image size
+                    # Denormalize xywh by image size. See https://github.com/ultralytics/ultralytics/pull/1695
                     # xywh are normalized in TFLite/EdgeTPU to mitigate quantization error of integer models
-                    # See this PR for details: https://github.com/ultralytics/ultralytics/pull/1695
-                    x[:, 0] *= w
-                    x[:, 1] *= h
-                    x[:, 2] *= w
-                    x[:, 3] *= h
+                    x[:, [0, 2]] *= w
+                    x[:, [1, 3]] *= h
                 y.append(x)
         # TF segment fixes: export is reversed vs ONNX export and protos are transposed
         if len(y) == 2:  # segment with (det, proto) output order reversed

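The consolidated denormalization relies on NumPy fancy indexing writing back in place on the left-hand side of `*=`; a quick equivalence check against the four scalar-column updates it replaces:

```python
import numpy as np

x = np.arange(12, dtype=np.float32).reshape(3, 4)       # rows of normalized xywh
y = x.copy()
w, h = 640, 480                                         # example input size
y[:, 0] *= w; y[:, 1] *= h; y[:, 2] *= w; y[:, 3] *= h  # old form
x[:, [0, 2]] *= w                                       # new form
x[:, [1, 3]] *= h
assert np.allclose(x, y)
```
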
@@ -169,7 +169,7 @@ def plt_settings(rcparams=None, backend='Agg'):
     """
     Decorator to temporarily set rc parameters and the backend for a plotting function.

-    Usage:
+    Example:
         decorator: @plt_settings({"font.size": 12})
         context manager: with plt_settings({"font.size": 12}):

@@ -18,8 +18,7 @@ from .metrics import box_iou

 class Profile(contextlib.ContextDecorator):
     """
-    YOLOv8 Profile class.
-    Usage: as a decorator with @Profile() or as a context manager with 'with Profile():'
+    YOLOv8 Profile class. Use as a decorator with @Profile() or as a context manager with 'with Profile():'.
     """

     def __init__(self, t=0.0):

@@ -10,12 +10,14 @@ TORCH_1_10 = check_version(torch.__version__, '1.10.0')

 def select_candidates_in_gts(xy_centers, gt_bboxes, eps=1e-9):
-    """select the positive anchor center in gt
+    """
+    Select the positive anchor center in gt.

     Args:
         xy_centers (Tensor): shape(h*w, 4)
         gt_bboxes (Tensor): shape(b, n_boxes, 4)
-    Return:
+
+    Returns:
         (Tensor): shape(b, n_boxes, h*w)
     """
     n_anchors = xy_centers.shape[0]
@@ -27,13 +29,14 @@ def select_candidates_in_gts(xy_centers, gt_bboxes, eps=1e-9):

 def select_highest_overlaps(mask_pos, overlaps, n_max_boxes):
-    """if an anchor box is assigned to multiple gts,
-    the one with the highest iou will be selected.
+    """
+    If an anchor box is assigned to multiple gts, the one with the highest IoU will be selected.

     Args:
         mask_pos (Tensor): shape(b, n_max_boxes, h*w)
         overlaps (Tensor): shape(b, n_max_boxes, h*w)
-    Return:
+
+    Returns:
         target_gt_idx (Tensor): shape(b, h*w)
         fg_mask (Tensor): shape(b, h*w)
         mask_pos (Tensor): shape(b, n_max_boxes, h*w)

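The clarified `select_highest_overlaps` docstring is easiest to verify on a toy case: when an anchor is claimed by two ground-truth boxes, only the higher-IoU gt keeps it. A sketch of that rule under the documented shapes (b=1, n_max_boxes=2, h*w=3); this illustrates the behaviour, not the library's exact implementation:

```python
import torch
import torch.nn.functional as F

mask_pos = torch.tensor([[[1., 1., 0.],    # gt0 claims anchors 0 and 1
                          [0., 1., 1.]]])  # gt1 claims anchors 1 and 2 -> anchor 1 contested
overlaps = torch.tensor([[[0.7, 0.4, 0.0],
                          [0.0, 0.6, 0.8]]])
best_gt = overlaps.argmax(dim=-2)                        # highest-IoU gt per anchor
keep = F.one_hot(best_gt, 2).permute(0, 2, 1).float()    # one-hot over the gt axis
contested = mask_pos.sum(dim=-2, keepdim=True) > 1       # anchors with multiple claims
mask_pos = torch.where(contested, keep * mask_pos, mask_pos)
target_gt_idx = mask_pos.argmax(dim=-2)  # tensor([[0, 1, 1]]): anchor 1 resolved to gt1
fg_mask = mask_pos.sum(dim=-2)           # tensor([[1., 1., 1.]])
```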