Release 8.0.4 fixes (#256)

Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
Co-authored-by: Glenn Jocher <glenn.jocher@ultralytics.com>
Co-authored-by: Laughing <61612323+Laughing-q@users.noreply.github.com>
Co-authored-by: TechieG <35962141+gokulnath30@users.noreply.github.com>
Co-authored-by: Parthiban Marimuthu <66585214+partheee@users.noreply.github.com>

@ -99,8 +99,8 @@ results = model("https://ultralytics.com/images/bus.jpg") # predict on an image
success = YOLO("yolov8n.pt").export(format="onnx") # export a model to ONNX format
```
-[Models](https://github.com/ultralytics/ultralytics/tree/main/ultralytics/yolo/v8/models) download automatically from the latest
-Ultralytics [release](https://github.com/ultralytics/ultralytics/releases).
+[Models](https://github.com/ultralytics/ultralytics/tree/main/ultralytics/models) download automatically from the latest
+Ultralytics [release](https://github.com/ultralytics/assets/releases).
### Known Issues / TODOs
@ -116,18 +116,18 @@ We are still working on several parts of YOLOv8! We aim to have these completed
All YOLOv8 pretrained models are available here. Detection and Segmentation models are pretrained on the COCO dataset, while Classification models are pretrained on the ImageNet dataset.
-[Models](https://github.com/ultralytics/ultralytics/tree/main/ultralytics/yolo/v8/models) download automatically from the latest
+[Models](https://github.com/ultralytics/ultralytics/tree/main/ultralytics/models) download automatically from the latest
Ultralytics [release](https://github.com/ultralytics/ultralytics/releases) on first use.
<details open><summary>Detection</summary>
-| Model | size<br><sup>(pixels) | mAP<sup>val<br>50-95 | Speed<br><sup>CPU<br>(ms) | Speed<br><sup>T4 GPU<br>(ms) | params<br><sup>(M) | FLOPs<br><sup>(B) |
-| ----- | --------------------- | -------------------- | ------------------------- | ---------------------------- | ------------------ | ----------------- |
-| [YOLOv8n](https://github.com/ultralytics/ultralytics/releases/download/v8.0.0/yolov8n.pt) | 640 | 37.3 | - | - | 3.2 | 8.7 |
-| [YOLOv8s](https://github.com/ultralytics/ultralytics/releases/download/v8.0.0/yolov8s.pt) | 640 | 44.9 | - | - | 11.2 | 28.6 |
-| [YOLOv8m](https://github.com/ultralytics/ultralytics/releases/download/v8.0.0/yolov8m.pt) | 640 | 50.2 | - | - | 25.9 | 78.9 |
-| [YOLOv8l](https://github.com/ultralytics/ultralytics/releases/download/v8.0.0/yolov8l.pt) | 640 | 52.9 | - | - | 43.7 | 165.2 |
-| [YOLOv8x](https://github.com/ultralytics/ultralytics/releases/download/v8.0.0/yolov8x.pt) | 640 | 53.9 | - | - | 68.2 | 257.8 |
+| Model | size<br><sup>(pixels) | mAP<sup>val<br>50-95 | Speed<br><sup>CPU<br>(ms) | Speed<br><sup>T4 GPU<br>(ms) | params<br><sup>(M) | FLOPs<br><sup>(B) |
+| ----- | --------------------- | -------------------- | ------------------------- | ---------------------------- | ------------------ | ----------------- |
+| [YOLOv8n](https://github.com/ultralytics/assets/releases/download/v0.0.0/yolov8n.pt) | 640 | 37.3 | - | - | 3.2 | 8.7 |
+| [YOLOv8s](https://github.com/ultralytics/assets/releases/download/v0.0.0/yolov8s.pt) | 640 | 44.9 | - | - | 11.2 | 28.6 |
+| [YOLOv8m](https://github.com/ultralytics/assets/releases/download/v0.0.0/yolov8m.pt) | 640 | 50.2 | - | - | 25.9 | 78.9 |
+| [YOLOv8l](https://github.com/ultralytics/assets/releases/download/v0.0.0/yolov8l.pt) | 640 | 52.9 | - | - | 43.7 | 165.2 |
+| [YOLOv8x](https://github.com/ultralytics/assets/releases/download/v0.0.0/yolov8x.pt) | 640 | 53.9 | - | - | 68.2 | 257.8 |
- **mAP<sup>val</sup>** values are for single-model single-scale on [COCO val2017](http://cocodataset.org) dataset.
<br>Reproduce by `yolo mode=val task=detect data=coco.yaml device=0`
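For reference, a minimal Python sketch equivalent to the CLI command above (model and data names as used elsewhere in this release; passing `device` as a `val()` kwarg is an assumption):

```python
# Minimal sketch: reproduce the detection val run from Python,
# mirroring `yolo mode=val task=detect data=coco.yaml device=0`.
from ultralytics import YOLO

model = YOLO("yolov8n.pt")  # weights download automatically on first use
model.val(data="coco.yaml", device=0)  # evaluates on COCO val2017
```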
@ -138,13 +138,13 @@ Ultralytics [release](https://github.com/ultralytics/ultralytics/releases) on fi
<details><summary>Segmentation</summary>
-| Model | size<br><sup>(pixels) | mAP<sup>box<br>50-95 | mAP<sup>mask<br>50-95 | Speed<br><sup>CPU<br>(ms) | Speed<br><sup>T4 GPU<br>(ms) | params<br><sup>(M) | FLOPs<br><sup>(B) |
-| ----- | --------------------- | -------------------- | --------------------- | ------------------------- | ---------------------------- | ------------------ | ----------------- |
-| [YOLOv8n](https://github.com/ultralytics/ultralytics/releases/download/v8.0.0/yolov8n-seg.pt) | 640 | 36.7 | 30.5 | - | - | 3.4 | 12.6 |
-| [YOLOv8s](https://github.com/ultralytics/ultralytics/releases/download/v8.0.0/yolov8s-seg.pt) | 640 | 44.6 | 36.8 | - | - | 11.8 | 42.6 |
-| [YOLOv8m](https://github.com/ultralytics/ultralytics/releases/download/v8.0.0/yolov8m-seg.pt) | 640 | 49.9 | 40.8 | - | - | 27.3 | 110.2 |
-| [YOLOv8l](https://github.com/ultralytics/ultralytics/releases/download/v8.0.0/yolov8l-seg.pt) | 640 | 52.3 | 42.6 | - | - | 46.0 | 220.5 |
-| [YOLOv8x](https://github.com/ultralytics/ultralytics/releases/download/v8.0.0/yolov8x-seg.pt) | 640 | 53.4 | 43.4 | - | - | 71.8 | 344.1 |
+| Model | size<br><sup>(pixels) | mAP<sup>box<br>50-95 | mAP<sup>mask<br>50-95 | Speed<br><sup>CPU<br>(ms) | Speed<br><sup>T4 GPU<br>(ms) | params<br><sup>(M) | FLOPs<br><sup>(B) |
+| ----- | --------------------- | -------------------- | --------------------- | ------------------------- | ---------------------------- | ------------------ | ----------------- |
+| [YOLOv8n](https://github.com/ultralytics/assets/releases/download/v0.0.0/yolov8n-seg.pt) | 640 | 36.7 | 30.5 | - | - | 3.4 | 12.6 |
+| [YOLOv8s](https://github.com/ultralytics/assets/releases/download/v0.0.0/yolov8s-seg.pt) | 640 | 44.6 | 36.8 | - | - | 11.8 | 42.6 |
+| [YOLOv8m](https://github.com/ultralytics/assets/releases/download/v0.0.0/yolov8m-seg.pt) | 640 | 49.9 | 40.8 | - | - | 27.3 | 110.2 |
+| [YOLOv8l](https://github.com/ultralytics/assets/releases/download/v0.0.0/yolov8l-seg.pt) | 640 | 52.3 | 42.6 | - | - | 46.0 | 220.5 |
+| [YOLOv8x](https://github.com/ultralytics/assets/releases/download/v0.0.0/yolov8x-seg.pt) | 640 | 53.4 | 43.4 | - | - | 71.8 | 344.1 |
- **mAP<sup>val</sup>** values are for single-model single-scale on [COCO val2017](http://cocodataset.org) dataset.
<br>Reproduce by `yolo mode=val task=segment data=coco.yaml device=0`
@ -155,13 +155,13 @@ Ultralytics [release](https://github.com/ultralytics/ultralytics/releases) on fi
<details><summary>Classification</summary>
-| Model | size<br><sup>(pixels) | acc<br><sup>top1 | acc<br><sup>top5 | Speed<br><sup>CPU<br>(ms) | Speed<br><sup>T4 GPU<br>(ms) | params<br><sup>(M) | FLOPs<br><sup>(B) at 640 |
-| ----- | --------------------- | ---------------- | ---------------- | ------------------------- | ---------------------------- | ------------------ | ------------------------ |
-| [YOLOv8n](https://github.com/ultralytics/ultralytics/releases/download/v8.0.0/yolov8n-cls.pt) | 224 | 66.6 | 87.0 | - | - | 2.7 | 4.3 |
-| [YOLOv8s](https://github.com/ultralytics/ultralytics/releases/download/v8.0.0/yolov8s-cls.pt) | 224 | 72.3 | 91.1 | - | - | 6.4 | 13.5 |
-| [YOLOv8m](https://github.com/ultralytics/ultralytics/releases/download/v8.0.0/yolov8m-cls.pt) | 224 | 76.4 | 93.2 | - | - | 17.0 | 42.7 |
-| [YOLOv8l](https://github.com/ultralytics/ultralytics/releases/download/v8.0.0/yolov8l-cls.pt) | 224 | 78.0 | 94.1 | - | - | 37.5 | 99.7 |
-| [YOLOv8x](https://github.com/ultralytics/ultralytics/releases/download/v8.0.0/yolov8x-cls.pt) | 224 | 78.4 | 94.3 | - | - | 57.4 | 154.8 |
+| Model | size<br><sup>(pixels) | acc<br><sup>top1 | acc<br><sup>top5 | Speed<br><sup>CPU<br>(ms) | Speed<br><sup>T4 GPU<br>(ms) | params<br><sup>(M) | FLOPs<br><sup>(B) at 640 |
+| ----- | --------------------- | ---------------- | ---------------- | ------------------------- | ---------------------------- | ------------------ | ------------------------ |
+| [YOLOv8n](https://github.com/ultralytics/assets/releases/download/v0.0.0/yolov8n-cls.pt) | 224 | 66.6 | 87.0 | - | - | 2.7 | 4.3 |
+| [YOLOv8s](https://github.com/ultralytics/assets/releases/download/v0.0.0/yolov8s-cls.pt) | 224 | 72.3 | 91.1 | - | - | 6.4 | 13.5 |
+| [YOLOv8m](https://github.com/ultralytics/assets/releases/download/v0.0.0/yolov8m-cls.pt) | 224 | 76.4 | 93.2 | - | - | 17.0 | 42.7 |
+| [YOLOv8l](https://github.com/ultralytics/assets/releases/download/v0.0.0/yolov8l-cls.pt) | 224 | 78.0 | 94.1 | - | - | 37.5 | 99.7 |
+| [YOLOv8x](https://github.com/ultralytics/assets/releases/download/v0.0.0/yolov8x-cls.pt) | 224 | 78.4 | 94.3 | - | - | 57.4 | 154.8 |
- **acc** values are model accuracies for single-model single-scale on the [ImageNet](https://www.image-net.org/) validation set.
<br>Reproduce by `yolo mode=val task=classify data=imagenet device=0`

@ -95,7 +95,7 @@ results = model("https://ultralytics.com/images/bus.jpg") # predict on an image
success = YOLO("yolov8n.pt").export(format="onnx") # export the model to ONNX format
```
-[Models](https://github.com/ultralytics/ultralytics/tree/main/ultralytics/yolo/v8/models) download automatically from the Ultralytics [releases page](https://github.com/ultralytics/ultralytics/releases).
+[Models](https://github.com/ultralytics/ultralytics/tree/main/ultralytics/models) download automatically from the Ultralytics [releases page](https://github.com/ultralytics/ultralytics/releases).
### Known Issues / TODOs
@ -111,7 +111,7 @@ success = YOLO("yolov8n.pt").export(format="onnx") # export the model to ONNX format
All YOLOv8 pretrained models are available here. Detection and Segmentation models are pretrained on the COCO dataset, while Classification models are pretrained on the ImageNet dataset.
-On first use, [Models](https://github.com/ultralytics/ultralytics/tree/main/ultralytics/yolo/v8/models) download automatically from the Ultralytics [releases page](https://github.com/ultralytics/ultralytics/releases).
+On first use, [Models](https://github.com/ultralytics/ultralytics/tree/main/ultralytics/models) download automatically from the Ultralytics [releases page](https://github.com/ultralytics/ultralytics/releases).
<details open><summary>Detection</summary>

@ -55,16 +55,16 @@ You can override config file entirely by passing a new file. You can create a co
```bash
yolo task=init
```
-You can then use special `--cfg name.yaml` command to pass the new config file
+You can then pass the new config file using the `cfg=name.yaml` argument
```bash
-yolo task=detect mode=train {++ --cfg default.yaml ++}
+yolo cfg=default.yaml
```
??? example
=== "Command"
```
yolo task=init
-yolo task=detect mode=train --cfg default.yaml
+yolo cfg=default.yaml
```
=== "Result"
TODO: add terminal output
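Internally, `cfg=name.yaml` is handled by `get_config` (see the `cli()` change later in this diff). A minimal Python sketch of the same override (the YAML filename is illustrative):

```python
# Minimal sketch: what `yolo cfg=name.yaml` does under the hood in this release.
from ultralytics.yolo.configs import get_config

cfg = get_config("name.yaml")  # replaces the default config entirely
task, mode = cfg.task.lower(), cfg.mode.lower()  # consumed by the CLI as in cli()
```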

@ -23,7 +23,7 @@ def test_train_seg():
def test_train_cls():
-os.system(f'yolo mode=train task=classify model={CFG}-cls.yaml data=imagenette160 imgsz=32 epochs=1')
+os.system(f'yolo mode=train task=classify model={CFG}-cls.yaml data=mnist160 imgsz=32 epochs=1')
# Val checks -----------------------------------------------------------------------------------------------------------

@ -26,8 +26,10 @@ def test_detect():
# predictor
pred = detect.DetectionPredictor(overrides={"imgsz": [640, 640]})
-p = pred(source=SOURCE, model="yolov8n.pt")
-assert len(p) == 2, "predictor test failed"
+i = 0
+for _ in pred(source=SOURCE, model="yolov8n.pt"):
+    i += 1
+assert i == 2, "predictor test failed"
overrides["resume"] = trainer.last
trainer = detect.DetectionTrainer(overrides=overrides)
@ -57,8 +59,10 @@ def test_segment():
# predictor
pred = segment.SegmentationPredictor(overrides={"imgsz": [640, 640]})
-p = pred(source=SOURCE, model="yolov8n-seg.pt")
-assert len(p) == 2, "predictor test failed"
+i = 0
+for _ in pred(source=SOURCE, model="yolov8n-seg.pt"):
+    i += 1
+assert i == 2, "predictor test failed"
# test resume
overrides["resume"] = trainer.last
@ -73,14 +77,8 @@ def test_segment():
def test_classify():
-overrides = {
-    "data": "imagenette160",
-    "model": "yolov8n-cls.yaml",
-    "imgsz": 32,
-    "epochs": 1,
-    "batch": 64,
-    "save": False}
-CFG.data = "imagenette160"
+overrides = {"data": "mnist160", "model": "yolov8n-cls.yaml", "imgsz": 32, "epochs": 1, "batch": 64, "save": False}
+CFG.data = "mnist160"
CFG.imgsz = 32
CFG.batch = 64
# YOLO(CFG_SEG).train(**overrides) # This works
@ -95,5 +93,7 @@ def test_classify():
# predictor
pred = classify.ClassificationPredictor(overrides={"imgsz": [640, 640]})
-p = pred(source=SOURCE, model=trained_model)
-assert len(p) == 2, "Predictor test failed!"
+i = 0
+for _ in pred(source=SOURCE, model=trained_model):
+    i += 1
+assert i == 2, "predictor test failed"

@ -32,7 +32,7 @@ def test_model_fuse():
def test_predict_dir():
model = YOLO(MODEL)
-model.predict(source=ROOT / "assets")
+model.predict(source=ROOT / "assets", return_outputs=False)
def test_val():

@ -56,6 +56,7 @@ class AutoBackend(nn.Module):
fp16 &= pt or jit or onnx or engine or nn_module # FP16
nhwc = coreml or saved_model or pb or tflite or edgetpu # BHWC formats (vs torch BCHW)
stride = 32 # default stride
+model = None  # TODO: resolves ONNX inference; verify effect on other backends
cuda = torch.cuda.is_available() and device.type != 'cpu' # use CUDA
if not (pt or triton or nn_module):
w = attempt_download(w) # download if not local

@ -6,6 +6,7 @@ from pathlib import Path
import hydra
from ultralytics import hub, yolo
+from ultralytics.yolo.configs import get_config
from ultralytics.yolo.utils import DEFAULT_CONFIG, LOGGER, colorstr
DIR = Path(__file__).parent
@ -20,6 +21,9 @@ def cli(cfg):
cfg (DictConfig): Configuration for the task and mode.
"""
# LOGGER.info(f"{colorstr(f'Ultralytics YOLO v{ultralytics.__version__}')}")
+if cfg.cfg:
+    LOGGER.info(f"Overriding default config with {cfg.cfg}")
+    cfg = get_config(cfg.cfg)
task, mode = cfg.task.lower(), cfg.mode.lower()
# Special case for initializing the configuration
@ -28,7 +32,7 @@ def cli(cfg):
LOGGER.info(f"""
{colorstr("YOLO:")} configuration saved to {Path.cwd() / DEFAULT_CONFIG.name}.
To run experiments using custom configuration:
-yolo task='task' mode='mode' --config-name config_file.yaml
+yolo cfg=config_file.yaml
""")
return

@ -101,6 +101,7 @@ mixup: 0.0 # image mixup (probability)
copy_paste: 0.0 # segment copy-paste (probability)
# Hydra configs --------------------------------------------------------------------------------------------------------
+cfg: null # for overriding defaults.yaml
hydra:
output_subdir: null # disable hydra directory creation
run:

@ -111,7 +111,7 @@ class YOLO:
self.model.fuse()
@smart_inference_mode()
-def predict(self, source, **kwargs):
+def predict(self, source, return_outputs=True, **kwargs):
"""
Visualize prediction.
@ -127,8 +127,8 @@ class YOLO:
predictor = self.PredictorClass(overrides=overrides)
predictor.args.imgsz = check_imgsz(predictor.args.imgsz, min_dim=2) # check image size
-predictor.setup(model=self.model, source=source)
-return predictor()
+predictor.setup(model=self.model, source=source, return_outputs=return_outputs)
+return predictor() if return_outputs else predictor.predict_cli()
@smart_inference_mode()
def val(self, data=None, **kwargs):
@ -212,10 +212,12 @@ class YOLO:
@staticmethod
def _reset_ckpt_args(args):
args.pop("device", None)
args.pop("project", None)
args.pop("name", None)
args.pop("batch", None)
args.pop("epochs", None)
args.pop("cache", None)
args.pop("save_json", None)
# set device to '' to prevent from auto DDP usage
args["device"] = ''

@ -89,6 +89,7 @@ class BasePredictor:
self.vid_path, self.vid_writer = None, None
self.annotator = None
self.data_path = None
+self.output = dict()
self.callbacks = defaultdict(list, {k: [v] for k, v in callbacks.default_callbacks.items()}) # add callbacks
callbacks.add_integration_callbacks(self)
@ -104,7 +105,7 @@ class BasePredictor:
def postprocess(self, preds, img, orig_img):
return preds
-def setup(self, source=None, model=None):
+def setup(self, source=None, model=None, return_outputs=True):
# source
source = str(source if source is not None else self.args.source)
is_file = Path(source).suffix[1:] in (IMG_FORMATS + VID_FORMATS)
@ -155,16 +156,16 @@ class BasePredictor:
self.imgsz = imgsz
self.done_setup = True
self.device = device
+self.return_outputs = return_outputs
return model
@smart_inference_mode()
-def __call__(self, source=None, model=None):
+def __call__(self, source=None, model=None, return_outputs=True):
self.run_callbacks("on_predict_start")
-model = self.model if self.done_setup else self.setup(source, model)
+model = self.model if self.done_setup else self.setup(source, model, return_outputs)
model.eval()
self.seen, self.windows, self.dt = 0, [], (ops.Profile(), ops.Profile(), ops.Profile())
-self.all_outputs = []
for batch in self.dataset:
self.run_callbacks("on_predict_batch_start")
path, im, im0s, vid_cap, s = batch
@ -194,6 +195,10 @@ class BasePredictor:
if self.args.save:
self.save_preds(vid_cap, i, str(self.save_dir / p.name))
+if self.return_outputs:
+    yield self.output
+    self.output.clear()
# Print time (inference-only)
LOGGER.info(f"{s}{'' if len(preds) else '(no detections), '}{self.dt[1].dt * 1E3:.1f}ms")
@ -209,7 +214,11 @@ class BasePredictor:
LOGGER.info(f"Results saved to {colorstr('bold', self.save_dir)}{s}")
self.run_callbacks("on_predict_end")
-return self.all_outputs
+def predict_cli(self, source=None, model=None, return_outputs=False):
+    # __call__ is now a generator, so consume it like a generator
+    for _ in self.__call__(source, model, return_outputs):
+        pass
def show(self, p):
im0 = self.annotator.result()
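Because `__call__` is now a generator, direct callers must iterate it, or call `predict_cli()` to drain it. A minimal sketch in the style of the updated tests (the import path matches the tests; the source path is illustrative):

```python
# Minimal sketch: consume a predictor directly now that __call__ yields outputs.
from ultralytics.yolo.v8 import detect

pred = detect.DetectionPredictor(overrides={"imgsz": [640, 640]})
for output in pred(source="path/to/images", model="yolov8n.pt"):
    print(output.get("det"))  # per-image detections, or None if none were kept
# When outputs are not needed (CLI use), drain the generator instead:
# pred.predict_cli(source="path/to/images", model="yolov8n.pt")
```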

@ -70,7 +70,7 @@ def select_device(device='', batch_size=0, newline=False):
elif device: # non-cpu device requested
os.environ['CUDA_VISIBLE_DEVICES'] = device # set environment variable - must be before assert is_available()
assert torch.cuda.is_available() and torch.cuda.device_count() >= len(device.replace(',', '')), \
f"Invalid CUDA '--device {device}' requested, use '--device cpu' or pass valid CUDA device(s)"
f"Invalid CUDA 'device={device}' requested, use 'device=cpu' or pass valid CUDA device(s)"
if not cpu and not mps and torch.cuda.is_available(): # prefer GPU if available
devices = device.split(',') if device else '0' # range(torch.cuda.device_count()) # i.e. 0,1,6,7

@ -39,7 +39,8 @@ class ClassificationPredictor(BasePredictor):
self.annotator = self.get_annotator(im0)
prob = preds[idx].softmax(0)
-self.all_outputs.append(prob)
+if self.return_outputs:
+    self.output["prob"] = prob.cpu().numpy()
# Print results
top5i = prob.argsort(0, descending=True)[:5].tolist() # top 5 indices
log_string += f"{', '.join(f'{self.model.names[j]} {prob[j]:.2f}' for j in top5i)}, "
@ -62,7 +63,7 @@ def predict(cfg):
cfg.source = cfg.source if cfg.source is not None else ROOT / "assets"
predictor = ClassificationPredictor(cfg)
-predictor()
+predictor.predict_cli()
if __name__ == "__main__":

@ -143,6 +143,7 @@ def train(cfg):
cfg.weight_decay = 5e-5
cfg.label_smoothing = 0.1
cfg.warmup_epochs = 0.0
+cfg.device = cfg.device if cfg.device is not None else ''
# trainer = ClassificationTrainer(cfg)
# trainer.train()
from ultralytics import YOLO

@ -53,12 +53,15 @@ class DetectionPredictor(BasePredictor):
self.annotator = self.get_annotator(im0)
det = preds[idx]
-self.all_outputs.append(det)
if len(det) == 0:
return log_string
for c in det[:, 5].unique():
n = (det[:, 5] == c).sum() # detections per class
log_string += f"{n} {self.model.names[int(c)]}{'s' * (n > 1)}, "
+if self.return_outputs:
+    self.output["det"] = det.cpu().numpy()
# write
gn = torch.tensor(im0.shape)[[1, 0, 1, 0]] # normalization gain whwh
for *xyxy, conf, cls in reversed(det):
@ -89,7 +92,7 @@ def predict(cfg):
cfg.imgsz = check_imgsz(cfg.imgsz, min_dim=2) # check image size
cfg.source = cfg.source if cfg.source is not None else ROOT / "assets"
predictor = DetectionPredictor(cfg)
-predictor()
+predictor.predict_cli()
if __name__ == "__main__":

@ -199,6 +199,7 @@ class Loss:
def train(cfg):
cfg.model = cfg.model or "yolov8n.yaml"
cfg.data = cfg.data or "coco128.yaml" # or yolo.ClassificationDataset("mnist")
+cfg.device = cfg.device if cfg.device is not None else ''
# trainer = DetectionTrainer(cfg)
# trainer.train()
from ultralytics import YOLO

@ -58,10 +58,10 @@ class SegmentationPredictor(DetectionPredictor):
return log_string
# Segments
mask = masks[idx]
-if self.args.save_txt:
+if self.args.save_txt or self.return_outputs:
+    shape = im0.shape if self.args.retina_masks else im.shape[2:]
     segments = [
-        ops.scale_segments(im0.shape if self.args.retina_masks else im.shape[2:], x, im0.shape, normalize=True)
-        for x in reversed(ops.masks2segments(mask))]
+        ops.scale_segments(shape, x, im0.shape, normalize=False) for x in reversed(ops.masks2segments(mask))]
# Print results
for c in det[:, 5].unique():
@ -76,12 +76,17 @@ class SegmentationPredictor(DetectionPredictor):
255 if self.args.retina_masks else im[idx])
det = reversed(det[:, :6])
-self.all_outputs.append([det, mask])
+if self.return_outputs:
+    self.output["det"] = det.cpu().numpy()
+    self.output["segment"] = segments
 # Write results
-for j, (*xyxy, conf, cls) in enumerate(reversed(det[:, :6])):
+for j, (*xyxy, conf, cls) in enumerate(det):
     if self.args.save_txt:  # Write to file
-        seg = segments[j].reshape(-1)  # (n,2) to (n*2)
+        seg = segments[j].copy()
+        seg[:, 0] /= shape[1]  # width
+        seg[:, 1] /= shape[0]  # height
+        seg = seg.reshape(-1)  # (n,2) to (n*2)
line = (cls, *seg, conf) if self.args.save_conf else (cls, *seg) # label format
with open(f'{self.txt_path}.txt', 'a') as f:
f.write(('%g ' * len(line)).rstrip() % line + '\n')
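With `normalize=False` above, segments now stay in pixel coordinates until write time, where they are normalized by the stored `shape`. A short worked example of the resulting label line (all values illustrative):

```python
# Worked example of the label line produced by the save_txt path above.
import numpy as np

shape = (480, 640)  # (height, width) used for normalization
seg = np.array([[320.0, 240.0], [400.0, 300.0]])  # one polygon, (n, 2) pixel coords
seg[:, 0] /= shape[1]  # normalize x by width
seg[:, 1] /= shape[0]  # normalize y by height
line = (0, *seg.reshape(-1), 0.9)  # cls, x1, y1, ..., conf (with save_conf=True)
print(('%g ' * len(line)).rstrip() % line)  # -> 0 0.5 0.5 0.625 0.625 0.9
```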
@ -106,7 +111,7 @@ def predict(cfg):
cfg.source = cfg.source if cfg.source is not None else ROOT / "assets"
predictor = SegmentationPredictor(cfg)
-predictor()
+predictor.predict_cli()
if __name__ == "__main__":

@ -144,6 +144,7 @@ class SegLoss(Loss):
def train(cfg):
cfg.model = cfg.model or "yolov8n-seg.yaml"
cfg.data = cfg.data or "coco128-seg.yaml" # or yolo.ClassificationDataset("mnist")
+cfg.device = cfg.device if cfg.device is not None else ''
# trainer = SegmentationTrainer(cfg)
# trainer.train()
from ultralytics import YOLO
