Cleanup (#197)
Co-authored-by: Onuralp Sezer <thunderbirdtr@gmail.com>
Co-authored-by: Ayush Chaurasia <ayush.chaurarsia@gmail.com>

parent 840c35a0aa
commit c8e3c5db4b

@@ -1,24 +1,36 @@

## Models HUB

## Models

Here are the models that are supported out-of-the-box with Ultralytics. For a detailed view and navigation, visit the [model hub](<>) section of the docs.

Welcome to the Ultralytics Models directory! Here you will find a wide variety of pre-configured model configuration files (`*.yaml`s) that can be used to create custom YOLO models. The models in this directory have been expertly crafted and fine-tuned by the Ultralytics team to provide the best performance for a wide range of object detection and image segmentation tasks.

These model configurations cover a wide range of scenarios, from simple object detection to more complex tasks like instance segmentation and object tracking. They are also designed to run efficiently on a variety of hardware platforms, from CPUs to GPUs. Whether you are a seasoned machine learning practitioner or just getting started with YOLO, this directory provides a great starting point for your custom model development needs.

To get started, simply browse through the models in this directory and find one that best suits your needs. Once you've selected a model, you can use the provided `*.yaml` file to train and deploy your custom YOLO model with ease. See full details at the Ultralytics [Docs](https://docs.ultralytics.com), and if you need help or have any questions, feel free to reach out to the Ultralytics team for support. So don't wait: start creating your custom YOLO model now!

### Usage

You can simply set the `model` parameter to any available YAML config or pretrained weights.

Model `*.yaml` files may be used directly in the Command Line Interface (CLI) with a `yolo` command:

```bash
yolo task=... mode=... model=yolov5n.yaml
yolo task=detect mode=train model=yolov8n.yaml data=coco128.yaml epochs=100
```

| Model              | Version | size (pixels) | mAP val 50-95 | Speed CPU b1 (ms) | params (M) | FLOPs @640 (B) | model file    | Pretrained Weights |
| ------------------ | ------- | ------------- | ------------- | ----------------- | ---------- | -------------- | ------------- | ------------------ |
| YOLOv5n            | v6.3    | 640           | 28.0          | 45                | 1.9        | 4.5            | yolov5n.yaml  | -                  |
| YOLOv5s            | -       | 640           | 37.4          | 98                | 7.2        | 16.5           | yolov5s.yaml  | -                  |
| YOLOv5m            | -       | 640           | 45.4          | 224               | 21.2       | 49.0           | yolov5m.yaml  | -                  |
| YOLOv5l            | -       | 640           | 49.0          | 430               | 46.5       | 109.1          | yolov5l.yaml  | -                  |
| YOLOv5x            | -       | 640           | 50.7          | 766               | 86.7       | 205.7          | yolov5x.yaml  | -                  |
| YOLOv5n6           | -       | 1280          | 36.0          | 153               | 3.2        | 4.6            | yolov5n6.yaml | -                  |
| YOLOv5s6           | -       | 1280          | 44.8          | 385               | 12.6       | 16.8           | yolov5s6.yaml | -                  |
| YOLOv5m6           | -       | 1280          | 51.3          | 887               | 35.7       | 50.0           | yolov5m6.yaml | -                  |
| YOLOv5l6           | -       | 1280          | 53.7          | 1784              | 76.8       | 111.4          | yolov5l6.yaml | -                  |
| YOLOv5x6 + \[TTA\] | -       | 1280<br>1536  | 55.0<br>55.8  | 3136<br>-         | 140.7<br>- | 209.8<br>-     | yolov5x6.yaml | -                  |

They may also be used directly in a Python environment, and accept the same [arguments](https://docs.ultralytics.com/config/) as in the CLI example above:

```python
from ultralytics import YOLO

model = YOLO("yolov8n.yaml")  # build a YOLOv8n model from scratch

model.info()  # display model information
model.train(data="coco128.yaml", epochs=100)  # train the model
```
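
The `model` argument also accepts pretrained weights in place of a `*.yaml` configuration. Below is a minimal sketch, assuming the standard `yolov8n.pt` checkpoint name and the documented `val()` and `predict()` methods; the sample image URL is illustrative:

```python
from ultralytics import YOLO

# Load pretrained weights rather than building a model from a YAML config
model = YOLO("yolov8n.pt")

metrics = model.val(data="coco128.yaml")  # evaluate the pretrained checkpoint
results = model.predict(source="https://ultralytics.com/images/bus.jpg")  # run inference on a sample image
```
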

@@ -1,48 +0,0 @@

```python
# Ultralytics YOLO 🚀, GPL-3.0 license

from ultralytics.yolo.utils.torch_utils import get_flops, get_num_params

# wandb is an optional dependency; the callbacks below are disabled if it is missing
try:
    import wandb

    assert hasattr(wandb, '__version__')
except (ImportError, AssertionError):
    wandb = None


def on_pretrain_routine_start(trainer):
    # Start a W&B run with the trainer arguments as config, unless one is already active
    if not wandb.run:
        wandb.init(project=trainer.args.project or "YOLOv8", name=trainer.args.name, config=dict(trainer.args))


def on_fit_epoch_end(trainer):
    # Log metrics at the end of each fit epoch; log static model info once after the first epoch
    wandb.run.log(trainer.metrics, step=trainer.epoch + 1)
    if trainer.epoch == 0:
        model_info = {
            "model/parameters": get_num_params(trainer.model),
            "model/GFLOPs": round(get_flops(trainer.model), 3),
            "model/speed(ms)": round(trainer.validator.speed[1], 3)}
        wandb.run.log(model_info, step=trainer.epoch + 1)


def on_train_epoch_end(trainer):
    # Log training losses and learning rates each epoch; log training-batch mosaics once
    wandb.run.log(trainer.label_loss_items(trainer.tloss, prefix="train"), step=trainer.epoch + 1)
    wandb.run.log(trainer.lr, step=trainer.epoch + 1)
    if trainer.epoch == 1:
        wandb.run.log({f.stem: wandb.Image(str(f))
                       for f in trainer.save_dir.glob('train_batch*.jpg')},
                      step=trainer.epoch + 1)


def on_train_end(trainer):
    # Upload the best checkpoint as a W&B model artifact
    art = wandb.Artifact(type="model", name=f"run_{wandb.run.id}_model")
    if trainer.best.exists():
        art.add_file(trainer.best)
    wandb.run.log_artifact(art)


# Callbacks are registered only when wandb is available
callbacks = {
    "on_pretrain_routine_start": on_pretrain_routine_start,
    "on_train_epoch_end": on_train_epoch_end,
    "on_fit_epoch_end": on_fit_epoch_end,
    "on_train_end": on_train_end} if wandb else {}
```