Update docs (#71)
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
parent
e629335f6d
commit
d85b44f259
@ -0,0 +1,109 @@
## Ultralytics YOLO

Default training settings and hyperparameters for medium-augmentation COCO training.

### Setting the operation type

???+ note "Operation"

    | Key    | Value    | Description |
    |--------|----------|-------------|
    | task   | `detect` | Set the task via CLI. See Tasks for all supported tasks, e.g. `detect`, `segment`, `classify`.<br>`init` is a special case that creates a copy of the `default.yaml` configs in the current working dir |
    | mode   | `train`  | Set the mode via CLI. It can be `train`, `val` or `predict` |
    | resume | `False`  | Resume the last given task when set to `True`.<br>Resume from a given checkpoint if `model.pt` is passed |
    | model  | null     | Set the model. Format can differ by task type. Supports `model_name`, `model.yaml` & `model.pt` |
    | data   | null     | Set the data. Format can differ by task type. Supports `data.yaml`, `data_folder`, `dataset_name` |
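
For a quick, hedged illustration of how these keys come together, the snippet below mirrors the Python SDK examples later in these docs; the specific `*.yaml` file names are placeholders, not required values.

```python
from ultralytics import YOLO

# Sketch based on the Python SDK examples in these docs; file names are placeholders.
model = YOLO()
model.new("model.yaml")                     # model: build from a *.yaml architecture file
model.train(data="coco128.yaml", epochs=5)  # data: a dataset *.yaml; corresponds to mode=train
```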

### Training settings

??? note "Train"

    | Key             | Value  | Description                                                                                |
    |-----------------|--------|--------------------------------------------------------------------------------------------|
    | device          | ''     | CUDA device, i.e. `0` or `0,1,2,3` or `cpu`. `''` selects the first available CUDA device   |
    | epochs          | 100    | Number of epochs to train                                                                   |
    | workers         | 8      | Number of CPU workers used per process. Scales automatically with DDP                       |
    | batch_size      | 16     | Batch size of the dataloader                                                                |
    | img_size        | 640    | Image size of data in dataloader                                                            |
    | optimizer       | SGD    | Optimizer used. Supported optimizers are: `Adam`, `SGD`, `RMSProp`                          |
    | single_cls      | False  | Train on multi-class data as single-class                                                   |
    | image_weights   | False  | Use weighted image selection for training                                                   |
    | rect            | False  | Enable rectangular training                                                                 |
    | cos_lr          | False  | Use cosine LR scheduler                                                                     |
    | lr0             | 0.01   | Initial learning rate                                                                       |
    | lrf             | 0.01   | Final OneCycleLR learning rate                                                              |
    | momentum        | 0.937  | Used as `momentum` for SGD and `beta1` for Adam                                             |
    | weight_decay    | 0.0005 | Optimizer weight decay                                                                      |
    | warmup_epochs   | 3.0    | Warmup epochs. Fractional values are allowed                                                |
    | warmup_momentum | 0.8    | Warmup initial momentum                                                                     |
    | warmup_bias_lr  | 0.1    | Warmup initial bias lr                                                                      |
    | box             | 0.05   | Box loss gain                                                                               |
    | cls             | 0.5    | cls loss gain                                                                               |
    | cls_pw          | 1.0    | cls BCELoss positive_weight                                                                 |
    | obj             | 1.0    | obj loss gain (scale with pixels)                                                           |
    | obj_pw          | 1.0    | obj BCELoss positive_weight                                                                 |
    | iou_t           | 0.20   | IoU training threshold                                                                      |
    | anchor_t        | 4.0    | Anchor-multiple threshold                                                                   |
    | fl_gamma        | 0.0    | Focal loss gamma                                                                            |
    | label_smoothing | 0.0    | Label smoothing (fraction)                                                                  |
    | nbs             | 64     | Nominal batch size                                                                          |
    | overlap_mask    | `True` | **Segmentation**: Use mask overlapping during training                                      |
    | mask_ratio      | 4      | **Segmentation**: Set mask downsampling                                                     |
    | dropout         | `False`| **Classification**: Use dropout while training                                              |
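
As a hedged sketch rather than a definitive API, the keys above can presumably be passed as keyword overrides of `default.yaml`, following the `DetectionTrainer(data=..., epochs=1)  # override default configs` example shown in the Python SDK docs.

```python
from ultralytics import yolo

# Sketch only: assumes training keys from the table above are accepted as
# keyword overrides, as the "override default configs" Trainer example suggests.
trainer = yolo.DetectionTrainer(
    data="coco128.yaml",  # dataset definition
    epochs=100,
    batch_size=16,
    img_size=640,
    optimizer="SGD",
    lr0=0.01,
)
trainer.train()
```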

### Prediction settings

??? note "Prediction"

    | Key            | Value                | Description                                      |
    |----------------|----------------------|--------------------------------------------------|
    | source         | `ultralytics/assets` | Input source. Accepts image, folder, video, URL  |
    | view_img       | `False`              | View the prediction images                       |
    | save_txt       | `False`              | Save the results in a txt file                   |
    | save_conf      | `False`              | Save the confidence scores                       |
    | save_crop      | `False`              | Save cropped prediction boxes                    |
    | hide_labels    | `False`              | Hide the labels                                  |
    | hide_conf      | `False`              | Hide the confidence scores                       |
    | vid_stride     | `False`              | Input video frame-rate stride                    |
    | line_thickness | `3`                  | Bounding-box thickness (pixels)                  |
    | visualize      | `False`              | Visualize model features                         |
    | augment        | `False`              | Augmented inference                              |
    | agnostic_nms   | `False`              | Class-agnostic NMS                               |
    | retina_masks   | `False`              | **Segmentation**: High resolution masks          |
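
A minimal, hedged inference sketch using the default `source` above; passing the source directly to the model call is an assumption based on the `model(...)  # inference` line in the Python SDK docs.

```python
from ultralytics import YOLO

# Sketch only: loading a weights file and calling the model on a source path
# follows the SDK examples; the exact prediction-settings plumbing is assumed.
model = YOLO()
model.load("n.pt")
model("ultralytics/assets")  # run inference on the bundled sample images
```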

### Validation settings

??? note "Validation"

    | Key         | Value   | Description                         |
    |-------------|---------|-------------------------------------|
    | noval       | `False` | ???                                 |
    | save_json   | `False` |                                     |
    | save_hybrid | `False` |                                     |
    | conf_thres  | `0.001` | Confidence threshold                |
    | iou_thres   | `0.6`   | IoU threshold                       |
    | max_det     | `300`   | Maximum number of detections        |
    | half        | `True`  | Use half-precision (`.half()`) mode |
    | dnn         | `False` | Use OpenCV DNN for ONNX inference   |
    | plots       | `False` | Save plots during validation        |

### Augmentation settings

??? note "Augmentation"

    | Key         | Value | Description                                      |
    |-------------|-------|--------------------------------------------------|
    | hsv_h       | 0.015 | Image HSV-Hue augmentation (fraction)            |
    | hsv_s       | 0.7   | Image HSV-Saturation augmentation (fraction)     |
    | hsv_v       | 0.4   | Image HSV-Value augmentation (fraction)          |
    | degrees     | 0.0   | Image rotation (+/- deg)                         |
    | translate   | 0.1   | Image translation (+/- fraction)                 |
    | scale       | 0.5   | Image scale (+/- gain)                           |
    | shear       | 0.0   | Image shear (+/- deg)                            |
    | perspective | 0.0   | Image perspective (+/- fraction), range 0-0.001  |
    | flipud      | 0.0   | Image flip up-down (probability)                 |
    | fliplr      | 0.5   | Image flip left-right (probability)              |
    | mosaic      | 1.0   | Image mosaic (probability)                       |
    | mixup       | 0.0   | Image mixup (probability)                        |
    | copy_paste  | 0.0   | Segment copy-paste (probability)                 |
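
To make the units concrete: values marked *(probability)* are per-image chances that the transform fires, while *(fraction)* values are relative magnitudes. The sketch below is a conceptual illustration of that convention, not the library's actual augmentation code.

```python
import random

import numpy as np

# Conceptual sketch of how a probability-style setting is typically applied;
# this is NOT the ultralytics implementation, just an illustration of the units.
fliplr = 0.5  # probability from the table above

def maybe_fliplr(image: np.ndarray) -> np.ndarray:
    """Flip the image left-right with probability `fliplr`."""
    if random.random() < fliplr:
        return np.fliplr(image)
    return image
```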

### Logging, checkpoints, plotting and file management

??? note "Files"

    | Key      | Value   | Description                                                                                 |
    |----------|---------|---------------------------------------------------------------------------------------------|
    | project  | 'runs'  | The project name                                                                            |
    | name     | 'exp'   | The run name. `exp` gets automatically incremented if not specified, e.g. `exp`, `exp2` ... |
    | exist_ok | `False` | ???                                                                                         |
    | plots    | `False` | **Validation**: Save plots during validation                                                |
    | nosave   | `False` | Don't save any plots, models or files                                                       |
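
A small, hedged sketch of the incrementing behaviour described for `name` above; it only illustrates the `exp`, `exp2`, ... pattern and does not claim to be the library's own helper.

```python
from pathlib import Path

# Illustration only: produce runs/exp, runs/exp2, runs/exp3, ... as the docs describe.
def next_run_dir(project: str = "runs", name: str = "exp") -> Path:
    candidate = Path(project) / name
    n = 2
    while candidate.exists():
        candidate = Path(project) / f"{name}{n}"
        n += 1
    return candidate
```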
|
@ -0,0 +1,5 @@
All task Trainers inherit from the `BaseTrainer` class, which contains the model training and optimization routine boilerplate. You can override any function of these Trainers to suit your needs.
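
For example, a custom Trainer might look roughly like the sketch below; the overridden method name and signature are assumptions used purely for illustration, not a documented API.

```python
from ultralytics.yolo.engine.trainer import BaseTrainer  # path per the reference below

# Hedged sketch of the override pattern described above.
class CustomTrainer(BaseTrainer):
    def get_model(self, cfg, weights):
        """Return a custom model instead of the default one."""
        ...
```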

---

### BaseTrainer API Reference

::: ultralytics.yolo.engine.trainer.BaseTrainer
@ -0,0 +1 @@
::: ultralytics.yolo.engine.model
@ -1,11 +1,70 @@
# Python SDK

## Using YOLO models

This is the simplest way to use YOLO models in a Python environment. They can be imported from the `ultralytics` module.

We provide 2 pythonic interfaces for YOLO models:

- **Model Interface** - to simply build, load, train or run inference on a model in a Python application
- **Trainer Interface** - to customize trainer elements depending on the task. Suitable for R&D ideas like architectures.

### Model Interface

!!! example "Usage"

    === "Training"

        ```python
        from ultralytics import YOLO

        model = YOLO()
        model.new("n.yaml")  # pass any model type
        model.train(data="coco128.yaml", epochs=5)
        ```

    === "Training pretrained"

        ```python
        from ultralytics import YOLO

        model = YOLO()
        model.load("n.pt")  # pass any model type
        model(...)  # inference
        model.train(data="coco128.yaml", epochs=5)
        ```

    === "Resume Training"

        ```python
        from ultralytics import YOLO

        model = YOLO()
        model.resume(task="detect")  # resume last detection training
        model.resume(task="detect", model="last.pt")  # resume from a given model
        ```

More functionality coming soon.

To learn more about using `YOLO` models, refer to the Model class reference.

[Model reference](#){ .md-button .md-button--primary}

---

### Customizing Tasks with Trainers

The `YOLO` model class is a high-level wrapper around the Trainer classes. Each YOLO task has its own trainer that inherits from `BaseTrainer`.
You can easily customize Trainers to support custom tasks or explore R&D ideas.

!!! tip "Trainer Examples"

    === "DetectionTrainer"

        ```python
        from ultralytics import yolo

        trainer = yolo.DetectionTrainer(data=..., epochs=1)  # override default configs
        trainer.train()
        ```

    === "SegmentationTrainer"

        ```python
        from ultralytics import yolo

        trainer = yolo.SegmentationTrainer(data=..., epochs=1)  # override default configs
        trainer.train()
        ```

    === "ClassificationTrainer"

        ```python
        from ultralytics import yolo

        trainer = yolo.ClassificationTrainer(data=..., epochs=1)  # override default configs
        trainer.train()
        ```

Learn more about customizing `Trainers`, `Validators` and `Predictors` to suit your project needs in the Customization Section. More details about the base engine classes are available in the reference section.

[Customization tutorials](#){ .md-button .md-button--primary}
@ -0,0 +1,31 @@
th, td {
    border: 1px solid var(--md-typeset-table-color);
    border-spacing: 0px;
    border-bottom: none;
    border-left: none;
    border-top: none;
}

.md-typeset__table {
    line-height: 1;
}

.md-typeset__table table:not([class]) {
    font-size: .74rem;
    border-right: none;
}

.md-typeset__table table:not([class]) td,
.md-typeset__table table:not([class]) th {
    padding: 15px;
}

/* light mode alternating table bg colors */
.md-typeset__table tr:nth-child(2n) {
    background-color: #f8f8f8;
}

/* dark mode alternating table bg colors */
[data-md-color-scheme="slate"] .md-typeset__table tr:nth-child(2n) {
    background-color: hsla(var(--md-hue),25%,25%,1)
}